Information Systems and Technologies to Support Learning

This book features a selection of articles from the second edition of the conference Europe Middle East & North Africa Information Systems and Technologies to Support Learning 2018 (EMENA-ISTL'18), held in Fez, Morocco, between 25th and 27th October 2018. EMENA-ISTL'18 was a global forum for researchers and practitioners to present and discuss recent findings and innovations, current trends, professional experiences and challenges in information systems and technologies to support learning. The main topics covered are: A) information systems technologies to support education; B) education in science, technology, engineering and mathematics; C) emerging technologies in education and learning innovation in the digital age; D) software systems, architectures, applications and tools; E) multimedia systems and applications; F) computer communications and networks; G) IoT, smart cities and people, wireless, sensor and ad-hoc networks; H) organizational models and information systems and technologies; I) human-computer interaction; J) computers and security, ethics and data forensics; K) health informatics and medical informatics security; L) information and knowledge management; M) big data analytics and applications, intelligent data systems, and machine learning; N) artificial intelligence, high performance computing; O) mobile, embedded and ubiquitous systems; P) language and image processing, computer graphics and vision; and Q) the interdisciplinary field of fuzzy logic and data mining.







Smart Innovation, Systems and Technologies 111

Álvaro Rocha Mohammed Serrhini Editors

Information Systems and Technologies to Support Learning Proceedings of EMENA-ISTL 2018


Smart Innovation, Systems and Technologies Volume 111

Series editors Robert James Howlett, Bournemouth University and KES International, Shoreham-by-sea, UK e-mail: [email protected] Lakhmi C. Jain, University of Technology Sydney, Broadway, Australia; University of Canberra, Canberra, Australia; KES International, UK e-mail: [email protected]; [email protected]

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles.

More information about this series at http://www.springer.com/series/8767


Editors Álvaro Rocha Departamento de Engenharia Informática, Faculdade de Ciências e Tecnologia Universidade de Coimbra Coimbra, Portugal

Mohammed Serrhini Departement informatique, Faculté des Sciences Université Mohammed Ier Oujda, Morocco

ISSN 2190-3018 ISSN 2190-3026 (electronic) Smart Innovation, Systems and Technologies ISBN 978-3-030-03576-1 ISBN 978-3-030-03577-8 (eBook) https://doi.org/10.1007/978-3-030-03577-8 Library of Congress Control Number: 2018960635 © Springer Nature Switzerland AG 2019 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book contains a selection of papers accepted for presentation and discussion at the second edition of the international conference Europe, Middle East, and North Africa Conference on Information Systems and Technologies to Support Learning 2018 (EMENA-ISTL'18). This conference had the support of the University Mohamed First Oujda, Morocco, AISTI (Iberian Association for Information Systems and Technologies/Associação Ibérica de Sistemas e Tecnologias de Informação), and the Private University of Fez, Morocco. It took place in Fez, Morocco, from October 25 to 27, 2018. The EMENA-ISTL'18 conference has two aims. First, it provides the ideal opportunity to bring together professors, researchers, and higher education students of different disciplines, to discuss new issues, and to discover the most recent developments, research, and trends in information and communication technologies, emerging technologies, and security to support learning. The second aim is to boost future collaboration and cooperation between researchers and academicians from Europe, Middle East and North Africa (EMENA) universities. The Program Committee of EMENA-ISTL'18 was composed of a multidisciplinary group of experts who are intimately concerned with information and communication technologies, artificial intelligence, and security. They have had the responsibility for evaluating, in a "blind review" process, the papers received for each of the main themes proposed for the Conference: (A) Information Systems Technologies to Support Education; (B) Education in Science, Technology, Engineering and Mathematics; (C) Emerging Technologies in Education and Learning Innovation in the Digital Age; (D) Software Systems, Architectures, Applications and Tools; (E) Multimedia Systems and Applications; (F) Computer Communications and Networks; (G) IoT, Smart Cities and People, Wireless, Sensor and Ad-Hoc Networks; (H) Organizational Models and Information Systems and Technologies; (I) Human-Computer Interaction; (J) Computers & Security, Ethics and Data Forensics; (K) Health Informatics and Medical Informatics Security; (L) Information and Knowledge Management; (M) Big Data Analytics and Applications, Intelligent Data Systems, and Machine Learning; (N) Artificial Intelligence, High Performance Computing; (O) Mobile, Embedded and Ubiquitous



Systems; (P) Language and Image Processing, Computer Graphics and Vision; (Q) Interdisciplinary Field of Fuzzy Logic and Data Mining. EMENA-ISTL'18 received 150 contributions from 44 countries around the world. The papers accepted for presentation and discussion at the conference are published by Springer (this book) and by EMENA-ISTL'18 (another e-book) and will be submitted for indexing by ISI, EI-Compendex, SCOPUS, DBLP, and/or Google Scholar, among others. Extended versions of selected best papers will be published in relevant journals, including SCI/SSCI and Scopus indexed journals. We acknowledge all those who contributed to the staging of EMENA-ISTL'18 (authors, committees, and sponsors); their involvement and support are very much appreciated.

October 2018

Álvaro Rocha Mohammed Serrhini

Organization

Conference General Chair

Mohammed Serrhini, University Mohammed First Oujda, Morocco

Conference Co-chairs

Álvaro Rocha, University of Coimbra, Portugal
Latif Ladid, Université du Luxembourg

Local Chairs

Mohammed Ouazzani Jamil (Dean), Private University of Fez, Morocco
El-Mostafa Daoudi, University Mohamed First Oujda, Morocco
Ahmed Laaroussi (Head of ESMAB), Private University of Fez, Morocco
Abdelaziz Ait Moussa, University Mohamed First Oujda, Morocco
Abdelillah Monir, University Mohamed First Oujda, Morocco
Tarik Hajji, Private University of Fez, Morocco
Ahmed Tahiri, University Mohamed First Oujda, Morocco
El Miloud Jaara, University Mohamed First Oujda, Morocco
Zakaria Itahriouan, Private University of Fez, Morocco
Noura Ouerdi, University Mohamed First Oujda, Morocco

Advisory Committee

Mohammed Aziz Lahlou, President of the Private University of Fez



Saâd Daoudi, Director of the Private University of Fez
Antonio J. Jara, University of Applied Sciences Western Switzerland
Mohamed Salim Bouhlel, University of Sfax, Tunisia

Program Committee Olaf Maennel Gustavo Alves Mohsine Eleuldj Houda Hakim Guermaz Ounsa Roudies Álvaro Rocha Ernest Cachia (Dean) Roza Dumbraveanu Raúl Cordeiro Correia Ronan Champagnat Rita Francese Naceur Ben Hadj Braiek Fernando Moreira Maria José Angélico Gonçalves Maria José Sousa James Uhomoibhi Jarno Limnéll Esteban Vázquez Cano Juarez Bento Silva Anouar Belahcen Peter Mikulecky Katherine Maillet Rafael Valencia-Garcia Luis Anido Rifon Mraoui Hamid Carla Silva Rolou Lyn Rodriguez Maata Ali Shaqour Ahmed Tahiri Abdullah Al-Hamdani Muzafer Saracevic

Tallinn University of Technology, Estonia School of Engineering, Polytechnic of Porto, Portugal Mohammadia School of Engineering, Morocco Manouba University, Tunisia Mohammadia School of Engineering, Morocco University of Coimbra, Portugal Faculty of ICT University of Malta, Malta Roza Dumbraveanu, University,Chisinau, Moldova Instituto Politécnico de Setúbal, Portugal Universite de La Rochelle, France University of Salerno, Italy Polytechnic School of Tunis, Tunisia Oporto Global University, Portugal ISCAP/Polytechnic Institute of Porto, Portugal Universidade Europeia de Lisboa, Portugal University of Ulster, UK Aalto University, Finland Universidad Nacional de Educación a Distancia, Spain Universidade Federal de Santa Catarina, Brasil Aalto University, Finland University of Hradec Kralove, Czech Institut Mines-Télécom Paris, France Universidad de Murcia, Spain Universidade de Vigo, Spain Faculty of Sciences Oujda, Morocco University Lusófona de Humanidades e Tecnologias Lisbone, Portugal Faculty of Computing Sciences - Gulf College Oman, Oman An-Najah National University, Palestine University Mohamed First Oujda, Morocco Sultan Qaboos University, Muscat, Oman International University of Novi Pazar, Serbia


Manuel Caeiro Rodríguez Rafik Zitouni Utku ZKose Noura Ouerdi Tajullah Sky-Lark Otmane Ait Mohamed Mohammad Hamdan Wail Mardini Francesca Pozzi Filipe Cardoso Abdel-Badeeh Salem Mohammad Al-Smadi Mohamad Badra Amal Zouaq Pedro Guerreiro El Bekkay Mermri Martin Llamas-Nistal Camille Salinesi Jorge Pires Ali Jaoua Osama Shata Abdelkarim Erradi Mohammed Gabli Osama Halabi Rachid Nourine Abdelhafid Bessaid Lehsaini Mohamed Carla Silva John Sahalos Lebbah Yahia Kashif Saleem Amjad Gawanmeh Abdulmalik Al-Salman Olivier Markowitch Ladan Mohammad Tolga Ensari David Baneres Yahya Tashtoush Alberto Cardoso StephanieTeufel


Universidade de Vigo, Spain Ecole d’ingénieur généraliste et high-tech à Paris, France Usak University, Turkey University Mohamed First Oujda, Morocco Sustainable Knowledge Global Solutions, USA Concordia University, Canada Yarmouk University, Jordan Jordan University of Science and Technology, Jordan Istituto Tecnologie Didattiche - CNR, Italy Polytechnic Institute of Setubal, Portugal Ain Shams University, Egypt Jordan University of Science and Technology, Jordan Zayed University, United Arab Emirates Royal Military College of Canada, Canada Universidade do Algarve, Portugal University Mohamed First Oujda, Morocco University of Vigo, Spain CRI, Université de Paris 1 Panthéon-Sorbonne, France Polytechnic Institute of Castelo Branco, Portugal Qatar University, Qatar Qatar University, Qatar Qatar University, Qatar University Mohammed Premier, Oujda, Morocco Qatar University, Qatar Oran University, Algieria University of Tlemcen, Algieria University of Tlemcen, Algieria University Lusófona de Lisbone, Portugal University of Nicosia, Cyprus University of Oran 1, Algieria King Saud University, Saudi Arabia Khalifa University, United Arab Emirates King Saud University, Saudi Arabia Université Libre de Bruxelles, Belgium Rafik Hariri University, Lebanon Istanbul University, Turkey Universitat Oberta de Catalunya, Spain Jordan University of Science and Technology, Jordan University of Coimbra, Portugal University of Fribourg, Switzerland


Majida Ali Abed Alasady (Associate Dean), Tikrit University, Iraq
Pierre Manneback, Faculté Polytechnique de Mons, Belgium
Mohammed Benjelloun, Faculté Polytechnique de Mons, Belgium

Contents

Categorization of Types of Internautes Based on Their Navigation Preferences Within Educational Environments . . . 1
Hector F. Gomez A, Susana A. Arias T, Carlos E. Martinez, Miguel A. Martínez V, Natalia Bustamante Sanchez, and Estefania Sanchez-Cevallos

Metaheuristic Approaches for Solving University Timetabling Problems: A Review and Case Studies from Middle Eastern Universities . . . 10
Manar Hosny

Ontology-Based Modeling for a Personalized MOOC Recommender System . . . 21
Sara Assami, Najima Daoudi, and Rachida Ajhoun

A New Model of Learner Experience in Online Learning Environments . . . 29
Yassine Safsouf, Khalifa Mansouri, and Franck Poirier

Hybrid Recommendation Approach in Online Learning Environments . . . 39
Mohammed Baidada, Khalifa Mansouri, and Franck Poirier

Performance Scoring at University: Algorithm of Student Performance . . . 44
Adnane Rahhou and Mohammed Talbi

Introducing Mobile Technology for Enhancing Teaching and Learning at the College of Business Education in Tanzania: Teachers and Students' Perspectives . . . 56
Godfrey Mwandosya, Calkin Suero Montero, and Esther Rosinner Mbise

Information System of Performance Management and Reporting at University: Example of Student Scorecard . . . 67
Adnane Rahhou and Mohammed Talbi

Mobile Learning Oriented Towards Learner Stimulation and Engagement . . . 77
Samir Achahod, Khalifa Mansouri, and Franck Poirier

A Learning Style Identification Approach in Adaptive E-Learning System . . . 82
Hanaa El Fazazi, Abderrazzak Samadi, Mohamed Qbadou, Khalifa Mansouri, and Mouhcine Elgarej

Towards Smart Innovation for Information Systems and Technology Students: Modelling Motivation, Metacognition and Affective Aspects of Learning . . . 90
James Ngugi and Leila Goosen

Key Elements of Educational Augmented and Virtual Reality Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Houda Elkoubaiti and Radouane Mrabet Teaching and Learning How to Program Without Writing Code . . . . . 106 Michel Adam, Moncef Daoud, and Patrice Frison Towards a Dynamics of Techno-Pedagogical Innovation Within the University: Case Study Hassan II University of Casablanca . . . . . . . . . 118 Nadia Chafiq, Mohamed Housni, and Mohamed Moussetad Towards the Design of an Innovative and Social Hybrid Learning Based on the SMAC Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Nadia Chafiq and Mohamed Housni Information Systems and Technologies Opening New Worlds for Learning to Children with Autism Spectrum Disorders . . . . . . . . . . 134 Leila Goosen Technologies to Inspire Education in Science, Engineering and Technology Through Community Engagement in South Africa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 Patricia Gouws and Leila Goosen Promoting Pro-environmental Practices About Trees in Children Using Infrared Thermography Technology . . . . . . . . . . . . . . . . . . . . . . 154 Maria Eduarda Ferreira, João Crisóstomo, and Rui Pitarma Emerging Technologies and Learning Innovation in the New Learning Ecosystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Helene Fournier, Heather Molyneaux, and Rita Kop



Exploring the Acceptance of Mobile Technology Application for Enhancing Teaching and Learning at the College of Business Education in Tanzania . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 Godfrey Mwandosya, Calkin Suero Montero, and Esther-Rosinner Mbise Analysis of Atmospheric Monitoring Data Through Micro-meteorological Stations, as a Crowdsourcing Tool for Technology Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 Maritza Aguirre-Munizaga, Katty Lagos-Ortiz, Vanessa Vergara-Lozano, Karina Real-Avilés, Mitchell Vásquez-Bermudez, Andrea Sinche-Guzmán, and José Hernández-Rosas Infra SEN: Intelligent Information System for Real Time Monitoring of Distributed Infrastructures and Equipments in Rural Areas . . . . . . . 188 Bala Moussa Biaye, Khalifa Gaye, Cherif Ahmed Tidiane Aidara, Amadou Coulibaly, and Serigne Diagne Product-BPAS, A Software Tool for Designing Innovative and Modular Products for Agriculture and Crafts . . . . . . . . . . . . . . . . . 194 Chérif Ahmed Tidiane Aidara, Bala Moussa Biaye, Serigne Diagne, Khalifa Gaye, and Amadou Coulibaly An Efficient Next Hop Selection Scheme for Enhancing Routing Performance in VANETs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 Driss Abada, Abdellah Massaq, and Abdellah Boulouz Comparative Performance Study of QoS Downlink Scheduling Algorithms in LTE System for M2M Communications . . . . . . . . . . . . . 216 Mariyam Ouaissa, Abdallah Rhattoy, and Mohamed Lahmer Uberisation Business Model Based on Blockchain for Implementation Decentralized Application for Lease/Rent Lodging . . . . . . . . . . . . . . . . . 225 Saleh Hadi, Alexandrov Dmitry, and Dzhonov Azamat New Failure Detection Approach for Real Time for Hydraulic Networks Using the Non-acoustic Method . . . . . . . . . . . . . . . . . . . . . . . 233 Bala Moussa Biaye, Cherif Ahmed Tidiane Aidara, Amadou Coulibaly, Khalifa Gaye, Serigne Diagne, and Edouard Ngor Sarr Fault-Tolerant Communication for IoT Networks . . . . . . . . . . . . . . . . . 245 Abdelghani Boudaa and Hocine Belouadah Emergency Navigation Approach Using Wireless Sensor Networks and Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 Najla Alnabhan, Nadia Al-Aboody, and Hamed Al-Rawishidy Security Scheme for IoT Environments in Smart Grids . . . . . . . . . . . . . 269 Sebastián Cruz-Duarte, Marco Sastoque-Mahecha, Elvis Gaona-García, and Paulo Gaona-García



Dynamic Airspace Sectorization Problem Using Hybrid Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282 Mohammed Gabli, El Miloud Jaara, and El Bekkaye Mermri A Semantic Framework to Improve Model-to-Model Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 Mohamed Elkamel Hamdane, Karima Berramla, Allaoua Chaoui, and Abou El Hasan Benyamina An Embedded Prototype System for People with Disabilities Using Google’s Speech . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 Maritza Aguirre-Munizaga, Vanessa Vergara-Lozano, Carlota Delgado, Joel Ramirez-Yela, and Néstor Vera Lucio Measuring Semantic Coverage Rates Provided by Cached Regions in Mediation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 Ouafa Ajarroud, Ahmed Zellou, and Ali Idri Using Probabilistic Direct Multi-class Support Vector Machines to Improve Mental States Based-Brain Computer Interface . . . . . . . . . . 321 Mounia Hendel and Fatiha Hendel Tracking Attacks Data Through Log Files Using MapReduce . . . . . . . . 331 Yassine Azizi, Mostafa Azizi, and Mohamed Elboukhari Toward a New Integrated Approach of Information Security Based on Governance, Risk and Compliance . . . . . . . . . . . . . . . . . . . . . 337 Mounia Zaydi and Bouchaib Nassereddine A Novel Steganography Algorithm Based on Alpha Blending Technique Using Discrete Wavelet Transform (ABT-DWT) . . . . . . . . . . 342 Ayidh Alharbi and Tahar M. Kechadi A Comparison of American and Moroccan Governmental Security Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Rabii Anass, Assoul Saliha, Ouazzani Touhami Khadija, and Roudiès Ounsa Polyvalent Fingerprint Biometric System for Authentication . . . . . . . . . 361 Mohamed El Beqqal, Mostafa Azizi, and Jean Louis Lanet Bitcoin Difficulty, A Security Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . 367 Abdenaby Lamiri, Kamal Gueraoui, and Gamal Zeggwagh Result Oriented Time Correlation Between Security and Risk Assessments, and Individual Environment Compliance Framework . . . . 373 Dimo Dimov and Yuliyan Tsonev



Classification of Ransomware Based on Artificial Neural Networks . . . . 384 Noura Ouerdi, Tarik Hajji, Aurelien Palisse, Jean-Louis Lanet, and Abdelmalek Azizi On the Efficiency of Scalar Multiplication on the Elliptic Curves . . . . . . 393 Siham Ezzouak and Abdelmalek Azizi Patients Learning Process Supporting Change in Identities and Life Styles - A Heart Failure Self-care Scenario . . . . . . . . . . . . . . . 400 Linda Askenäs and Jan Aidemark Privacy Preserving Requirements for Sharing Health Data in Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412 Insaf Boumezbeur and Karim Zarour Using IoT and Social Networks for Enhanced Healthy Practices in Buildings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424 Gonçalo Marques and Rui Pitarma IOT System for Self-diagnosis of Heart Diseases Using Mathematical Evaluation of Cardiac Dynamics Based on Probability Theory . . . . . . . 433 Juan Piedrahita-Gonzalez, Juan Cubillos-Calvachi, Carlos Gutiérrez-Ardila, Carlos Montenegro-Marin, and Paulo Gaona-García Modeling the OWASP Most Critical WEB Attacks . . . . . . . . . . . . . . . . 442 Yassine Ayachi, El Hassane Ettifouri, Jamal Berrich, and Bouchentouf Toumi Measurement of Co-deployment of IT Quality Standard: Application to ISO9001, CMMI and ITIL . . . . . . . . . . . . . . . . . . . . . . . 451 Hind Dahar and Ounsa Roudies Rating Microfinance Products Consumers Using Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Tarik Hajji and Ouazzani Mohammed Jamil Applying Agile Procedure Model to Improve ERP Systems Implementation Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471 Majda El Mariouli and Jalal Laassiri Clustering Strategy for Scientific Workflow Applications in IaaS Cloud Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482 Sid Ahmed Makhlouf and Belabbas Yagoubi Open Government Data: Problem Assessment of Machine Processability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 Hanae Elmekki, Dalila Chiadmi, and Hind Lamharhar



Toward an Evaluation Model for Open Government Data Portals . . . . 502 Kawtar Younsi Dahbi, Hind Lamharhar, and Dalila Chiadmi NoSQL Scalability Performance Evaluation over Cassandra . . . . . . . . . 512 Maryam Abbasi, Filipe Sá, Daniel Albuquerque, Cristina Wanzeller, Filipe Caldeira, Paulo Tomé, Pedro Furtado, and Pedro Martins A Novel Filter Approach for Band Selection and Classification of Hyperspectral Remotely Sensed Images Using Normalized Mutual Information and Support Vector Machines . . . . . . . . . . . . . . . . 521 Hasna Nhaila, Asma Elmaizi, Elkebir Sarhrouni, and Ahmed Hammouch Differences Between Clusterings as Distances in the Covering Graph of Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 Giovanni Rossi A New Supervised Learning Based Ontology Matching Approach Using Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542 Meriem Ali Khoudja, Messaouda Fareh, and Hafida Bouarfa VACIT: Tool for Consumption, Analysis and Machine Learning for LOD Resources on CKAN Instances . . . . . . . . . . . . . . . . . . . . . . . . 552 Álvaro Varón-Capera, Paulo Alonso Gaona-García, Jhon Francined Herrera-Cubides, and Carlos Montenegro-Marín Selecting Best Machine Learning Techniques for Breast Cancer Prediction and Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 Youness Khourdifi and Mohamed Bahaj Identification of Human Behavior Patterns Based on the GSP Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572 Hector F. Gomez A, Edwin Fabricio Lozada T., Luis Antonio Llerena, Jorge Alonso Benitez Hurtado, Richard Eduardo Ruiz Ordoñez, Freddy Giancarlo Salazar Carrillo, Joselito Naranjo-Santamaria, and Teodoro Alvarado Barros Matchstick Games: On Removing a Matchstick Without Disturbing the Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580 Godfried T. Toussaint Multi-cloud Resources Optimization for Users Applications Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588 Anas Mokhtari, Mostafa Azizi, and Mohammed Gabli An Algorithm of Conversion Between Relational Data and Graph Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594 Zakariyaa Ait El Mouden, Abdeslam Jakimi, and Moha Hajar



Self-calibration of the Fundus Camera Using the Genetic Algorithm . . . 603 Mostafa Taibi, Rabha Allaoui, and Raja Touahni Utilizing Faults and Time to Finish Estimating the Number of Software Test Workers Using Artificial Neural Networks and Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613 Alaa Sheta, Sultan Aljahdali, and Malik Braik A Comparative Analysis of Control Strategies for Stabilizing a Quadrotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625 Moussa Labbadi, Mohamed Cherkaoui, Yassine El Houm, and M’hammed Guisser A New Approach Based on Bat Algorithm for Inducing Optimal Decision Trees Classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631 Ikram Bida and Saliha Aouat A New Parallel Method for Medical Image Segmentation Using Watershed Algorithm and an Improved Gradient Vector Flow . . . . . . . 641 Hayat Meddeber and Belabbas Yagoubi Object Detecting on Light Field Imaging: An Edge Detection Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652 Yessaadi Sabrina and Mohamed Tayeb Laskri Arab Handwriting Character Recognition by Curvature . . . . . . . . . . . . 662 Aissa Kerkour Elmiad Language Identification for User Generated Content in Social Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672 Randa Zarnoufi, Hamid Jaafar, and Mounia Abik Vision-Based Distance Estimation Method Using Single Camera: Application to the Assistance of Visually Impaired People . . . . . . . . . . . 679 Wafa Saad Al-shehri, Salma Kammoun Jarraya, and Manar Salama Ali 1D Signals Descriptors for 3D Shape Recognition . . . . . . . . . . . . . . . . . 687 Kaoutar Baibai, Mohamed Emharraf, Wafae Mrabti, Youssef Ech-choudani, Khalid Hachami, and Benaissa Bellach Dynamic Textures Segmentation and Tracking Using Optical Flow and Active Contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694 Ikram Bida and Saliha Aouat Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705

Categorization of Types of Internautes Based on Their Navigation Preferences Within Educational Environments

Hector F. Gomez A1(&), Susana A. Arias T1, Carlos E. Martinez2, Miguel A. Martínez V2, Natalia Bustamante Sanchez3, and Estefania Sanchez-Cevallos3

1 Facultad de Ciencias Humanas y de la Educacion, Universidad Técnica de Ambato, Ambato, Ecuador
[email protected]
2 Universidad Regional Autonoma de los Andes-UNIANDES, Km. 5 1/2 via a Baños, Ambato, Ecuador
{ua.carlosmartinez,ua.miguelmartinez}@uniandes.edu.ec
3 Sec. Deptal. Hoteleria y Turismo, Universidad Tecnica Particular de Loja, Loja, Ecuador
{ncbustamante,resanchez}@utpl.edu.ec

Abstract. In this article, the state of the art of data mining applied to obtaining frequent navigation behaviors in an educational environment is described. The procedure used by the data mining algorithms chosen to classify Internet users based on their browsing preferences is explained. The records used for the training of the algorithms are described, and finally a comparison of the efficiency of the categorization is made.

Keywords: Sequential patterns · Behavior · Data mining · Internet models · States · Frequent sequences · Itemsets · Patterns · Human behavior

1 Introduction

Some of the main applications of data mining techniques in educational environments are personalization systems [1, 2], recommender systems [3], modification systems [4], and irregularity detection systems [5], among others, since they offer capacities for the discovery of common behaviors [6] (navigation patterns) of regular and irregular navigation, the adaptive construction of teaching plans, the discovery of relationships between activities, the incremental diagnosis of the performance of the students, etc. According to Romero et al. [7], data mining applies to education from two perspectives. The first is oriented towards the authors, whose aims are to obtain data that serve to feed the teaching, establish methods to know how students learn using the Internet, determine the students' browsing patterns, determine techniques for restructuring the contents of the website, classify students into groups, etc.




The second is oriented towards students, and its purpose is to make recommendations to Internet users during their interaction with virtual learning systems. In this research, the result of applying data mining algorithms to the categorization of Internet users is evaluated, depending on their browsing preferences. These results can be used in the personalized design of a learning environment, giving the user tools or information related to their individual interests [8–10].

2 Description of the Problem

Internet users access web pages sequentially, with the possibility of re-entering those that are of interest. When observing a set of Internet users, university students, grouped by affinity, who access web pages freely, it is possible to verify that at a certain moment they will access the same pages, depending on the area of interest [11]. The first step of this research is to discover the surfing patterns of Internet users, grouped by affinity. The next step is to compare the browsing preferences of new Internet users with the browsing patterns obtained in the previous step. If the result of the comparison is true, then the new surfer is grouped in the corresponding category. Otherwise, a new navigation behavior is detected, whose treatment is explained through the GSP_M algorithm, which is detailed below. In the following paragraphs, the definitions for state, itemset and sequence are presented. Later, the algorithms that were applied are explained, in order to obtain common navigation patterns [12, 13].

2.1 GSP_M Algorithm

The GSP_M algorithm (GSP_Memory) [14] is based on the GSP algorithm, differing from it in that it includes a software structure that allows: (a) taking into account repeated itemsets; (b) memorizing sequences that could not become a pattern, in order to update the database of patterns based on them; and (c) obtaining the weight of the states, since, according to the records of access to educational environments by Internet users, the number of times a user accesses the same page becomes a repetitive state, which helps distinguish between one type of Internet user and another. The components of the software structure included in GSP_M are described below.

Modification of the input sequences to allow repeated itemsets: Given BD as the sequence database, and itemsets I1 = {e1, e2, e3}, I2 = {e4, e5, e6} where In Є I, it is possible to construct a sequence s in which an itemset appears more than once. Such a sequence is not admissible for GSP, since repeating itemsets is not allowed [12, 13]. GSP_M, which does allow the repetition of itemsets, divides s into sub-sequences s1 and s2 in which no itemset is repeated. As the first step of GSP is to select the 1-sequences, the modification of GSP_M is valid, since the sub-sequences and their groupings are taken into account.
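As an illustration, a minimal sketch of this pre-processing step is given below. The function name and the exact splitting rule (closing the current sub-sequence whenever an itemset would be repeated inside it) are assumptions made for this sketch, not the authors' original implementation.

```python
def split_on_repeats(sequence):
    """Divide a navigation sequence into sub-sequences without repeated itemsets,
    as assumed for the GSP_M pre-processing of input sequences."""
    subsequences, current, seen = [], [], set()
    for itemset in sequence:
        if itemset in seen:            # repetition: close the current sub-sequence
            subsequences.append(current)
            current, seen = [], set()
        current.append(itemset)
        seen.add(itemset)
    if current:
        subsequences.append(current)
    return subsequences


# Example: the navigation of mathematician M1 from Table 1 below (itemsets as page names)
m1 = ["Informacion General", "Informacion General", "Ejercicios"]
print(split_on_repeats(m1))
# -> [['Informacion General'], ['Informacion General', 'Ejercicios']]
```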



For example, if a group of mathematicians is asked to access educational environments on the Internet, they will probably access pages of information relevant to mathematics, but will attach more or less importance to visiting specialized pages, reports, exercises, tasks, etc. If the group of mathematicians is given one hour to access the pages of their choice in educational environments, they will generate access sequences such as the following.

Table 1. Example of access to the Internet by a surfer by profession

M1                     M2          M3
Información General    Ejercicios  Ejercicios
Información General    Reportes    Páginas especializadas
Ejercicios             Reportes    Páginas especializadas

Table 1 shows the accesses of the mathematical internauts. The mathematician M1 produced a sequence containing a repeated state; according to the GSP_M algorithm, it is therefore divided into two sub-sequences, M11 and M12. The mathematicians M2 and M3 will also generate sub-sequences depending on the repetition of their states.

Module to memorize the sequences that could not become a pattern: Based on the records of access to educational environments by Internet users, a database of sequences BD was built, and GSP was applied as proposed by Srikant [12], discarding those sequences that could not become patterns because they did not exceed the minimum support threshold. To classify a new surfer, it is necessary to determine whether a new sequence Snueva, generated by a new user who is accessing the educational environment, is included in the sequence set of the pattern database BDP obtained by GSP. The negative discrimination by GSP of the sequences that did not exceed the threshold becomes a problem, since these discarded sequences can represent a new pattern of behavior. To correct this situation, when applying the GSP_M algorithm, a new database called SNEU (sequences that do not exceed the threshold) is built with the sequences discarded by GSP, according to what is proposed in Algorithms 1 and 2. When a new sequence Snueva enters, it is verified whether it is included in BDP; if the result is affirmative, the surfer is categorized. If Snueva is not included in BDP, its inclusion in SNEU is verified. If the result of this verification is affirmative, the frequency of Snueva is increased by 1. If the new frequency assigned to Snueva exceeds the threshold, then a new pattern has been generated (see Algorithm 2), so Snueva is eliminated from SNEU and included in BDP. If the result of the previous verification is negative, Snueva is included in SNEU with frequency 1.
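The handling of a new sequence described above can be sketched roughly as follows. The data structures (dictionaries keyed by the sequence, represented as a tuple of itemsets) and the signature of the inclusion test are simplifications assumed for this sketch, not the authors' implementation.

```python
def handle_new_sequence(s_new, bdp, sneu, threshold, is_included):
    """Sketch of the GSP_M treatment of a new navigation sequence (Algorithms 1 and 2).

    s_new     : tuple of itemsets produced by the new Internet user
    bdp       : dict mapping pattern sequences to their support (the BDP database)
    sneu      : dict mapping discarded sequences to their frequency (the SNEU database)
    is_included(a, b): assumed helper returning True if sequence a is included in b
    """
    # Try to categorize the surfer with an existing behavior pattern
    for pattern in bdp:
        if is_included(s_new, pattern):
            return "categorized", pattern

    # Otherwise remember the sequence in SNEU and check whether it becomes a pattern
    sneu[s_new] = sneu.get(s_new, 0) + 1
    if sneu[s_new] > threshold:
        bdp[s_new] = sneu.pop(s_new)   # promoted: a new behavior pattern is generated
        return "new pattern", s_new
    return "stored in SNEU", s_new
```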



Algorithm 1

As an example of the operation of Algorithm 1, an experiment was proposed in which sequences with the characteristics described in Table 2 were generated.

Table 2. Example input data for GSP

ID   S
10
20
30

Executing GSP with a minimum support of 0.5, five sequences were registered in the BDP database, while the sequences that did not reach this support were recorded in the SNEU database as discarded sequences. The sequences contained in SNEU are taken into account to verify whether a sequence that is being included in SNEU exceeds the minimum support threshold and thus becomes a new behavior pattern. This process is done by applying Algorithm 2.



Algorithm 2 uses the memory capacity of GSP_M and the inclusion characteristics of one sequence in another to obtain the new patterns. As examples of the application of this algorithm, two experiments were proposed. In both experiments the sequences S1 and S2 described in Tables 3 and 4 were compared. The results of the comparison are included in the same tables.

Table 3. Comparison of two sequences: sequence 1 is included in sequence 2.

Sequence 1 (S1)       Sequence 2 (S2)
a1 = Itemset(3)       b1 = Itemset(7)
a2 = Itemset(4 5)     b2 = Itemset(3 8)
a3 = Itemset(8)       b3 = Itemset(9)
                      b4 = Itemset(4 5 6)
                      b5 = Itemset(8)

Decomposition of S2: b11 = {(7)}, b12 = {}; b21 = {(3)}, b22 = {(8)}; b41 = {(4)}, b42 = {(5)}, b43 = {(6)}; b51 = {(8)}
Matching: a1 is in b21; a2 = (4 5), so a21 = {(4)} and a22 = {(5)}; a21 is in b41 = {(4)}; a22 is in b42 = {(5)}; a31 is in b51

As the main result of the experiment described in Table 3, the algorithm divided the itemset b2 of sequence 2 into two parts, in such a way that the itemset a1 of sequence 1 could be identified as corresponding to the itemset b21 of sequence 2.



Table 4. Application of the sequence inclusion algorithm: sequence 1 is not included in sequence 2.

Sequence 1 (S1)                          Sequence 2 (S2)
a1 = (3 5), where a11 = (3), a12 = (5)   b11 = {(3)}, b12 = {}; b21 = {(5)}, b22 = {}
Result: a11 is in b11; a12 is not in b12

In the second experiment, although sequence 1 was divided into sub-sequences, it was not possible to determine a position-value correspondence in sequence 2, as shown in Table 4.

Module to memorize the weight of each state: As a first step, the weight of each state E in BDP is determined; for this purpose, sequence by sequence, the states are extracted and the number of repetitions of each state is placed in a list. As a second step, the average P of the repetitions of the states located in the list is obtained, computed as the geometric mean of the set of repetitions of each state. The geometric mean prevents the exaggerated repetition of a state in a single sequence s from distorting the final average. Then, to generate alerts, the number of repetitions of each state in the new sequence is compared with the average P obtained. As an example of the execution of this module, let s1 be the sequence containing the states E1 = {search pages, search pages, search pages, specialized pages, reports}, and let s2 be the sequence containing the states E2 = {search pages, search pages, reports}. The average for the state search pages is 2.23, for the state specialized pages it is 1, and for the state reports the average is 2.
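A minimal sketch of this weight computation is given below, assuming that the weight of a state is the geometric mean of its repetition counts over the sequences in which it appears. The exact counting convention used by the authors may differ, so this sketch does not exactly reproduce the values quoted in the example above.

```python
from math import prod

def state_weights(sequences):
    """Geometric mean of the per-sequence repetition count of each state."""
    counts = {}                                   # state -> list of repetition counts
    for seq in sequences:
        for state in set(seq):
            counts.setdefault(state, []).append(seq.count(state))
    return {s: prod(c) ** (1.0 / len(c)) for s, c in counts.items()}


s1 = ["search pages"] * 3 + ["specialized pages", "reports"]
s2 = ["search pages"] * 2 + ["reports"]
print(state_weights([s1, s2]))
# 'search pages' -> (3 * 2) ** 0.5 ≈ 2.45, 'specialized pages' -> 1.0, 'reports' -> 1.0
```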

3 Experimentation

A test group of 150 volunteer Internet users was formed. The Internet users were grouped by affinity and were allowed to surf educational web environments for 10 h, at a rate of 2 h a day for five days. The interest affinity groups selected were Business Administration, Mathematics and Information Technology [15]. Each group had access to educational web environments in which 1180 navigation resources were offered. Examples of these resources are search pages, specialized magazines, pages of solved exercises, etc. Each resource was given a fuzzy value related to the number of times the Internet user accessed it. Figure 1 shows the resources and the fuzzy values assigned. The training data set was generated by the 150 Internet users, categorized into three groups of 50 members each (Table 5). With the data generated by each of the groups, the GSP algorithm was trained to obtain the behavior patterns of each of the categories of Internet users. Once the algorithm was trained, as a test, the data generated by the 150 Internet users were analyzed again, obtaining the results shown in Table 6. The procedure was repeated for GSP_M.



Fig. 1. Resources accessed categorization: each navigation resource (página, menú, introduccion, economía, microeconomía, estadística, gestión, administración, ingeniería, telecomunicación, humanidades, derecho, etc.) is assigned one of the fuzzy values {poco, medio, muy}.

Table 5. Conformation of affinity groups for training

Companies   50
Computing   50
Maths       50

Table 6. Results of the analysis of the behavior of Internet users with the trained algorithms.

         Companies   Computing   Maths
GSP      36          41          37
GSP_M    43          48          49



The final experimentation was carried out including a group of new volunteers in each of the affinity groups, as shown in Table 7.

Table 7. New Internet users by affinity groups

Companies   16
Computing   46
Maths       17

The data generated by these new Internet users were processed by GSP and GSP_M, obtaining the results shown in Table 8.

Table 8. Results of the application of the algorithms on the behavior of new Internet users.

         Companies   Computing   Maths
GSP      11          27          13
GSP_M    15          44          15

Analysis of the effectiveness rates of the GSP and GSP_M algorithms. The results of the effectiveness rates, related to the application of the GSP and GSP_M algorithms on data from Internet users who access educational environments, are shown in Table 9.

Table 9. Effectiveness rates of the GSP_M and GSP algorithms on Internet user behavior.

Algorithm   Empr   Math   Com
GSP         68%    58%    76%
GSP_M       93%    95%    88%

The analysis of the results obtained allows us to conclude that the effectiveness rate of the GSP_M algorithm exceeds the effectiveness rate of the GSP algorithm. The explanation of why GSP_M is more effective is related to the fact that it allows the handling of repeated itemsets in its sequences and to the determination of the weight of the states. These capabilities are very important in the domain in which Internet users access educational environments, since such users generally repeat and combine their states.
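For reference, the rates in Table 9 are consistent with computing, for each affinity group, the fraction of the new Internet users of Table 7 that each algorithm categorized correctly according to Table 8. The short check below makes that reading explicit; the correspondence between columns and groups is an assumption, and Table 9 appears to truncate rather than round the percentages.

```python
new_users = {"Companies": 16, "Computing": 46, "Maths": 17}       # Table 7
correct = {
    "GSP":   {"Companies": 11, "Computing": 27, "Maths": 13},     # Table 8
    "GSP_M": {"Companies": 15, "Computing": 44, "Maths": 15},
}

for algorithm, hits in correct.items():
    rates = {g: 100.0 * hits[g] / new_users[g] for g in new_users}
    print(algorithm, {g: round(r, 1) for g, r in rates.items()})
# GSP   -> about 68.8%, 58.7%, 76.5%  (Table 9 reports 68%, 58%, 76%)
# GSP_M -> about 93.8%, 95.7%, 88.2%  (Table 9 reports 93%, 95%, 88%)
```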



4 Future Research

The next step is to use this categorization in a recommender system, in order to individualize the access pages of each user and to provide personalization in virtual learning environments. Collaborative and recommender systems that use data mining for the continuous improvement of e-learning courses are also envisaged, so that teachers with similar profiles can share the research results obtained by applying mining locally on their own sources.

References

1. Ventura, S.: Minería de Datos en sistemas educativos, Presentación para el Departamento de Computing y Análisis Numérico de la Universidad de Córdoba
2. Srivastava, J., Mobasher, B., Cooley, R.: Automatic personalization based on web usage mining. Commun. Assoc. Comput. Machin. 43, 142–151 (2000)
3. Li, J., Zaiane, O.: Combining usage, content and structure data to improve web site recommendation. In: International Conference on Electronic Commerce and Web Technologies, Spain (2004)
4. Perkowitz, M., Etzioni, O.: Adaptive web sites: automatically synthesizing web pages. In: National Conference on Artificial Intelligence, WI (1998)
5. Barnett, V., Lewis, T.: Outliers in Statistical Data. John Wiley & Sons, Chichester (1994)
6. Zaïane, O.: Web usage mining for a better web-based learning environment. In: Conference on Advanced Technology for Education, pp. 60–64 (2001)
7. Soto, S.V., Martínez, C.H., Morales, C.R.: Estado actual de la aplicación de la minería de datos a los sistemas de enseñanza basada en web. In: Roberto Ruiz, T. (ed.) III Taller de Minería de Datos y Aprendizaje (TAMIDA 2005) (2005)
8. Mobasher, B., Srivastava, J., Cooley, R.: Data preparation for mining world wide web browsing patterns. Knowl. Inf. Syst. 1, 5–32 (1999)
9. Mobasher, B., Srivastava, J., Cooley, R.: Web mining: information and pattern discovery on the world wide web. In: 9th IEEE International Conference on Tools with Artificial Intelligence, pp. 558–567 (1997)
10. Xin, M., Han, J., Zaiane, O.R.: Discovering web access patterns and trends by applying OLAP and data mining technology on web logs. In: Proceedings of Advances in Digital Libraries ADL 1998, Santa Barbara, CA, USA, pp. 19–29, April 1998
11. Rodríguez, J.: Influencia de Internet sobre la profesión enfermera. Hospital Clínico Universitario de Santiago de Compostela, Revista de enfermería cardiovascular, 25 Septiembre del 2003
12. Agrawal, R., Srikant, R.: Mining sequential patterns: generalizations and performance improvements. In: Proceedings of the 5th International Conference on Extending Database Technology, EDBT, vol. 1057, pp. 3–17 (1996)
13. Pujari, A.: Data Mining Techniques, p. 284. Universities Press (India) (2005)
14. Gómez, H.: Interpretación de alto nivel de Sequence de video conducida por ontologías, Tesis UNED-UTPL
15. E. d. a. a. i. simbólico: scalab.uc3m.es, 9 Agosto 2010. http://scalab.uc3m.es/*dborrajo/cursos/ejercicios-aa.html

Metaheuristic Approaches for Solving University Timetabling Problems: A Review and Case Studies from Middle Eastern Universities

Manar Hosny(&)

College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
[email protected]

Abstract. University timetabling problems are concerned with the assignment of events and tasks that occur frequently in universities, like exams, courses, projects, and faculty load. These problems are difficult and consume a lot of time and effort if done manually. Automating such tasks will save time and cost, and increase the satisfaction of the stakeholders. Since university timetabling problems are mostly NP-hard, heuristics and metaheuristics are often used for solving them. In this survey, we review different university timetabling problems, such as: Examination Timetabling, Course Timetabling, and Staff Timetabling. We also propose a new problem, which is Project Timetabling. In addition, we discuss some case studies that successfully tackled these problems using metaheuristic algorithms. However, due to the huge number of papers published worldwide in this research area, we focus in this survey on papers published in the Middle Eastern region. The findings of this survey indicate that there are many challenges that are still open for further investigation. Focusing on the convenience of the stakeholders and adopting hybrid search methods are among the promising research directions in this field. Project timetabling, which has been introduced in this survey, is also another promising area that is open for further investigation by the interested researchers.

Keywords: Scheduling · Heuristics · Metaheuristics · University timetabling problems

1 Introduction

Producing and managing different schedules are among the frequent tasks that almost every educational institution needs to handle periodically, in order to plan their courses, exams, staff schedules, students' projects, etc. These problems are referred to in the literature as timetabling or scheduling problems, which generally require allocating certain events to specific resources, such as timeslots, rooms, and lecturers. It is indeed one of the important and difficult administrative issues that arise in academic institutions. Most of these problems are subject to a large number of constraints that make them even harder to solve. In general, researchers in the field have classified the



constraints into two types: hard constraints and soft constraints [1, 2]. Hard constraints are those that are mandatory and must be strictly enforced in the solution, in the sense that their violation produces infeasible schedules. Soft constraints, on the other hand, are desirable but their violation does not make the schedule infeasible. The task of generating a timetable manually is a time consuming and tedious process, which in turn usually fails to satisfy all essential constraints. Therefore, designing university timetables automatically has great benefits in reducing the workload of the scheduler and satisfying the preferences of the staff members and the constraints of the administration.

Over the past decades, university scheduling problems have attracted the attention of the scientific community and have been subject to extensive research [1], particularly in computer science and operations research. Many timetabling problems are classified under the category of combinatorial optimization problems. Solving combinatorial optimization problems with large instances using exact methods is usually unsatisfactory in terms of the computational time or effort involved, which is why using (meta)heuristics, i.e., approximate solution methods, is a sound alternative [3]. Among the most famous metaheuristic techniques are: Hill Climbing (HC), Simulated Annealing (SA), Genetic Algorithms (GAs), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Tabu Search (TS) and Variable Neighborhood Search (VNS).

Due to the huge amount of research on timetabling problems, we limit this survey to timetabling problems tackled in educational institutions within the Middle East, focusing mainly on those applying heuristic or metaheuristic approaches. We aim to shed light on these important scheduling problems, by providing an overview of different university timetabling problem types and giving examples of successful research dealing with these problems. We aspire to motivate researchers to develop new solution methods that can address the underlying challenges and help in the automation of this tedious process for real life applications. This survey covers selected articles published during the period 2011–2018 in the Middle East region. We conclude our survey with a discussion on some open challenges and promising areas for further research directions in university timetabling problems.

The rest of this paper is organized as follows: Sect. 2 defines the different university scheduling problems and models. The problems are classified into the well-known problems: Examination Timetabling (ET), Course Timetabling (CT), and Staff Timetabling (ST). In addition, we propose a new problem, which is Project Timetabling (PT). Section 3 is a brief survey of selected case studies that tackled these problems in Middle Eastern universities. Section 4 is a discussion and analysis of the most important findings of the survey. Finally, Sect. 5 concludes with some important challenges and open research directions that may be followed in further studies by the interested researchers.



2 University Timetabling Problems: Classification and Models

Among the different problem types that can be recognized within the context of academic institutions are: (1) Examination Timetabling (ET), (2) Course Timetabling (CT), (3) Staff Timetabling (ST), and (4) Project Timetabling (PT). The first two are the most popular among research tackling university timetabling problems, while staff timetabling is less common. On the other hand, project timetabling is a new model that we propose in this paper, since we have not encountered a similar problem model in the literature that we surveyed up to date. In what follows we provide a brief overview of each of these problems. Specifically, we address the problem statement, constraints and objective function.

2.1 Examination Timetabling (ET) Problem

The problem addressed here is concerned with assigning exams to certain timeslots, rooms, proctors, etc., such that a number of constraints should be satisfied [4, 5]. As usual, the constraints can be divided into hard constraints and soft constraints. Some examples of hard and soft constraints for this problem are listed below:

Hard Constraints
1. A student cannot take more than one exam at the same time.
2. The room capacity should be enough to hold the number of students taking the exam.
3. A room should not hold more than one exam at the same time.
4. A proctor cannot supervise two exams at the same time.
5. The exam timeslot should be admissible (i.e., within weekdays and within certain working hours).

Soft Constraints
1. A student should not have more than one exam on the same day.
2. A student's exams should be placed as far apart as possible.
3. A proctor should have a maximum number of exams to supervise.
4. A room cannot hold more than one exam at the same time.

The objective function for examination timetabling usually tries to find a feasible assignment, i.e., satisfying all hard constraints, while minimizing the violations of soft constraints. Other objectives may also be considered, like minimizing the total examination period, or the number of allocated exam periods, or optimizing the allocation of resources, such as rooms and proctors.
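A common way to encode such an objective for a metaheuristic is a weighted penalty function in which hard-constraint violations carry a prohibitively large weight. The sketch below illustrates this idea; the weights and violation categories are made-up values for illustration and are not taken from any of the surveyed papers.

```python
# Hypothetical weights: hard violations dominate, so any feasible timetable
# always scores better than an infeasible one.
HARD_WEIGHT = 1_000_000
SOFT_WEIGHTS = {
    "same_day_exams": 10,     # a student has more than one exam on the same day
    "exam_proximity": 5,      # a student's exams are placed too close together
    "proctor_overload": 3,    # a proctor exceeds the allowed number of exams
}

def timetable_cost(hard_violations: int, soft_violations: dict) -> int:
    """Penalty to be minimized by a metaheuristic such as SA, TS or a GA."""
    cost = HARD_WEIGHT * hard_violations
    for name, weight in SOFT_WEIGHTS.items():
        cost += weight * soft_violations.get(name, 0)
    return cost

# Example: a feasible timetable with a few remaining soft violations
print(timetable_cost(0, {"same_day_exams": 2, "exam_proximity": 7}))   # 55
```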

2.2 Course Timetabling (CT) Problem

This problem generally addresses assigning courses to student groups, timeslots and rooms. For this problem type, some of the observed hard and soft constraints are as follows [1]: Hard Constraints 1. No student group can have more than one class at the same time (i.e., lectures of courses taught for the same student group must be scheduled at different periods). 2. The number of students in any lecture should be less than or equal to the maximum capacity of the classroom. 3. No more than one course is allowed to be assigned to the same timeslot in the same room. 4. The room features should match the requirements of the lecture. 5. The number of courses assigned to certain timeslots should not be more than the number of rooms available. 6. The lectures of each course should be spread across a given number of days. Soft Constraints 1. The courses and their prerequisites are scheduled so that students can make their registration easily. 2. Students should not have more than a certain number of consecutive classes. 3. Students should not have consecutive classes in different and far distant places. 4. Students should not have long free time between lectures. 5. Restrict the number of classes after a specific hour. 6. Balance the distribution of courses through all timeslots. Similar to other timetabling problems, the objective function is mainly concerned with satisfying all hard constraints and minimizing the violations of soft constraints. In addition, optimization of resources might also be added as part of the objective function, for example, minimizing the number of rooms or timeslots used. 2.3

2.3 Staff Timetabling (ST) Problem

This problem involves assigning courses to teachers at certain timeslots. Some of the considered hard and soft constraints for this problem are as follows [6, 7]:

Hard Constraints
1. A faculty member cannot teach two courses at the same time.
2. The workload of a faculty member cannot exceed the maximum allowed workload.
3. The assigned timeslot should conform to the availability of the faculty member (for example, in the case of part-time faculty).

Soft Constraints
1. Assign courses to faculty members based on their specialty/rank.
2. Each faculty member is assigned a section at the timeslot that he/she prefers.
3. Instructors should not have very late classes daily.


4. The time gap between different courses assigned to the same faculty member should not exceed a maximum time gap.
5. Each instructor should have one day off.
6. The number of instructors per course should be minimized.
7. The workload should be balanced between faculty members of the same rank.

Besides satisfying hard constraints, the final soft constraint, which is balancing the workload, is usually a main consideration of the objective function for this problem.
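Workload balance is easy to express quantitatively; the short sketch below (our illustration only, with assumed data structures) measures imbalance as the spread between the most and least loaded faculty members, a quantity an objective function could minimize.

```python
# Illustrative sketch: quantify workload balance for staff timetabling.
# `assignments` maps each faculty member to the list of assigned course loads
# (e.g., credit hours); names and the balance measure are assumptions.

def workload_imbalance(assignments):
    """Difference between the heaviest and lightest total workload."""
    totals = [sum(loads) for loads in assignments.values()]
    return max(totals) - min(totals)

assignments = {
    "Dr. A": [3, 3, 4],   # 10 credit hours
    "Dr. B": [3, 3],      # 6 credit hours
    "Dr. C": [4, 3, 3],   # 10 credit hours
}
print(workload_imbalance(assignments))  # 4
```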

2.4 Project Timetabling (PT) Problem

Project timetabling is a new problem that has not been formally defined in the literature yet. We propose here a new formulation that may be adopted by interested researchers. The project timetabling problem may be encountered while assigning teams of undergraduate students to certain project topics and supervisors. Each team works on a chosen project topic under a supervisor, who is a member of the teaching staff. Project presentations are then organized at the end of the semester, and each team must present its project before a committee of academic staff, who grade the students' presentation. Thus, we can divide this problem into two main problems: project assignment and project presentation. The constraints of both problems are described below:

(1) Project Assignment:

Hard Constraints
1. Each student team should be assigned only one topic.
2. Each topic should be assigned to a limited number of supervisors (e.g., one or two).
3. Each supervisor should be assigned a minimum and/or maximum number of projects.

Soft Constraints
1. Assign topics to students according to their preference/track.
2. In case of conflicting preferences, priority should be given to teams with a higher average GPA.
3. Assign supervisors to topics/students according to the preferences of the supervisors.
4. Assign supervisors to topics based on specialty.
5. The number of projects assigned to the supervisors should be balanced.

(2) Project Presentation [8]:

Hard Constraints
1. Each presentation is scheduled only once.
2. There is a maximum number of presentations for each room and for each timeslot.
3. No examiner or observer can attend more than one presentation at the same time.
4. The presentations should be assigned to available rooms in the required timeslots.


5. The supervisors should attend the presentations of the projects they supervise.

Soft Constraints
1. The total number of presentations attended by staff members should be balanced, i.e., a fair distribution of presentations among staff members.
2. Fair distribution of the number of "undesirable" sessions per staff member, e.g., those held late in the day.
3. Staff members should attend project presentations within their research interest.

Again, besides satisfying hard constraints, a fair distribution of topics among students and a balanced distribution of supervision and presentation attendance should be considered in the objective function. In case the students are not already assigned to groups, another optimization problem that may be tackled on top of the project assignment and project presentation problems is Students' Assignment. This problem concerns assigning students to groups in such a way that a certain diversity measure is maximized within each group while the differences between groups are minimized [9]. The opposite is also possible, i.e., maximizing the similarity with respect to some measure within each group and maximizing the differences between groups, which becomes a classical clustering problem. In the following section, we review case studies from Middle Eastern universities that tackled variants of the above-mentioned problems. As before, we have classified them into ET, CT, and ST, while PT is not included since no relevant papers were found in the literature.

3 A Survey of Case Studies on University Timetabling Problems

3.1 Examination Timetabling (ET) Case Studies

Larabi Marie-Sainte [10] recently proposed a hybrid Particle Swarm Optimization (PSO) algorithm for solving examination timetabling for the IT department at King Saud University (KSU), Saudi Arabia. Two hard constraints were considered: scheduling each exam in exactly one timeslot, and ensuring that the number of students taking the exam does not exceed the capacity of the room. In addition, the soft constraints considered are spreading the exam periods for each student as far apart as possible and minimizing the total number of exam periods. The proposed approach is a Binary Free-Parameter PSO (BFPSO) inspired by the concept of tribes, where the swarm consists of a number of tribes and each particle is a timetable. The experimental results showed that the approach achieved better results than the manually designed schedule at the IT department, with 10 timeslots scheduled instead of 14 and most exams well separated. Hosny and Al-Olayan [4] developed a mutation-based Genetic Algorithm (GA) for solving examination scheduling for the MSc courses in the computer science department at KSU. The problem is divided into two parts, where the first part is


concerned with proper room assignment based on the room type and the required number of seats, while the second part assigns proctors to exams based on pre-specified constraints. Chmait and Challita [11] presented an approach to solve the examination scheduling problem using Simulated Annealing (SA) and Ant-Colony Optimization (ACO). The problem addressed concerns assigning a set of periods to exams in such a way that no student has more than one exam at the same time and the room capacity is observed. Regarding the soft constraints, minimizing the total examination period and the comfort of students are taken into consideration during the solution process. Mansour and Sleiman-Haidar [5] proposed a Scatter Search (SS) technique for producing good sub-optimal solutions for the final exam timetabling problem. The problem addressed is assigning the total number of exams to a number of exam periods (the number of exam days multiplied by the number of exam periods per day) and classrooms.
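For readers unfamiliar with these metaheuristics, the following minimal simulated annealing loop (a generic illustration of ours, not the actual implementation of [11]; the `initial`, `random_neighbour` and `cost` callables are assumed to be supplied by the problem at hand) shows the basic accept/reject mechanism that SA-based timetabling approaches build on.

```python
import math
import random

# Generic simulated annealing skeleton for timetabling, shown for illustration.
# `cost` would typically be a weighted count of constraint violations.

def simulated_annealing(initial, random_neighbour, cost,
                        t_start=100.0, t_end=0.1, alpha=0.95, moves_per_temp=50):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            candidate = random_neighbour(current)
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worse moves with a
            # probability that shrinks as the temperature decreases.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost
```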

3.2 Course Timetabling (CT) Case Studies

Alhuwaishel and Hosny [12] proposed a Hybrid Bees/Demon Algorithm (HBD) to solve the CT Problem. The problem involves assigning a list of events to specific timeslots during the day, satisfying hard and soft constraints based on Lewis and Paechter [1] framework, which was developed for the International Timetabling competition (ITC-2007). The methodology used is a Hybrid Bees/Demon (HBD) Algorithm, which incorporates a two-step intensification process. In the first step, the best-fit group from the Bees population is intensified using a simple local search algorithm. The second intensification involves the elite group of the best-fit, using a Demon algorithm. Shaker et al. [13] investigated a combination of two metaheuristics, namely Great Deluge and Tabu search with a set of neighborhood structures for solving the CT problem. The problem involves assigning a set of courses to suitable rooms and timeslots, where certain room features are required by the course. Al-Hegami et al. [14] presented a Nested Guided Search Genetic Algorithm (NTMGA) approach to solve the problem of university coursework timetabling with the object-relational database taking into account the advantages of nested tables and mutation in genetic algorithms. Alsmadi et al. [15] proposed a Genetic Algorithm to solve CT problem for the faculty of engineering and technology at the University of Jordan (JU). Besides the regular hard constraints, the algorithm ensures assigning courses to certain teachers who usually teach these courses, assigning lectures to three separate hours in the week, and assigning labs to three consecutive hours in the week. The soft constraints take into account the preferences of instructors regarding courses and rooms, and also considers balancing the load of the instructors. Bolaji et al. [16] proposed an Artificial Bee Colony (ABC) algorithm for tackling curriculum-based course timetabling problem. The problem concerns scheduling of a set of lectures of courses to a set of rooms and periods on a weekly basis. The methodology used is an improved ABC algorithm, which incorporates two


neighborhood structures: NL-Move, which randomly moves a selected course to a feasible timeslot and room, and NL-Swap, which swaps two randomly selected courses.
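These two neighbourhood structures are simple to express in code. The sketch below is our own illustration of move and swap operators of this general kind (the solution representation and helper names are assumptions, not the implementation of [16]).

```python
import random

# Illustrative move and swap neighbourhood operators for course timetabling.
# A solution maps each lecture to a (timeslot, room) pair; `feasible_slots`
# lists the (timeslot, room) pairs allowed for each lecture. Names are assumed.

def move_operator(solution, feasible_slots):
    """Move one randomly chosen lecture to another feasible timeslot/room."""
    neighbour = dict(solution)
    lecture = random.choice(list(neighbour))
    neighbour[lecture] = random.choice(feasible_slots[lecture])
    return neighbour

def swap_operator(solution):
    """Swap the timeslot/room assignments of two randomly chosen lectures."""
    neighbour = dict(solution)
    a, b = random.sample(list(neighbour), 2)
    neighbour[a], neighbour[b] = neighbour[b], neighbour[a]
    return neighbour
```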

3.3 Staff Timetabling (ST) Case Studies

Al-negheimish et al. [7] recently proposed a Bees Algorithm (BA) hybridized with hill climbing and simulated annealing for solving the faculty scheduling problem in the computer science department at KSU. Besides ensuring the hard constraint of no clashes between courses assigned to each faculty, a number of soft constraints were considered, including balancing the workload, having a day off, and minimizing the number of instructors per course. The results indicate the superiority of the proposed method over the manually generated version of the schedule. In addition, the hybrid version produced better results than the standard BA without hybridization. Hosny [6] developed a heuristic-based algorithm to solve the faculty assignment problem for the Information Technology department, at KSU. The problem involves assigning Teaching Assistants (TAs), both full-time and part-time, to suitable lab sessions that conform to their preferences, their available times, and their permitted workloads.
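A simple greedy strategy for this kind of assignment can be sketched as follows (our illustration only, with assumed names and data structures; it is not the heuristic of [6]): iterate over lab sessions and give each one to an available, non-busy TA with the smallest current load.

```python
# Illustrative greedy heuristic for assigning teaching assistants to lab
# sessions, respecting availability, clashes and maximum workload.
# All structures and names are assumptions made for this example.

def greedy_ta_assignment(labs, availability, max_load):
    """labs: {lab: timeslot}; availability: {ta: set of timeslots};
    max_load: {ta: int}. Returns {lab: ta} for the labs that could be placed."""
    load = {ta: 0 for ta in availability}
    busy = {ta: set() for ta in availability}   # timeslots already taken per TA
    assignment = {}
    for lab, slot in labs.items():
        candidates = [ta for ta in availability
                      if slot in availability[ta]
                      and slot not in busy[ta]
                      and load[ta] < max_load[ta]]
        if candidates:
            ta = min(candidates, key=lambda t: load[t])  # least-loaded TA first
            assignment[lab] = ta
            load[ta] += 1
            busy[ta].add(slot)
    return assignment

labs = {"Lab1": "Mon-10", "Lab2": "Mon-10", "Lab3": "Tue-08"}
availability = {"TA-A": {"Mon-10", "Tue-08"}, "TA-B": {"Mon-10"}}
max_load = {"TA-A": 2, "TA-B": 1}
print(greedy_ta_assignment(labs, availability, max_load))
# {'Lab1': 'TA-A', 'Lab2': 'TA-B', 'Lab3': 'TA-A'}
```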

Table 1. Summary of timetabling problems research

No. | Reference                      | Country      | Algorithm                                        | Exam timetabling | Course timetabling | Staff timetabling
1   | Al-negheimish et al. [7]       | Saudi Arabia | Hybrid Bees, HC, and Demon Algorithm             |                  |                    | ✓
2   | Larabi Marie-Sainte [10]       | Saudi Arabia | Hybrid Particle Swarm Optimization (PSO)         | ✓                |                    |
3   | Alhuwaishel and Hosny [12]     | Saudi Arabia | Hybrid Bees/Demon Algorithm (HBD)                |                  | ✓                  |
4   | Hosny and Al-Olayan [4]        | Saudi Arabia | Mutation-based Genetic Algorithm                 | ✓                |                    |
5   | Chmait and Challita [11]       | Lebanon      | Simulated Annealing and Ant Colony Optimization  | ✓                |                    |
6   | Shaker et al. [13]             | Iraq         | Great Deluge and Tabu Search                     |                  | ✓                  |
7   | Hosny [6]                      | Saudi Arabia | Heuristic-based algorithm                        |                  |                    | ✓
8   | Al-Hegami et al. [14]          | Yemen        | Nested Guided Search Genetic Algorithm (NTMGA)   |                  | ✓                  |
9   | Mansour and Sleiman-Haidar [5] | Lebanon      | Scatter Search (SS)                              | ✓                |                    |
10  | Bolaji et al. [16]             | Jordan       | Artificial Bee Colony (ABC)                      |                  | ✓                  |
11  | Alsmadi et al. [15]            | Jordan       | Genetic Algorithm (GA)                           |                  | ✓                  |

4 Discussion

Table 1 summarizes the references that have been discussed in detail in Sect. 3 above. The table indicates the country of the study, the solution approach used, and the type of problem handled. In general, it can be observed that course timetabling problems have gained the greatest attention from the research community in the Middle Eastern region, with examination timetabling coming next. On the other hand, only two papers deal with faculty scheduling, and none handles project scheduling. It can also be observed from Table 1 that evolutionary algorithms (especially genetic algorithms) are the most popular approach among researchers for handling the different university timetabling problems. Newer metaheuristics, such as the different variants of Bees algorithms and PSO, are increasingly being adopted for solving these problems as well, especially in recent research. Other metaheuristics, like Tabu Search, SA, Scatter Search and Demon Algorithms, are also used, either independently or hybridized with other algorithms. In addition, many algorithms use simple local search (hill climbing) at some stage during the search to improve performance and increase the intensification around good solutions.

5 Conclusions and Future Research Directions

This survey covered important and challenging problems in timetabling and scheduling in educational institutions, namely course timetabling, examination timetabling, staff timetabling and project timetabling. Although these problems have gained a lot of popularity among the research community in the last few decades, there are several open research directions that researchers can investigate or elaborate on. Among the challenges and promising research areas we can identify the following:

1. Considering the convenience of the stakeholders, such as teachers, students and administration, as soft constraints while building the schedules, for example by balancing the load of the faculty, satisfying their preferences in courses and times, and distributing exams conveniently so that students have enough rest periods between them.
2. The quality of the initial solution(s) is critical to the performance of the metaheuristic and may affect the quality of the final solution. Using another metaheuristic to generate initial solutions of good quality, or improving already existing manual schedules, seems to be worth investigating.
3. The difficulty of handling both hard and soft constraints in the objective function calls for exploring different and more sophisticated frameworks, such as multi-objective optimization.
4. Some new metaheuristic approaches that have gained popularity recently are yet to be explored for solving university timetabling problems. These include, among others, the Firefly Algorithm, the Bee Mating Algorithm, the Bat Algorithm, the Big-Bang Big-Crunch Algorithm, Cuckoo Search, League Championship, and the Frog Leaping Algorithm.
5. Finally, project scheduling, which has been introduced in this survey, is another new university scheduling problem that has not gained the attention of researchers to date, despite its prevalence in academia. We believe that this is an attractive research direction that scholars can approach in order to automate this important and difficult task in universities.

To sum up, we believe that university timetabling and scheduling problems are active and dynamic problems. There are many variants and many challenges that are yet to be investigated. We encourage researchers, especially in the Middle East, to pay more attention to such problems and unleash their potential in handling them using novel and interesting solution methods. Acknowledgment. The author would like to extend thanks to Mrs. Shameem Fatima for her great efforts in collecting and categorizing the references presented in this survey.

References 1. Lewis, R., Paechter, B.: Finding feasible timetables using group-based operators. IEEE Trans. Evol. Comput. 11, 397–413 (2007) 2. Hosny, M., Fatima, S.: A survey of genetic algorithms for the university timetabling problem. In: International Proceedings of Computer Science and Information Technology, pp. 34–39 (2011) 3. Dammak, A., Elloumi, A., Kamoun, H., Ferland, J.A.: Course timetabling at a Tunisian university: a case study. J. Syst. Sci. Syst. Eng. 17, 334–352 (2008) 4. Hosny, M., Al-Olayan, M.: A mutation-based genetic algorithm for room and proctor assignment in examination scheduling. In: Proceedings of 2014 Science and Information Conference, SAI 2014 (2014) 5. Mansour, N., Sleiman-Haidar, G.: Parallel scatter search algorithms for exam timetabling. Int. J. Appl. Metaheuristic Comput. 2, 27–44 (2011) 6. Hosny, M.I.: A heuristic algorithm for solving the faculty assignment problem. J. Commun. Comput. 10, 287–294 (2013)


7. Al-negheimish, S., Alnuhait, F., Albrahim, H., Al-mogherah, S., Alrajhi, M., Hosny, M.: An intelligent bio-inspired algorithm for the faculty scheduling problem. Int. J. Adv. Comput. Sci. Appl. 9, 151–159 (2018) 8. Cowling, P., Kendall, G., Soubeiga, E.: Hyperheuristics: a tool for rapid prototyping in scheduling and optimisation. Appl. Evol. Comput. 2279, 269–287 (2002) 9. Weitz, R.R., Jelassi, M.T.: Assigning students to groups: a multi-criteria decision support system approach. Decis. Sci. 23, 746–757 (1992) 10. Marie-Sainte, S.L.: A new hybrid particle swarm optimization algorithm for real-world university examination timetabling problem. In: 2017 Computing Conference, pp. 157–163 (2017) 11. Chmait, N., Challita, K.: Using simulated annealing and ant-colony optimization algorithms to solve the scheduling problem. Comput. Sci. Inf. Technol. 1, 208–224 (2013) 12. Alhuwaishel, N., Hosny, M.: A Hybrid Bees/Demon Optimization Algorithm for Solving the University Course Timetabling Problem, pp. 371–378 (2015) 13. Shaker, K., Abdullah, S., Alqudsi, A., Jalab, H.: Hybridizing meta-heuristics approaches for solving university course timetabling problems. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), LNAI, vol. 8171, pp. 374–384 (2013) 14. Al-qubati, W., Zahary, A., Al-hegami, A.: Using nested tables and mutation in genetic algorithms (NTMGA) to solve timetabling problem in object- relational model, pp. 215–221 (2012) 15. Alsmadi, O.M.K., Abo-Hammour, Z.S., Abu-Al-Nadi, D.I., Algsoon, A.: A novel genetic algorithm technique for solving university course timetabling problems. In: IEEE International Workshop on Systems, Signal Processing and their Applications, WOSSPA, pp. 195–198 (2011) 16. Bolaji, A.L., Khader, A.T., Al-betar, M.A., Awadallah, M.A.: An improved artificial bee colony for course timetabling. In: 2011 Sixth International Conference on Bio-Inspired Computing Theories and Applications, pp. 9–14 (2011)

Ontology-Based Modeling for a Personalized MOOC Recommender System

Sara Assami (1), Najima Daoudi (2), and Rachida Ajhoun (1)

(1) Laboratoire SSL, ENSIAS (National School of Computer Science and Systems), Mohammed Vth University, Rabat, Morocco
[email protected], [email protected]
(2) Département Génie de données et de connaissances, Information Science School, Rabat, Morocco
[email protected]

Abstract. Technology has revolutionized information access and influenced our learning habits. In this context, Massive Open Online Courses (MOOC) platforms emerged to satisfy the web user’s need for a lifelong learning. These platforms include research filters to help the learner find the right learning content, but the high dropout rates suggest the inadequacy of recommended MOOCs to learner needs. Hence, MOOC’s recommendation should reconsider the learner profile modeling to enlarge the scope of recommendation parameters. In this paper, we aim to choose the proper modeling technique for the personalization criteria used in a MOOC recommender system, such as the pace of learning and the cognitive learning style. For this purpose, an ontology-based modeling is used to structure the common concepts deduced from the learner profile and MOOC content. It is also a trace-based approach since it will take into consideration the learning history of a learner profile for an accurate MOOC recommendation. Keywords: MOOC recommendation  Adaptive MOOC Personalization criterion  Learning ontology  Trace-based approach

1 Introduction

Open online educational platforms have triggered great ambitions related to lifelong learning and the "education for all" movement, both in educators and learners. These platforms offer a large choice of online courses, widely called MOOCs (Massive Open Online Courses), to learners from all over the world, not only to facilitate access to education but also to market the image of the educational institutions behind their conception. Independently of their aim, MOOC platforms need to be adaptive and intuitive environments that attract learners with a personalized offer rather than a huge catalog of one-size-fits-all courses. The general purpose of our research is to design a personalized recommender system that considers the learner profile as a representation of the various criteria influencing a learner's choices of and motivations for MOOCs. In our previous publications [1], we identified the main personalization criteria for MOOCs and presented a new approach


to MOOC recommendation. In this paper, we shed light on learner modeling in order to select, from the literature, the right modeling technique for these criteria. Section 2 describes the research related to learner modeling approaches, the criteria and application domains involved, and the ontology-based approach as the key modeling method for our solution. In Sect. 3, we suggest a combined modeling approach and a generic ontology conceptualization to model the learning characteristics that link a learner's needs and preferences with the suitability of MOOC content for learners. In conclusion, we trace the next steps of our research.

2 State of the Art and Related Works

To address the issue of learner modeling, we need to distinguish between two main concepts used to represent a learner in an adaptive hypermedia system or a recommender system: the "learner profile" and the "learner model". Although "user profile" and "user model" are used interchangeably by some authors in the e-learning and MOOC context [2–4] and in recommender systems [5], a learner profile only translates the characteristics that describe a user in a learning situation. It should not be mistaken for a learner model, which is "an abstract representation of relevant information about a learner" [6], where the data generated is the sum of the learner model attribute values at a designated time [6, 7]. Generally, in a user modeling process, the system attributes to each user a representation of knowledge and preferences. It uses this representation to provide adaptivity either by direct intervention or in a collaborative way with the user's consent [8]. Therefore, applying [7]'s definition to user modeling, learner modeling becomes the process of creating and updating a learner model by deriving learner characteristics from the learner data.

2.1 Learner Modeling for Personalized Learning Systems

Identifying the main user modeling structures is a prerequisite to selecting a learner modeling approach, which in turn influences the recommendation process. In a recent publication, [7] distinguishes five user modeling structures: the flat model, which models "a simple collection of variables and associated values" [7], in contrast with the hierarchical model, where "user characteristics and relations between these characteristics" are modeled; then the stereotype models, a broader structure than the "domain overlay" structure; and finally the logic-based structure, based on a "representation and reasoning with first-order predicate logic" [7]. To define the various modeling approaches adapted to learning needs, we had to look at the literature on adaptive hypermedia systems and Intelligent Tutoring Systems, since this topic is rarely elaborated in research about MOOCs. Overall, the most commonly used learner modeling techniques for hypermedia systems are the overlay approach [7, 9] and the stereotype approach [7, 9, 10]. Still, we refer to the rich literature review of [9] to describe the eight main approaches of learner modeling for our research context, in addition to other research papers [10, 11] whose modeling approaches fall in one or more of the following approaches:


Stereotype Approach and Overlay Approach (Domain Overlay). Stereotypes are groups where the affiliated individuals share the same characteristics. "They were extensively used in early adaptive systems (between 1989 and 1994)" [7] and are efficient for "the problem of initializing the student model by assigning the student to a certain group of students" [9]. For example, the learner is strictly identified in one of the clusters "novice", "advanced" or "expert" [11], with an identical adaptation for all learners who belong to the same cluster [10]. In an overlay approach, however, the model represents the level of knowledge of a system user. As a result, "the student model is a subset of the domain model" [9, 10] that is conceived by an expert [10, 11] in concept-value pairs, where the concept is an element of knowledge and the value is a Boolean value or a qualitative measure of the learner's assimilation of the concept [9, 11].

Perturbation Model or Buggy Model. Along with the overlay model, it is an analytical approach that uses the learner traces and knowledge models to deduce the main characteristics of learners [6]. In fact, a perturbation model extends the overlay model so that the student's knowledge includes "possible misconceptions as well as a subset of the expert's knowledge" [9]. The perturbation model therefore includes a list of mistakes called the "bug library" [9] that helps in the detection of errors in concept understanding by learners.

Machine Learning Techniques. Rather than being a formal approach, machine learning can be used as a technique for data collection to model a learner [10]. It forms models from observations of the user's behavior, which provide the training subset for the machine learning algorithm and thereby engenders predictive models for the user's future actions [9]. In the context of Technology-Enhanced Learning Systems, it is a synthetic approach based on the traces and interactions of the learner and uses data mining techniques in particular to deduce his characteristics [6].

Constraint-Based Model. Constraint-based modeling (CBM) is based on "Ohlsson's theory of learning from performance errors" [9, 12], which firstly detects the user's errors, which result from missing or lacking procedural knowledge, and secondly helps to correct these errors [12]. Hence, the CBM model supports adequate feedback to correct the mistakes of learners and is "used to represent both domain and student knowledge" [9].

Fuzzy Student Modeling and Bayesian Networks. The fuzzy logic method was introduced to manage uncertainty and subjectivity in human thinking [9, 10], and also natural language processing [2]. In an adaptation system, fuzzy logic techniques improve the quality of the advice and feedback provided to a learner in an educational situation [9]. Additionally, fuzzy modeling becomes useful when learner data is unavailable or not precise enough for the recommendation filtering tasks. Likewise, Bayesian networks are also used to express uncertainty in learner models. They find probabilistic relationships between the variables of a system [13]. Each node of the Bayesian network can be a student dimension/component (knowledge, learning style, misconceptions, emotions, etc.) [9]. This approach is beneficial for finding the dependencies between a learner's characteristics and his chosen learning preferences in order to infer other learners' preferences.
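As a concrete, deliberately simplified illustration of the overlay idea discussed above, the snippet below represents a learner model as concept–value pairs over a domain model; all names and the mastery scale are assumptions made for this sketch.

```python
# Illustrative overlay learner model: the learner model is a subset of the
# domain model, with a qualitative mastery value per concept (an assumption).

domain_model = ["loops", "recursion", "sorting", "graph_search"]

learner_model = {
    "loops": 0.9,       # measure of the learner's assimilation of the concept
    "recursion": 0.4,
    "sorting": 0.0,     # concept seen but not yet mastered
}                        # "graph_search" not yet part of the learner's overlay

def concepts_to_review(learner_model, threshold=0.6):
    """Concepts the learner has started but not yet mastered."""
    return [c for c, mastery in learner_model.items() if mastery < threshold]

print(concepts_to_review(learner_model))  # ['recursion', 'sorting']
```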


Ontology-Based Modeling. Ontologies offer a dynamic dimension to the evolution of the learner model in MOOC platforms. This is because an ontology is a common understanding of, and agreement upon, a particular domain by a number of experts, which makes the communication of meaning easier, more accurate and more effective [14]. The main advantage of ontology-based modeling is its reusability and possible extension [9], from generic ontologies to application or task ontologies used for specific needs, or to domain ontologies that model solely the concepts of a given discipline or field of knowledge. Thus, it is useful for providing a common ground for MOOC platforms' recommendations. Obviously, all the previous approaches can bring added value to learner modeling. For example, the stereotype approach can solve the cold-start issue in our recommender system, while the overlay approach introduces a basic estimation of the learner's knowledge level. However, unlike the perturbation model or constraint-based modeling, they do not model the misunderstanding of learned concepts. Moreover, the modeling of learner errors, skills and different types of knowledge has been enriched by cognitive theories [15] that explain the causal relationships in an intellectual learning process. Furthermore, Bayesian networks and fuzzy logic modeling add probabilistic and uncertainty aspects that make the assumptions about learners' characteristics more human-like. Nonetheless, a learner modeling approach should also guarantee the interoperability needed for multi-platform MOOC recommendation [1].

2.2 Ontology Modeling and Deployment in E-learning and MOOCs

Generally, ontologies model knowledge as a set of concepts and their relationships and represent them in terms of classes, relations and instances. For user modeling in personalized recommender systems, "ontologies have been increasingly used for the last decade" [14], and they have been used for student modeling in particular since 2011 [9]. Table 1 summarizes the main use cases related to learner modeling in e-learning and MOOC recommender systems and shows the predominance of ontology principles in creating and maintaining the learner model. To conclude the state of the art, we highlight the research practices deduced from this review that shape the orientation of our work: (1) learner information is modeled into two main categories, personal information and learning activity information; (2) learner data should be obtained in both ways, by explicit enunciation from the learner and by implicit deduction from the learner's navigation traces; (3) semantic annotations and machine learning techniques enhance the pertinence of the recommender engine for learners. We therefore need a broader vision of MOOC recommendation that takes all these aspects into account, so the following part explains our modeling approach for a learner-centered recommender system in MOOCs.


Table 1. Ontology principles' use in e-learning and MOOC research.

Ref. | Study objective(s) | Learner modeling approach
[6]  | A trace-based learner modelling framework for e-learning | Learner model vocabulary with 5 categories: learner information, aggregated data, learning strategy, knowledge state of the learner, knowledge assessment
[11] | The conception of an adaptive digital environment for e-learning using semantic annotations for learning resources | Ontology-based modeling in a 4-facet concept ontology (user identity, history, preference and capacity)
[16] | The conception of an e-learning system architecture that uses ontologies to model users | The ontology used for learner modeling includes: goals, knowledge, favorites, learning activity types and learning style of the learner
[5]  | A MOOC recommender system prototype based on the learner's expected learning outcomes | A domain taxonomy to model learning outcomes for learners and MOOCs
[2]  | Learner demotivation prevention by trajectory analysis and personalized learning in MOOCs | Use of an ant colony algorithm for trace-based analysis and 5 sections for the learner profile, going from general to specific characteristics: learner information, knowledge, behavior, etc.
[17] | An adaptive system for MOOCs based on initial learner knowledge and preferences | A concept map representing learner knowledge
[18] | A new approach to learners' skills modeling in MOOC platforms | An overlay approach to model the learner skills, represented by their title, life cycle and scale
[19] | A learner's characteristics ontology based on creating interconnections between learning style models | An ontology-based model for learner characteristics using the On-To-Knowledge ontology development methodology and linking between learning style dimensions
[20] | A context-aware learner model for a distance learning system | A learner model ontology with 4 information classes (personal data, cognitive data, context and learner activities), using the "Methontology" method

3 Proposed Approach: A Metaontology for Recommendation Criteria

First of all, our system aims to give a personalized response to each individual in order to adapt their learning with MOOCs. It should give instant feedback to orient the learner's navigation between MOOC platforms.


Clearly, we need a global approach that models the relationships among concepts and their definition using adequate metrics for each learning aspect to provide such feedback. This led us to choose ontologies as a modeling approach that could easily be updated according to the rapid evolution of the web platforms such as MOOC platforms. Moreover, the ontological modeling is a frame that structures the data obtained by a trace-based approach and translates in each of its elements the description of the learner’s characteristics by using an overlay approach. Figure 1 summarizes our mix of approaches to model the learner.

Fig. 1. Learner data modeling combined approaches

In addition, the system's recommendation engine focuses on seven personalization criteria in its recommendation model to adapt the list of MOOCs relevant to the needs and preferences of learners. Figure 2 presents a metaontology that models the learning content and thereby summarizes the characteristics that influence the recommendation parameters, i.e., the recommendation criteria. The modeling tool used for the visualization of the metaontology is the editor G-MOT, which "allows the representation of knowledge… in the form of a graphic network with different types of knowledge…" [21]. The "R links" connect classes (rectangles) to their properties, whereas the hanging points in a rectangle indicate that the class is defined by enumeration. Learning Pace and Learning Style are enumeration classes because their individuals can be defined (for the LearningStyle class, the attributes can be: dimension, strategy…). Moreover, cardinality restrictions are defined for each relationship in the design scheme to specify the possible occurrences of each relationship.
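Independently of the G-MOT visual notation, the core of such a metaontology can also be expressed with standard semantic web tooling. The fragment below is a minimal, hypothetical sketch in Python using rdflib; the class and property names (Learner, LearningContent, hasLearningPace, etc.) are our assumptions and not the exact vocabulary of the authors' ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

# Minimal, hypothetical sketch of a few metaontology concepts and relations.
# The vocabulary below is illustrative only, not the authors' exact model.
EX = Namespace("http://example.org/mooc-reco#")
g = Graph()
g.bind("ex", EX)

# Classes for the learner, the learning content and two enumeration-like classes.
for cls in (EX.Learner, EX.LearningContent, EX.LearningPace, EX.LearningStyle):
    g.add((cls, RDF.type, OWL.Class))

# An object property linking a learner to a preferred learning pace.
g.add((EX.hasLearningPace, RDF.type, OWL.ObjectProperty))
g.add((EX.hasLearningPace, RDFS.domain, EX.Learner))
g.add((EX.hasLearningPace, RDFS.range, EX.LearningPace))

# A sample individual, as a trace-derived fact about one learner.
g.add((EX.learner42, RDF.type, EX.Learner))
g.add((EX.slowPace, RDF.type, EX.LearningPace))
g.add((EX.learner42, EX.hasLearningPace, EX.slowPace))
g.add((EX.learner42, RDFS.label, Literal("Anonymous learner profile")))

print(g.serialize(format="turtle"))
```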


Fig. 2. Recommender system metaontology for learning content modeling

4 Conclusion and Future Work

The main goal of this paper was to find the adequate approach(es) to model a learner, so that our recommender system can adapt its suggested MOOC list. The recommended MOOCs should follow the progression of a learner profile that summarizes the learner's preferences, needs and other learning-related characteristics. Our approach relies on a trace-based method to extract user data and content metadata for the matching process. It then structures the obtained data using learning ontology databases. Each attribute value is designated in an overlay approach to show the knowledge level and type obtained about the modeled object or user. Although we modeled in this work the first ontology needed for this process, which includes all the recommendation criteria, we still need to develop the derived domain ontologies that describe the specifics of each stakeholder. Once the models' structure is detailed, we will be able to develop the necessary tools for data extraction that consider the personalization features for learners and the MOOC provider's intervention in our system.

References 1. Assami, S., Daoudi, N., Ajhoun, R.: Personalization criteria for enhancing learner engagement in MOOC platforms. In: 2018 IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, pp. 1271–1278. IEEE (2018) 2. Clerc, F., Lefevre, M., Guin, N., Marty, J.-C.: Mise en place de la personnalisation dans le cadre des MOOCs. In: 7ème Conférence sur les Environnements Informatiques pour l’Apprentissage Humain - EIAH 2015, Agadir, Morocco (2015)


3. Gutiérrez-Rojas, I., Alario-Hoyos, C., Pérez-Sanagustín, M., Leony, D., Delgado-Kloos, C.: Towards an Outcome-based discovery and filtering of MOOCs using moocrank. In: European MOOC Stakeholder Summit, pp. 50–57 (2014) 4. Sonwalkar, N.: The first adaptive MOOC: a case study on pedagogy framework and scalable cloud architecture - Part I. In: MOOCs Forum, pp. 22–29 (2013) 5. Manouselis, N., Drachsler, H., Verbert, K., Duval, E.: Recommender Systems for Learning: An introduction. Springer, New York (2012) 6. Settouti, L., Guin, N., Mille, A., Luengo, V.: A trace-based learner modelling framework for technology-enhanced learning systems. In: 10th IEEE International Conference on Advanced Learning Technologies, pp. 73–77. IEEE (2010) 7. Herder, E.: User Modeling and Personalization 3: User Modeling – Techniques (2016). https://www.eelcoherder.com/images/teaching/usermodeling/03_user_modeling_techniques. pdf 8. Benyon, D., Murray, D.: Applying user modeling to human-computer interaction design. Artif. Intell. Rev. 7, 199–225 (1993) 9. Chrysafiadi, K., Virvou, M.: Student modeling approaches: a literature review for the last decade. Expert Syst. Appl. 40, 4715–4729 (2013) 10. Cocea, M., Magoulas, G.D.: Participatory learner modelling design: a methodology for iterative learner models development. Inf. Sci. 321, 48–70 (2015) 11. Behaz, A.: Environnement Numérique de Travail de type Hypermédia Adaptatif Dynamique. Université hadj Lakhdar Batna Faculté, Faculté des Sciences, Algeria (2012) 12. Mitrovic, A.: Modeling domains and students with constraint-based modeling, Chap. 4. Advances in Intelligent Tutoring Systems. SCI, vol. 308, pp. 63–80 (2010) 13. Bashir, A., Khan, L., Awad, M.: Bayesian networks. In: Encyclopedia of Data Warehousing and Mining, USA, pp. 89–93 (2006) 14. Porcel, C., Martinez-Cruz, C., Bernabé-Moreno, J., Tejeda-Lorente, A., Herrera-Viedma, E.: Integrating ontologies and fuzzy logic to represent user-trustworthiness in recommender systems. Procedia Comput. Sci. 55, 603–612 (2015) 15. Downes, S.: Free Learning Essays on Open Educational Resources (2011) 16. Fani Sani, M.R., Mohammedian, N., Hoseini, M.: Ontological learner modeling. Procedia Soc. Behav. Sci. 46, 5238–5243 (2012) 17. Onah, D.F.O., Sinclair, J.: Massive open online courses: an adaptive learning framework. In: 9th International Technology, Education and Development Conference, Madrid, Spain, pp. 2–4 (2015) 18. Maalej, W., Pernelle, P., Ben Amar, C., Carron, T., Kredens, E.: Identification and modeling of the skills within MOOCs. In: AICCSA 2016 - 13th ACS/IEEE International Conference on Computer Systems and Applications, Agadir, Morocco. IEEE (2016) 19. Labib, A.E., Canos, J.H., Penadés, M.C.: On the way to learning style models integration: a learner’s characteristics ontology. Comput. Hum. Behav. 73, 433–445 (2017) 20. Akharraz, L., El Mezouary, A., Mahani, Z.: To context-aware learner modeling based on ontology. In: 2018 IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, pp. 1332–1340. IEEE (2018) 21. Portail Québec. http://pedagogie.uquebec.ca/portail/repertoire/approche-programme/logicielde-modelisation-des-connaissances-g-mot

A New Model of Learner Experience in Online Learning Environments

Yassine Safsouf (1,2), Khalifa Mansouri (2), and Franck Poirier (1)

(1) Lab-STICC, University Bretagne Sud, Lorient, France
{yassine.safsouf,franck.poirier}@univ-ubs.fr
(2) Laboratory SSDIA, ENSET of Mohammedia, University Hassan II of Casablanca, Casablanca, Morocco
[email protected]

Abstract. The flexibility, availability and functionality of online learning environments (OLEs) open up new possibilities for classroom teaching. However, although these environments are becoming increasingly popular, many users stop learning online after their initial experience. This paper aims to develop a new multi-dimensional research model allowing to categorize and to identify the factors that could affect the learning experience (LX) in order to decrease the failure and dropout rate in OLEs. This new model is based on the combination of the major models of user satisfaction and continuity of use (ECM, TAM3, D&M ISS, SRL). The proposed research model consists of 38 factors classified according to 5 dimensions: learner characteristics, instructor characteristics, system characteristics, course characteristics and social aspects. Keywords: Continuance of use intention  Learner experience Learner satisfaction  Learner success  Online learning environments

1 Introduction

For many years now, an increasing number of universities and schools have adopted online learning environments (OLEs) in their educational pathways. These environments, accessible anywhere via the Web, combine the educational presentation of content with a set of interaction tools specifically designed to support distance teaching and learning. The advantages of OLEs are diverse: flexibility of access, quality content, training that adapts to the learner's pace, availability outside class hours, cost reduction, etc. However, even though e-learning is practical, it can sometimes be somewhat solitary, and some learners require personal contact with their educators or trainers to learn better. This, unfortunately, leads to very high failure and dropout rates [1]. Several questions still remain: What factors enhance the learning experience (LX)? What characteristics (of the learner, the instructor, the system, or the course) contribute to positive learning? How can the dropout rate in OLEs be reduced?



In recent years, researchers have proposed several theories and models to measure, evaluate and improve the quality of the user experience (UX) in OLEs [2]. In this respect, our article aims at identifying the factors that could affect satisfaction, intention to use, intention to continue using, and learner success in OLEs. This document is organized as follows. The most important previous research on learner success factors in OLEs is presented in the next section. Section 3 develops a new framework that organizes the success factors into five dimensions: learner, instructor, system, course and social. Section 4 presents our research model, which shows the links between these success factors. Finally, Sect. 5 concludes and presents future research work.

2 Background

This section reviews previous studies and identifies the main factors that could positively or negatively influence LX in OLEs.

2.1 Technology Acceptance Model (TAM)

The Technology Acceptance Model (TAM) was developed from the theory of reasoned action [3]. The four variables of this model are perceived ease of use, perceived usefulness, behavioral intention to use, and actual system use. In 2000, Davis and Venkatesh proposed an extension of the basic model (TAM2) [4]; they identified and theorized the determining factors of perceived usefulness, that is, subjective norm, self-image, job relevance, output quality, result demonstrability and perceived ease of use. In 2008, Venkatesh and Bala supplied a complete version of the model (TAM3) [5]. In this last version, they integrated several determining factors, such as computer self-efficacy, perception of external control, computer anxiety, computer playfulness, perceived enjoyment and objective usability. Several previous studies have used TAM3 as a study framework in the field of e-learning. Wook and Yusof [2] adapted the constructs chosen in TAM3 to test the hypotheses of this last version of the model in a public higher education environment. The results of the study clearly indicate that all of the proposed assumptions are supported in OLEs. Agudo and Hernández [6] also proposed a model based on TAM3 with the inclusion of personal innovation capacity and perceived interaction. Their results support the relationships of TAM3, with the exception of the relationship between the intention to continue using the system and the student's behaviour.

2.2 Expectations Confirmation Model (ECM)

The expectation confirmation model (ECM) was developed by Bhattacherjee (in 2001) [7] to explain user satisfaction and reveal variables which affect intentions to continue using information systems. This model contains four variables: perceived usefulness, confirmation, satisfaction and continuance intention.


Recent studies in e-learning contexts have used the ECM model alone or in combination with other models (such as TAM) to investigate student satisfaction and continuance intention to use OLEs [8–14]. According to the authors, users' motivation to continue their courses online results from the degree of satisfaction, followed by perceived usefulness, learner attitude, concentration and subjective norm.

DeLone & McLean Information Systems Success Model (D&M ISS)

Many models have been developed to evaluate the efficiency of information systems (IS) in various contexts. The most dominant is the updated DeLone and McLean Information Systems Success model of 2003 (D&M ISS) [15]. This model consists of six variables: system quality, information quality, service quality, system usage, user satisfaction and net benefits. Numerous studies have considered it appropriate and useful to apply this model to evaluate the success of e-learning systems. Mohammadi [16] combined the TAM and D&M ISS models with the aim of studying learners' perceptions and, consequently, analyzing the quality characteristics that influence learners' satisfaction and intentions regarding the use of e-learning systems, as well as the effects of perceived usefulness and usability. The results concluded that system, service and content quality are the main factors that influence satisfaction and intentions to use OLEs. Ozkan [17] extended the D&M ISS model by dividing its constructs into two categories: technical and social factors. The model was considered appropriate and useful for estimating the success of OLEs. In fact, Ozkan retained two quality variables of the D&M ISS model, system quality and service quality, while adding other factors.

2.4 Self-Regulated Learning Theory (SRL)

Self-regulated learning (SRL) theory perceives learning as "an activity that students do for themselves in a proactive way" [18]. It is a dynamic process by which individuals plan, monitor and evaluate their learning, applying appropriate strategies to achieve their objectives. Zimmerman was one of the first authors to propose models explaining the phases of SRL [18]. Many researchers have recognized the importance of self-regulated learning as a predictor of academic achievement in OLEs, as it allows students to regulate their learning by improving their knowledge and experience [19, 20]. These studies also showed that teachers used social media as an educational means to support and encourage students' independent learning. The factors discussed by previous researchers are summarized in Table 1.


Table 1. References to factors influencing LX in OLEs

Author(s) | Factors (variables)
Wook and Yusof [2] | Perceived usefulness, perceived ease of use, subjective norm, image, computer self-efficacy, computer anxiety, perceived enjoyment
Agudo and Hernández [6] | Perceived usefulness, perceived ease of use, subjective norm, relevance for learning, self-efficacy, perceived interaction, computer anxiety, perceived playfulness, facilitating conditions, behavioral intention
Chow and Shi [8] | Confirmation, tutor interaction, peer interaction, course quality, learner satisfaction, continuance intention
Lee [9] | Attitude, perceived behavior control, perceived usefulness, perceived ease of use, subjective norm, concentration, perceived enjoyment
Alraimi, Zo and Ciganek [10] | Perceived usefulness, confirmation, perceived openness, perceived reputation, perceived enjoyment, satisfaction, MOOC continuance intention
Halilovic and Cicic [11] | Perceived usefulness, confirmation, conditions of support, satisfaction, continuance intention
Hong, Hwang and Szeto [12] | Self-efficacy, learning satisfaction, interest in learning, Internet cognitive failure
Oghuma, Saenz and Chang [13] | Service quality, confirmation, perceived usefulness, perceived enjoyment, user interface, perceived security, satisfaction, continuance intention
Stone and Baker-Eveleth [14] | Perceived usefulness, confirmation, satisfaction, continuance intention
Mohammadi [16] | Educational quality, service quality, technical system quality, information quality, perceived usefulness, perceived ease of use, intention to use, attitude
Al-Samarraie and Teng [21] | Confirmation, intrinsic value, information quality, system quality, ease of use, usefulness, social influence
Ozkan and Koseler [17] | Attitudes toward LMS, self-efficacy, interaction with other students and the teacher, instructor responsiveness, informativeness, fairness, control over technology, communication ability, course flexibility, perceived usefulness, perceived ease of use, system security, reliability, maintenance, personalization, interactivity

3 Our Research Framework for the Learner Experience

The research described in the previous section showed that the TAM and ECM models have revealed factors with significant effects on a learner's satisfaction and behavioural intention to use OLEs. However, other factors may also come into play, such as the quality factors proposed by the D&M ISS model, or the individual learner factors proposed by SRL theory.


For the proposal of our research framework, several factors were taken into account according to the models mentioned above, such as: perceived ease of use, computer anxiety, perceived enjoyment, attitude towards the OLE, subjective norm and self-image from TAM3; perceived usefulness from ECM; quality of service, quality of system and quality of content from the D&M ISS model; and self-regulation, self-efficacy and individual effort from SRL theory. Other factors derived from previous research were then added (these factors lie outside the boundaries of the different models in Fig. 2), such as: social interactions, design quality, course flexibility, technology control, communication capacity, quality of connection to the system, perceived security, diversity of assessments, and independence of devices and context. Figure 1 presents our research framework.

Fig. 1. Multi-dimensional research framework for LX in OLEs.


Our research framework consists of 38 factors which were selected and classified according to 5 dimensions indicated by colours. These dimensions are: learner characteristics, instructor characteristics, system characteristics, course characteristics and social aspects.

4 Proposed Model and Research Hypotheses

The main contribution of this study is the examination of the integration of ECM, TAM3, D&M ISS and SRL theory in explaining satisfaction and the intention to continue using OLEs in the long term. It should be noted that no previous research had combined these different models. Based on the findings of the background on the models previously presented, we propose a research model which identifies several factors as likely predictors of satisfaction and intention to continue using OLEs. The relationships between these factors are integrated into the multidimensional model described in Fig. 2.

The TAM model defined "perceived ease of use" as the degree to which a person believes that using a particular system would be free of effort [3]. According to the complete version of the TAM model (TAM3) [5], computer self-efficacy, computer anxiety and perceived enjoyment are identified as the major factors which come into play and influence perceived ease of use [2, 6]. Thus, the following hypotheses are proposed:

H1. Computer self-efficacy has a positive effect on perceived ease of use.
H2. Computer anxiety has a negative effect on perceived ease of use.
H3. Perceived enjoyment has a positive effect on perceived ease of use.

Davis [3] defined "perceived usefulness" as the degree to which a person believes that using a particular system would improve their job performance. From the first version of the TAM model [3], ease of use was represented as one of the main determinants of perceived usefulness. But it was not until the second version, TAM2 [4], that the authors expanded the list of predictors by adding to it two factors of the social influence process, namely subjective norm and self-image [2, 6]. So, we can assume that:

H4. Self-image has a positive effect on perceived usefulness.
H5. Subjective norm has a positive effect on perceived usefulness.
H6. Perceived ease of use has a direct influence on the perceived usefulness of the system.

Davis [3] also hypothesized that a user's attitude toward the system was a determining factor in the use or rejection of the system. The user's attitude, in turn, was considered to be influenced by two major factors, namely perceived usefulness and perceived ease of use [16, 17]. We therefore assume that:

H7. Perceived ease of use is positively related to the learner's attitude towards OLEs.
H8. Perceived usefulness is positively related to the learner's attitude towards OLEs.
H10. The learner's attitude towards OLEs has a positive influence on their intention to continue using the same system.


Fig. 2. Proposed research model for LX in OLEs.

Bhattacherjee [7] assumes in the ECM model that perceived usefulness and satisfaction influence learners' later decisions to continue or stop using the system. These assumptions have been confirmed by recent studies in the field of e-learning [8–14, 21]. Consequently, we suppose that:

H9. Perceived usefulness has a positive effect on the intention to continue using OLEs.
H11. Perceived usefulness has a positive effect on learner satisfaction.
H12. Learners' satisfaction with OLEs has a positive effect on their intentions to continue using the same systems.


In the marketing literature, customer satisfaction means offering customers products or services that meet their needs, expectations and requirements. The same concept also applies in the field of online education. Previous research asserted that a well-designed, flexible, secure system offering a diversity of assessments provides a pleasant learner experience with positive results, which leads to satisfaction [8, 10, 13, 16, 17]. Thus, we make these assumptions:

H13. Perceived security has a positive effect on learner satisfaction.
H14. Design quality has a positive effect on learner satisfaction.
H17. Course flexibility has a positive effect on learner satisfaction.
H21. Diversity of assessments has a positive effect on learner satisfaction.
H22. Device and context independence has a positive effect on learner satisfaction.

According to D&M ISS [15], the quality of information, the quality of service and the quality of the system are the determining factors of satisfaction when using an information system. The same observation applies in the field of distance education. OLEs which have no technical problems, with good services and well-designed courses that meet the expected learning outcomes, will help learners to obtain better grades and, consequently, to be satisfied with the system [16, 17, 21]. We thus assume that:

H15. The quality of the Internet connection has a positive effect on the quality of the system.
H16. The quality of the system has a positive effect on learner satisfaction.
H18. The quality of the service provided has a positive effect on learner satisfaction.
H19. The quality of the course has a positive effect on learner satisfaction.

OLEs organize teaching by bringing students together in a virtual classroom in which rich and varied interactions take place. Some of these interactions are designed and organized by the teacher, while others are more informal. Control over the technology enables teachers and learners to design and participate in courses, to facilitate support, and to create direct or group discussions, which contributes to greater satisfaction [6, 8, 17]. We shall thus make the following assumptions:

H20. Social interaction has a positive effect on learner satisfaction.
H23. Control over technology has a positive effect on social interaction.
H24. Control over technology has a positive effect on course quality.
H25. Control over technology has a positive effect on service quality.
H26. Communication ability has a positive effect on the quality of service provided.

According to Zimmerman [18], self-regulation of learning is defined as the dynamic process by which the individual plans, monitors and evaluates his or her learning, applying appropriate strategies to achieve fixed goals. Flexible access to online courses, device and context independence, as well as the diversity in


Flexible access to online courses, device and context independence, as well as the diversity of assessments offered by the system, contribute strongly to the development of self-regulation [17, 19, 20]. We will therefore assume that:
H27. Course flexibility has a positive effect on self-regulation.
H28. The diversity of assessments has a positive effect on self-regulation.
H29. Device and context independence has a positive effect on self-regulation.
H30. Self-effort has a positive effect on self-regulation.

In the field of e-learning, a learner's success or failure is essentially based on continued use of the system [8, 10–12, 14], learner satisfaction [2, 8–11, 13, 14] and self-regulation [17, 19, 20, 22]. Therefore, the following hypotheses are proposed:
H31. Intention to continue using OLEs has a positive effect on learners' success.
H32. Learner satisfaction has a positive effect on learners' success in OLEs.
H33. Self-regulation has a positive effect on learners' success in OLEs.
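To make the structure of the proposed model concrete for later empirical validation, the hypothesized paths can be encoded as data before any statistical estimation. The sketch below is ours and purely illustrative: it lists a subset of the hypotheses as (predictor, outcome, label) triples, with construct names abbreviated by us, not taken from the paper.

# Illustrative sketch (ours): a subset of the hypothesized paths of the model,
# encoded as (predictor, outcome, hypothesis) triples for later validation.
HYPOTHESIZED_PATHS = [
    ("perceived_usefulness", "continuance_intention", "H9"),
    ("perceived_usefulness", "satisfaction", "H11"),
    ("satisfaction", "continuance_intention", "H12"),
    ("system_quality", "satisfaction", "H16"),
    ("course_quality", "satisfaction", "H19"),
    ("social_interaction", "satisfaction", "H20"),
    ("course_flexibility", "self_regulation", "H27"),
    ("continuance_intention", "learner_success", "H31"),
    ("satisfaction", "learner_success", "H32"),
    ("self_regulation", "learner_success", "H33"),
]

def predictors_of(outcome):
    # Return the constructs hypothesized to influence a given outcome.
    return [p for p, o, _ in HYPOTHESIZED_PATHS if o == outcome]

print(predictors_of("learner_success"))
# ['continuance_intention', 'satisfaction', 'self_regulation']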

5 Conclusion and Future Work
The main objective of OLEs is to produce results equal to or better than those of traditional learning methods. In our research, we examined the integration of the ECM, TAM3 and D&M ISS models and SRL theory to explain satisfaction and the intention to continue using OLEs over the long term, in order to combat failure and dropout rates. This paper proposes a research framework with a promising new theoretical model for evaluating LX in OLEs. In this model, we hypothesized that the determinants of learner success should take into account individual learner characteristics, system characteristics, teacher characteristics, course characteristics and social characteristics. In future work, we will continue to explore these hypotheses; furthermore, we suggest more in-depth studies to validate this model in the context of higher education in Morocco.

References 1. Castillo-Merino, D., Serradell-López, E.: An analysis of the determinants of students’ performance in e-learning. Comput. Hum. Behav. 30, 476–484 (2014) 2. Wook, M., Yusof, Z.M., Nazri, M.Z.A.: The acceptance of educational data mining technology among students in public institutions of higher learning in Malaysia. Int. J. Futur. Comput. Commun. 4, 112 (2015) 3. Davis, F.D.: A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation. MIT Sloan School of Management, Cambridge (1986) 4. Venkatesh, V., Davis, F.D.: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46, 186–204 (2000) 5. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 39, 273–315 (2008)


6. Agudo-Peregrina, Á.F., Hernández-García, Á., Pascual-Miguel, F.J.: Behavioral intention, use behavior and the acceptance of electronic learning systems: differences between higher education and lifelong learning. Comput. Hum. Behav. 34, 301–314 (2014). https://doi.org/10.1016/j.chb.2013.10.035
7. Bhattacherjee, A.: Understanding information systems continuance: an expectation-confirmation model. MIS Q. 25, 351 (2001). https://doi.org/10.2307/3250921
8. Chow, W.S., Shi, S.: Investigating students' satisfaction and continuance intention toward e-learning: an extension of the expectation–confirmation model. Procedia Soc. Behav. Sci. 141, 1145–1149 (2014). https://doi.org/10.1016/j.sbspro.2014.05.193
9. Lee, M.-C.: Explaining and predicting users' continuance intention toward e-learning: an extension of the expectation–confirmation model. Comput. Educ. 54, 506–516 (2010). https://doi.org/10.1016/j.compedu.2009.09.002
10. Alraimi, K.M., Zo, H., Ciganek, A.P.: Understanding the MOOCs continuance: the role of openness and reputation. Comput. Educ. 80, 28–38 (2015). https://doi.org/10.1016/j.compedu.2014.08.006
11. Halilovic, S., Cicic, M.: Antecedents of information systems user behaviour-extended expectation-confirmation model. Behav. Inf. Technol. 32, 359–370 (2013). https://doi.org/10.1080/0144929X.2011.554575
12. Hong, J.C., Hwang, M.Y., Szeto, E., et al.: Internet cognitive failure relevant to self-efficacy, learning interest, and satisfaction with social media learning. Comput. Hum. Behav. 55, 214–222 (2016). https://doi.org/10.1016/j.chb.2015.09.010
13. Oghuma, A.P., Libaque-Saenz, C.F., Wong, S.F., Chang, Y.: An expectation-confirmation model of continuance intention to use mobile instant messaging. Telemat. Inform. 33, 34–47 (2016). https://doi.org/10.1016/j.tele.2015.05.006
14. Stone, R.W., Baker-Eveleth, L.: Students' expectation, confirmation, and continuance intention to use electronic textbooks. Comput. Hum. Behav. 29, 984–990 (2013). https://doi.org/10.1016/j.chb.2012.12.007
15. DeLone, W.H., McLean, E.R.: The DeLone and McLean model of information systems success: a ten-year update. J. Manag. Inf. Syst. 19, 9–30 (2003)
16. Mohammadi, H.: Investigating users' perspectives on e-learning: an integration of TAM and IS success model. Comput. Hum. Behav. 45, 359–374 (2015). https://doi.org/10.1016/j.chb.2014.07.044
17. Ozkan, S., Koseler, R.: Multi-dimensional students' evaluation of e-learning systems in the higher education context: an empirical investigation. Comput. Educ. 53, 1285–1296 (2009). https://doi.org/10.1016/j.compedu.2009.06.011
18. Zimmerman, B.J.: From cognitive modeling to self-regulation: a social cognitive career path. Educ. Psychol. 48, 135–147 (2013)
19. Matzat, U., Vrieling, E.M.: Self-regulated learning and social media – a 'natural alliance'? Evidence on students' self-regulation of learning, social media use, and student–teacher relationship. Learn. Media Technol. 41, 73–99 (2016)
20. Dabbagh, N., Kitsantas, A.: Personal learning environments, social media, and self-regulated learning: a natural formula for connecting formal and informal learning. Internet High. Educ. (2011). https://doi.org/10.1016/j.iheduc.2011.06.002
21. Al-Samarraie, H., Teng, B.K., Alzahrani, A.I., Alalwan, N.: E-learning continuance satisfaction in higher education: a unified perspective from instructors and students. Stud. High. Educ., 1–17 (2017). https://doi.org/10.1080/03075079.2017.1298088
22. Müller, N.M., Seufert, T.: Effects of self-regulation prompts in hypermedia learning on learning performance and self-efficacy. Learn. Instr. 58, 1–11 (2018). https://doi.org/10.1016/j.learninstruc.2018.04.011

Hybrid Recommendation Approach in Online Learning Environments
Mohammed Baidada1,3, Khalifa Mansouri2, and Franck Poirier1
1 Lab-STICC, Université Bretagne-Sud, Lorient, France
[email protected], [email protected]
2 SSDIA, ENSET, Université Hassan II, Casablanca, Morocco
[email protected]
3 CRI, Institut Supérieur d'Ingénierie et des Affaires, ISGA Rabat, Rabat, Morocco

Abstract. Online learning environments (OLE) provide learners with personalized content through the use of recommendation systems (RS). Some RS are based on a learner's profile and offer content matching his or her preferences; others consider the learner in a collective setting and offer content that is appreciated or popular within his or her group. Our contribution consists in proposing a hybrid RS approach that takes into account both the learner's preferences and the learner's similarity with his or her group, in order to improve the relevance of the proposed contents.
Keywords: Online learning environments · Personalization · Recommendation systems · Item-based filtering · Collaborative filtering

1 Introduction
Personalization in OLEs is an important focus of research: it may concern the personalization of content for learners [1], adaptive hypermedia [1, 2], or the adaptation of assessments to learners [3]. Some proposed approaches consider the learner from an individual point of view. Other approaches emphasize collaborative aspects in the learning process [4] and confirm that a learner will be more successful in a group and that collaborative work, for example through social networks [5] and network learning [6], can reduce the learner's feeling of isolation. Our contribution consists in offering a hybrid RS approach that takes into consideration both the learner's personal preferences and the preferences of the other learners to whom he or she is connected as part of a group.

2 State of the Art on Recommendations in OLEs
RS are software tools and techniques that provide item suggestions to a user [16]. They are based on filtering techniques, whose main classification is the following [7, 11, 14, 17]:



– Content-based filtering: it offers the user items similar to those he or she liked, i.e. items sharing the same values for a set of descriptive attributes; it requires knowledge of a description of the items and of the user's preferences;
– Collaborative filtering: it considers the similarities between the user and a group of users to whom he or she can be linked; here the user is considered in a collective context with other users. There are several subtypes: user-based collaborative filtering, which considers the user's similarity to a group of users with whom he or she shares interests, and item-based collaborative filtering, where similarities are computed between items evaluated or scored by other users;
– Hybrid filtering: an approach obtained by mixing two or more other approaches. This hybridization tries to overcome the limitations of the filtering methods taken in isolation and to improve the relevance of the recommended items.
Some RS are based on the personal characteristics of learners. Leblay [7] proposes a recommendation model that uses the information collected in the system traces to show the learner the activities to be followed. Other RS are based on the social characteristics of the learner. An isolated learner will find it difficult to progress in his or her learning process, which is why there has been a great deal of interest in integrating the learner into a group as part of collective activities and exchanges [12], such as peer evaluation [13] and the integration of social media modules (Facebook, LinkedIn) in learning environments [11]. Tadlaoui et al. [11] propose an approach for recommending educational resources based on social ties. It considers the similarities of a learner with other learners to generate recommendations based on popularity, usefulness and consultation rate.
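To make the two filtering families concrete, the following sketch is ours and uses entirely hypothetical data (resource identifiers, attribute tags and ratings are invented for illustration): it scores one learning resource for a target learner by content-based matching on item attributes, then by user-based collaborative filtering over the ratings of similar learners.

# Illustrative sketch (ours, hypothetical data) of the two filtering families.
# Items are described by attribute sets; learners rate items from 1 to 5.
items = {
    "res1": {"java", "oop", "video"},
    "res2": {"java", "exercises"},
    "res3": {"python", "video"},
}
ratings = {                      # learner -> {item: rating}
    "alice": {"res1": 5, "res3": 2},
    "bob":   {"res1": 4, "res2": 5},
    "carol": {"res2": 4, "res3": 5},
}

def content_score(learner, item):
    # Content-based: overlap between the item's attributes and the attributes
    # of the items the learner already rated highly (>= 4).
    liked = set().union(*(items[i] for i, r in ratings[learner].items() if r >= 4))
    return len(items[item] & liked) / len(items[item])

def collaborative_score(learner, item):
    # User-based collaborative: average rating of the item among the other
    # learners who share at least one rated item with the target learner.
    peers = [u for u in ratings if u != learner and set(ratings[u]) & set(ratings[learner])]
    votes = [ratings[u][item] for u in peers if item in ratings[u]]
    return sum(votes) / len(votes) if votes else 0.0

print(content_score("bob", "res3"), collaborative_score("bob", "res3"))  # 0.5 3.5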

3 Proposal of Our Hybrid Approach
Hybridization of filtering approaches usually attempts to overcome the limitations of each approach or to improve the relevance of the recommendations. Some works have tried to exploit hybridization to solve problems related to cold start [18].

Fig. 1. Diagram of the proposed recommendation approach.

According to our research, there is no work that deals with the hybridization of filtering approaches in e-learning in order to improve the relevance of the


recommendations. Our contribution consists in taking advantage of the two filtering approaches: content-based filtering and collaborative filtering. We assume that by combining the selection criteria of both approaches we will obtain better recommendations (Fig. 1). Learners will enrich the pedagogical resource base through their interactions on the social exchange module. For the choice of resources, the recommendation system will use criteria such as the number of accesses, preferences, etc. Among the methods used in RS for calculating similarities (TF-IDF, Pearson, cosine, Jaccard, etc.), we decided to use the Euclidean distance, since we are interested in evaluating a distance rather than a correlation. The two filtering techniques will each provide a ranking of items in terms of relevance, and we will compute the average rank of each item to provide a final ranking.
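A minimal sketch of this fusion step follows; it is our illustration, not the authors' implementation. The two input rankings are hypothetical, the conversion of a Euclidean distance into a similarity (1 / (1 + d)) is our assumption, and the final ordering is obtained by averaging each item's rank, as described above.

# Illustrative sketch (ours): Euclidean-distance similarity between learners
# and averaging of two rankings into a final hybrid ranking.
from math import dist

def similarity(ratings_a, ratings_b):
    # Euclidean-distance-based similarity on commonly rated items; the
    # 1 / (1 + d) conversion to a similarity score is our assumption.
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    d = dist([ratings_a[i] for i in common], [ratings_b[i] for i in common])
    return 1.0 / (1.0 + d)

def hybrid_ranking(ranking_a, ranking_b):
    # Fuse two rankings by averaging each item's rank (lower is better).
    rank_a = {item: i for i, item in enumerate(ranking_a)}
    rank_b = {item: i for i, item in enumerate(ranking_b)}
    items = set(rank_a) | set(rank_b)
    default = len(items)  # unranked items are pushed to the end
    avg = {i: (rank_a.get(i, default) + rank_b.get(i, default)) / 2 for i in items}
    return sorted(items, key=lambda i: avg[i])

content_ranking = ["res2", "res1", "res4", "res3"]        # best first (hypothetical)
collaborative_ranking = ["res1", "res4", "res2", "res3"]  # best first (hypothetical)
print(hybrid_ranking(content_ranking, collaborative_ranking))
# ['res1', 'res2', 'res4', 'res3']
print(similarity({"res1": 5, "res2": 3}, {"res1": 4, "res2": 2}))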

4 Proposed Experiment To evaluate our hybrid recommendation approach, we decided to conduct an experiment. The diagram in Fig. 2 shows the architecture of our environment:

Fig. 2. Experimental platform architecture.

We have chosen Moodle as the core of our learning environment, and we have opted for the SocialWall plugin [15], which transforms the platform by modifying the format of the courses and giving them a social-network presentation (posts, likes, comments, etc.). The hybrid recommendation module was developed separately, and the results it generates will be integrated into the Moodle display scripts. The scenario of the experiment considers an advanced programming course, spread over one semester and taught face-to-face. Learners will be required to use the online platform to view the course materials. SocialWall will allow them to propose new resources (actions: add link and upload), and with each resource-adding action they will have to provide the necessary description. The experiment is planned to be conducted on two first-year groups of the specialty, of 20 and 32 students. We will divide each of the two groups of learners into two sub-groups: for one we will propose a recommendation approach based on individual preferences, and for the other a hybrid recommendation approach based on both individual and social preferences. The two subgroups will be selected in a balanced way based on their grades from previous levels.


Finally, for the approach evaluation, we will make a qualitative analysis, considering the reduced size of the groups, by comparing the averages of the final grades of the two subgroups. Comparing the results in both groups can reinforce the conclusions. This will not prevent us from also using a final evaluation questionnaire.

5 Conclusion
The purpose of our study is to take advantage of the benefits of taking into account both a learner's personal preferences and the preferences of the other learners in his or her group. The experiment to be carried out will help improve the proposed recommendation approach. As an extension of this work, and in accordance with the PAPI learner model (Public and Private Information for Learners) defined by IEEE [8–10], we want to take into account the performance of the learner, in particular his or her shortcomings, to improve the relevance of the recommendations.

References 1. Popescu, E.: Dynamic adaptive hypermedia systems for e-learning. Thesis, pp. 26–28 (2008) 2. Brusilovsky, P.: Adaptive Hypermedia, User Modeling and User-Adapted Interaction, p. 4 (2001) 3. Jill-Jênn Vie, M.: Modèles de tests adaptatifs pour le diagnostic de connaissances dans un cadre d’apprentissage à grande échelle. Thesis, pp. 23–24 (2016) 4. Strebelle, A., Depover, C.: Analyse d’activités collaboratives à distance dans le cadre d’un dispositif d’apprentissage de la modélisation scientifique. Distances et médiations des savoirs 2, 6–10 (2013) 5. Mozhaeva, G., Feshchenko, A., Kulikov, I.: E-learning in the evaluation of students and teachers: LMS or social networks? Procedia Soc. Behav. Sci. 152, 130 (2014) 6. Profit, F.: L’apprentissage en réseau: le travail collaboratif. Revue internationale d’éducation de Sèvres 1, 2 (2003) 7. Leblay, J.: Aide à la navigation dans les parcours d’apprentissage par reconnaissance de procédés et recommandations à base de traces. RJC-EIAH 1, 2 (2016) 8. Jean-Daubias, S.: Ingénierie des profils d’apprenants. HDR Report, p. 32 (2011) 9. Learning Technology Standards Committee of the IEEE Computer Society, Draft Standard for Learning Technology-Public and Private Information (PAPI) for Learners (PAPI Learner)-Core Features, pp. 24–39 (2001) 10. Ounnas, A., Liccardi, I., Davis, H., Millard, D., White, S.: Towards a semantic modeling of learners for social networks. In: International Workshop on Applications of Semantic Web Technologies for E-Learning, p. 1 (2006) 11. Tadlaoui, M., George, S., Sehaba, K.: Approche pour recommandation de ressources pédagogiques basée sur les liens sociaux. In: EIAH Agadir, p. 5 (2015) 12. Salihoun, M., Guerouate, F., Sbihi, M.: The exploitation of traces serving tutors for the reconstruction of groups within aCBLE. Procedia Soc. Behav. Sci. 152, 220 (2014) 13. Bouzidi, L., Jaillet, A.: L’évaluation par les pairs pourra-t-elle faire de l’examen une vraie activité pédagogique? In: EIAH Lausane, pp. 2–3 (2007) 14. Alchiekh Haydar, C.: Les systèmes de recommandation à base de confiance. Thèse, pp. 8–18 (2014)


15. https://moodle.org/plugins/format_socialwall. Accessed Nov 2017 16. Ricci, F., Rokach, L., Shapira, B.: Recommender Systems Handbook. Springer, Boston (2011) 17. Lemdani, R.: Système hybride d’adaptation dans les systèmes de recommandation. Thesis, pp. 23–33 (2016) 18. Benhamdi, S., Babouri, A., Chiky, R.: Personalized recommender system for e-learning environment, pp. 3–4 (2016)

Performance Scoring at University: Algorithm of Student Performance
Adnane Rahhou1,2 and Mohammed Talbi2
1 Multidisciplinary Laboratory in Sciences and Information, Communication, and Education Technology (LAPSTICE), Faculty of Sciences Ben M'Sik, Hassan II University of Casablanca, B.P. 5366 Maarif, Morocco
[email protected]
2 Observatory of Research in Didactics and University Pedagogy (ORDIPU), Faculty of Sciences Ben M'Sik, Hassan II University of Casablanca, B.P. 5366 Maarif, Morocco
[email protected], [email protected]

Abstract. The measurement of performance in education requires skill in the management of performance evaluation, a science well known in companies but still complex to apply in education, because of the complexity of the variables that govern it and because the performance management systems used in universities rely instead on publications, rankings, quality standardization, access to the job market, etc. The performance of students is thus diluted in a context of "measurement" that is global to the educational institution and not specific to them. This reflection has led us to develop an Algorithm of Student Performance adapted to the Bachelor-Master-Doctorate (LMD) system, governed by a database containing only the exam scores and information identifying the students. Thus, if university teaching claims to be successful, then the measurement of this performance becomes necessary.
Keywords: Algorithm of performance · Student scoring · Key performance indicators · Performance management system

1 Introduction
Although there are international systems for classifying universities, they often remain limited to the "estimation" rather than the "measurement" of overall performance and thus provide little information on student performance [1]. Other classification systems follow quality standards, insertion in the job market after graduation, the number of research articles in journals indexed with an impact factor, or all of these criteria combined. So many ranking systems are followed to finally give a rank to a university. But what about student performance? A relatively recent study shows that rankings could be used to judge the performance of an organization; however, none of these rankings should be considered a measure of student performance, because of their general aspect [2].



In other terms, the indicators that measure university performance are not necessarily the ones that determine the measure of student performance. From this perspective, we can start from the postulate that the performance rating (ranking) of a university and the measurement of student performance are two distinct variables. Based on this observation, a survey was conducted to highlight the key performance indicators (KPIs) to be considered in order to define an algorithm for calculating the performance of the student by scoring, from the automatic processing of a database made up of exam scores. The indicators of this survey are not new or different from those already existing; the novelty lies in their modes of calculation. In this article, we propose a method for calculating and scoring the variables, implemented as an algorithm that is convertible into programming languages, in this case Excel spreadsheets under Visual Basic.

2 Context of Research
2.1 Evolution of Performance Management in Education

Performance management systems are often based on an informal observation that consists in "judging" according to the scores obtained in evaluations. This observation is not strict enough to define "performance assessment" [4]. Very recent work highlights that in several countries evaluation has been oriented towards efficiency and performance, driven by indicators, benchmarks and rankings [5]. Other works raise the question of the usefulness of performance criteria for improving student achievement [4].

2.2 Efficient Operation of Key Performance Indicators in Evaluation

The indicators ensure the interpretation of the data leading to the performance of the assessments, and they must satisfy quality requirements so as not to distort its measurement from the outset. These requirements verify the reliability of their source and how they are collected and analyzed [3]. In the case of this study, university evaluations are considered as the main source of performance data. In the same sense, the information needed to better understand a performance indicator has already been clearly defined [6]. This information includes the title of the indicator, its role, the objectives and strategies to which it is linked, its calculation formula and the source of its data [3]. The combined use of this information favors a method of calculating performance from evaluations, in a database containing all student scores. For a representation close to the student's skills, the performance calculation formula must contain assessment KPIs, that is, significant variables [1]. Moreover, in this perspective of "measurement" of performance, one of the five criteria cited in the work of Wilsdon [7], reformulated by Strike [3], establishes that the use of a wide variety of indicators embedded in a performance calculation brings the result closer to the student's skill profile.


3 Research Objectives
The objective of this research is to set up an algorithm for measuring student performance by scoring the skill profile. As with any algorithm, the variables that govern it must be defined beforehand. To achieve this, we searched for the most relevant criteria of the students' skills through their exam scores.

4 Problem Statement
The criteria that define the performance of students according to their exam scores are very numerous and come from the combination of non-exhaustive variables. In this case, we face a problem of a selective nature: what are the most reliable criteria to follow in order to define the variables of the Algorithm of Student Performance? The answer could not follow a purely experimental approach, because the number of combinations to consider in order to find the right variables is high.

5 Methodology
Using any performance management system, we face a variety of indicators, as already mentioned, and the choice of those that best represent student performance constitutes the whole engineering of this measure. Thus, one of the methods for validating the relevance of the performance indicators that we use submits them to test questions [3, 6, 8]: Are we really measuring what we want to measure? Do the selected KPIs correctly represent what we want to measure? Is the method used to measure performance adequate? These questions guided the development of the performance algorithm in three steps.
Step 1: Study of existing performance management systems to identify relevant KPIs that could measure student performance through exam scores.
Step 2: Edition of a survey questionnaire to validate the relevance of the selected KPIs.
Step 3: Classification of the validated KPIs according to their priorities, to assign them coefficients and embed them in the Algorithm of Student Performance.

5.1 Study of Some Systems of Performance Management and Scoring

The study of "performance evaluation" and the study of "performance through evaluations" are two different concepts. However, starting from the analysis of evaluations, it is possible to set up a calculation or algorithm of student performance. Seen otherwise, according to Isaacs et al. [9], "an examination (or test) can be defined as an attempt to measure a learner's knowledge, understanding or skill". When exploiting performance through evaluations, it is more practical to look for performance criteria already known in education. These "education criteria are most explicit in the area of student learning, as this requirement is common to all education organizations, regardless of their larger missions" [10]. Moreover, the major objective behind the study of performance is not to "improve the measurement of values" but rather


"to measure values to improve" and to understand weaknesses. Indeed, "it is not possible to define the perfect performance measure, but if you understand the weaknesses in the measures you utilise you can foresee some of the pitfalls" [3, 11]. In addition, "the measures or indicators you select should best represent the factors that lead to improved student" [12]. Several systems of performance management have guided our research methodology. Those that best support our approach are listed in Table 1.

Table 1. Examples of performance management systems.
Performance management system: Sources
Baldrige education criteria: [13]
The balanced scorecard: [14]
Scoring system of the Baldrige criteria of performance excellence: [10, 15]
Test scores: [1]
European frameworks for standards in educational assessment: AEA Europe, 2012 [9]
Educational Testing Service (United States): Version 2009
Joint Council for Qualifications (United Kingdom): Version 2016
State Examinations Commission (Ireland): Version 2016

The principle of these performance management systems is based on measurement, estimation (subjective judgment) or scoring and scaling. They concern the performance of organizations, of students or of both at the same time. The disadvantage is that they mainly use the principle of classification. In addition, the databases they use are regulated, strict or difficult to recover and access.

5.2 Required Data for the Variables of the Performance Algorithm

The Algorithm of Student Performance presented here is governed by a database that contains only exam scores and information identifying the students. This database can be processed by different database management systems. In our case, Excel was chosen for its simplicity in building the information system, but also because of the small size of the database chosen as a sample for the calculation test. The data embedded in this algorithm are presented in Table 2 below. These data include two types of variables: discrete, which cannot be divided, and continuous, such as numbers [1]. Moreover, it should be noted that the skill profile measured considers only the objective aspect of the evaluation, which is considered sufficient to approximate a representative result.
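Purely as an illustration of how one record of such a database could be represented outside the authors' Excel implementation, the sketch below is ours; field names are paraphrased from Table 2 and the sample values are hypothetical.

# Illustrative sketch (ours, not the authors' Excel workbook): one exam-score
# record of the database described in Table 2, with field names paraphrased.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExamRecord:
    student_id: str                      # exam code identifying the student
    program: str                         # study program / curriculum
    level: int                           # year of study in the licence cycle (1-3)
    semester: int                        # 1..6
    module: str
    module_priority: str                 # major, complementary or cultural
    coefficient: float
    normal_session_score: float
    catchup_score: Optional[float] = None  # None if validated in the normal session

    def final_score(self):
        # Score retained for the module: the catch-up score when one exists.
        return self.catchup_score if self.catchup_score is not None else self.normal_session_score

rec = ExamRecord("S123", "SMC", 2, 3, "Organic Chemistry", "major", 3.0, 8.5, 11.0)
print(rec.final_score())   # 11.0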


Table 2. Variables of the performance algorithm.
Student identification data: Exam code, Student name, Date of birth, Date of entry, Type of baccalaureate, Date of obtaining, Program/curriculum
Exam data: Module priority, Coefficient, Normal session, Catch-up exam, Score after catch-up, Average per semester, Average per year, Average per program, Final score, Profile
Program/curriculum data: Student program, Level, Semester, Subject, Title, Discipline, Module

5.3 Statistical Survey, Population and Results

It should be remembered that the Algorithm of Student Performance operates in the Licence-Master-Doctorate (LMD) system, which is based on the principle of modules and semesters. This work is limited to the licence cycle (Bachelor's degree in three years). Each year of study includes two semesters, and each semester includes four modules, divided into major, complementary and cultural modules. At the end of the cycle, the student is required to have validated three years of study, i.e. 24 modules. Using the results of this survey, conducted among 23 senior and skilled teachers at the Ben M'Sik Faculty of Sciences, five of the eleven questions were retained to determine the indicators to be considered in developing the Algorithm of Student Performance. These questions can be grouped into four categories that generate different KPIs: Category 1: Progression of the student; Category 2: Ranking; Category 3: Catch-up sessions; Category 4: Average scores. The five questions and the summary of their answers are presented in Table 3.

Table 3. Questions of the survey [16] and findings.
1. Access to data about the evolution of student performance. Finding: 90% say they do not have access to the evolution (progression) of students.
2. Student's ranking. Finding: 91% say they do not have access to the ranking.
3. Frequency of access to the catch-up exam session. Finding: 96% say they do not have access to statistics on the likely frequency of catch-up sessions.
4. Usefulness of incorporating an intelligent alert index that predicts the catch-up exam session. Finding: 83% encourage the idea of having an exam catch-up indicator.
5. Possibility to consult the average results of the main, complementary and cultural modules distinctly. Finding: 96% say they do not have access to module averages separately by priority.


The sample taken represents only about 10% of the total teaching staff. Despite this, the answers are highly convergent. This finding indicates, in agreement with the test-questions method, that the key performance indicators taken from the four categories described above are adequate and relevant for the algorithm.

6 Research Outcomes: Algorithm of Student Performance
6.1 Definitions of Selected Key Performance Indicators: Algorithm Variables

Using the four categories mentioned above, the KPIs that enter the performance algorithm and that represent its variables are defined in Table 4.

Table 4. Key performance indicators of the performance algorithm.
Category: Progression of the student. Definition: Progression of the student during semesters. KPI 1: Evolution of average scores (continuous, C1 = 2).
Category: Ranking. Definition: Ranking of the student in relation to the number of students enrolled in a program. KPI 2: Ranking (continuous, C2 = 1).
Category: Catch-up sessions. Definition: Frequency of access to the catch-up session and alert. KPI 3: Catch-up session (discrete, C3 = 1); KPI 4: Compensation frequency (continuous, C4 = 2); KPI 5: Modules validation (discrete, C5 = 3).
Category: Average scores. Definition: Average scores of modules by priority. KPI 6: Average of major modules (continuous, C6 = 2); KPI 7: Grade (discrete, C7 = 1).

Each category has one or more KPIs, which are discrete or continuous variables. Given the general nature of discrete variables, the coefficients of discrete KPIs are assigned a value of 1, while continuous KPIs take 2, except KPI 5, which takes 3 since it represents the main objective of a student in the LMD system. KPI 2, however, takes a coefficient equal to 1 even though it is a continuous variable, since the ranking is a quantitative measure much more than a qualitative one. These coefficients are adjusted so that the score of the algorithm approaches 20 points, like the marking scale of the LMD system.


6.2 Calculation Modes of Key Performance Indicators

After identifying the KPIs needed to develop the Algorithm of Student Performance, we present their methods of calculation (all formulas and program codes were developed by Adnane Rahhou). Since some KPIs are discrete, they are given scores less than or equal to 1 according to the mention they represent.

KPI 1: Evolution of Average Scores. A value of +1, 0 or −1 point is assigned, respectively, for improvement, no progression and regression between two semesters. Its value ranges from −5 points (minimum) to +5 points (maximum). Its calculation is presented as formula (1):

KPI_1 = (1/2) × [1 + (Σ_{n=2}^{6} Ent(S_n − S_{n−1})) / 5]   (1)

KPI 1 is computed over six semesters as the sum (Σ) of the integers (Ent) representing the differences between the averages of two consecutive semesters (S_n and S_{n−1}) of each year, divided by the maximum number of points (5 points). Since this indicator can take a negative value, its calculation is adjusted (multiplication by 1/2) so that its value lies between 0 and 1.

KPI 2: Ranking. The calculation of the rank depends on the total number of students enrolled in the same study program. Its calculation is presented as formula (2):

KPI_2 = 1 − (Rank of the student's general average / Total number of students)   (2)

KPI 2 is the ratio of the student's rank, based on his or her general average to date, to the number of students enrolled in the same study program. Its maximum value tends to 1.

KPI 3: Catch-up Session. This indicator shows the degree of effectiveness of the student and therefore alerts on the probability of access to the catch-up sessions. Its calculation is presented as formula (3):

KPI_3 = 1 − (Number of modules not validated in the normal session / Number of modules examined in the normal session)   (3)

KPI 3 is given by assigning one of four mentions according to whether the student validates the four modules of a semester (see the programming section). The higher the number of module exams, the more relevant this indicator. It also serves as an alert index for the catch-up session, based on scores below the average in the normal session.

KPI 4: Compensation Frequency. The compensation of a module means the extension of its score by that of another module in order to reach 10/20 so that it is considered validated. Its calculation is presented as formula (4):

KPI_4 = 1 − (Number of compensated modules after the catch-up session / Number of modules examined in the catch-up session)   (4)

KPI 4 is given by defining five mentions, to which threshold scores of 0, 3, 9, 12 and 24 are assigned. These scores represent, respectively, proportions of 0, 1/8, 3/8, 1/2 and 1 (see the programming section).

KPI 5: Modules Validation. This indicator simply gives the number of finally validated modules divided by the total number of modules in a study program. Its calculation is presented as formula (5):

KPI_5 = 1 − (Number of unvalidated modules / Total number of modules)   (5)

KPI 5 is a discrete binary variable that takes the values full (all modules validated) and partial (not all modules validated). It contributes fully only at the end of the curriculum, since it concerns the total number of modules in the study program.

KPI 6: Average of Major Modules. The scores of the main modules could be reinforced by those of the secondary and cultural modules. For this reason, KPI 6 is considered in the algorithm. Its calculation is presented as formula (6):

KPI_6 = Average of the main modules / 16   (6)

KPI 6 is the ratio of the average of the main modules to the average that defines the highest grade (high honors), which starts at 16/20.

KPI 7: Grade. The grade represents the overall rating given according to the overall average. The LMD system includes four grades (see the programming section) that define the student's performance. Its calculation is presented as formula (7):

KPI_7 = General average / 20   (7)

Four proportions are used, relative to the scores of KPI 7 (see the programming section). It contributes fully only at the end of the curriculum, since it concerns an overall average.

6.3 The Algorithm of Student Performance (ASP)

After having given all the calculation methods of the KPIs to be embedded in the Algorithm of Student Performance (ASP), they are consolidated in the general formula, with the coefficient of each indicator, as presented in (8):

ASP = Σ_{i=1}^{7} C_i × KPI_i   (8)


The score can theoretically go from 0 to 20; however, 0 is very improbable and 20 unattainable, because KPI 3 and KPI 7 can practically never take the total value 1. Finally, the selection of the indicators most appropriate to the performance situation studied, the flexibility of modifying the coefficients, their detailed calculations, as well as the multitude of variables that this algorithm includes, make it a measurement equation that is very representative of student skills.
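To illustrate how formula (8) consolidates the indicators, the sketch below is ours, written in Python rather than the authors' Excel/VBA implementation: it computes the ASP score from already-computed KPI values, using the coefficients given in Table 4; the sample KPI values are hypothetical.

# Illustrative sketch (ours): ASP score of formula (8) from pre-computed KPI
# values, with the coefficients C1..C7 taken from Table 4.
COEFFICIENTS = {1: 2, 2: 1, 3: 1, 4: 2, 5: 3, 6: 2, 7: 1}

def asp_score(kpis):
    # ASP = sum over i of C_i * KPI_i, with each KPI_i expected in [0, 1].
    return sum(COEFFICIENTS[i] * kpis[i] for i in COEFFICIENTS)

# Hypothetical KPI values for one student.
example_kpis = {1: 0.7, 2: 0.85, 3: 0.75, 4: 0.5, 5: 1.0, 6: 0.8, 7: 0.65}
print(asp_score(example_kpis))   # 9.25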

6.4 Programming the Algorithm in Computer Information System

Like any algorithm, it is perfectly programmable in computer programming languages. In this sense, a work carried out in parallel to the present one, which is being published, allowed us to implement this algorithm in Excel, included in a Student Scorecard whose dashboard contains the KPIs of the algorithm. The display is ergonomic and optimal, as shown in Fig. 1.

Fig. 1. This screenshot shows an example of the performance of a student enrolled in the SMC program (Science Matter of Chemistry) who is still continuing his curriculum in semester 5. The performance score given by the algorithm is displayed under the name "profile performance".

The program of the Algorithm of Student Performance is detailed in logical functions that can be reproduced in computer programming languages, as described below:
KPI 1 – Evolution of average scores: ideally represented as a trend line
KPI 2 – Ranking: ideally represented as a fraction


KPI 3 – Catch Up Session:

KPI 4 - Compensation frequency

KPI 5 - Modules validation

KPI 6 - Average of major modules

KPI 7 – Grade
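The logical functions behind KPI 3 to KPI 7 are given in the paper as screenshots that are not reproduced in this text. As a rough indication of their shape only, the sketch below is ours: it expresses two of them as Python functions, and the grade bands used for KPI 7 are our assumption based on the usual LMD mentions, not values taken from the paper.

# Rough indicative sketch only (ours): possible shape of two of the logical
# functions behind the KPIs. The grade-band thresholds are assumptions.
def kpi5_modules_validation(unvalidated, total):
    # KPI 5, formula (5): full validation gives 1, partial validation less.
    return 1.0 - unvalidated / total

def kpi7_grade(general_average):
    # KPI 7, formula (7), with an indicative mapping of LMD grades onto the
    # overall average; the band boundaries 10/12/14/16 are our assumption.
    if general_average >= 16:
        grade = "Tres bien (high honors)"
    elif general_average >= 14:
        grade = "Bien"
    elif general_average >= 12:
        grade = "Assez bien"
    elif general_average >= 10:
        grade = "Passable"
    else:
        grade = "Below pass"
    print("grade:", grade)
    return general_average / 20.0

print(kpi5_modules_validation(unvalidated=2, total=24))  # 0.9166...
kpi7_grade(13.4)   # prints "grade: Assez bien", returns 0.67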

Finally, it remains possible to verify the KPIs and the algorithm empirically by simulating tests directly in the Information System of Performance Management and Reporting [17], making an approximate comparison between the performance profile obtained and the results entered into the database, while observing whether the meaning of each KPI's score corresponds to the results that feed it. This is obviously a manual verification, but it is the easiest way to confirm that the algorithm indeed provides the performance information for which it was developed.

7 Future Scope
Understanding and using performance measurement systems often requires statistical skills. This is a barrier for a teacher who simply wants to obtain a student profile without searching deeply through numbers and analytical interpretations.


It is in this direction that our future research evolves: the development of autonomous information systems that can generate relevant and smart performance reporting, easy to interpret by a large audience and without IT prerequisites. This is the current practice in performance management at the Ben M'Sik Faculty of Science, which we aim to extend to Moroccan universities. The approach remains compatible with any educational system by modifying the parameters concerning student scores and the distribution of the years of study used to obtain the scores and averages of students. Thus, student performance management would simply become a matter of reading and interpreting.

8 Conclusion
Thanks to the Algorithm of Student Performance, the measurement of performance at the university could be made simple and widely accessible. However, the measurement of performance does not replace ranking; it could complement it so that the word "performance" becomes as complete as possible in education, covering the student and the institution at the same time. Indeed, "ranking results are becoming more and more relative rather than being seen as absolute success or failure for an institution" [2]. In addition, every ranking or performance management system must consider the modification and even the change of its indicators, because skills and profiles also evolve continuously. After all, no measure of performance is absolute; it remains, relatively, a transformation of concrete values into simple numbers.

References 1. Mohan, R.: Measurement, Evaluation and Assessment in Education. PHI Learning Pvt. Ltd., Delhi (2016) 2. Downing, K., Ganotice Jr., F.A.: World University Rankings and the Future of Higher Education. IGI Global, Hershey (2016) 3. Strike, T.: Higher Education Strategy and Planning: A Professional Guide. Taylor & Francis, London (2017) 4. Arter, J., McTighe, J.: Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance. Corwin Press, Thousand Oaks (2001) 5. Barats, C., Bouchard, B., Haakenstad, A.B.: Faire et dire l’évaluation: L’enseignement supérieur et la recherche conquis par la performance. Presses des Mines, Paris (2018) 6. Neely, A.: Business Performance Measurement: Theory and Practice. Cambridge University Press, Cambridge (2002) 7. Wilsdon, J.: The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. CPI Group (UK) Ltd., Croydon (2016) 8. Rogers, T., Davidson, R.: Marketing Destinations and Venues for Conferences, Conventions and Business Events. Routledge, Abingdon (2015) 9. Murchan, D., Shiel, S.: Understanding and Applying Assessment in Education. SAGE Publications, Los Angeles (2017) 10. Hertz, H.S.: Education Criteria for Performance Excellence. DIANE Publishing, Darby (2001) 11. Gray, D., Micheli, P., Pavlov, A.: Measurement Madness: Recognizing and Avoiding the Pitfalls of Performance Measurement. TJ International Ltd., Padstow (2015)


12. Cokeley, S.: Transformation to Performance Excellence: Baldrige Education Leaders Speak Out. ASQ Quality Press, Milwaukee (2006)
13. Evans, J.R., Lindsay, W.M.: Managing for Quality and Performance Excellence. Cengage Learning, Boston (2016)
14. Kaplan, R.S., Norton, D.P.: The Balanced Scorecard: Translating Strategy Into Action. Harvard Business Press, Boston (1996)
15. Hertz, H.S.: Education Criteria for Performance Excellence: Malcolm Baldrige National Quality Award. DIANE Publishing, Darby (1999)
16. Performance of information system (2017). https://docs.google.com/forms/d/1uHO64e9P9zAkn0A-sY-NDgHIS7EkvL-rniTtPjOp_GE/viewform?edit_requested=true
17. Rahhou, A.: Information System of Performance Management and Reporting (ISPMR), Hassan II University of Casablanca, Multidisciplinary Laboratory in Sciences and Information, Communication, and Education Technology (LAPSTICE), Faculty of Sciences Ben M'Sik (2017). https://drive.google.com/open?id=1cRCRZypEZQyJSJ997WDAad47ChOx4hxQ

Introducing Mobile Technology for Enhancing Teaching and Learning at the College of Business Education in Tanzania: Teachers and Students' Perspectives
Godfrey Mwandosya1, Calkin Suero Montero2, and Esther Rosinner Mbise1
1 College of Business Education, ICT and Mathematics, Bibi Titi Mohammed Street, Dar es Salaam, Tanzania
[email protected]
2 School of Computing, University of Eastern Finland, Joensuu, Finland

Abstract. Teachers and students in higher education institutions use mobile technologies in teaching and learning. However, in Tanzanian higher education institutions (HEIs), little is known about the perspectives of teachers and students on the usefulness of mobile technologies for teaching and learning. A modified Technology Acceptance Model (TAM) was applied to obtain teachers' and students' views on how mobile technologies enhance teaching and learning. A sample of 80 teachers and 100 students from the College of Business Education (CBE) was used. A survey questionnaire seeking the views of teachers and students was administered. Qualitative data were analyzed through thematic content analysis, while quantitative data were analyzed with IBM SPSS (version 23) descriptive statistics. The results indicate that both teachers and students view mobile technologies as important in enhancing teaching and learning. Furthermore, awareness, training, and involvement in the design of mobile technology requirements are crucial for technology intake.
Keywords: Mobile technology · Teachers' and students' technology use perspectives · Teaching and learning enhancement · Higher education institutions · Tanzania · TAM

1 Introduction
The usage of mobile technologies is increasing and is becoming important in enhancing different sectors, both government and private. The higher education sector, where teachers and students own mobile devices, is one of the sectors in which the impact of mobile technologies is widely felt [1]. According to Mtebe and Raisamo [2], there has been tremendous growth and penetration of mobile technology use and mobile services in East Africa, Tanzania being among these countries. Through



mobile technology, mobile learning (m-learning) is realized, whereby learning can be done anywhere and anytime through mobile tools and devices connected to the wireless internet [3]. In the Tanzanian education system in particular, teachers and students mostly use technology for teaching and learning in classrooms, where teachers deliver lectures and supporting materials for the students to copy. After classes, teachers and students do not have a chance to discuss or communicate on educational issues. Now, with mobile technologies spreading, teachers and students in higher education institutions can communicate on educational issues anytime, whether on the school compound, in classes or at home [4]. Mtega et al. [5] revealed that in Tanzanian higher education systems a large number of students and teachers possess one or more mobile devices which, apart from personal business, are also used in teaching and learning. The government of Tanzania, through the Ministry of Educational and Vocational Training and Technology, the Tanzania Commission of Higher Education and the national ICT policy, encourages the use of technology in the education system to create an innovative society [6]. Despite the knowledge and widespread usage of mobile technologies, the attitude of teachers and students towards technology use has not received adequate attention. Teachers and students meet frequently in classrooms. The interaction between teachers and students after the classroom may be limited to internet access, where each one needs a PC or laptop with internet connectivity in the absence of mobile technologies. In this case, views on mobile technology use in teaching and learning from both teachers and students are of crucial importance. Guided by TAM, the study explains and predicts teachers' and students' attitudes towards the usage of mobile technologies in enhancing teaching and learning. The objectives of this study were: to investigate the views of teachers and students on mobile technology use in higher education institutions; and to examine how mobile technologies enhance teaching and learning in higher education institutions in Tanzania. In order to realize these objectives, the study answered the following research questions:
1. What are the teachers' and students' views on the usage of mobile technologies in teaching and learning at the College of Business Education in Tanzania?
2. How can mobile technologies be used to enhance teaching and learning in higher education institutions in Tanzania?
The study highlights to CBE policy makers the importance of using mobile technologies in teaching and learning outside the classroom or library, which is space constrained. In this case the college will be relieved of costs that may be necessitated by space shortage, e.g. garbage, electricity, etc. The absorption of mobile technology to enhance teaching and learning by both teachers and students will not require the students or lecturers to be physically at the college to deliver or receive the teaching or learning material. This will also reduce transport costs to and from the college for both teachers and students. The rest of the paper is organized as follows: Section 2 presents a review of the literature on mobile technologies supporting mobile learning, followed by a discussion of the research framework for the current study. Section 3 presents the research methodology adopted, Sect. 4 presents the results,


Sect. 5 discusses the results while conclusion, recommendations and directions for future research are provided in Sect. 6.

2 Literature Review
2.1 Enhancing Teaching and Learning Through Use of Mobile Technologies

Research in emerging economies points out that although teachers possess one or more mobile devices and are aware of the capability of ICT and mobile technology for mobile learning, they still do not utilize these technologies as pedagogical tools to enhance teaching and learning [7]. The effective outcome of a technology used to improve or enhance a certain undertaking depends much on the willingness of individuals to embrace that kind of technology [8] and on their competencies in using the technologies effectively for teaching and learning [9]. It has also been highlighted that involving them in the design processes of the technological solution, by investigating their requirements, gives them ownership of the technology [3].

2.2 Mobile Technologies Usage in HEIs in Tanzania

Tedre et al. [10] outlined the ways in which e-learning has developed in Tanzanian HEIs and how information and communication technology (ICT) has facilitated this development. Recently, e-learning is slowly being overtaken by mobile learning (m-learning) using wireless technologies and mobile devices, meaning that teaching and learning can be done anywhere and anytime [11]. The introduction of mobile technologies is an important programme to make sure that teaching and learning are achieved positively. Mtega et al. [5] explain how mobile phones were used for teaching and learning; the same study [5] also explains that, through SMS, lecturers and students were able to use this platform to enhance teaching and learning at Sokoine University of Agriculture (SUA). At CBE, according to a previous study, teachers and students have at least one or more mobile devices, yet the mobile technologies associated with enhancing teaching and learning are not yet fully utilized to meet the demands of the entire mobile technology.

2.3 The Technology Acceptance Model

A number of studies have adopted or extended the technology acceptance model (TAM) to investigate views on the acceptance and absorption of information technologies in higher education institutions [1, 12, 13]. This goes hand in hand with discovering and explaining the root causes that lead users of particular technologies to either accept or reject the idea of using ICT or mobile technologies. The TAM by Davis [14] outlines two important factors for technology acceptance: perceived usefulness and perceived ease of use. The author defines perceived usefulness as "the degree to which a person believes that using a particular system would enhance his or her job performance," and perceived ease of use as "the degree to which a person believes that using a particular system would be free of effort." See Fig. 1 below.

Fig. 1. Technology Acceptance Model (TAM), adopted from Davis [14]: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) lead to Attitude Towards Using (ATT), Behavioral Intention to Use (BI) and Actual Use (AU).

The TAM by Davis [14] advocates examining a technology in terms of perceived usefulness, perceived ease of use, attitude towards using and behavioral intention to use as the determinants of actual use. TAM offers a good basis for considering the way lecturers and students view the introduction of technologies in higher education institutions. However, TAM has been extended to incorporate external variables. For example, Fathema et al. [12] in their study extended TAM by adding three external variables, namely system quality, perceived self-efficacy and facilitating conditions, to examine faculty use of learning management systems. Although this model explains users' technological adoption, there are some criticisms of it. For example, Ward [16] argues that components relating to human and organizational factors are often considered secondary to technological issues when it comes to real-world use. Our study emphasizes human views, as teachers and students are the ones who use the technologies in daily teaching and learning.
Research Model. There is evidence from a number of studies that have adopted and extended the TAM model [1, 12, 13, 15], to mention a few. The study by Lwoga [1] extended TAM by adding external variables such as information quality, system quality, service quality and instructor quality, and showed that instructor and system quality were key predictors of perceived usefulness and user satisfaction. An extended version of TAM was adopted in this study, since the study addresses human factors in addition to mobile technologies, to suit CBE's context and environment. Figure 2 shows the extended TAM in the CBE context, covering lecturers' and students' perspectives on mobile technologies in enhancing teaching and learning. The extended TAM suits the environment of higher education institutions in Tanzania.

Fig. 2. Research model for the study (extended TAM). Constructs: Mobile Technologies (MTs) intake at CBE; Perceived Usefulness (PU); Perceived Ease of Use (PEOU); Teachers' and Students' Perspectives on MTs; Teachers' and Students' Attitude Towards Mobile Technologies (ATMTs); Enhanced Teaching and Learning (ETL).


Mobile Technologies Intake at CBE. Mobile technologies intake at CBE is an external variable considered in our study. The importance of mobile technologies has been reported by a number of researchers; for example, Park [16] narrates that instructional designers and educators recognize the potential of mobile technologies as a learning tool for students and have incorporated them into the distance learning environment. Through the widespread availability of wireless mobile technologies and an increasing number of mobile operators in Tanzania, lecturers and students exchange learning content using their mobile devices. Both lecturers and students at CBE are able to exchange education-related content (course work, final grades, notes, the college calendar, etc.) anytime by accessing the SARIS application through mobile technologies. Other mobile technologies available at CBE are KOHA (a library management system), an online management system and a website management system.
Perceived Usefulness (PU). Perceived usefulness in this study indicates the extent to which lecturers and students perceive mobile technologies as enhancing teaching and learning. In particular, the research model checks how useful the mobile technologies currently used at CBE are. The perceived usefulness of the mobile technologies has an impact on both the lecturers and the students who encounter them in daily teaching and learning activities. Several scholars have acknowledged perceived usefulness as an essential factor for determining technology intake [17–20].
Perceived Ease of Use. The perceived ease of use in our study was geared towards checking, with both lecturers and students, the ease of use of the mobile technologies available in Tanzania, and particularly at CBE.
Lecturers' and Students' Perspectives. The lecturers' and students' perspectives construct in the research model corresponds to a collection of their views on the role played by the mobile technologies available at CBE. It is an important factor, as it relates to individual beliefs on whether the intake and usage of technology is facilitated positively or negatively. For example, Venkatesh and Bala [21] argue that "facilitating conditions relate to individuals' control beliefs regarding the availability of organizational resources and support structures to facilitate the use of a system." In our study, lecturers' and students' views on the extent to which technologies and the related services are provided (for example, technical help, internet access, hardware, software, training, online help, etc.) will impact their attitudes towards usage for enhancing teaching (lecturers) and learning (students). This is supported by Teo et al. [22], who examine the effect of facilitating factors on pre-service teachers' attitudes towards using learning management systems.
Attitudes Towards Mobile Technologies (ATMTs). The attitude towards mobile technologies construct is determined from the lecturers and students according to how mobile technologies have impacted the way they teach and learn. The attitude towards the mobile technologies established at CBE is taken as the factor leading to the enhancement of teaching and learning through the mobile technologies. The attitude towards mobile technologies can be positive (acceptance) or negative (rejection). The following methodology was used to carry out the study.



3 Research Methodology

3.1 Research Design

The research was carried out at the College of Business Education in Dar es Salaam, Tanzania. A survey strategy was used to test and apply the extended TAM, in which a non-probability sampling method was used.

3.2 Population and Sampling Procedure

The study population comprised two sub-groups: teachers and learners. A full semester at CBE consists of 500 bachelor students across all disciplines of study (e.g. Accountancy, Marketing, ICT). CBE also has about 173 teaching staff members. Using a non-probability sampling method, a sample of 100 students and 80 teachers was obtained. The survey questionnaire was distributed to both teachers and students (n = 180) from January to June, towards the end of the third semester of the 2017/2018 academic year.

3.3 The Instrument

The researchers developed the instrument based on the objectives of the study and the literature review. Existing tested and verified survey questions from previous research were used to ensure content validity. The first part of the questionnaire consisted of questions about the students’ demographic characteristics; the second part contained questions about perceived ease of use, perceived usefulness, and attitude towards mobile technologies; the third contained questions about teachers and students’ perspectives; and the last contained questions about how mobile technologies enhance teaching and learning. A five-point Likert scale ranging from “1 = strongly disagree” to “5 = strongly agree” was applied.

3.4 Analysis of Data

The qualitative sections of the data were analyzed through thematic content analysis. The quantitative data - the average scores from the Likert scale - were analyzed using the Statistical Package for the Social Sciences (SPSS) version 23.
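The authors report computing average Likert scores in SPSS; purely as an illustration (not the authors' SPSS procedure), the same descriptive statistics could be obtained with a short script. The column names and values below are hypothetical.

```python
import pandas as pd

# Hypothetical responses: one row per respondent, items scored 1-5 on a Likert scale
responses = pd.DataFrame({
    "item1": [5, 4, 4, 3, 5],
    "item2": [3, 3, 4, 2, 3],
    "item3": [4, 5, 4, 4, 5],
})

# Mean score per item (the "average score" the study reports)
print(responses.mean())

# Percentage distribution of responses per item, as plotted in Fig. 3
for item in responses.columns:
    pct = responses[item].value_counts(normalize=True).sort_index() * 100
    print(item, pct.round(2).to_dict())
```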

4 Results

4.1 Demographics

The distribution of teachers and students was as follows: a total of 102 (56.7%) came from the ICT and Mathematics department, 12 (6.7%) from Business Administration, 14 (7.8%) from Marketing, 17 (9.4%) from Accountancy, 17 (9.4%) from Legal Metrology, and 18 (10%) from the Procurement and Supplies Management department. The information from the respondents shows that those who are accustomed to using mobile technologies in their courses are the ones who participated most effectively.


4.2 Perspectives of Teachers and Students

To answer research question 2 - How can mobile technologies be used to enhance teaching and learning in higher education institutions in Tanzania? - the results shown in Table 1 were obtained.

Table 1. Teachers and students’ views from interview data

Question: What are your general views regarding the intake and usage of mobile technologies at CBE?

Teachers’ responses:
1. Training was needed well before the introduction of mobile technologies
2. Mobile technologies and applications are not friendly
3. Teachers should have been involved from the beginning, before mobile technologies were introduced

Students’ responses:
1. The mobile technologies used here are unfriendly
2. We find it difficult to access materials from the mobile technologies
3. The system is always down when viewing examination results
4. The mobile technology systems should be well designed to allow smooth access and sharing of educational materials

Recommendation themes:
1. Requirements for mobile technologies set
2. Involvement of teaching staff and students in the requirements design of mobile technologies
3. Training and awareness sensitization for mobile technology intake

4.3 Attitude Towards Mobile Technology Use

To answer research question 1 - What are the teachers and students’ views on the usage of mobile technologies in teaching and learning at the College of Business Education in Tanzania? - results on three scale items on the Likert scale (1 = strongly disagree to 5 = strongly agree), shown in Fig. 3, were obtained. In general, teachers possessed a positive attitude towards the use of mobile technologies in teaching and learning. An interesting result was on item 2, whereby almost 30% of both teachers and students indicated they were undecided (Fig. 3).



Fig. 3. Responses from teachers and students on attitude towards mobile technology use (Item 1: “I would like to use mobile technologies for teaching/learning”; Item 2: “Using mobile technologies in teaching and learning is a pleasant experience”; Item 3: “Using mobile technologies in teaching and learning is a wise idea”).

5 Discussion

Two questions were asked in this study, namely: What are the teachers and students’ views on the usage of mobile technologies in teaching and learning at the College of Business Education in Tanzania? And how can mobile technologies be used to enhance teaching and learning in higher education institutions in Tanzania? The results have shown that, through the TAM, the attitudes of teachers and students can be obtained, revealing important information regarding technology intake. Furthermore, the results support earlier studies pertaining to strong perceived usefulness and perceived ease of use. Specifically, for teachers and students, strong perceived usefulness and perceived ease of use may illuminate their views and attitude in the context of technology use [12]. The study has also identified that mobile technology intake as an external variable fits the extended TAM (see Fig. 2), such that teachers and students’ views readily determine their attitudes towards using mobile technologies and finally influence the teaching and learning process. The findings from this study - the need for mobile technologies properly designed to suit teachers and students, for training, and for friendliness of mobile technologies - are in line with the findings of previous studies such as Fathema and Sutton [23].



The study therefore contributes to the existing knowledge of technology intake in HEIs by highlighting the importance of combining the views of both teaching staff and learners regarding technology intake. The implication is that the views of teachers and students determine how policy makers will proceed when reviewing plans for the role of technology in HEIs, particularly in teaching and learning environments. The teaching staff and the students or learners are the main participants in HEIs; therefore, the use of technology for enhancing teaching and learning should take their views and opinions into account.

6 Conclusion

The main objective of the study was to investigate the views of teachers and students on the use of mobile technology for enhancing teaching and learning at CBE. The study supports the extended TAM adapted from Davis [14], whereby teachers and students indicate their views on the ease of use of mobile technology (the effort required) in teaching and learning, and on how useful mobile technology is for teaching and learning performance. These two aspects led to attitude formation by both teachers and learners; in this case, positive attitudes were formed towards the use of mobile technologies. Our results are supported by Fathema et al. [12] in their investigation of LMS facilitation in teaching and learning using TAM. Teachers indicated that training and well-designed mobile technology systems are key factors for enhancing teaching and learning; similar results have been reported by Lwoga [1], who suggested that system quality was one of the critical factors, and by Fathema et al. [12]. The study therefore recommends that, before policy makers in HEIs embark on acquiring technology for enhancing teaching and learning, the main participants should be consulted. This will provide room to gain information on exactly which areas the technology should be applied to and the type of technology to be adopted. Furthermore, we recommend that before a new technology is adopted, awareness should be raised among the participants, that is, teachers and students, so that they can prepare themselves psychologically and be ready for the new technology adoption.

Limitations and Future Research
Only a small number of students (only bachelor III students) were available at the time the research was carried out, because of the long vacation; the small number of students involved limited the variety of information obtained concerning the use of mobile technology in teaching and learning activities. Future studies should focus on developing excellent mobile education systems for mobile learning, for greater utilization of mobile technologies. The development of mobile education systems should focus on involving both teachers and students in the requirements and design processes.



References 1. Lwoga, E.T.: Critical success factors for adoption of web-based learning management systems in Tanzania. Int. J. Educ. Dev. Using Inf. Commun. Technol. 10(1), 4–21 (2014) 2. Mtebe, J.S., Raisamo, R.: Investigating students’ behavioural intention to adopt and use mobile learning in higher education in East Africa. Int. J. Educ. Dev. Using Inf. Commun. Technol. 10(3), 4–20 (2014) 3. Mwandosya, G.I., Suero Montero, C.: Towards a mobile education tool for higher education teachers: a user requirements definition. In: Proceedings of the 2017 IEEE Science Technology and Innovation Africa Conference, Cape Town (2017) 4. Mtebe, J.S, Kondoro, A.W.: Using mobile Moodle to enhance Moodle LMS accessibility and usage at the University of Dar es Salaam. In: IST-Africa 2016 Conference Proceedings, Dar es Salaam (2016) 5. Mtega, W.P., Bernard, R., Sanare, R.: Using mobile phones for teaching and learning purposes in higher learning institutions: the case of Sokoine University of Agriculture in Tanzania. In: 5th UbuntuNet Alliance Annual Conference, Dar es Salaam (2012) 6. Ministry of Education and Vocational Training: Education and Training Policy. United Republic of Tanzania - Ministry of Education and Vocational Training, Dar es Salaam (2014) 7. Ndibalema, P.: Teachers’ attitudes towards the use of Information Communication Technology (ICT) as a pedagogical tool in secondary schools in Tanzania: the case of Kondoa District. Int. J. Educ. Res. 2(2), 1–16 (2014) 8. Ertmer, P.A., Ottenbreit-Leftwich, A.T., Sadik, O., Sendurur, E., Sendurur, P.: Teacher beliefs and technology integration practices: a critical relationship. Comput. Educ. 59, 423–435 (2012) 9. Oparaocha, G.O., Pokidko, D.H.: Educating the 21st century learners: are educators using appropriate learning models for honing skills in the mobile age? J. Entrep. Educ. 20(2), 1–15 (2017) 10. Tedre, M., Ngumbuke, F., Kemppainen, J.: Infrastructure, human capacity, and high hopes: a decade of development of e-learning in a Tanzanian HEI. Redefining Digit. Divid. High. Educ. 7(1), 1–15 (2010) 11. Virvou, M., Alepis, E.: Mobile education features in authoring tools for personalized tutoring. Comput. Educ. 44(1), 53–68 (2005) 12. Fathema, N., Shannon, D., Ross, M.: Expanding the technology acceptance (TAM) to examine faculty use of learning management systems (LMSs) in higher education institutions. J. Online Learn. Teach. 11(2), 210–232 (2015) 13. Park, S.Y., Nam, M.-W., Cha, S.: University students’ behavioural intention to use mobile learning: Evaluating the technology acceptance model. Br. J. Educ. Technol. 43(4), 592–605 (2012) 14. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–339 (1989) 15. Oye, N.D., Iahad, N.A., Rabin, Z.: A model of ICT acceptance and use for teachers in higher education institutions. Int. J. Comput. Sci. Commun. Netw. 1(1), 22–40 (2011) 16. Park, Y.: A pedagogical framework for mobile learning: Categorizing educational applications of mobile learning into four types. Int. Rev. Res. Open Distrib. Learn. 12(2), 78–102 (2011) 17. Cheng, Y.Y.: Effects of quality antecedents on e-learning acceptance. Internet Res. 22(3), 361–390 (2012)



18. Lee, M.: Explaining and predicting users’ continuance intention towards e-learning: an extension of the expectation - confirmation model. Comput. Educ. 54(2), 506–516 (2010) 19. Lin, W.-S., Wang, C.-H.: Antecedences to continued intentions of adopting e-learning system in blended learning instruction: a contigency framework based on models of information system success and task-technology fit. Comput. Educ. 58(1), 90–107 (2012) 20. Venter, P., Van Rensburg, M.J., Davis, A.: Drivers of learning management system use in a South African open and distance learning institution. Australas. J. Educ. Technol. 28(2), 183–198 (2012) 21. Venkatesh, V., Bala, H.: Technology acceptance model 3 and research agenda on interventions. J. Inf. Technol. 39, 273–315 (2008) 22. Teo, T., Ursavaş, Ö.F., Bahçekapili, E.: Efficiency of the technology acceptance model to explain pre-service teachers’ intention to use technology: a Turkish study. Campus Wide Inf. Syst. 28(2), 93–101 (2011) 23. Fathema, N., Sutton, K.: Factors influencing faculty members’ learning management systems adoption behavior: An analysis using the Technology Acceptance Model. Int. J. Trends Econ. Manag. Technol. 2(6), 20–28 (2013)

Information System of Performance Management and Reporting at University: Example of Student Scorecard

Adnane Rahhou and Mohammed Talbi

Multidisciplinary Laboratory in Sciences and Information, Communication, and Education Technology (LAPSTICE), Faculty of Sciences Ben M’Sik, Hassan II University of Casablanca, B.P. 5366, Maarif, Morocco
Observatory of Research in Didactics and University Pedagogy (ORDIPU), Faculty of Sciences Ben M’Sik, Hassan II University of Casablanca, B.P. 5366, Maarif, Morocco
[email protected], [email protected], [email protected]

Abstract. University education has become very complex, due to the management of education through information systems. So much investment should lead us to ask: with all these facilities, is the education system performing well, and how can this performance be measured? A survey of experienced teachers showed that there is no access to synthetic data giving visibility into student performance, and therefore not enough monitoring of the evolution of profiles and skills. In this context, the present work focuses on the implementation of an Information System of Performance Management and Reporting, introduced through the Student Scorecard as one of its parts, which offers the possibility to combine several key performance indicators (KPIs). This Student Scorecard can generate a performance report in order to easily identify the causes and main causes of non-performance of a student or a group of students during a specific period.

Keywords: Academic performance, Student scorecard, Scoring, Key performance indicators, Information system, Reporting

1 Introduction

Put yourself in the situation of a teacher who wants to study the performance of a student or a group of students for selection into a master program, to study the progress of students, to identify their strengths and weaknesses, to reward the top students, or to understand low marks obtained during a specific period, in specific subjects, or during a specific exam. That would require collecting the results of different subjects, different programs, different levels and different semesters, then consolidating them in a homogeneous and ergonomic file, filtering the useful information, calculating averages, classifying them and comparing them. These are many manual actions, which require time and which are not possible for everyone. In addition, the management of this performance rests on a database containing more than hundreds of thousands of marks obtained by students in exams. Hence the interest in developing and implementing an Information System of Performance Management and Reporting, without financial investment, based on a simple triple principle: data injection, automatic processing, and generation of a performance report. In this article, we present the Student Scorecard, concerning the performance management of the student, as a part of this information system. The objective is to understand the performance situation of the student and identify the causes and main causes of non-performance. Indeed, the value of the scorecard “is the capability to give the ability to know at any point in its implementation whether the strategy they have formulated is, in fact, working, and if not why” [1]. By analogy, studying performance relies on the ability to generate a simple and fast report revealing the strengths and weaknesses of the student, which represents the innovation of this work. This student scorecard, although designed for the LMD educational system (License, Master, Doctorate), remains transposable to any other educational system.
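The triple principle is only stated at the level of principle in the paper; the following is a minimal sketch of that flow under our own assumptions, with hypothetical function and field names, not the authors' system.

```python
from statistics import mean

def inject_data(raw_rows):
    """Data injection: load exam marks as plain records (student, subject, semester, mark)."""
    return [dict(zip(("student", "subject", "semester", "mark"), row)) for row in raw_rows]

def process(records, student):
    """Automatic processing: filter one student's marks and compute simple KPIs."""
    marks = [r["mark"] for r in records if r["student"] == student]
    return {"student": student, "average": mean(marks), "validated": mean(marks) >= 10}

def report(kpis):
    """Report generation: render the KPIs as a short textual performance report."""
    status = "performing" if kpis["validated"] else "non-performing"
    return f"Student {kpis['student']}: average {kpis['average']:.2f} ({status})"

rows = [("S001", "Maths", "S1", 12), ("S001", "Physics", "S1", 9), ("S002", "Maths", "S1", 15)]
print(report(process(inject_data(rows), "S001")))
```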

2 Context of Research

2.1 Measure of Performance in Education

The problem of studying performance has been raised since the 1990s, or perhaps even earlier. At that time, it was claimed that “although professors evaluate students … the entire ministerial curriculum often lacks a measure of performance of educational objectives” [2]. Today, measuring educational performance has become an even harder task for many teachers because of its dependence on computers and IT and the need for data mining skills. Indeed, “the issue of performance measurement is important in any organization under governance by information systems” [3]. Beyond the technical constraints, however, managing this performance is primarily a matter of choosing the right KPIs. In higher education, although the study of university performance started in the 1990s, the idea of introducing KPIs to measure the performance of production dates back to the 1980s [4] and has been observed in the United Kingdom, the United States, Australia and some European countries. From that time until today, this measure of performance must satisfy basic criteria, also supported by the same author, which are well known in the field of performance management in any organization: the implementation of a performance measurement system (PMS) governed by an information system, the choice of KPIs adapted to study a performance situation, and the implementation of a scorecard for the quantitative and qualitative monitoring of performance. In our case, considering the university as an organization, it is possible to transpose the same methodology: governance of student skills and profiles through an information system of performance management and reporting, definition of the KPIs adapted to study student performance, and design of a student scorecard to generate a performance report.


2.2 Reporting of Performance in Education

Although the calculation of performance depends on statistics and data mining, “a study of statistics is undoubtedly essential for making sound decisions in today’s teaching learning scenario” [5]. These decisions are the result of an analysis of the causes of non-performance of students, not by massive data processing, but rather by interpreting the content of the most significant concept that leads to actions: the report. Indeed, it is known in any organization that the measurement and analysis of performance provide relevant data for planning, revising, improving and comparing [6]. These operations represent the main actions to be carried out after the analysis of a student performance report. These operations, within a performance management system (PMS), are driven by key performance indicators (KPIs) that determine success or failure [7]. However, strengths and weaknesses can only be determined with precision after a relevant analysis of a relevant report; in other words, the relevance of this performance report depends primarily on the choice of the most relevant KPIs. Moreover, in education, the reporting process database is usually made up of evaluation scores, the main source of the performance measure. This holds insofar as “a test score is generally represented by a number which indicates the level of performance by a student”, defined as a “summary of the evidence contained in an examinee’s responses to the items of a test are related to the construct or constructs being measured” [5, 8]. Finally, all these elements, taken in order, combine the evaluation database, the performance management system, the reporting and the corrective actions to be carried out. This is the principle followed to detail the developed Student Scorecard and to highlight the importance of its reporting on student performance.

3 Research Objectives

The purpose of this research is to find out whether students are performing and thus to locate their progress, individually and collectively, through the reading and interpretation of the Student Scorecard. It would then be possible to see their skills, strengths and weaknesses, which leads us to understand the causes and main causes of non-performance and therefore to identify the likely threats that could disrupt effective learning.

4 Problem Statement

Motivated by this observation, and to express our problem clearly, we asked: what does the student learn, and how does he learn? Does he learn effectively? Does he learn easily? Which skills does he satisfy, and which ones are missing? All these questions converge towards the general problem: Is the teaching-learning process at the university performing well, and which KPIs are adequate to measure this performance?



5 Methodology

5.1 Key Performance Indicators of Scoring

As a first step, the research questions cited above allowed us to draw up a general survey questionnaire on the performance situation of teaching and learning. The analysis of the answers collected then allowed us to identify the KPIs considered most suitable, which make the report generated by the eventually designed Student Scorecard relevant. These KPIs are shown in Table 1.

Table 1. Key performance indicators of scoring.

KPIs on the dashboard of the student scorecard:
- Student’s performance profile
- Student’s skills map
- Module validation
- Student’s ranking
- Alert for the catch-up exam session
- Student’s grade
- Compensation frequency

KPIs on the detail boards of the student scorecard:
- Score average
- Frequency of access to the catch-up exam session
- Graduation duration
- Results of all years of study
- Average results for the different kinds of modules

These KPIs, incorporated in the student scorecard, make it possible to identify the strengths and weaknesses, and therefore the causes and main causes of non-performance, both in summary and in detail, and even to anticipate a probable situation of non-performance following a relevant analysis of the performance report.
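As an illustration only, the dashboard and detail-board KPIs of Table 1 could be held in a simple structure from which a report is assembled; the field names below are hypothetical and not taken from the authors' system.

```python
from dataclasses import dataclass, field

@dataclass
class StudentScorecard:
    """Hypothetical container for the KPIs listed in Table 1."""
    student_id: str
    # Dashboard KPIs (summary view)
    performance_profile: str = ""
    ranking: int = 0
    grade: float = 0.0
    catch_up_alert: bool = False
    # Detail-board KPIs
    score_average: float = 0.0
    catch_up_frequency: int = 0
    graduation_duration_years: int = 0
    yearly_results: dict = field(default_factory=dict)

card = StudentScorecard("S001", performance_profile="regular", ranking=12,
                        grade=12.4, score_average=12.4, catch_up_frequency=1,
                        yearly_results={"2016/2017": 11.8, "2017/2018": 13.0})
print(card)
```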

5.2 Diagnostic Method

In order to address our research problem, a survey was conducted (spring 2017) with 23 qualified teachers from the Faculty of Sciences Ben M’Sik of Casablanca, with an average seniority of fifteen years, as presented in Fig. 1.

Fig. 1. Seniority of teachers.

5.3 Statistical Survey, Sample and Population

The ten selected questions are listed in the survey questionnaire, presented in Table 2.



Table 2. Questionnaire survey.

Question 1: Do you have access to a general and global periodic report about student performance? (Answers and results: Fig. 2)
Question 2: What do you think of receiving a periodic performance report about learnings and teachings? (Fig. 3)
Question 3: Do you have access to data about the evolution of student performance? (Fig. 4)
Question 4: Do you have access to a description/list of overall student skills (subjects where the student excels)? (Fig. 5)
Question 5: Do you have access to the student’s ranking? (Fig. 6)
Question 6: Do you have the possibility to see the frequency of access to the catch-up exam session? (Fig. 7)
Question 7: Availability of students’ exam results of all years of study (outside the tuition office)? (Fig. 8)
Question 8: Do you have the possibility to consult the average results of the major subjects of an academic program? (Fig. 9)
Question 9: Do you have the possibility to consult the average results of the secondary subjects of an academic program? (Fig. 10)
Question 10: Do you have the possibility to consult the average results of the cultural subjects of an academic program? (Fig. 11)

The sample represents a group of more than two thousand teachers at the Faculty of Sciences Ben M’Sik of Casablanca, which represents 20% of the total population. In addition, the choice of qualified and experienced teachers reduces the statistical margin of error; indeed, the answers obtained are very close to each other.

6 Findings

The calculated statistical results are presented in the figures below. They show that between 90% and 96% of the questioned teachers have no access to an overall performance report, nor to data about the evolution of student performance (Figs. 2 and 4), the evolution and description of their acquired skills, or even their rankings (Figs. 5 and 6), although 83% of them say that a performance report about learnings and teachings would be interesting (Fig. 3).

Fig. 2. Global performance report.



Fig. 3. Learnings and teachings performance report.

Fig. 4. Student performance evolution.

Fig. 5. Student skills.

Fig. 6. Student ranking.

Fig. 7. Specific average results.



Fig. 8. Availability of exam marks.

Fig. 9. Specific average results.

In addition, regarding access to exam marks over a longer period, only 13% claim to be able to access them permanently from the faculty information system, while 87% do not report any means of access (Fig. 8). Moreover, 90% report that they cannot electronically access the averages of major, complementary or cultural subjects (Fig. 9). As for the catch-up exam sessions, 83% of the teachers questioned are in favor of setting up an alert index (Fig. 7). The consolidation of all these observations shows that it is practically impossible to obtain a measure of student performance in the current state. According to the analysis of the results, this is due to the lack of an effective process for reporting and monitoring skills and profiles that would allow a study of the causes and main causes of non-performance.

7 Research Outcomes

7.1 Operation of the Information System of Performance Management and Reporting (ISPMR)

Based on the results of the survey, and to provide a solution to this lack of an effective performance measurement system, we developed the ISPMR. One of the strengths of this information system is that it gives the user the possibility to focus on the indicators he wants to study through the Student Scorecard, so that he is able to study a very specific performance situation. The triple principle of the Student Scorecard, already mentioned above, which is the source of the student performance report, is described in Fig. 10.



Fig. 10. Triple principle of the information system of performance management and reporting.

If the information system data needs to be changed, then any addition or deletion in the names of the entities or attributes of the ISPMR (changing a module name, a subject, a coefficient, a sector, a teacher, etc.) must be done directly in the relevant writable entities.

7.2 Student Scorecard

The Student Scorecard, presented in Fig. 11, shows in its header a dashboard that displays a graphical summary view of the student’s performance, followed by the performance detail by year and by semester, and even by program cycle in the general performance view. The entire display uses representative color codes to facilitate visual identification. The use of this student scorecard can be summed up in two manual actions: enter the student’s code or surname, then click on “Performance Report” to generate a PDF report. It should be highlighted that, because of the complexity of the cardinality relationships between the entities that drive the student ranking, the selection of the program is also done manually from the ranking entity. This action is guided by a link that appears on the dashboard once the program changes, depending on the selected student. This work does not only support the relevance of the use of the educational information system; it also briefly presents (given the broad technical content of the system developed) this student scorecard [9], which automatically generates a performance report, thus making it easier to read and interpret the performance data of a selected student.
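As an illustration of the two manual actions described above (select a student, then generate the report), a lookup could work as sketched below; this is our own sketch with hypothetical names, not the authors' implementation, which produces a PDF report.

```python
# Hypothetical in-memory marks table: (student_code, surname, module, semester, mark)
MARKS = [
    ("S001", "Alami", "Algebra", "S1", 14.0),
    ("S001", "Alami", "Analysis", "S1", 8.5),
    ("S002", "Bennis", "Algebra", "S1", 11.0),
]

def performance_report(code_or_surname):
    """Filter the marks of the selected student and summarise them as a report."""
    rows = [r for r in MARKS if code_or_surname in (r[0], r[1])]
    if not rows:
        return f"No student found for '{code_or_surname}'"
    average = sum(r[4] for r in rows) / len(rows)
    weakest = min(rows, key=lambda r: r[4])
    return (f"Report for {rows[0][1]} ({rows[0][0]}): average {average:.2f}, "
            f"weakest module {weakest[2]} ({weakest[4]})")

print(performance_report("S001"))   # lookup by student code
print(performance_report("Bennis")) # lookup by surname
```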



Fig. 11. Student scorecard.

8 Future Scope

In order to make this performance management and reporting system a complete tool, and in parallel with the student scorecard, a general reporting process is also being designed, allowing the study of larger entities and different situations, such as a sector crossed with the year, the module, the subject, or even the whole curriculum. The data entry form is also being continuously upgraded to be more autonomous. Moreover, the implementation of this information system, especially the student scorecard, as a faster and more ergonomic software program is also envisaged. The source code could then be reused in business intelligence programs specialized in reporting, which would be a convenient and recommended solution.



9 Conclusion

The hyper-technology that universities are experiencing today makes performance management an unavoidable issue. Indeed, the governance of education through information systems makes processes easy, but performance monitoring is weak because of the lack of relevant reporting. Researchers in the field of education should be more interested in performance management systems; in other words, many pedagogical problems could be solved by analysing student performance data, particularly when the university experiences a persistent deviation of results from the learning objectives whose main causes have not been identified.

References 1. Kaplan, R.S., Norton, D.P.: The Balanced Scorecard: Translating Strategy Into Action. Harvard Business Press, Boston (1996) 2. Massy, W.F., Zemsky, R.: Using IT to Enhance Academic Productivity. Interuniversity Communications Council, Inc. (1995). http://www.educause.edu/ir/library/html/nli0004.html 3. Epstein, M.J., Manzoni, J.F.: Performance Measurement and Management Control: Improving Organizations and Society. Studies in Management and Financial Accounting. E-Publishing Inc., New York (2006) 4. Evans, J.R., Lindsay, W.M.: Managing for Quality and Performance Excellence. Cengage Learning, Boston (2016) 5. Mohan, R.: Measurement, Evaluation and Assessment in Education. PHI Learning Pvt. Ltd., Delhi (2016) 6. Preuss, P.G.: Data-Based Decision Making and Dynamic Planning: A School Leader’s Guide. Eye On Education, New York (2007) 7. Osman, I.H.: Handbook of Research on Strategic Performance Management and Measurement Using Data Envelopment Analysis. IGI Global, Hershey (2013) 8. Thissen, D., Wainer, H.: Test Scoring. Erlbaum, Mahwah (2001) 9. Rahhou, A.: Information System of Performance Management and Reporting (2018). https://drive.google.com/open?id=1cRCRZypEZQyJSJ997WDAad47ChOx4hxQ

Mobile Learning Oriented Towards Learner Stimulation and Engagement

Samir Achahod, Khalifa Mansouri, and Franck Poirier

Lab-STICC, University Bretagne Sud, Vannes, France
Laboratory SSDIA, ENSET of Mohammedia, University Hassan II of Casablanca, Casablanca, Morocco
[email protected], [email protected], [email protected]

Abstract. Mobile terminals have become an integral part of our lives; we rely on these devices in almost all our daily actions. Although these terminals offer innovative opportunities, especially in terms of teaching practices, their adoption within universities still presents a great challenge. This paper aims to initiate reflection on the positive impact and limitations of the use and integration of mobile devices as a pedagogical practice in order to improve the quality of e-learning. It describes approaches to determine the place of usability, motivation and user experience (UX), as well as strategies to be followed to properly integrate mobile learning with the aim of fostering engagement and fighting dropout.

Keywords: Mobile learning, Motivation, User experience, Engagement factors

1 Introduction

Mobile has become the number one screen in the world. According to UNESCO, nearly 6 billion people have a connected mobile device. In Morocco, almost all students have a smartphone; it is a device that could be useful in the techno-pedagogical process of training. The benefits of e-learning for learners are diverse: easier access to information, quality content, training at the learner’s pace, reduced cost over time, and learner empowerment. On the other hand, e-learning raises several problems: the high rate of demotivation and drop-out, the low level of interaction between learners and teachers, the lack of personalization of content to the learner’s profile and the low success rate. Many studies have shown that students are not adequately encouraged to use the platforms that support educational activities, that their engagement is rather low and that the activities and resources offered are neither numerous nor diversified enough [1, 10].




2 Mobile Learning

Mobile learning (m-learning, or p-learning for pervasive learning) is an educational approach adapted to mobile usage. It promotes learning because the student can choose the tool he knows and masters [2]; it supports interoperability and portability of content; it allows continuous access to knowledge using personal electronic devices [3]; and it allows learning through social interactions and interactions with content using mobile devices, anywhere and at any time [2], by linking different factors such as time, location, learning environment, content, technology, learner profile and the pedagogy used [4]. Despite all the advantages mentioned above, the adoption of mobile equipment remains a major problem within universities due to certain factors [2]: the need for autonomy, the need for competence, the need for affiliation, lack of equity between students, more complex class management, additional workload, the price of material resources, etc.

3 Approaches to Better Integrate Mobile Devices in E-Learning

The parameters mentioned above explain the non-use of mobile devices in the educational environment. As a result, we propose learner motivation and engagement approaches that can be used as a framework for the adoption of mobile devices in all its dimensions, to achieve greater teaching effectiveness.

3.1 Motivation

Edward Deci and Richard Ryan’s research shows that motivation is based on three basic needs: autonomy, competence and social relationships [5]. Indeed, associating mobile devices with a learning platform increases students’ intrinsic motivation [5].

3.2 The User Experience

User experience refers to any interaction, course, program or other experience during which learning takes place; it can also be used to highlight or reinforce the purpose of an educational interaction [6].

3.3 Engagement Factors

Engagement, according to O’Brien and Toms [7], is a quality of user experience (UX) that depends on several factors such as aesthetic appeal (stimulation), innovation, usability of the system, ability to get involved, overall evaluation, etc. To measure user engagement with a mobile application, three main factors are identified from the O’Brien and Toms model [8] (Fig. 1).

Fig. 1. Engagement process



• User engagement and experience (UX): according to Donald Norman, engagement is related to human psychology.
• Engagement and motivation: following R. Ryan and E. Deci, motivation determines the user’s commitment to a technology.
• Engagement and usability: according to Nielsen, usability is described by five factors: ease of learning, error prevention, user satisfaction, ease of recall and overall effectiveness.

3.4 Metacognitive Skills of Self-Regulation

Metacognition refers to our own knowledge about our cognitive products and processes and their regulation. Metacognitive skills of self-regulation refer to the learner’s ability to define his objectives, to plan strategies to be implemented to achieve these objectives, to monitor the implementation of these strategies and to evaluate and adapt these strategies during the learning process [1]. Several authors have shown the positive influence of metacognitive and self-regulatory skills on learning [6]. After a literature review on the issue, we identified a number of strategies used to support metacognition that can have a positive impact on learners’ motivation and engagement:
• Metacognitive notification systems, to allow the learner to reflect on his thoughts and activity [1, 6].
• Metacognitive processes for good time management [1].
• A learner dashboard, to inform the learner about the status of his actions and interactions [1].
• The learner’s open model, which gives access to a set of indicators relating to the learner’s skill level through his activity traces [9].
• A personalized and adaptable environment (adaptive learning) [10].
• The development of learning activities, for its impact on both autonomy and competence needs [10].
• Personalized coaching, to help learners better organize their progress and evaluate their learning [11].
• A virtual companion agent for training follow-up, whose function is to increase commitment [11].
• The personalized recommendation of free external resources [11].


3.5 Information and Communication Technology Integration Models

These are theoretical models that can serve as a blueprint for the ICT integration process, to help teachers in their reflection on the adoption of digital technology in courses and to obtain a real pedagogical benefit. They describe the different practices to be followed and the tasks to be accomplished in order to carry the process through and to better place technology in the pedagogical activity. These include the SAMR model, the ASPID model, the continuum of approaches and the TPACK model [12] (Fig. 2).

Fig. 2. The TPACK model (by Cox)

4 Conclusion and Perspectives

Mobile learning is a topical subject, yet various factors can constitute a barrier to the integration of mobile devices in an educational context. This research work aims to identify strategies facilitating the use of educational platforms on mobile devices. To this end, several approaches have been described with the aim of achieving greater pedagogical effectiveness and meeting all the criteria influencing the implementation of these devices in teaching. Our next step will be to stimulate reflection on identifying the factors influencing learners’ engagement and motivation to use mobile devices, and the relationships between these factors.

References 1. Sambe, G.: Vers un apprentissage autoregule dans les MOOC (2016) 2. Fievez, A.: Les dossiers Carrefour éducation – École branchée Le BYOD : entre perspectives et réalités pédagogiques par (2015) 3. Burns-sardone, N.: Making the case for BYOD instruction in teacher education. Issues Informing Sci. Inf. Technol. 11, 191–201 (2014) 4. Sofi, A., Laafou, M., Mahdi, K., Janati-idriss, R.: La technologie mobile au service de l’enseignement et l’apprentissage : le cas de l’ENS Tétouan La technologie mobile au service de l’enseignement et l’apprentissage : le cas de l’ ENS Tétouan (2017) 5. L’apprentissage mobile et la réussite des élèves Qu’est-ce que l’ apprentissage mobile ? 6. Azevedo, R.: International Handbook of Metacognition and Learning Technologies, vol. 28 (2013) 7. O’Brien, H.L., Toms, E.G.: Examining the generalizability of the user engagement scale (UES) in exploratory search. Inf. Process. Manag. 49, 1092–1107 (2013)



8. Isabelle, T., Guillaume, G., Isabelle, T., Guillaume, G., De, I.: Impact de l’utilisabilité, la motivation et l’expérience utilisateur sur l’engagement à utiliser une application mobile (2014) 9. Girard, S., Johnson, H., Girard, S., Johnson, H.: DividingQuest : opening the learner model to teachers (2007) 10. Karsenti, T.: Trois stratégies pour favoriser l’engagement des participants à un MOOC (2017) 11. Bourda, Y., Popineau, F.: Trois pistes de personnalisation (2011) 12. Eric, V., Par, T., Vekout, E., Secondaire, E.: Quelques modèles d’intégration des TICE, pp. 1–6 (2013)

A Learning Style Identification Approach in Adaptive E-Learning System

Hanaa El Fazazi, Abderrazzak Samadi, Mohamed Qbadou, Khalifa Mansouri, and Mouhcine Elgarej

Laboratory Signals, Distributed Systems and Artificial Intelligence, ENSET, University Hassan II, Mohammedia, Morocco [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. Adaptive e-learning systems are considered one of the interesting research areas in technology-based learning strategies. The main goal of these systems is to offer learners a personal and unique learning experience based on their preferences, needs, educational background, learning style, etc. The objective of this research is to identify the learning style of the learner. The identification is based on web log mining data which contain the learning behavior of the learner; the learning styles are then mapped to the Felder-Silverman Learning Style Model categories using the Fuzzy C-Means algorithm. The learning style can change over a period of time, therefore the system has to adapt to the changes. For this, an Artificial Neural Network algorithm is used to predict the learning style of a learner.

Keywords: Adaptive e-learning system, Learning style, Felder-Silverman Learning Style Model, Fuzzy C-Means Algorithm, Artificial Neural Network Algorithm

1 Introduction

Adaptive e-learning systems integrate learner characteristics such as knowledge level, learning style, and skills to provide personalized learning and to recommend relevant educational material [1, 2]. It is also important to understand the different characteristics of learners in order to meet their requirements; among these, learner knowledge and learning style are identified as important factors in learning [3, 4]. The learners’ requirements can be obtained through their usage patterns, and these patterns differ among learners who attend the same course. Even though each learner has specific needs, a different level of knowledge and a different learning style, e-learning systems provide the same resources to all learners. Course sequencing is a technology that started in the area of Adaptive Learning Systems with the essential aim of providing a learner with a suitable sequence of information to learn and a sequence of learning tasks to work with. This sequence of learning tasks can be defined based on the learning behavior of the learner captured through the e-learning portal using Web Usage Mining. This process helps to find out what users are looking for on the learning portal [5]. It uses the web logs to extract potential patterns of learning styles. These patterns are used to analyze the learner’s navigation behavior together with the efficiency of the interface components, which helps in adapting the resource delivery. The captured learning behavior can be mapped to learning styles by using a suitable learning style model. The Felder-Silverman Learning Style Model (FSLSM) is the most popular learning style model available and indicates various categories of learners. The FSLSM defines four dimensions and eight categories of learners, as shown in Fig. 1.

Fig. 1. Dimensions and categories of FSLSM

Depending on the learning style model, the adaptive e-learning system should use a mechanism to identify the learning styles of a new learner. Therefore, a prediction model must be developed to classify a new learner so that personalized contents and interface can be provided to the learner on the e-learning portal. In this paper, the FSLSM learning style model is used to identify the learning styles. After identification, Fuzzy C-Means clustering is applied to map the learning style categories of FSLSM to the captured data. The mapped data is then used to predict the learning styles using an Artificial Neural Network classification technique [5]. This paper is structured as follows. Section 2 describes related works on automatic approaches used to identify learning style. Section 3 presents an overview of the proposed system. Section 4 describes the methodology used for clustering and classification. Section 5 presents the results analysis. Finally, Sect. 6 concludes this work and suggests future works.

2 Related Works

Various studies on adaptive e-learning systems based on students’ learning styles have been conducted in past years. Every student has different preferences, needs, knowledge and experience, and there are numerous techniques that have been used to identify and classify learning styles with various learning style models. Graf et al. [6] propose an automatic student modeling approach for identifying learning styles based on the learning style model. The proposed approach uses the behavior of students while they are learning in an online course in order to gather information about their learning styles; by applying a simple rule-based mechanism, learning styles are calculated from the gathered indications. Tieu Binh et al. [3] propose a multilayer perceptron approach to predict academic results based on learning styles. To measure the relationship between learning styles and learner performance in a subject or the entire course, the authors conducted an online survey with the participation of students in various courses to analyze and show the effects of different learning styles on students’ performance. This test was based on the Felder-Soloman questionnaire. In addition, they built an artificial neural network to predict academic performance based on students’ learning style. Feldman et al. [7] described the process of automatic detection of learning styles and analyzed the components that play a role in this process. They also presented the analysis results of different techniques and discussed some limitations, implications and research gaps. The authors described different detection techniques, principally under the classification mechanisms of data mining, and proposed a user model for the education system that is used to detect the learning styles. Abdullah et al. [1] proposed a new approach to classify students dynamically depending on their learning style. This approach was tested on 35 students in a Data Structures online course created using Moodle; the learning style of each student is identified according to FSLSM by extracting the learner’s behavior and data from the Moodle log. Hogo [2] presented the use of different fuzzy clustering techniques such as Fuzzy C-Means (FCM) and kernelized FCM to find the learners’ categories and predict their profiles; fuzzy clustering reflects the learner’s behavior better than crisp clustering. The author mentions three different types of learners, namely bad, worker and regular students, without considering any standard learning style model, and the usage data of students is captured through the questionnaire approach of FSLSM without using real-time usage data of learners. Kolekar et al. [8] proposed a methodology to extract learning patterns of the user to develop an adaptive user interface based on Web Usage Mining (WUM) and an Artificial Neural Network (ANN). The WUM is used as an automated mechanism for style recognition, facilitating the gathering of information about learning preferences and making it imperceptible to students, while the ANN algorithm uses the recent history of system usage so that systems using this approach can recognize changes in learning styles, or some of their dimensions, over time.

3 Overview of the Proposed System

To build an adaptive e-learning system based on learning style, the learner behavior has to be captured [2]. We used data from the log file which captures usage details of learners who are accessing the e-learning portal. The learners can access different topics in the portal in different file formats (video, text or a demo); they can also go through exercise modules. The e-learning portal is a combination of learning components (“pages”) and learning contents (“files”), which are considered learning objects. To map learning objects onto FSLSM categories, all learning objects need to be labeled; the proposed mapping is shown in Table 1. The learners access many web pages on the e-learning portal, and each web page has different learning objects such as text, PPTs, videos, an index of the sequential contents, diagrams, video links, etc. The information on the web pages is defined based on the learning objects that can be mapped to FSLSM’s categories of learners. The e-learning system captures the files and the pages accessed by each learner, and these details are used to identify the learning styles of learners.

Table 1. Mapping of learning objects on FSLSM’s categories (category: learning objects)

Sensing: Videos, Examples
Intuitive: PPTs, List, PDFs
Sequential: References, Exercise
Global: References, Assignment, Topic List
Visual: Charts, Images
Verbal: Email, Announcements, PDFs
Active: Demo, PPTs, Videos
Reflective: PDFs, Announcements, References
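A minimal sketch of how the Table 1 mapping might be held in code, for illustration only; the dictionary below simply restates the table and is not taken from the authors' implementation.

```python
# Table 1 as a lookup from FSLSM category to the learning-object types that indicate it
FSLSM_MAPPING = {
    "Sensing":    ["Videos", "Examples"],
    "Intuitive":  ["PPTs", "List", "PDFs"],
    "Sequential": ["References", "Exercise"],
    "Global":     ["References", "Assignment", "Topic List"],
    "Visual":     ["Charts", "Images"],
    "Verbal":     ["Email", "Announcements", "PDFs"],
    "Active":     ["Demo", "PPTs", "Videos"],
    "Reflective": ["PDFs", "Announcements", "References"],
}

def categories_for(learning_object):
    """Return every FSLSM category that a given learning object points to."""
    return [cat for cat, objects in FSLSM_MAPPING.items() if learning_object in objects]

print(categories_for("PDFs"))  # ['Intuitive', 'Verbal', 'Reflective']
```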

4 Methodology

To build the proposed e-learning system we follow two important steps: first, we identify the learning style using the captured data (sequences), and then we modify the e-learning portal based on the identified learning styles. The learning styles of learners are identified based on defined parameter values of FSLSM, grouped using a clustering algorithm, and then the new learner’s behavior is classified using a classification algorithm [2, 5, 9]. To implement these algorithms, we converted the captured data in XML format into learning sequences; the sequence file gathers the information about each sequence, which contains the sequence id, learner id, session id, the page and file ids, and the time spent on each of them. To identify the learning styles of the learner we propose to use the Fuzzy C-Means (FCM) algorithm to label the sequences into FSLSM categories, because the identified sequences are not yet mapped to learning objects of FSLSM’s categories [10, 11]; to classify the new sequence of any learner into one of the eight defined categories of FSLSM, we use the Artificial Neural Network (ANN) algorithm [8, 12]. The approach of labeling the sequences and classifying a new sequence is shown in Fig. 2.


Fig. 2. Learning style identification approach (web page characteristics and FSLSM feature values are fed to the Fuzzy C-Means algorithm, which groups similar sequences into the eight FSLSM clusters; a new sequence is then assigned an FSLSM category by the ANN algorithm).
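The paper states that the captured XML data is converted into learning sequences carrying a sequence id, learner id, session id, page and file ids, and the time spent; a minimal sketch of such a record, with hypothetical tag and field names, is shown below.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML export of one captured learning sequence
XML = """
<sequence id="seq-042" learner="L017" session="s-9">
  <access page="p-algorithms" file="intro.ppt" time_spent="120"/>
  <access page="p-algorithms" file="sorting.mp4" time_spent="340"/>
</sequence>
"""

root = ET.fromstring(XML.strip())
sequence = {
    "sequence_id": root.get("id"),
    "learner_id": root.get("learner"),
    "session_id": root.get("session"),
    "accesses": [
        {"page": a.get("page"), "file": a.get("file"), "time_spent": int(a.get("time_spent"))}
        for a in root.findall("access")
    ],
}
print(sequence)
```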

5 Results

To test the proposed approach, we used web log data with 1000 learning sequences. The sequences are pre-processed and labeled according to the eight categories of FSLSM using the FCM clustering algorithm. The FSLSM-mapped learning objects are considered as feature values for labeling. The characteristics of a web page are also identified based on the feature values of FSLSM; these characteristics are used to understand the learning styles of the learner who is accessing a specific web page. Around 1000 sequences are grouped into the eight clusters of FSLSM. After clustering the existing sequences, the classifier is trained using the ANN algorithm to identify the FSLSM category of the new learner. The parameters of each sequence are assigned to the input nodes Y1 to Y5; the parameters used for prediction are Filename, Type of file, Time spent on file, Frequency of file per page and FSLSM Category. The eight categories of FSLSM are considered as the output nodes of the neural network. The number of nodes for the hidden layer is calculated as shown in Eq. (1), where the final value is truncated to identify the number of nodes [16].

Nodes = (Input Nodes * Output Nodes * 2) / 3    (1)
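As a check of Eq. (1) with the values given in the text (five input parameters Y1 to Y5 and eight FSLSM output categories), the truncated hidden-layer size works out as follows; this is our own worked example, not a figure stated in the paper.

```python
input_nodes = 5    # Y1..Y5: filename, file type, time spent, frequency per page, FSLSM category
output_nodes = 8   # eight FSLSM categories

hidden_nodes = (input_nodes * output_nodes * 2) / 3   # Eq. (1)
print(hidden_nodes)        # 26.666...
print(int(hidden_nodes))   # truncated -> 26 hidden nodes
```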

The 1000 sequences are clustered using the FCM algorithm and labeled with the FSLSM categories of learners. Center values for each cluster are calculated based on the feature values of the learning objects mapped to FSLSM categories. The result of the clustering is shown in Table 2.


Table 2. Results of FCM algorithm (cluster: number of sequences)

Intuitive: 125
Sensing: 109
Sequential: 124
Global: 119
Visual: 139
Verbal: 131
Active: 146
Reflective: 134
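The paper does not include code for the FCM step; below is a minimal, self-contained sketch of fuzzy c-means over sequence feature vectors, written by us under the assumption that each sequence is already encoded as a numeric vector. A sequence is attached to every cluster whose membership exceeds a threshold, which is one way to reproduce the multi-cluster assignments discussed next.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=8, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means; X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Assumed encoding: each of the 1000 sequences becomes an 8-dimensional feature vector
X = np.random.default_rng(1).random((1000, 8))
centers, memberships = fuzzy_c_means(X)
labels = [np.where(row > 0.2)[0] for row in memberships]   # clusters with membership > 0.2
```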

Since some of the sequences belong to more than one cluster, the total number of sequences across clusters is greater than the total number of sequences in the data set. This is because one learner can belong to more than one category in the FSLSM spectrum; for instance, a learner can be active and at the same time interested in different types of visual content on the portal to understand a specific topic [10]. The ANN algorithm is used to classify a new sequence into one of the eight categories of FSLSM. Once the model is trained on the sequences labeled by FCM, a new sequence can be classified and the learning styles of the learner identified. The algorithm was executed for various numbers of iterations (IT). According to the values given in Table 3, the accuracy of the ANN classification algorithm is 79.11% for 100 iterations. As the number of iterations increases, the accuracy also increases, but the algorithm takes more time to execute.

Table 3. ANN classification results

IT = 60: A 77.15, P 78.40, R 81.45, F1 79.20
IT = 100: A 79.11, P 80.90, R 85.12, F1 81.14
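The paper does not specify the ANN implementation; one hedged way to reproduce the setup described above (five input parameters, a truncated hidden layer per Eq. (1), eight FSLSM output classes) is a standard multilayer perceptron, for example with scikit-learn. The feature encoding and data below are placeholders of ours, not the authors' data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 5))           # placeholder: 5 encoded sequence parameters
y = rng.integers(0, 8, size=1000)   # placeholder: FCM-assigned FSLSM category (0..7)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# One hidden layer of 26 nodes, as computed from Eq. (1); max_iter is taken here as the IT count
clf = MLPClassifier(hidden_layer_sizes=(26,), max_iter=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```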

The suggested system is based especially on two important aspects of adaptive e-learning systems: a dynamic way of identifying the learning styles by capturing the web logs in real time, and the use of a learning style model to characterize the learner. In adaptive e-learning systems, the identification of the learning styles is essential for adapting the contents and user interface. In this situation, the system should be able to identify the learning styles and adapt the learning models according to the learner’s needs. The learning styles are identified based on the standard FSLSM learning style model, which has eight categories of learning styles. Any specific learner initially falls into one of the eight categories of the FSLSM, and the category changes over time; these dynamically changing learning styles are taken into account in order to provide an adaptive user interface and contents. Some of the existing systems presented in the literature review are static in nature, where the learning styles are captured once and the contents are provided accordingly. Other systems classify the learners into general categories such as beginner, advanced, expert and so on; such systems do not focus on learning capabilities. The proposed system gives the learning styles of the learner during the learning process.

6 Conclusion

Adaptive e-learning systems are a promising research area for improving the efficiency of online courses. A fundamental element in this area is to identify the learning styles of the learners using a standard learning style model, which can then be used to adapt the learning experiences. In this paper, we presented an approach to automatically detect and identify the learning style of learners using web log analysis: the data are converted into XML format to identify the unique sequences of each learner during a specific session. After this identification, the Fuzzy C-Means (FCM) algorithm is used to map the sequences to the eight categories of the Felder-Silverman Learning Style Model (FSLSM), defined as Visual, Verbal, Active, Reflective, Sequential, Global, Sensing and Intuitive. Some sequences are assigned multiple labels according to the feature values of the FSLSM learning objects. After the mapping process, the Artificial Neural Network algorithm is used to classify the new sequence of a learner's session into one of the eight classes of FSLSM. In future work, we will attempt to provide adaptive contents and an adaptive interface to a new learner based on his or her own learning style.

References 1. Truong, H.M.: Integrating learning styles and adaptive e-learning system: current developments, problems and opportunities. Comput. Hum. Behav. 55, 1193 (2016) 2. Alshammari, M.T.: Adaptation Based on Learning Style and Knowledge Level in E-learning Systems (2016) 3. Binh, H.T., Duy, B.T.: Predicting students’ performance based on learning style by using artificial neural networks. In: IEEE International Conference on Knowledge and Systems Engineering (KSE) (2017) 4. Felder, R.M., Silverman, L.K.: Learning and teaching styles in engineering education. Eng. Educ. 78, 674–681 (1988) 5. Abdullah, M.A.: Learning style classification based on student’s behavior in moodle learning management system. Trans. Mach. Learn. Artif. Intell. 3(1), 28 (2015) 6. Graf, S., Kinshuk, Liu, T.-C.: Identifying learning styles in learning management systems by using indications from students’ behaviour. In: IEEE International Conference on Advanced Learning Technologies (ICALT 2008), pp. 482–486 (2008) 7. Feldman, J., Monteserin, A., Amandi, A.: Automatic detection of learning styles: state of the art. Artif. Intell. Rev. 44(2), 157–186 (2015) 8. Kolekar, S.V., Sanjeevi, S.G., Bormane, D.S.: Learning style recognition using artificial neural network for adaptive user interface in e-learning. In: IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pp 1–5 (2010) 9. Radwan, N.: An adaptive learning management system based on learner’s learning style. Int. Arab J. E-Technol. 3, 7 (2014) 10. Agbonifo, O.C.: Fuzzy c-means clustering model for identification of students’ learning preferences in online environment. Int. J. Comput. Appl. Inform. Technol. 4(1), 15–21 (2013)


11. Hogo, M.A.: Evaluation of e-learners behaviour using different fuzzy clustering models: a comparative study. Int. J. Comput. Sci. E-educ. 7(2), 131–140 (2010) 12. Kolekar, S.V., Sanjeevi, S.G., Bormane, D.S.: Learning style recognition using artificial neural network for adaptive user interface in E-learning. In: IEEE International Conference on Computational Intelligence and Computing Research (2010)

Towards Smart Innovation for Information Systems and Technology Students: Modelling Motivation, Metacognition and Affective Aspects of Learning

James Ngugi and Leila Goosen(&)

University of South Africa, Pretoria 0003, Gauteng, South Africa
[email protected]

Abstract. Literature has identified multiple factors which promote Innovative Behavior (IB) among employees. The problem is that facilitating the development of IB among undergraduate Information Technology (IT) learners at Higher Education Institutions (HEIs) is, however, not well understood. The proposed solution addresses this literature gap through an examination of how motivation, metacognition and affective aspects of learning, as components of Cognitive and Metacognitive Strategies (CMSs), act as antecedents of IB via the action of Knowledge Sharing Behavior (KSB). Models of teaching and learning, as well as aspects related to motivational diagnosis and feedback that promote metacognition, motivation and affect, were considered. The research employed a quantitative cross-sectional survey, with the subjects being 268 learners enrolled in IT programs from seven Kenyan public HEIs. Data were collected using a questionnaire, and a 2,000-bootstrap sample was generated to test standardized total, direct and indirect effects. Major findings are summated in a structural equation model for learners in an educational context, which largely supported all hypotheses. Findings also revealed that CMSs acted as a significant driver of KSB and IB among undergraduate IT learners. The conclusions include recommendations which enable HEI managers to leverage attributes of IB antecedents, including tasks and problem-solving processes, in learning contexts.

Keywords: Innovation · Information systems and technology students · Motivation · Metacognition · Affective aspects of learning

1 Introduction

Al-Husseini and Elbeltagi [1] view universities as manufacturers and suppliers of innovation, with a responsibility to design novel products for use by the larger society. Research has identified several individual and contextual level factors that promote Innovative Behavior (IB) among employees. However, the effect of Cognitive and Metacognitive Strategies (CMSs), Course Design Characteristics (CDCs) and Knowledge Sharing Behavior (KSB) in facilitating the development of innovative behavior among Information Technology (IT) learners at Higher Education Institutions (HEIs) is not well understood. Hence, the purpose of the study was to develop a novel structural equation model of the individual and contextual drivers of innovative behavior among IT learners.
The remainder of the article is structured as follows. The literature review in the next section provides various perspectives on CMSs, CDCs, KSB and IB. The methodology section explains that the subjects of the cross-sectional, quantitative, explanatory survey were 248 undergraduate learners enrolled in IT programs from seven public higher education institutions in Kenya. Data collection was with the aid of a questionnaire. A 2,000-bootstrap sample was generated to test the standardized total, direct and indirect effects. The findings were summated in a knowledge sharing-innovative behavior structural equation model, with the aid of Analysis of Moment Structures software. The findings largely supported all hypotheses. The findings also lent support to the positive effect of course design characteristics in fostering IT learners' innovative behavior. The findings suggested a significant indirect relationship between cognitive and metacognitive strategies and innovative behavior, which is fully mediated by knowledge sharing behavior. Secondly, the indirect relationship between course design characteristics and innovative behavior was significantly and fully mediated by knowledge sharing behavior. The findings of the study additionally revealed that both course design characteristics and cognitive and metacognitive strategies acted as significant drivers of knowledge sharing and innovative behavior among undergraduate IT learners. In conclusion, this article provides key recommendations for managers at higher education institutions on how to leverage the attributes of cognitive and metacognitive strategies and course design characteristics at individual and contextual level to trigger innovative behavior. Additional information regarding the study reported in this article is available in Ngugi and Goosen [2].

2 Review of the Literature

2.1 Self-regulated Learning

In an abductive study of the dark side of agile software development, Annosi, Magnusson, Martini and Appio [3] found empirical evidence to suggest that self-regulated teams' social conduct influences their resulting learning and innovation.

2.2 Course Design Characteristics

Morgeson and Humphrey [4] developed and validated the Work Design Questionnaire (WDQ) as a comprehensive measure for assessing job design and the nature of work.

2.3 Knowledge Sharing Behavior

In a context relatively close to the study reported on in this article, Hart [5] looked at how digital equity and workplace learning influenced acceptance of a knowledge sharing behavior technology in the higher education workplace. Stenius, Hankonen, Haukkala and Ravaja [6] tried to understand knowledge sharing behavior in the work context by applying a belief elicitation study, while Gross and Kluge [7] provided an example from the steel industry, of predictors of knowledge-sharing behavior for teams in extreme environments. Knowledge sharing behavior has been associated with innovative behavior in literature, for example, when Edú-Valsania, Moriano and Molero [8] investigated authentic leadership and employee knowledge sharing behavior in terms of the mediation of the innovation climate and workgroup identification, as well as with SRL [3, 9].

2.4 Innovative Behavior

Numerous studies draw attention to the individual and contextual conditions under which employees are likely to be innovative, such as the one by Gomes, Curral, Caetano and Marques-Quinteiro [9]. Gurtner and Reinhardt [10] view the antecedents and outcomes of ambidextrous idea generation as the starting point for incremental and radical innovations.

3 Methodology

3.1 Research Context

In 2016, technology start-ups in Kenya raised funding of more than US$43 million, out of the US$129 million raised in Africa [11], suggesting a heavy leverage on technological innovation in Kenya. The founder of Facebook, Zuckerberg, also toured Kenya to meet young IS and technology entrepreneurs and share experiences on the mobile money transfer system [12]. According to the Commission for University Education [13], Kenya had 71 universities (23 public and 48 private); the latter system characteristic echoes that reported by Al-Husseini and Elbeltagi [1], who provided a comparison study between public and private university education in Iraq, in terms of transformational leadership and innovation. Out of an undergraduate student enrolment of 475,750 in Kenya in 2016, 20,925 students undertook IS or IT related course programs. The gender split was 16,265 (77.7%) male and 4,660 (22.3%) female, which indicated that there were more male students undertaking such courses.

3.2 Instruments Used in the Study

Measurement of Cognitive and Metacognitive Strategies. The original Motivated Strategies for Learning Questionnaire (MSLQ) has two broad listings of scales: those related to motivation and learning strategies, respectively [14]. The motivation scales consist of value, expectancy and affective components. Some items in the motivation scales were deemed by the researcher to be closely correlated with the task characteristics subscale of the CDC scale. For instance, task value had items, such as "I think the course material in this class is useful for me to learn", which closely relates to the task characteristics subscale of the present research. Other components of the motivation scale were also not used, as these were not aligned to the desired measures. Hence, only the learning strategies scale was applicable to the present study.
Learning Strategies Scales. The learning strategies scales had two main components, namely resource management and cognitive and metacognitive strategies. This study adopted the cognitive and metacognitive learning strategies component of the MSLQ. Informed by Pintrich, Smith, Garcia and McKeachie [14, p. v], the cognitive and metacognitive learning strategies component has 31 items, as follows:

(1) Rehearsal (4 items),
(2) Elaboration (6 items),
(3) Organization (4 items),
(4) Critical thinking (5 items), and
(5) Metacognitive self-regulation (12 items).

Measurement of cognitive and metacognitive strategies was thus with the aid of a version of the Pintrich [15] MSLQ. Learners were asked to rate their responses on a 7-point Likert scale that ranged from 1 (not true at all) to 7 (very true of me). A higher score indicated a higher level of CMSs. The reliability scores for the subconstructs were as indicated (see Fig. 1).
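Scoring such a scale is straightforward; a minimal sketch is shown below. The column names are hypothetical, and the reverse-coding rule (8 minus the response on a 7-point scale, applied to the items marked REVERSED in Table 1) is the usual MSLQ convention, stated here as an assumption rather than taken from the article:

import pandas as pd

# responses: one row per learner, columns "q1".."q31" with values 1..7 (hypothetical names/data)
responses = pd.DataFrame({f"q{i}": [4, 5, 6] for i in range(1, 32)})

REVERSED = [2, 17]                                  # items marked (REVERSED) in Table 1
for i in REVERSED:
    responses[f"q{i}"] = 8 - responses[f"q{i}"]     # reverse-code on the 7-point scale

SUBSCALES = {
    "rehearsal":     [5, 9, 18, 27],
    "elaboration":   [13, 20, 22, 24, 25, 31],
    "organization":  [1, 7, 11, 21],
    "critical":      [4, 10, 12, 23, 26],
    "metacognition": [2, 3, 6, 8, 14, 15, 16, 17, 19, 28, 29, 30],
}
scores = {name: responses[[f"q{i}" for i in items]].mean(axis=1)
          for name, items in SUBSCALES.items()}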

[Figure 1 is a bar chart comparing the Cronbach's alpha reliability coefficients obtained in the pilot study with those reported in the MSLQ manual for the Rehearsal, Elaboration, Organization, Critical thinking and Metacognitive self-regulation subscales.]

Fig. 1. Comparison of reliability coefficients reported in the MSLQ manual and study.

Overall, the scale had a Cronbach alpha reliability of 0.945 and all 31 items used in the pilot study were retained. Although the organization and rehearsal scales had slightly low values for Cronbach alpha, further modification to remove items with low item-to-total correlation was not necessary, as it would have minimal effect on improving the scale reliability. The original cognitive and metacognitive strategies scale had items arranged so as to mix up the question items from the different subscales. This was replicated in the present study, in the same order (see Table 1).
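For readers unfamiliar with the reliability statistic used here, a minimal sketch of Cronbach's alpha follows; the small pilot matrix is invented purely for illustration and is not the study's data:

import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# e.g. four rehearsal items answered by four respondents (hypothetical values)
pilot = np.array([[4, 5, 4, 6],
                  [3, 3, 4, 4],
                  [6, 7, 6, 7],
                  [2, 3, 2, 4]])
print(round(cronbach_alpha(pilot), 3))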

Table 1. Items in the Learning Strategies Scale (modified from [15]).

Critical thinking
  4  I often find myself questioning things I hear or read in this course to decide if I find them convincing
  10 When a theory, interpretation, or conclusion is presented in class or in the readings, I try to decide if there is good supporting evidence
  12 I treat the course material as a starting point and try to develop my own ideas about it
  23 I try to play around with ideas of my own related to what I am learning in this course
  26 Whenever I read or hear an assertion or conclusion in this class, I think about possible alternatives

Elaboration
  13 When I study for this class, I pull together information from different sources, such as lectures, readings, and discussions
  20 I try to relate ideas in this subject to those in other courses whenever possible
  22 When reading for this class, I try to relate the material to what I already know
  24 When I study for this course, I write brief summaries of the main ideas from the readings and my class notes
  25 I try to understand the material in this class by making connections between the readings and the concepts from the lectures
  31 I try to apply ideas from course readings in other class activities such as lecture and discussion

Metacognitive self-regulation
  2  During class time, I often miss important points because I'm thinking of other things (REVERSED)
  3  When reading for this course, I make up questions to help focus my reading
  6  When I become confused about something I'm reading for this class, I go back and try to figure it out
  8  If course readings are difficult to understand, I change the way I read the material
  14 Before I study new course material thoroughly, I often skim it to see how it is organized
  15 I ask myself questions to make sure I understand the material I have been studying in this class
  16 I try to change the way I study in order to fit the course requirements and the instructor's teaching style
  17 I often find that I have been reading for this class but don't know what it was all about (REVERSED)
  19 I try to think through a topic and decide what I am supposed to learn from it rather than just reading it over when studying for this course
  28 When studying for this course I try to determine which concepts I don't understand well
  29 When I study for this class, I set goals for myself in order to direct my activities in each study period
  30 If I get confused taking notes in class, I make sure I sort it out afterwards

Organization
  1  When I study the readings for this course, I outline the material to help me organize my thoughts
  7  When I study for this course, I go through the readings and my class notes and try to find the most important ideas
  11 I make simple charts, diagrams, or tables to help me organize course material
  21 When I study for this course, I go over my class notes and make an outline of important concepts

Rehearsal
  5  When I study for this class, I practice saying the material to myself over and over
  9  When studying for this course, I read my class notes and the course readings over and over again
  18 I memorize key words to remind me of important concepts in this class
  27 I make lists of important items for this course and memorize the lists

Measurement of Course Design Characteristics was based on an adaptation of the Work Design Questionnaire by Morgeson and Humphrey [4].
Measurement of Knowledge Sharing Behavior: The scale developed and validated by Yi [16] as a measure of knowledge sharing behavior had been used extensively in research with consistent results, e.g. by Edú-Valsania, et al. [8], Gross and Kluge [7], Hart [5], as well as Stenius, et al. [6]. The Likert scale presented to the respondents had the mid-point removed, anchoring from 1 representing strongly disagree to 4 representing strongly agree. The removal of the neutral option was aimed at addressing subtle threats to validity in quantitative research by decreasing inattentiveness in responses, as suggested by McKibben and Silvia [17].
Measurement of Innovative Behavior was based on the scale developed by Hartjes [18], in a case study at the B.V. Twentsche Kabelfabriek, The Netherlands, on aligning employee competences with the organizational innovation strategy. The scale consisted of 11 items, with four (4) response measures of the following four (4) sub-dimensions: opportunity exploration, idea generation, championing and application. The Kleysen and Street [19] scale, toward a multi-dimensional measure of innovative behavior, had previously been used in research, for example by Cimenler, Reeves, Skvoretz and Oztekin [20] in a causal analytic model to evaluate the impact of researchers' individual innovativeness on their collaborative outputs, as well as Wojtczuk-Turek and Turek [21], who looked at the significance of perceived social-organization climate for creating employees' innovativeness in terms of the mediating role of person-organization fit.
The use of Harman's single-factor test has found wide application in literature that tests for Common Method Bias (CMB). Studies that have used Harman's single-factor test and the variable of knowledge sharing include Arpaci and Baloğlu [22], on the impact of cultural collectivism on knowledge sharing among information technology majoring undergraduates, while Wipawayangkool and Teng [23] assessed tacit knowledge and sharing intention from a knowledge internalization perspective. In addition, in the context of self-regulated learning, Garcia-Perez-de-Lema, Madrid-Guijarro and Martin [24] explored the influence of university–firm governance on small and medium enterprises' innovation and performance levels. Studies on innovative behavior have also explored CMB with variables such as ethical leadership [25] and its impact on service innovative behavior, including the role of job autonomy, climate setting in sourcing teams [26] and different dimensions of job autonomy and their relation to work engagement and innovative work behavior [27]. All these studies suggest that using Harman's single-factor test is an acceptable research procedure to test for CMB.

3.3 Missing Data Analysis

The literature seems to converge on 10% as a critical cut-off point for the consideration of missing data. Based on a review of missing data methodology by Enders [28] and a comparison of missing data handling methods in a linear structural relationship model by Mamun, Zubairi, Hussin and Rana [29], the treatment of data with more than 10% missing values may present problems.
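A routine screening against such a cut-off can be sketched as follows; the file name is hypothetical and the 10% threshold mirrors the convention discussed above:

import pandas as pd

def missing_report(df: pd.DataFrame, cutoff: float = 0.10) -> pd.DataFrame:
    """Share of missing values per variable, flagging those above the 10% cut-off."""
    share = df.isna().mean()
    return pd.DataFrame({"missing_share": share, "above_cutoff": share > cutoff})

# df = pd.read_csv("survey_responses.csv")   # hypothetical file name
# print(missing_report(df))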

4 Discussion of the Findings

Computation of the implied correlations among the research variables yielded findings as presented in Table 2.

Table 2. Implied correlations among the research variables.

                                         CDCs   CMSs   KSB    IB
Course design characteristics            1.000
Cognitive and metacognitive strategies   .373   1.000
Knowledge sharing                        .501   .442   1.000
Innovative behavior                      .452   .401   .661   1.000


The correlations generally support the implied hypotheses, as both KSB and IB have strong, significant correlations (r > 0.4) with the individual and institutional antecedents, CMSs and CDCs, respectively. The most significant correlation was between KSB and IB (r = .661), which lends further support to the findings of the path analysis. Further, the correlation between CDCs and KSB (r = .501) was strong and significant. To establish whether KSB mediates the effects of both CMSs and CDCs on IB, however, the mediating effect had to be explored further.

4.1 Bootstrapping with Bias Corrected (BC) Confidence Intervals

The bootstrapping method has been widely documented in literature [30–33]. Specifically, multiple authors have recently applied the bootstrapping method in studies related to innovation behavior [34, 35].
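The study itself used Analysis of Moment Structures software for this step; purely to illustrate the idea, a minimal sketch of a bias-corrected bootstrap for a simple indirect effect (X -> M -> Y) is given below. The variable names, the toy data and the single-mediator simplification are assumptions, not the study's model:

import numpy as np
from scipy.stats import norm

def bc_bootstrap_indirect(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected bootstrap CI for the indirect effect a*b in a simple X -> M -> Y mediation."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))

    def indirect(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                          # slope of M on X
        design = np.column_stack([np.ones_like(xi), mi, xi])  # Y on M controlling for X
        b = np.linalg.lstsq(design, yi, rcond=None)[0][1]
        return a * b

    theta = indirect(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        boots[i] = indirect(x[idx], m[idx], y[idx])

    # bias-correction factor z0 shifts the percentiles used for the interval
    z0 = norm.ppf((boots < theta).mean())
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return theta, np.quantile(boots, [lo, hi])

# hypothetical composite scores standing in for CMSs (x), KSB (m) and IB (y)
rng = np.random.default_rng(1)
x = rng.normal(size=248)
m = 0.5 * x + rng.normal(size=248)
y = 0.6 * m + rng.normal(size=248)
print(bc_bootstrap_indirect(x, m, y))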

5 Conclusions and Recommendations

It can be concluded that extrinsic motivation for sharing knowledge, in the form of rewards, may not always be available. In this regard, it is recommended that higher education institution managers may need to leverage individual rewards in the form of financial gains or recognition to stimulate the dimension of written contributions among learners. Along with Zheng, Skelton, Shih, Leggette and Pei [36], the authors recommend that learners should be given sufficient time and autonomy to identify problems, learn required knowledge through CMSs, and formulate innovative solutions in a way that not only engages them, but is also relevant to their learning level and interest.

References 1. Al-Husseini, S., Elbeltagi, I.: Transformational leadership and innovation: a comparison study between Iraq’s public and private university education. Stud. Univ. Educ. 41(1), 159–181 (2014) 2. Ngugi, J., Goosen, L.: Modelling course-design characteristics, self-regulated learning and the mediating effect of knowledge-sharing behavior as drivers of individual innovative behavior. EURASIA J. Math. Sci. Technol. Educ. 14(8), 1–18 (2018) 3. Annosi, M., Magnusson, M., Martini, A., Appio, F.: Social conduct, learning and innovation: an abductive study of the dark side of agile software development. Creativity Innov. Manage. 25, 515–535, (2016) 4. Morgeson, F.P., Humphrey, S.E.: The Work Design Questionnaire (WDQ): developing and validating a comprehensive measure for assessing job design and the nature of work. J. Appl. Psychol. 91(6), 1321–1339 (2006) 5. Hart, J.: How digital equity and workplace learning influence acceptance of a knowledge sharing behaviour technology in the higher education workplace. Doctoral dissertation. University of Illinois, Urbana-Champaign (2015)


6. Stenius, M., Hankonen, N., Haukkala, A., Ravaja, N.: Understanding knowledge sharing behaviour in the work context by applying a belief elicitation study. J. Knowl. Manag. 19(3), 497–513 (2015) 7. Gross, N., Kluge, A.: Predictors of knowledge-sharing behavior for teams in extreme environments: an example from the steel industry. J. Cogn. Eng. Decis. Mak. 8(4), 352–373 (2014) 8. Edú-Valsania, S., Moriano, J.A., Molero, F.: Authentic leadership and employee knowledge sharing behaviour: mediation of the innovation climate and workgroup identification. Leadersh. Organ. Dev. J. 37(4), 487–506 (2016) 9. Gomes, C., Curral, L., Caetano, A., Marques-Quinteiro, P.: Better off together: a cluster analysis of self-leadership and its relationship to individual innovation in hospital nurses. Psicologia 29(1), 45–58 (2015) 10. Gurtner, S., Reinhardt, R.: Ambidextrous idea generation-antecedents and outcomes. J. Prod. Innov. Manag. 33(S1), 34–54 (2016) 11. Disrupt Africa: Africa Tech Startups Funding Report 2016 (2016). http://disrupt-africa.com/ funding-report/ 12. Technoran: Mark Zuckerberg in Nairobi to meet with entrepreneurs and developers (2016). http://techmoran.com/mark-zuckerberg-nairobi-meet-entrepreneurs-developers/. Accessed 22 Mar 2017 13. Commission for University Education (CUE): Statistics on University Education in Kenya. CUE, Nairobi (2016) 14. Pintrich, P.R., Smith, D.A.F., Garcia, T., McKeachie, W.J.: A manual for the use of the Motivated Strategies for Learning Questionnaire. Technical report 91-B-004, The Regents of the University of Michigan, Michigan (1991) 15. Pintrich, P.R.: Multiple goals, multiple pathways: the role of goal orientation in learning and achievement. J. Educ. Psychol. 92(3), 544–555 (2000) 16. Yi, J.: A measure of knowledge sharing behavior: scale development and validation. Knowl. Manag. Res. Pract. 7(1), 65–81 (2009) 17. McKibben, W.B., Silvia, P.J.: Inattentive and socially desirable responding: addressing subtle threats to validity in quantitative counseling research. Couns. Outcome Res. Eval. 7(1), 53–64 (2016) 18. Hartjes, B.J.G.: Aligning employee competences with organizational innovation strategy: a case study at B.V. Twentsche Kabelfabriek. Unpublished Master’s thesis, University of Twente, Enschede, The Netherlands (2010) 19. Kleysen, R.F., Street, C.T.: Toward a multi-dimensional measure of individual innovative behavior. J. Intellect. Cap. 2(3), 284–296 (2001) 20. Cimenler, O., Reeves, K.A., Skvoretz, J., Oztekin, A.: A causal analytic model to evaluate the impact of researchers’ individual innovativeness on their collaborative outputs. J. Model. Manag. 11(2), 585–611 (2016) 21. Wojtczuk-Turek, A., Turek, D.: The significance of perceived social-organization climate for creating employees’ innovativeness: the mediating role of person-organization fit. Manag. Res. Rev. 39(2), 167–195 (2016) 22. Arpaci, I., Baloğlu, M.: The impact of cultural collectivism on knowledge sharing among information technology majoring undergraduates. Comput. Hum. Behav. 56, 65–71 (2016) 23. Wipawayangkool, K., Teng, J.T.: Assessing tacit knowledge and sharing intention: a knowledge internalization perspective. Knowl. Process. Manag. 23(3), 194–206 (2016) 24. Garcia-Perez-de-Lema, D., Madrid-Guijarro, A., Martin, D.P.: Influence of university–firm governance on SMEs innovation and performance levels. Technol. Forecast. Soc. Chang. 123, 250–261 (2017)


25. Dhar, R.L.: Ethical leadership and its impact on service innovative behavior: The role of LMX and job autonomy. Tour. Manag. 57, 139–148 (2016) 26. Kiratli, N., Rozemeijer, F., Hilken, T., De Ruyter, K., De Jong, A.: Climate setting in sourcing teams: developing a measurement scale for team creativity climate. J. Purch. Supply Manag. 22(3), 196–204 (2016) 27. De Spiegelaere, S., Van Gyes, G., Van Hootegem, G.: Not all autonomy is the same. Different dimensions of job autonomy and their relation to work engagement & innovative work behavior. Hum. Factors Ergon. Manuf. Serv. Ind. 26(4), 515–527 (2016) 28. Enders, C.K.: A review of handbook of missing data methodology. J. Educ. Behav. Stat. 41(5), 554–556 (2016) 29. Mamun, A.S.M.A., Zubairi, Y.Z., Hussin, A.G., Rana, S.: A comparison of missing data handling methods in linear structural relationship model: evidence from BDHS2007 data. Electron. J. Appl. Stat. Anal. 9(1), 122–133 (2016) 30. Falk, C.F., Biesanz, J.C.: Two cross-platform programs for inferences and interval estimation about indirect effects in mediational models. SAGE Open 6(1), 1–13 (2016) 31. Kim, J.H.: Bias-correction and endogenous lag order algorithm for bootstrap prediction intervals. J. Stat. Plan. Inference 177, 41–44 (2016) 32. Nguyen, H.-O., Nguyen, H.-V., Chang, Y.-T., Chin, A.J., Tongzon, J.: Measuring port efficiency using bootstrapped DEA: the case of Vietnamese ports. Marit. Policy & Manag. 43(5), 644–659 (2016) 33. Shintani, M., Guo, Z.: Improving the finite sample performance of autoregression estimators in dynamic factor models: a bootstrap approach. Econ. Rev. 37(4), 360–379 (2018) 34. Giebels, E., De Reuver, R.S., Rispens, S., Ufkes, E.G.: The critical roles of task conflict and job autonomy in the relationship between proactive personalities and innovative employee behavior. J. Appl. Behav. Sci. 52(3), 320–341 (2016) 35. Gkorezis, P.: Principal empowering leadership and teacher innovative behaviour: a moderated mediation model. Int. J. Educ. Manag. 30(6), 1030–1044 (2016) 36. Zheng, W., Skelton, G., Shih, H., Leggette, E., Pei, T.: Nurture motivated, confident, and strategic learners in engineering through cognitive and psychological instructions for an entry-level course. In: Proceedings of the Annual Conference and Exhibition, Washington, DC (2009)

Key Elements of Educational Augmented and Virtual Reality Applications

Houda Elkoubaiti(&) and Radouane Mrabet

Smart Systems Laboratory, Ecole Nationale Supérieure d'Informatique et d'Analyse des Systèmes – ENSIAS, University Mohammed V of Rabat, Rabat, Morocco
[email protected], [email protected]

Abstract. Many countries have launched initiatives to reform their educational systems, which reflects the growing worldwide interest in developing education. These initiatives mainly emphasize the important role of technologies in improving education. In fact, integrating technology is beneficial for the educational sector due to the improvements it can add to teaching and learning processes. Augmented Reality (AR) and Virtual Reality (VR) are among the technologies that have promising potential for education. In this article, we provide the key elements of educational AR and VR applications. We present a generic architecture that supports both AR and VR applications designed for classroom use. The present architecture highlights the teacher's role in conducting AR and VR activities. First, teachers select and prepare relevant and high-quality content for these applications. Then, they guide and supervise their students during these activities.

Keywords: Virtual reality · Augmented reality · Architecture of AR and VR

1 Introduction

Augmented reality (AR) and virtual reality (VR) technologies have already arisen in many fields. In the educational sector, they hold an important potential to improve education. In fact, many studies have reported the benefits of using AR and VR in the educational sector across different school subjects and grades [1]. However, their efficiency depends on many factors, especially the expertise of teachers in using these technologies to meet their students' needs, the content, the time and the students' interest assigned to AR and VR applications. In fact, AR and VR technologies are not created especially for learning purposes, which creates a challenge for teachers in adapting these technologies for classroom use. Teachers should get training to master the use of these technologies in the classroom. They should get tools to help them supervise students and stimulate their attention and interest while using these technologies in learning activities, especially when they tackle this type of activity for the first time. Teachers must define the learning activities that involve the use of these technologies, their objectives and their learning outcomes, and verify the accuracy of the content and the possibilities of interaction available throughout the activities. Moreover, integrating these technologies will change the role of the teacher. He will no longer play his traditional role of being the only source of knowledge; however, his role is crucial in maximizing the learning outcomes expected from using AR and VR in learning activities. Our paper deals with AR and VR learning applications designed for classroom use. Concretely, it proposes a general architecture supporting AR and VR applications. Further, the different components involved in this architecture are analyzed.

2 Augmented and Virtual Reality

AR is a technology that adds virtual information over reality [2]. Using AR, users get virtual objects superimposed over real objects. In fact, AR combines virtual and real elements in one display. One important feature of AR technology is the precise alignment of the virtual elements with the real ones. There are many types of AR applications; they differ in the way they display content to users. They include [3]:
Location-based AR: It determines the location of users and displays related information using GPS technology.
Marker-based AR: It uses a camera to scan a marker (e.g. barcode, QR code, or photo) and provides related content.
Markerless AR: It uses a camera to scan the real world of the user and provides related content with possibilities of interaction.
Projection-based AR: It provides an artificial interface projected onto a physical surface with possibilities of interaction with the projection.
VR, on the other hand, focuses on virtual information: it provides a virtual experience that stimulates human perception into believing the synthetic reality [2]. The real world of users is replaced with another, computer-generated one. There are many types of VR technologies with different levels of immersion, graphic quality and tracking accuracy. There are two types of VR hardware technologies: those related to projection, including large 3D screens and personal 3D displays like HMDs, and those related to interaction, including tracking systems, haptic devices and 3D sound systems [4]. The most immersive technology is HMDs, and they include [3]:
Smartphone-based HMDs: they rely on the smartphone screen to display the content and on its tracking technologies to track users' motion.
PC-based HMDs: they rely on a computer or a gaming system to provide the VR experience; however, they have their own screens and tracking technologies.
With the technological evolution, there are also standalone HMDs with their own computing resources and tracking technologies. Moreover, many software tools are used to create the 3D VR environments and the programming of interaction [5].


3 Related Work

AR and VR provide immersive experiences with possibilities of interaction. They both create virtual objects; however, AR aligns virtual elements within a real scene to enhance the real world, whereas VR immerses users in a virtual world and isolates them from their real world. This ability to visualize virtual objects and enable interactions is an important affordance for education. Namely, the visualization simplifies the understanding of complex structures and difficult concepts. Moreover, the creation of virtual objects enables experiments that are impossible or difficult to conduct in the real world. They are effective tools for experiential learning [6]. Furthermore, many studies reported the potential of these technologies to boost students' engagement and motivation [7]. In addition, many previous studies investigated the affordances of AR and VR technologies for learning purposes and presented general architectures for these technologies. Most of these studies highlight informal learning activities. For instance, the authors of [8] are interested in open access education. They presented an educational ecosystem for immersive education. However, this architecture presents the AR and VR learning activity as a self-directed activity in which students interact only with the learning objects and the virtual environment. Moreover, most of the research provides separate studies of these technologies: some are interested in AR technology [9], while others focus on VR technology [4]. However, they do not consider the process of learning inside the classroom with the presence and the real-time intervention of a teacher. As far as our proposition is concerned, our architecture highlights the role of teachers in preparing accurate content for AR and VR applications and guiding their students in real time while they perform AR and VR activities.

4 Proposed Architecture

Our architecture provides learners with AR and VR content in real time. As depicted in Fig. 1, it involves two actors: students and teachers. Each component of this architecture affects the others. In fact, teachers are not only technology facilitators; they are also responsible for maximizing students' learning outcomes. First, they prepare the right content for the applications. Second, while carrying out the activities, they explain the learning goals and provide guidelines to students. Then, they interact with them to avoid students' distraction. Finally, they evaluate the outcomes of the activity and how well they meet the intended objectives. They should also take into consideration students' questions, comments, propositions and feedback to improve the activities.

4.1 Input and Output Devices

Input devices track the motion, position and orientation of users and objects (e.g. markers, artifacts). They allow users' interaction with the virtual elements. Output devices provide users with the rendering of the applications. They involve all the technologies that provide graphical visuals (e.g. graphics displays), 3D audio (e.g. audio hardware) and tactile sensations (e.g. haptic interfaces). The more sophisticated the input and output technologies are, the more computing resources are needed.

Fig. 1. Generic architecture supporting AR and VR applications.

4.2 Computing Platform

The computing platform is responsible for running the process that starts with the collection of the input data from the input devices and ends with the rendering process to the output devices. In fact, AR and VR applications are demanding in terms of computing resources. Their performance is affected by the hardware used and its characteristics, including CPU, GPU, RAM and storage. There are many types of computing platforms: dedicated platforms such as smart glasses and standalone HMDs, as well as non-dedicated platforms such as smartphones and desktop computers. Moreover, cloud infrastructure and fog computing can be used to run tasks that demand huge computing, storage and networking resources.

4.3 Database

It contains useful data for the learning and teaching process. It includes:
Knowledge base: It involves educational resources like courses, books and articles that teachers can use to create appropriate AR and VR applications. Moreover, it includes the different learning materials that teachers have created and shared. For instance, teachers can create and share an AR or VR application.
Teachers' profiles: It allows them to access, add, recommend and delete a resource.
Students' profiles: It contains students' data, scores, performance and achievements.

4.4 Teacher Subsystem

It is composed of two types of engines. The first type of engines are the interfaces of an authoring tool that supports teachers and helps them build applications without needing technical expertise. In fact, the authoring tool provides teachers with what is necessary to produce their applications: they only need to specify the content and define its formats and the way it will be displayed. These engines include:
Content engine: It is responsible for the content representation. It allows teachers to define the list of the virtual information that will be displayed and its formats.
Interaction content engine: It defines the list of the input data that should be considered as parameters to deliver and update the output rendering.
Rendering content engine: It specifies the way, the format and the sequence of the data that will be rendered.
The second type includes:
Authentication engine: It authenticates teachers to grant access to certain privileges such as the learning resources database and students' profile data.
Guiding and monitoring engine: This engine enables teachers to provide real-time support to students. It allows a real-time visualization of students' eye tracking, which enables teachers to follow their students' performance and guide them when needed, even while the students are connected and immersed in the AR and VR applications. It also allows teachers to interact with students. For instance, teachers can highlight relevant items in the activity to focus on or to discuss.

4.5 Students Subsystem

It enables students to generate AR and VR applications in real time. Its engines provide students with the AR and VR content rendering and readjust it in accordance with their interactive gestures or movements, or with changes in the context of the real scene considered in AR applications. Moreover, it generates adaptive content according to students' needs, which creates interactive, adaptive and accurate AR and VR based activities.
Interaction engine: It adjusts the rendering content according to the input data. It uses the parameters defined in the interaction content engine to adapt the data of the content engine.
Adaptation engine: Students have different learning styles and different cognition levels; hence their learning needs are different. For this reason, this engine adapts the content to meet students' needs. This engine supports two processes:
– Student recognition: It uses face recognition technology and the students' profiles base to identify students, in order to provide personalized AR and VR experiences.
– Adaptation process: It adjusts the output rendering according to students' profiles. For instance, a teacher can assign different content to students who need help.


The rendering engine: It generates the visual, auditory and haptic rendering of the AR and VR applications based on the instructions defined in the rendering content engine and the recommendations of adaptation engine.
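To make the data flow between the engines of the students subsystem more concrete, a minimal sketch in Python follows. The class and function names, the fields and the toy logic are purely illustrative assumptions mirroring the description above; they are not part of the architecture's specification:

from dataclasses import dataclass

@dataclass
class Frame:
    gesture: str          # input data from the tracking devices
    profile: str          # student identified by the recognition step
    content: dict         # virtual objects selected via the teacher's content engine

def interaction_engine(frame: Frame) -> dict:
    # readjust the content according to the tracked gesture or movement
    return {**frame.content, "highlight": frame.gesture == "point"}

def adaptation_engine(content: dict, profile: str) -> dict:
    # adapt the output to the student's profile (e.g. extra help content)
    return {**content, "extra_help": profile == "needs_help"}

def rendering_engine(content: dict) -> str:
    # produce the visual/auditory/haptic rendering instructions
    return f"render {sorted(content.items())}"

frame = Frame(gesture="point", profile="needs_help", content={"object": "3D molecule"})
print(rendering_engine(adaptation_engine(interaction_engine(frame), frame.profile)))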

5 Conclusion

With the multitude of affordances of AR and VR for the learning and teaching process, the educational sector will see a continuing proliferation of these technologies. However, an accurate integration and implementation of these technologies in the classroom environment is necessary. The proposed architecture offers new ways for teachers to conduct AR and VR learning activities. It enhances the process of learning inside classrooms and highlights the role of teachers in maximizing students' learning outcomes. However, many factors, including connectivity, adequate content, computing resources, security and privacy issues, teachers' weakness in using these technologies, crowded classrooms and uninterested students, affect the success of such activities. Classrooms should be adapted to accommodate the technical requirements of these technologies. In the future, we plan to implement this architecture for a specific school subject.

References 1. Castellanos, A., Pérez, C.: New challenge in education: enhancing student’s knowledge through augmented reality. In: Augmented Reality, p. 273 (2017) 2. Bastug, E., Bennis, M., Médard, M., Debbah, M.: Toward interconnected virtual reality: opportunities, challenges, and enablers. IEEE Commun. Mag. 55(6), 110–117 (2017) 3. Farrell, W.A.: Learning becomes doing: applying augmented and virtual reality to improve performance. Perform. Improv. 57(4), 19–28 (2018) 4. Górski, F., Buń, P., Wichniarek, R., Zawadzki, P., Hamrol, A.: Effective design of educational virtual reality applications for medicine using knowledge-engineering techniques. Eurasia J. Math. Sci. Technol. Educ. 13(2), 395–416 (2017) 5. Vergara, D., Rubio, M.P., Lorenzo, M.: On the design of virtual reality learning environments in engineering. Multimodal Technologies and Interaction 1(2), 11 (2017) 6. Vaughan, K.L., Vaughan, R.E., Seeley, J.M.: Experiential learning in soil science: use of an augmented reality sandbox. Nat. Sci. Educ. 46(1), 1–5 (2017) 7. Martín-Gutiérrez, J., Mora, C.E., Añorbe-Díaz, B., González-Marrero, A.: Virtual technologies trends in education. Eurasia J. Math., Sci. Technol. Educ. 13(2), 469–486 (2017) 8. Mangina, E.: 3D learning objects for augmented/virtual reality educational ecosystems. In: 23rd International Conference on Virtual System and Multimedia (VSMM), pp. 1–6, IEEE (2017) 9. Xiao, J., Xu, Z., Yu, Y., Cai, S., Hansen, P.: The design of augmented reality-based learning system applied in U-learning environment. In: International Conference on Technologies for E-learning and Digital Entertainment, pp. 27–36. Springer, Cham (2016)

Teaching and Learning How to Program Without Writing Code

Michel Adam1(B), Moncef Daoud1(B), and Patrice Frison2(B)

1 Université Bretagne Sud, Lorient, Vannes, France
{michel.adam,moncef.daoud}@univ-ubs.fr
2 IRISA Lab and Université Bretagne Sud, Vannes, France
[email protected]

Abstract. When learning how to program, there are many obstacles to overcome: learning a programming language, using a programming environment, compiling, executing and debugging. This paper proposes a method for gradually creating a program. The programming objects (variables, arrays, indices) are directly manipulated to emulate a given algorithm. In addition, the operations are recorded and the program automatically produced. We show how to gradually create a program by first simulating the algorithm operations and then by recording the associated instructions. The method is illustrated by AlgoTouch, a programming by demonstration tool.

Keywords: Programming for beginners · Programming by demonstration · Designing algorithms · Visualization of algorithms

1 Introduction

In France, since 2016, the teaching of programming must be provided to all students from primary school onwards. Since this reform, the scientific community has been interested in computer education at all levels of learning [1]. It is not, however, a question of proposing a programming course or the teaching of a language for beginners. The goal is to discover the concepts and methods specific to computer science. The idea is to distinguish the concept of "algorithm", to solve a problem, from that of "program", executed by a machine and associated with a programming language. These concepts (algorithm, machine, language, information) are the bases for teaching a computing course according to the French Computing Society [13]. However, programming itself is no easy task [6]. Many obstacles have to be overcome: learning a programming language, whose syntax is sometimes restrictive, using a programming environment for writing code, compiling, executing and debugging. To overcome these difficulties, specific environments have been developed for beginners, the best known of which is Scratch1. In addition, many visualization systems have been proposed for animating algorithms (see [15] for a bibliographic study). But, as [2] points out, writing a program follows a process that is rarely described in textbooks or courses. Indeed, in general, the principle of an algorithm is given, then its operation is shown, and finally the code is displayed directly. On the other hand, experience shows that learners often have writer's block when it comes to writing the program for an algorithm whose principle has just been described. As noted by [10], even with specific environments, novices "had trouble constructing correct loops, and referencing array elements correctly within those loops through the use of array indices".
In this article, we show how to gradually create a program. We propose an approach that involves manipulating the problem data to find the solution, the "algorithm", and to generate the program. The creation of a program is done in three stages. First, the user manipulates the problem data as real objects, to find the algorithmic solution to the problem. Secondly, she/he records the performed actions to automatically produce the program. Finally, she/he plays the recorded actions to verify the validity of the solution. The user must consider testing all possible cases by proposing datasets that cover all possible situations. This approach can generate a program without writing code either directly or indirectly. This article introduces the concepts of data manipulation, code generation from the actions carried out, and the notion of particular cases to examine in the design of an algorithm. A complete example is given to demonstrate the possibilities of this approach. The concepts presented are illustrated using the AlgoTouch environment.

1 http://scratch.mit.edu.

2 Basic Concepts

The main idea is to manipulate the programming objects (variables, arrays, indices) on screen. By simple gestures, the user can move these elements, drag and drop values into memory boxes (variables or arrays), make comparisons and perform basic operations (addition, subtraction, multiplication, division). Actions can be recorded and can later be played back. In fact, the recording produces real program code (C-like). The code is gradually completed as new datasets produce situations not encountered before. The study of datasets and the different situations is then essential for the design of the algorithm and the generation of the associated program.

2.1 Direct Manipulation

The user must be able to manipulate the basic elements used by an algorithm, just as a teacher shows the operation of an algorithm on a whiteboard. We call this the whiteboard metaphor.


Variables, Constants, Arrays and Indices: Variables are represented by a box with a name and a content. Arrays are represented by a rectangular area containing its member elements. In AlgoTouch, indices are dedicated variables. An index is a variable attached to a specific array. AlgoTouch shows graphically what element of the array is actually being pointed at by the index as shown in Fig. 1. The concept of index variable is similar to the one used in the Alvis Live! system [9]. Note that when creating an array A, the constant A.length is automatically created. The elements of an array are accessible via indices.

Fig. 1. Indices i, j and the associated array A. The icon above i is used to select A[i].

Operations: The main operation used in any algorithm is assignment. The goal is to modify the content of a variable by the result of some expression. With direct manipulation, for example, it is very easy to change the value of variable a with the content of another variable b. With AlgoTouch, the user drags the content of b and drops it on a. Simple arithmetic operators can be used, namely addition, subtraction, multiplication, division and remainder. All actions generate instructions in a C-like notation. When simulating an algorithm, it is sometimes necessary to compare two variables. A comparator resembles a scale, with the values to be compared placed on its pans. As shown in Fig. 2, when comparing the values of variables Min and A[2], the comparator indicates that Min is greater than A[2], therefore the user simulates operations corresponding to this case only. Note that the scale metaphor was introduced in ToonTalk [11].

2.2 Recording Mode

Fig. 2. Comparing values with a scale.

Once the user has done some preliminary manipulations to see how the algorithm may be designed, a sequence of instructions of the algorithm can be recorded. It is necessary to first configure a typical state of the algorithm at a given step. For instance, one would initialize some variables, arrays, or indices. Then, after activating the Record mode, the user executes the operations for this part of the algorithm. When the recording mode is turned off, the generated program reflects all actions that were executed.
Conditional Statements: What happens when the user activates a comparator that compares two variables? As explained above, the user will execute only the actions corresponding to the current state of the scale. For instance, when comparing variable a with b, if a is greater than b, the user will execute only the actions associated with this case. But what about the actions to perform in the other case? We will explain later, in Sect. 4.2, how to take care of this problem in a very simple and intuitive way: this case will be executed on the fly while replaying the sequence.
Play: The previously recorded actions are automatically transformed into program instructions. So, the user can see a real program produced as a result of their manipulations. When the user activates the Play mode, the program is executed. At the end, the user can see the results (content of variables and arrays) when examining the screen.
Recording While Executing: When executing the recorded program, the execution can be interrupted due to missing code. This is typically the case when the user has used a comparator and, for instance, has recorded only the actions corresponding to one of the two possible outcomes (e.g. a is greater than b). The code associated with the other condition is not available. In that case, the execution is stopped. The user must perform the actions corresponding to this case (sometimes there is nothing to do). Once the user has done so, the program is updated and ready to be executed again. This approach is similar to the one introduced by Pygmalion [14].


With this mechanism, the user must think of a specific test case when some part of the program has not been recorded. From a pedagogical point of view, this is interesting because a programmer must be rigorous in the design of a correct program.

3 AlgoTouch Environment

The screen of the AlgoTouch environment consists of five areas as shown in Fig. 3: the design area (1) where manipulation of the elements occur; the console area (2) used by the system to display information to the user; the toolbox area (3) with buttons; the system area (4) to manipulate the file project; and the program area (5) for displaying the generated program.

Fig. 3. AlgoTouch environment.

The design area (1) is where variables, arrays, indices and macros (explained later) are shown. By simple touch gestures, the user can move these elements, drag and drop values into variables or arrays, and increase or decrease indices values. The console area (2) displays the instructions produced when normal mode is activated. This can be useful for users when using the tools for the first time, to visualize the basic instructions associated with manipulating the objects. The toolbox area (3) contains buttons for commands: mainly for creating objects (a variable, an array of elements or an index) and for execution control (start recording, stop recording, replay the recorded sequence, record exit case, display a scale or execute the algorithm). The system area (4) offers a menu bar with some specific features concerning the project being designed: save, open, create and export the program.


The program area (5) (labeled “Instructions”) contains the code produced by the user’s manipulations, when the record mode is activated. The generated code is a named macro. A macro is either a simple sequence of instructions, or an iteration with initialization (From), exit conditions (Until), a body (Loop), and a termination (Terminate), as illustrated in Fig. 4. The pseudo-language loop syntax is inspired by the Eiffel language2 . It is also possible to create subalgorithms defined in macros, thus facilitating the creation of algorithms not limited to a single iteration.

Fig. 4. Macro structure: simple and loop
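For readers who already know a conventional language, the following short sketch (ours, in Python, using an arbitrary array-summing task as an assumed example) shows how the four blocks of a loop macro map onto an ordinary loop: "From" is the initialization, "Until" is the negated continuation condition, "Loop" is the body and "Terminate" runs once after the loop:

    # Hypothetical mapping of an AlgoTouch loop macro onto a plain Python loop.
    A = [4, 8, 1, 7, 3]

    # From: initialization block
    i = 0
    total = 0

    # Until (i >= A.length) ... Loop: the loop keeps running while the exit
    # condition is still false.
    while not (i >= len(A)):
        # Loop: body recorded by demonstration
        total = total + A[i]
        i = i + 1

    # Terminate: final actions, for example writing the result
    print("Total is", total)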

4 Full Example: Finding the Maximum

In this section we study the construction of a solution to the classic problem of finding the largest element in an array. The user has an array A of 10 elements, an index max, which designates the position of the greatest element, and an index i, which runs through the array.

4.1 Manipulation

The user works out the solution to the problem by comparing A[max] and A[i]. If A[i] is greater than A[max], the value of max then becomes that of i. The user then increments i to select a new value A[i], in order to prepare for the next iteration. All these operations are carried out by direct manipulation of the data (Fig. 5).

4.2 Automation

At this point, the user is ready to automate the search algorithm for the largest value of an array. When manually executing the different operations of the algorithm, the same sequence of instructions is always repeated: (1) compare A[max] with A[i], (2) if A[i] is greater than A[max], set max to the value of i, (3) increment i.



Fig. 5. Array of 10 values to process. At the creation of array A, the A.length constant is automatically set. Two index variables, max and i, provide access respectively to A[max] and A[i] thanks to the icons placed above. In addition, arrows indicate the corresponding elements.

Fig. 6. Case A[i] > A[max]

From the situation shown in Fig. 6, the user records the sequence of actions corresponding to the case A[i] > A[max]. The code of the produced loop body is incomplete (TODO comment):

    if (A[i] > A[max]) {
        max = i;
    } else {
        // TODO
    }
    i = i + 1;

Fig. 7. Case A[i] = A.length.

Fig. 8. Case i >= A.length

Now it remains to record the sequence of instructions to execute before the first iteration. The user can try several configurations. When she/he finds the one that gives the right solution (that is, the index max starts at 0 and i at 1), she/he records the initialisation block, “From”, to complete the program. Finally, the user records the “Terminate” block; in this case, it consists only of writing the result. At the end of this phase, the user has constructed the different sequences of the FindMax macro. Each block was recorded one after the other, in the order “Loop”, “Until”, “From” and “Terminate”. The AlgoTouch code, built automatically, is:

    Define FindMax
    From
        max = 0;
        i = 1;
    Until (i >= A.length)
    Loop
        if (A[i] > A[max]) {
            max = i;
        }
        i = i + 1;
    Terminate
        Write "Max is " A[max];
    End

Note: The dataset used makes it possible to test all the cases and to cover the whole code, so as to eliminate all the “TODO” comments.

4.3 Building a Full Program with Macros

The method used makes it possible to simulate more complex algorithms and to produce the corresponding program, for example the selection sort. Thus, Fig. 9 shows the corresponding program with three macros (a hedged Python sketch of an equivalent conventional program follows Fig. 9):
– Swap, which exchanges two cells of the array, in this case A[max] and A[j];
– FindMax, which looks for the index of the greatest value starting at index j of the A array. This macro is a generalization of the previous FindMax macro (Sect. 4). Only the “From” block has been changed;
– SelectionSort, which sorts by placing the largest value in the array at index j.
Each macro is built and tested progressively. The user first constructs the innermost macros, Swap and FindMax, and finally SelectionSort. So a complex program is produced by simple data manipulation [4].

Fig. 9. The generated macros of the selection sort program
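As announced above, here is a hedged sketch, in ordinary Python rather than in AlgoTouch's generated pseudo-language, of how the three macros of Fig. 9 might compose; the function names mirror the macro names, but the code is our reading of the macro descriptions, not the tool's actual output, and under this reading the array ends up sorted in descending order:

    # Illustrative Python counterpart of the three macros of Fig. 9
    # (our interpretation, not AlgoTouch's generated code).

    def swap(A, a, b):
        # Swap: exchanges two cells of the array.
        A[a], A[b] = A[b], A[a]

    def find_max(A, j):
        # FindMax, generalized: index of the greatest value in A[j..len(A)-1].
        # Compared with the FindMax of Sect. 4, only the initialization changes.
        max_idx = j
        i = j + 1
        while not (i >= len(A)):
            if A[i] > A[max_idx]:
                max_idx = i
            i = i + 1
        return max_idx

    def selection_sort(A):
        # SelectionSort: places the largest remaining value at index j.
        j = 0
        while not (j >= len(A)):
            m = find_max(A, j)
            swap(A, m, j)
            j = j + 1

    data = [5, 2, 9, 1, 7]
    selection_sort(data)
    print(data)  # prints [9, 7, 5, 2, 1]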

5 Related Work

Our approach is based on a method to create a program progressively, starting with hands-on experience and ending with the creation, execution and testing of a real program. For this goal, we have created the AlgoTouch tool to assist the teacher in explaining how to design a program and to help the students create their own programs. In this section, we compare our method with similar methods and tools proposed in the literature.

A lot of tools provide visualization, animation and annotation of code but, to our knowledge, very few offer direct manipulation to produce code automatically. Such tools are also known as “Program by Demonstration” tools [3]. With them, the user does the work and the code is produced automatically. Pygmalion [14] was one of the first systems dealing with direct manipulation, in its concept of an “electronic blackboard”: recording operations on real data, executing as soon as possible and completing code when conditional statements occur. AlgoTouch is similar to Pygmalion: it operates on typical computer elements like variables and arrays, since its goal is to show how to construct standard algorithms on arrays. A similar approach is used in ToonTalk [11], which replaces computational abstractions by concrete familiar objects and has been successfully used by thousands of children.

The main goal of our approach is to facilitate the translation from problem to code. To solve this problem, a seven-step approach is proposed by [7]. The first four steps are done with pencil and paper: students work on small examples by hand, write down what they do, and test their algorithm on different cases. This is similar to our approach, where we use AlgoTouch to solve a problem by direct manipulation. In step 5, the authors ask the students to translate to code, then in steps 6 and 7 to test and debug the program. In fact, the translation part is the hardest part. We think that our approach, assisted by AlgoTouch, can be used to complete this task.

An environment called SORTING, dedicated to the learning of sorting algorithms, is proposed by [12]. The system provides different representation systems to the students. The students start by experimenting using hands-on experience. Then they are asked to interpret their experience using first a text-based and then a pseudo-code-based representation system. If the students encounter difficulties in the previous step, the system shows a list of instructions that the student has to re-order. In addition, the system provides animations and pseudo-code execution. Our approach is similar to SORTING in the sense that we propose different ways to help the students increase their knowledge: starting with hands-on experience, learning the associated computer instructions, then recording parts of the algorithm, executing to validate on different cases, and finally producing the full program. However, compared to SORTING, the strength of AlgoTouch is that the code is produced automatically according to the student’s manipulations of the problem data.

In fact, AlgoTouch can also be used to animate an existing program and to control its execution. Animation is very important, as shown by [8]. A recent animation tool, Online Python Tutor [5], is interesting to compare with AlgoTouch since these tools operate in opposite ways. With Python Tutor, the user writes a line or block of code and the system shows the result visually. With AlgoTouch, the user performs an action on some variable and the lines of code are produced. In fact, the two approaches are both interesting from a pedagogical point of view.

6 Status and Future Work

The AlgoTouch environment allows one to directly manipulate the elements of a program. It facilitates the design of algorithms and the automatic generation of the corresponding program, and it allows beginners to free themselves from the prior learning of a programming language. In successive stages, the user can manipulate the data of an algorithm, record the corresponding operations and finally replay the recorded sequence to verify that it operates correctly. In this paper, we showed how to deal with an algorithm that searches for the maximum in a sequence, and then used it in a concrete example of sorting by selection.

Some experiments were conducted with 9th grade middle school students and others with first year college students. The students mastered the tool very quickly (about two hours). They learned the notions of data, variable, array, instruction, sequence of instructions, alternative and iteration. They were also able to discover, by practice, the notion of “programming language”, and became aware of the notion of “algorithm” and of its programming. Over a short period of two hours, the novices were able to produce the maximum search program. Our first experiments suggest that our approach is effective for introducing students to programming. However, new experiments must be carried out in order to obtain feedback from teachers and students and to improve the method.

References
1. Baron, G.-L., Drot-Delange, B.: L’éducation à l’informatique à l’école primaire. 1024, Bulletin de la Société informatique de France 8, 73–79 (2016)
2. Boisvert, C.R.: A visualisation tool for the programming process. In: Proceedings of the Annual ACM SIGCSE Conference on Innovation and Technology in Computer Science Education, ITiCSE 2009, pp. 328–332. ACM, New York (2009)
3. Cypher, A.: Watch What I Do: Programming by Demonstration. MIT Press, Cambridge (1993)
4. Frison, P.: Creating an insertion sort algorithm with AlgoTouch, April 2016. Online video on YouTube. https://www.youtube.com/watch?v=d0ndvXbnyMA
5. Guo, P.J.: Online Python Tutor: embeddable web-based program visualization for CS education. In: Proceedings of the 44th ACM Technical Symposium on Computer Science Education, SIGCSE 2013, pp. 579–584. ACM, New York (2013)
6. Guzdial, M.: Programming environments for novices. In: Computer Science Education Research, pp. 127–154 (2004)
7. Hilton, A.D., Lipp, G.M., Rodger, S.H.: A technique for translation from problem to code. In: Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2018, pp. 375–375. ACM, New York (2018)


8. Hosseini, R., Sirkiä, T., Guerra, J., Brusilovsky, P., Malmi, L.: Animated examples as practice content in a Java programming course. In: Proceedings of the 47th ACM Technical Symposium on Computing Science Education, SIGCSE 2016, pp. 540–545. ACM, New York (2016)
9. Hundhausen, C.D., Brown, J.L.: What you see is what you code: a “live” algorithm development and visualization environment for novice learners. J. Vis. Lang. Comput. 18(1), 22–47 (2007)
10. Hundhausen, C.D., Farley, S.F., Brown, J.L.: Can direct manipulation lower the barriers to computer programming and promote transfer of training?: an experimental study. ACM Trans. Comput. Hum. Interact. (TOCHI) 16(3), 13 (2009)
11. Kahn, K.: How any program can be created by working with examples. In: Your Wish Is My Command, pp. 21–44 (2001)
12. Kordaki, M., Miatidis, M., Kapsampelis, G.: A computer environment for beginners’ learning of sorting algorithms: design and pilot evaluation. Comput. Educ. 51(2), 708–723 (2008)
13. SIF: Enseigner l’informatique de la maternelle à la terminale. 1024, Bulletin de la Société informatique de France 9, 25–33 (2016)
14. Smith, D.C.: Pygmalion: an executable electronic blackboard. In: Cypher, A. (ed.) Watch What I Do, pp. 19–48. MIT Press, Cambridge (1993)
15. Sorva, J., Karavirta, V., Malmi, L.: A review of generic program visualization systems for introductory programming education. Trans. Comput. Educ. 13(4), 15:1–15:64 (2013)

Towards a Dynamics of Techno-Pedagogical Innovation Within the University: Case Study Hassan II University of Casablanca

Nadia Chafiq1, Mohamed Housni2, and Mohamed Moussetad3

1 Multidisciplinary Laboratory in Sciences and Information, Communication, and Educational Technology (LAPSTICE), Observatory of Research in Didactics and University Pedagogy (ORDIPU), Faculty of Sciences Ben M’Sik, Hassan II University of Casablanca, B.P. 7955 Sidi Othmane, Casablanca, Morocco
[email protected]
2 Laboratory of Information Technology and Modeling (LTIM), Faculty of Sciences Ben M’Sik, Casablanca, Morocco
[email protected]
3 LIMAT Lab, Physics Department, Faculty of Sciences Ben M’Sik, Casablanca, Morocco
[email protected]

Abstract. In the Moroccan context, universities have put digital transformation at the heart of their development projects, through a wide range of hybrid learning, Small Private Online Courses and Massive Open Online Courses. On the one hand, the purpose of using these devices is to help universities improve their performance and enhance their attractiveness. On the other hand, it aims at meeting increasingly diverse student needs, thanks to a reorganization of infrastructures and a renovated pedagogy. Hassan II University of Casablanca supports teachers in their efforts to improve the learning of their students via the establishment of e-learning training workshops. In this context, the present work focuses on the formulation of an accompanying model for the adoption of a techno-pedagogical innovation by teachers. Subsequently, we examine the contributions of this model via the evaluation of the digital strategy for teaching at the Hassan II University of Casablanca.

Keywords: University · Techno-pedagogical innovation · Teachers · Learning scenarios

1 Introduction

The combination of the introduction of technologies and pedagogical approaches significantly affects the role of teachers and even calls into question the act of teaching [1]. Similarly, in the last few years, the teaching profession has changed profoundly: different student profiles, changes in methods and content. Indeed, some factors affect teachers’ choice of a techno-pedagogical approach. For example, producing a course with Information and Communication Technology for Education (ICTE), either wholly or only partly, requires a heavy investment of time, and teachers are not always ready to commit to it. According to the OECD [2], when a teacher has access to digital tools and is competent, motivated, financially supported and guided by the top management team, there is a very high probability that he/she will integrate digital tools into innovative practices. Like many universities, the Hassan II University of Casablanca supports teachers in their efforts to improve the learning of their students. Distance learning remains a major strategic line for this university. Thus, if the Hassan II University is to integrate digital technology into education, it is through teaching that action must be taken. However, teaching and mentoring habits, the rigidity of the environment regarding the integration of Information and Communication Technology (ICT) and the lack of support provided by education departments are among the main obstacles to the implementation of ICT in the field of pedagogical training [3]. Our research fits into this issue, and this first observation has led us to question the training of teachers towards the adoption of a techno-pedagogical innovation: What is the current state of teacher training in the uses of ICT? How can teachers be guided and trained to acquire techno-pedagogical skills? To examine these questions, we first resorted to a methodological approach focused on the formulation of an accompanying model for the adoption of a techno-pedagogical innovation by teachers. The last part then focuses on the evaluation of the digital strategy for teaching at the Hassan II University of Casablanca.

2 Accompanying Model for Teachers in Order to Adopt a Techno-Pedagogical Innovation

The dimensions of this model are based on suggestions drawn from international reports dedicated to the promotion of distance education (for example the report Enseigner avec le numérique (mission Fourgous) and the OECD report [2]). The accompanying model emerging from this research consists of the following three levels: Level 1: Training; Level 2: Digital trust; Level 3: Evaluation (see Fig. 1).

Level 1 (Training): The training cycles related to the online release of a course should consider the following dimensions: a dimension related to the design of learning scenarios, a technical dimension (for example, the use of the platform and the digital working environment) and an online tutoring dimension. Through Level 1, we wanted to highlight teachers’ needs for help in using these technologies (training).

Pedagogical dimension (learning scenarios): First, this model emphasizes the pedagogical dimension: the training offered to teachers should address the design of learning scenarios, as it is the most important phase of the whole process of putting a course online. The design of learning scenarios starts with defining the global theme of the course, the target audience, the statement of the objectives and the identification of the scenario content and, last but not least, the drafting [4].

Technical dimension (equipment): Second, most universities have a digital work environment that provides many services to their users, with the adoption of a content management training platform such as Moodle. Adopting a training platform often requires upgrading the existing IT infrastructure and equipment in the facility.


Fig. 1. Accompanying model for the adoption of a techno-pedagogical innovation by the teacher.

A platform for open and distance learning is software that assists in the monitoring of distance teaching. This type of software includes the necessary tools for the three main users of this device: teacher, student and administrator.

The dimension of online tutoring: Thirdly, this model maintains the tutorial dimension, defined as a questioning of the existing social relations between the teacher and the student: what would happen if a technological innovation did not take the social dimension into consideration? An online course requires the intervention of a tutor who will be required to perform various functions. These functions can be considered as evaluation criteria for tutoring.

Level 2 (Digital Trust): Through Level 2, we wanted to highlight the needs of teachers regarding support for the use of technologies, whether in terms of structures or incentives. According to Jean-Louis Auduc, if you want to feel free to innovate, you need to feel secure and confident with your institution [5]. In order to give teachers confidence and encourage them to design online courses, incentives could be implemented by the institution: for example, counting hours of distance teaching and licensing copyrights. Indeed, the release of digital educational resources raises the question of copyright and the legal protection of these digital resources. This aspect is now a major issue for higher education institutions in Morocco.

Level 3 (Evaluation of the Adoption of a Techno-Pedagogical Innovation): The evaluation of the adoption of a techno-pedagogical innovation by teachers can be done through the feedback given by peers (the teacher’s colleagues) and by the students, or by the implementation of a quality approach based on criteria of online course quality. The latter focuses on the following aspects: the pedagogical dimension (learning scenarios) and the technical dimension.


Similarly, the introduction of digital technology into the teaching activities of an institution requires ensuring the quality of the courses available online. We need to clarify that these three levels (Training, Digital trust, Evaluation) are then used to analyze the adoption of a techno-pedagogical innovation by teachers, for example within the Hassan II University of Casablanca. Thus, it seems to us that no innovation can be copied and pasted, since the culture, the context, the actors and the mobilized dimensions are specific to an institution [6]. The following part is dedicated to the evaluation of the digital strategy regarding education at Hassan II University of Casablanca.

3 Adoption of a Techno-Pedagogical Innovation by the Teachers: Case of University Hassan II of Casablanca

The University Hassan II of Casablanca (UH2C) comprises 18 educational establishments spread over six university campuses: Campus El-Jadida Road - Hay Hassani; Campus Anfa Hospital District; Campus Hay Inara - An Chock; Mohammedia Campus; Ben M’Sik Campus; Ain Seba Campus. The digital strategy of the UH2C is based on the development and daily use of digital technology. This strategy rests on two complementary and inseparable elements: the development of distance education, where Information and Communication Technology (ICT) plays an essential role, and the daily use of ICT in educational activities and in students’ daily life. Likewise, the UH2C is continuing to deploy smart infrastructures and buildings to guide the digital educational strategy implemented: Project Smart University. Hassan II University of Casablanca has a user-oriented organization centralized around two units with well-defined roles and responsibilities: the Information System Commission (ISCOM) and the Pedagogical Innovation Cell (PIC). Indeed, the Information System Commission aims to accelerate the development and guidance of the university’s Digital University project, in close collaboration with the Pedagogical Innovation Cell, in order to amplify the use of the Moodle pedagogical platform and the development of Massive Open Online Courses (MOOCs) at the university. This unit must not only take over but also contribute to the functional aspects of the digital strategy, accompany and even train users in the new tools deployed, and communicate on current or future changes and choices. This unit supports teachers in their efforts to improve their online teaching. Also, the UH2C has a platform dedicated to MOOCs, which has been added to the e-learning Moodle platform. To date, it hosts 64 MOOCs, which the UH2C aims to increase to 200, and it similarly seeks to enrich the Moodle platform with other online courses. To achieve these objectives, the Pedagogical Innovation Cell (PIC) could play an important role in mentoring teachers in the design of online courses. From this point of view, Fig. 2 represents the teams of the pedagogical innovation unit.


Fig. 2. Continuous collaboration between the PIC teams for the implementation of the digital strategy regarding education.

The above figure illustrates the respective competencies of each PIC team; it highlights the necessity of collaboration between those teams to guide the change and the implementation of a techno-pedagogical innovation dynamics. The PIC therefore consists of four teams:
– Regulation and operational charter: this team takes care of the organizational aspects of e-learning within the university. It aims to establish procedures and regulations (hourly volumes, charters for the use of e-learning platforms, tutoring charter).
– Online release and operationalization: this team’s mission is to list the objectives concerning distance learning and to operationalize online courses.
– Technical resources, hardware and software: this team assesses the technical needs, both hardware and software, to promote the development of distance teaching.
– Training and accompaniment: this is a guidance team responsible for the support and accompaniment of users (administrative staff, teachers and students). This team ensures the deployment of the necessary infrastructures and guides future users, contributing to the smooth conduct of digital strategies.
Moreover, the PIC support team not only guides the teachers in setting up their courses but can even help in the production of digital resources. The continuous collaboration of the team in charge of technical aspects and infrastructures with the support team in charge of training and accompaniment is crucial for the implementation of the digital strategy [7]. We will now analyze the three levels of our model applied to the context of University Hassan II of Casablanca.


Level 1 (Training): Like many universities, UH2C supports teachers in their efforts to improve the learning of their students. Distance learning therefore remains a major strategic focus for University Hassan II of Casablanca, with the desire to offer courses in hybrid mode, combining face-to-face and distance learning. The university implemented training cycles (Moodle, MOOC, etc.) for teachers: more than 20 workshops are offered to teachers each year. We must note that, since 2014, newly recruited teachers at University Hassan II of Casablanca are required to participate in workshops related to digital technology. There are also specific workshops for the university’s teachers or the members of the Pedagogical Innovation Cell, for thematic accompaniment or focused on a type of pedagogical innovation (for example, a tutoring or online evaluation workshop). Those workshops are sometimes led by external ICT experts. We will not go into the detailed description of each of these actions, but what we intend to clarify is that these training cycles contribute to the development of the techno-pedagogical skills of the university’s teachers.

Level 2 (Digital Trust): In order for teachers to gain trust in digital technology and to encourage them to design online courses, incentives have been implemented by the UH2C. Indeed, as part of its policy of promoting innovative endeavors and financing projects that increase and diversify the offer of MOOCs, the UH2C will continue its role of accompaniment, with calls for digital projects (APN-UH2C) organized around four lines: creation of a MOOC, construction of digital resources, teaching practices integrating digital technology, and training devices. A budget has therefore been allocated to the calls for projects in order to produce new MOOCs, to experiment and to develop different technologies.

Level 3 (Evaluation of the Adoption of a Techno-Pedagogical Innovation): Level 3 aims at evaluating the adoption of pedagogical innovation by teachers. This evaluation relied on the feedback given by the newly recruited teachers after a training cycle dedicated to the use of the Moodle platform and the online release of learning scenarios. To better understand how this type of training could support teachers in mastering these tools, it is essential to know their points of view. Two types of data have been collected: quantitative data obtained from online surveys of newly recruited teachers, and qualitative data obtained during the use of the Moodle platform. A total of 22 teachers completed an end-of-training questionnaire, including 6 women and 16 men. For our survey, we opted for the Google Forms tool, since this broadcast mode offers various advantages, among which we can quote: the increase in the number of respondents, the reliability of responses and the ease of exploitation. Among the respondents, 53% of teachers are able to work on the Moodle platform. These teachers say they have ideas on how to improve their educational scenarios, which they will apply as soon as the opportunity arises. Some teachers mention the techniques and tools they discovered during these workshops and which they find effective in their classes. Some teachers also found that these training courses helped reinforce their confidence, their trust in the use of digital technology in their pedagogical practices.


However, among the teachers who completed the questionnaire, 47% are not able to use the Moodle platform. According to the teachers, this underutilization of e-learning is due to several reasons, among which we can quote:
– the absence of the technical assistance needed by the teachers during the development of the online courses after the training cycles;
– the absence of networking, and the absence of a legal status applied in the University to protect copyright.
Through these results, we notice that the situation is still insufficient compared to the objectives set by the Hassan II University. Efforts are still needed to bring it in line with more trustworthy guidelines.

4 Conclusion

In this paper, we were interested in studying the adoption of a techno-pedagogical innovation by teachers. This approach of questioning the adoption of innovation appears as a kind of evaluation of the digital strategy regarding education at the Hassan II University of Casablanca. The objective of this strategy is to promote a techno-pedagogical innovation dynamics by encouraging teachers to renovate their practices in various ways: training, counting the hourly volume of distance teaching, establishing procedures and regulations for distance teaching. Therefore, our objective was to analyze this innovation by placing it within the digital strategy of the university, which finally allowed us to suggest a techno-pedagogical innovation model that combines training, digital trust and evaluation. The first point of this model focuses on the digital strategy, which enables us to use the teams of the Pedagogical Innovation Cell in a perspective of accompanying teachers in designing online courses and creating digital educational resources. This step is not an end in itself, but a way that leads to the second point, digital trust (reinforcing teachers’ confidence in the use of digital technology in their pedagogical practices). In this phase, teachers are expected to design pedagogical scenarios and create pedagogical resources using technology effectively.

References
1. Thierry, K., Lorraine, S., Larose, F.: Les futurs enseignants confrontés aux TIC: changements dans l’attitude, la motivation et les pratiques pédagogiques. Le renouvellement de la profession enseignante: tendances, enjeux et défis des années 2000, vol. XXIX, no. 1, p. 1 (2001)
2. Fourgous, J.: Apprendre autrement à l’ère du numérique, p. 155 (2012)
3. Zeroual, R., Jouhadi, M., MBarky, B., Andoh, A.: Mise en ligne et adaptation par scénarisation d’un support de cours de Prothèse maxillo-faciale à un dispositif E-learning. EDP Sciences, p. 2 (2017)
4. Kwanyoung, K., Jimin, P., Chanhoo, S.: L’impact des TICE sur la formation des enseignants en Corée. Revue internationale d’éducation de Sèvres 55, 129–140 (2010)
5. Auduc, J.: Redonner du sens au métier d’enseignant, p. 1 (2013)


6. Lison, Ch., Bédard, D., Beaucher, Ch., Trudelle, D.: De l’innovation à un modèle de dynamique innovationnelle en enseignement supérieur. Revue internationale de pédagogie de l’enseignement supérieur, p. 5 (2014)
7. Rapport d’études, l’université numérique: éclairages internationaux. Travaux conduits par la Caisse des Dépôts en partenariat avec l’OCDE et la Conférence des présidents d’université, p. 63 (2010)

Towards the Design of an Innovative and Social Hybrid Learning Based on the SMAC Technologies

Nadia Chafiq1 and Mohamed Housni2

1 Multidisciplinary Laboratory in Sciences and Information, Communication, and Educational Technology (LAPSTICE), Observatory of Research in Didactics and University Pedagogy (ORDIPU), Faculty of Sciences Ben M’Sik, Hassan II University of Casablanca, B.P 7955 Sidi Othmane, Casablanca, Morocco
[email protected]
2 Laboratory of Information Technology and Modeling (LTIM), Faculty of Sciences Ben M’Sik, Hassan II University of Casablanca, B.P 7955 Sidi Othmane, Casablanca, Morocco
[email protected]

Abstract. Blended learning is currently part of the higher education scene, but with significant variations between teachers, institutions and even countries. The introduction of hybrid learning in the university context deeply influences habits and behaviours vis-à-vis teaching practices. In this regard, to facilitate the adaptation to and acceptance of the changes induced by the implementation of hybrid devices in the teaching activity, and thus reduce learners’ refusal factors, it is necessary to design an innovative hybrid learning environment that meets the social needs of Generation C learners. The design of these devices must adapt to the rapid development of Social, Mobile, Analytics and Cloud computing and must meet the current needs of the digital generation. In this context, the present work focuses on the implementation of a design model for a social and innovative hybrid device. Subsequently, we examine the contributions of this model via the experimentation of a platform during the 2017–2018 period.

Keywords: SMAC technologies · Hybrid learning · FOAD · LMS

1 Introduction

In higher education, the problem of massification is recurrent. For some learners, geographical distance is an added difficulty, hence the need to set up a hybrid learning environment. We are therefore witnessing the emergence of hybrid learning that combines a variety of teaching methods, alternating distance learning and face-to-face training. Charlier, Deschryver and Peraya define it as follows: “a hybrid learning environment is characterized by the intentional introduction into a training environment of innovative factors: the articulation of presence and distance supported by a platform. The functioning of a hybrid device is based on complex forms of mediatization and mediation” [1].

The use of emerging technologies in the educational world presents an opportunity to explore new venues for learning or training activities, rethinking methods, redesigning modes of evaluation and revising the design of digital devices. SMAC technologies are the combination of Social, Mobile, Analytics and Cloud computing, which represents an emerging trend in the industry. Information and communication technology has profoundly transformed all aspects of our lives (the business world, education). Malcolm Frank [2] likened SMAC technologies to an economic model that would have a significant impact on the business world. In the field of education, for example, the separate use of social, security, analytics, mobile and cloud technologies already offers considerable benefits for teaching and learning. For this reason, the university should be ready to adopt or develop new devices to keep pace with these changes. In fact, the use of SMAC technologies involves adopting social, collaborative and secure technology-based modes of exchange, creating new mobile uses that do not tether the user to a single place, integrating analytics to exploit data, and continuing to adopt new resources and services offered by the cloud. Similarly, SMAC technologies offer unprecedented opportunities for the designers of hybrid environments to reconstruct the classic conceptual models. One of the most common questions that designers and engineers want to answer is how to improve an existing model. Design models can therefore be useful for suggesting a set of tools with which e-learning experts can construct powerful devices, which can then be enriched to adapt to technological developments. An instinctive tendency of most designers of a new device is to ask: What has worked in the past? What are others doing? How can existing best practices be leveraged? These questions were asked by Laurillard (2012) [3] in her book on device design. Taking inspiration from these questions, we first present a methodological approach focusing on the development of a design model for a social and innovative hybrid device based on SMAC technologies. Finally, the last part is devoted to the “beta test” stage of the first edition (2017–2018) of a hybrid device based on SMAC technologies.

2 Problem Statement

Since 2009, we have been working on this type of device, mainly in the first years of the creation of the Observatory of Research in Educational Didactics and Pedagogy (ORDIPU). We have tried to analyze its impact on student learning and teacher practices. During our experiments dating from 2009–2012, we noticed that Learning Management Systems (LMS) such as the Moodle platform were not used much for exchanges [4]. Although the Moodle platform was rarely used as a tool for exchanges (which is one of its main functions), it was used as a progressive management tool for the group realization of a project and triggered mediated collective processes. This result encourages us to design an innovative hybrid and social device that takes into consideration private spaces of communication and expression specific to learners.


3 Methodology

Our methodological approach focuses on the development of a design model for an innovative hybrid and social device. Next, we will present the highlights of the evaluation of the FOUL experiment based on SMAC technology.

4 Descriptive Dimensions Model of the Hybrid Innovative and Social Environment Based on SMAC Technologies

The goal here is not to design a complete hybrid device, but to define dimensions and to find out whether we need to design a hybrid learning environment based on SMAC technologies. The model below summarizes these dimensions (see Fig. 1): the innovative device, the social device, the role of the actors, and the acquisition of meta-cognitive and language skills. Before setting up a hybrid learning environment around a platform like Moodle, we have gathered the dimensions that we think are important.

Fig. 1. Descriptive dimensions model of the hybrid Innovative and social environment based on SMAC technologies (N. Chafiq, M. Talbi)


Innovative Hybrid Environment: This model considers the innovative environment as a process based on SMAC technologies and as the result of their efficient integration. The literature review has shown that open platforms with mobile access have an immediate impact on the interaction of learners, because participants who own mobile phones tend to interact more with their classmates. In addition, students using LMS tools and digital resources leave traces of their actions (learning analytics data) that can be tracked and presented to teachers as information about students’ performance, hence the use of a variety of techniques and tools to process and interpret the large volumes of data arising from the digitization of content [5]. Finally, other emerging technologies could be integrated in the design of innovative hybrid learning environments; for example, cloud computing is a technology that relocates data and applications to cloud infrastructure accessible from the Internet [6].

Collaborative Environment: More and more platforms are introducing tools to create learning networks: they integrate a variety of tools allowing real-time communication, sharing and co-creation of knowledge. The development of socialization tools has made it possible to glimpse the possibility of recreating a certain virtual presence at a distance [9], simulating classroom interactions.

Role of Actors (Self-managed Learning): As part of this system, the learner becomes responsible for, and an actor in, his or her training course. Students should develop self-determined learning (this is heutagogy), a fundamental skill for better adapting to the fast pace of innovation. According to Hase and Kenyon [7], heutagogy emphasizes how to learn and exchange knowledge. On the other hand, hybrid learning modifies the role of the teacher by developing his or her method of accompaniment. In addition to the teacher’s roles, the teacher-tutor will also have those that correspond to tutoring: a social role, an organizational role, a teaching role and an evaluator role [8].

5 Striking Points in Evaluating a Hybrid Environment Based on SMAC Technologies (Beta Test)

Constant evaluation of an environment is essential to ensure that it continues to meet the needs of users. The best way to identify the areas of success and the areas for improvement of a device is to collect data for evaluation and then act on it.

5.1 Presentation of the Experimentation

FOUL (French for University Objectives Online) is the platform suggested for experimentation at the Faculty of Science Ben M’Sik (tice-lt.info/foul/) (see Fig. 2). The goal of this platform is to provide an online environment enriched by SMAC technologies that helps students to complete university-type tasks in French. The FOUL platform offers many functionalities that bring real added value to a course, from a simple document repository to the development of online tests. We have tried to integrate this tool into our teaching system in order to develop the hybridity of the scenario set up and to encourage the acquisition by students of new knowledge and skills outside the classroom.


Fig. 2. FOUL platform

For the case of our device, the beta test began on November 15, 2017 and ran for three months with a target audience of 40 first-year bachelor students (branch: Sciences of Matter and Chemistry (SMC)). At the end of the training, we proposed an evaluation questionnaire on the FOUL platform based on SMAC technologies, in order to gather the students’ first reactions to this teaching support, which we were presenting for the first time. It is a questionnaire dealing with general information related to the device, as well as the specificities of synchronous and asynchronous interactions. Also, the SMC students were solicited with an open question about the validity of the device. This last question in the questionnaire is a synthesis of the salient points made by the learners in their perception of the experiment, namely: “Give two adjectives to qualify the teaching of French using the FOUL platform in which you participated.”

5.2 Analysis and Discussion

In this part, we present the results of the data processing, namely a synthesis of the most salient dimensions of the hybrid device highlighted from the quantitative data, grouped around the following aspects: the innovative dimension (use of the tools of the FOUL platform), interaction and collaboration between actors, and metacognitive and linguistic skills.

Innovative Dimension: Almost 85% of students (combining “very useful” and “useful” answers) consider that using the tools of the platform is useful. Through these tools, students can ask questions that the teacher or other students can answer. Indeed, the FOUL platform is a learning management system; it has a mobile interface and is available on several platforms. Mobile devices allow users not only to be constantly connected to their data and resources but also to be always connected to each other. 80% of students access the platform via the Moodle Mobile app. Despite this, we present the new technological developments with a caveat: not all students can receive a course that contains all these technological tools. When building a hybrid learning environment using SMAC technologies, the teacher should determine the technologies that most students will have access to and find an alternative for those who do not.


Also, from the FOUL platform, we are able to generate two types of data: data on the course, the users and the actions of users within the platform, and data from the forums. While having data is helpful, it is only once they have been analyzed and evaluated that they become really useful. The analysis of these data can give the teacher a clearer picture of where the teaching needs to improve (a purely hypothetical sketch of such log analysis is given after Fig. 3).

Interaction and Collaboration Between the Actors: Thanks to the monitoring tools offered by the FOUL platform, we have interaction statistics allowing us to assess the life of the group, the productivity of the learners and the level of achievement in the educational activities. Synchronous (chat) and asynchronous (e-mail, forum) tools have enabled new sociability and created new forms of virtual exchange. Some LMS tools are more social-friendly than others; for example, tools for synchronous interactions are more conducive to socialization. Online socialization has some advantages for some distance students, especially by breaking their isolation: socialization between peers can be a factor of emulation for each student regarding his or her progress along the learning path [9]. The contribution of the hybrid device as a catalyst of interaction and autonomy has been confirmed once again. When we ask students to evaluate the effectiveness of their online learning experience, it is the ability to talk with their peers and the teacher that they value most.

Meta-cognitive Skill Acquisition: During the test of hybrid learning based on SMAC technologies, we discovered that for learners it was not just about learning language skills, but also about metacognitive skills. Now, they have the opportunity to question their way of learning and to feel more responsible for their learning than in a conventional class. Based on the analysis of comments given by participants who took part in the test, we can say that our system has made it possible, thanks to the socialization tools associated with the FOUL platform, to promote autonomy. According to the learners, this was an excellent opportunity to learn French. The satisfaction that SMC students felt at the end of this experiment is reflected massively in the answers to the following question: “Give two adjectives to qualify your teaching using the FOUL platform in which you participated.” We group the adjectives qualifying the device as a whole into two families. For the SMC group, the most cited adjectives are: innovative, useful and social. This mid-term evaluation of this hybrid learning experience allowed us to assess the sense of satisfaction that SMC students experienced at the end of this experiment (see Fig. 3).

Fig. 3. Grouping of adjectives qualifying the hybrid learning
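As a purely hypothetical illustration of the kind of log analysis mentioned above (the column names and export format below are assumptions, not FOUL's documented interface), a teacher could count each student's logged actions and flag those with little recorded activity for follow-up before the next face-to-face session:

    # Hypothetical example of analyzing an exported platform activity log;
    # the CSV layout and the "student_id" column are assumptions.
    import csv
    from collections import Counter

    def low_activity_students(log_path, threshold=5):
        """Return students who appear in the log fewer than threshold times.

        Students with no log entries at all would additionally require the
        enrolment list, since they never appear in the exported file.
        """
        counts = Counter()
        with open(log_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row["student_id"]] += 1
        return sorted(s for s, n in counts.items() if n < threshold)

    # Example use (hypothetical file name):
    # print(low_activity_students("foul_activity_log.csv"))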


6 Conclusion

The university must be strongly involved in training future SMAC technology executives to respond to a real need, in order to position itself as a major regional player. This should be done by adapting the training program to future occupations such as data scientist, data officer and data protection officer. SMAC then leads us to reinvent our ways of working in a new environment where training is both the object and the medium of this transformation. In this paper, after discussing the need to integrate new dimensions into the design of a SMAC-based hybrid learning environment, we evaluated a hybrid learning environment based on these four technologies within the FOUL platform. Through this device, we offered learners an individualized learning approach that identified the strengths and weaknesses of learners and then proposed a personalized educational scenario regarding the content to be studied, the multimedia resources to be consulted and the work to be done. These new dimensions could lead us to ask some questions: What are the contributions and the limitations associated with the use of SMAC technologies for learning? Is hybrid learning that integrates innovative and social dimensions more effective than before? What equipment or infrastructure do academic institutions need to implement this device? These questions go far beyond the results achieved in this work. By way of future research, the answers to these questions will be presented as part of a practical extension.

Acknowledgment. I would like to thank my advisor Mr. Talbi, Ph.D., for his invaluable guidance and many useful suggestions during my work on this paper. I would also like to express my gratitude to all those who gave me the possibility to complete this paper.

References
1. Charlier, B., Deschryver, N., Peraya, D.: Apprendre en présence et à distance, à la recherche des effets des dispositifs hybrides, p. 5 (2005)
2. Frank, M.: Don’t Get SMACked: How Social, Mobile, Analytics and Cloud Technologies are Reshaping the Enterprise, p. 3 (2012)
3. Laurillard, D., Charlton, P., Craft, B., Dimakopoulos, D., Ljubojevic, D., Magoulas, G., Masterman, E., Pujadas, R., Whitley, E., Whittlestone, K.: A Constructionist Learning Environment for Teachers to Model Learning Designs, p. 4. Blackwell Publishing Ltd., Hoboken (2011)
4. Chafiq, N., Benabid, A., Berdagi, M., Lima, L.: Intérêts et limites de la mise en oeuvre d’un dispositif hybride pour le développement de la compétence langagière chez les étudiants scientifiques. Le langage et l’homme: Revue de didactique du français 47(1), 111–119 (2012). ISSN 0458-7251
5. Housni, M., Namir, A., Talbi, M., Chafiq, N.: Applying Data Analytics and Cumulative Accuracy Profile (CAP) Approach in Real-Time Maintenance of Instructional Design Models (2018)


6. Hennion, N.: Introduction aux technologies cloud, p. 3 (2011)
7. Hase, S., Kenyon, C.: From andragogy to heutagogy, p. 1 (2000)
8. Chafiq, N., Talbi, M.: Tutoring functions in a blended learning system: case of specialized French teaching. Int. J. Adv. Comput. Sci. Appl. (IJACSA) 8(2), 25 (2017)
9. Loisier, J.: La socialisation des étudiants en FAD au Canada francophone. Réseau d’enseignement francophone à distance du Canada (REFAD), p. 5 (2014)

Information Systems and Technologies Opening New Worlds for Learning to Children with Autism Spectrum Disorders

Leila Goosen

University of South Africa, Pretoria 0003, Gauteng, South Africa
[email protected]

Abstract. The purpose of this article is to introduce how Information Systems (IS) and technologies are opening new worlds for learning to children with Autism Spectrum Disorders (ASDs). Theoretical and conceptual frameworks provide topic definitions. The literature review incorporates others’ views to support, refute, or demonstrate the author’s position. The research methodologies for dealing with the issues, controversies, or problems presented are offered. Results are discussed, and a brief discussion of conclusions is provided.

Keywords: Autism spectrum disorders · Information systems and technologies · Learning

1 Introduction

For several years now, the use of various Information Systems (IS) and technologies has been having profoundly transformative effects on the lives of people with Autism Spectrum Disorder (ASD) [1]. Especially since the introduction of tablet devices, like the iPad, and the subsequent explosion of such devices and specialized applications for communication and related skills [1], the proliferation of such relatively inexpensive mobile technologies has dramatically changed how learning and behavioral services are provided and/or delivered for individuals with autism spectrum disorder [2]. From touch screen telephones to tablets, mobile computer devices have never been less expensive, more user-friendly and/or more universally available. Children these days are frequently referred to as digital natives in terms of information systems and technologies. According to Lofland [2], this is often also true for students on the autism spectrum. Many ASD individuals are, in fact, more comfortable when interacting with inanimate objects, like a computer or an iPad. It has been established that mobile technology, which for most people serves only as entertainment or convenience [1], can also be used effectively to assist learning in academic areas, as well as fine motor, social, functional life and organizational skills, video modelling, speech/language therapy and increasing independence [2]. Personal Computers (PCs) and standard operating systems were an early option, and now IS and technologies are opening new worlds and creating great new possibilities, not only for developers, but also for those on every part of the autism spectrum [1, 2].


Since they came along, mobile, multiple-use technologies have been offering opportunities to consumers and/or students that extend far beyond the capacities of earlier devices, at significantly lower costs. Various IS and technologies, including, for example, affective computing, multi-touch interfaces, robotics and virtual reality, have been developed for supporting children with ASD [3]. Such innovative technologies, on their own or in conjunction with others, could be used positively in several critical areas that affect individuals with autism spectrum disorders, their families and the professionals who support them.

2 Theoretical and Conceptual Frameworks

The ASD phenotype consists of a heterogeneous group of serious, pervasive and complex neuro-developmental disabilities, which share a core set of ‘common denominator’ symptoms [4]. These could include qualitative impairment regarding social interaction skills, as well as language and/or qualitative impairment in terms of communication deficits/problems [5]. As examples, Reed, Hyman and Hirst [6] indicated that children with ASD show significant deficits regarding social skills, such as the initiation of conversation, responses to social situations and social problem-solving. Boraston and Blakemore [7] (citing Wing and Gould) added that other aspects in the triad of impairments/deficits can involve difficulties with especially reciprocal social interactions and/or cognitive dysfunction, which also often have co-occurring developmental and/or medical conditions [8] and/or are accompanied by restricted and/or unusual patterns of stereotyped repetitive behaviors [9]. According to Tseng and Do [10], ASD represents a Pervasive Developmental Disorder (PDD). Although Nkabinde [5] (citing Putman and Chong) indicated that the effects of autism vary between children, ASD is usually diagnosed by age three. Nkabinde [5, p. 977] goes on to indicate that, according to the “Diagnostic and Statistical Manual of Mental Disorders”, Fourth Edition, Text Revision (DSM-IV-TR), several disorder classifications exist within the autism spectrum: Asperger’s Disorder, Autistic Disorder, Disintegrative Disorder, Pervasive Developmental Disorder - Not Otherwise Specified (PDD-NOS) and Rett’s Disorder.

When individuals have acute speech and language disabilities, Augmentative and Alternative Communication (AAC) strategies can provide them with opportunities for expressing themselves, enabling them to have a ‘voice’ [2]. According to Shane et al. [11], the burgeoning role of applying information systems and technologies in society has been providing opportunities to develop new means to visually support language and communication for individuals with ASDs. The latter paper [11] offered an organizational framework to describe traditional and emerging AAC information systems and technologies, highlighting how such tools could support visual approaches within this framework, for everyday communication and the improvement of language instruction. Along with applications acquired via consumer-oriented delivery models, the growing acceptance of handheld media devices suggested potential paradigm shifts in augmentative and alternative communication for people with autism spectrum disorder.


In the paper by Odom et al. [12], the authors proposed theoretical and conceptual frameworks to examine the use of information systems and technologies by and for children with autism spectrum disorder in their communities, as well as in home and school settings. Their framework was then used for describing the research literature on the efficacy of interventions and instruction that utilized such IS and technologies. The program designed by Diener, Wright, Wright and Anderson [13] was grounded in community-based participatory research models and built on positive youth development frameworks for creating a technologies program, which, in turn, built on the interests and strengths of youth with autism for promoting social engagement, software skill development and vocational exploration.

3 Literature Review

The use of various information systems and technologies in the intervention, instruction and treatment of children and adolescents with ASD has increased at a remarkable tempo. The purpose of the paper by Odom et al. [12] was to examine the literature underlying the use of such IS and technologies in interventions and instruction related to high school students with autism spectrum disorder. These authors’ literature review from 1990 through to the end of 2013 identified thirty studies, which documented the effectiveness of different forms of information systems and technologies, as well as their impact on academics, adaptive and/or challenging behavior, communication, independence, social competence and vocational skills. The current article will extend the review of literature to also include 2014 through 2018.

The chapter by Stasolla, Boccasini and Perilli [14] provided readers with an overview of the empirical evidence available in the literature within the decade 2005 through 2015 concerning the use of assistive information systems and technologies-based programs for supporting adaptive behaviors by children with ASDs. According to the inclusion and exclusion criteria, thirty-six studies were retained and grouped into four main categories, relating to communication, adaptive, life and/or social skills, and challenging behaviors. Their chapter also outlined the strengths and weaknesses of these studies and emphasized the practical application of assistive information systems and technologies-based programs.

The recent chapter by Newbutt, Sung, Kuo and Leahy [15] provided a brief review of the acceptance, challenges, and future applications of wearable technology and of how Virtual Reality Technologies (VRTs) had been used within research contexts for supporting children with ASDs. One area of innovation and research, which has been evolving since the mid-1990s to aid people with an autism spectrum disorder, is the use of Virtual Reality Technology (VRT). Research, such as that reported in the chapter by Diener et al. [13], has focused on using information systems and technologies for helping to facilitate the development of functional, personal, prevocational/vocational and social skills in youth with autism spectrum disorder. The chapter by Newbutt et al. [15] provided a literature review in this and related fields, in addition to a distillation of the key affordances, challenges and issues identified in these evolving areas of research.

Information Systems and Technologies Opening New Worlds

137

for training and supporting people with an autism spectrum disorder in developing life skills, that is, social, job and independent living skills, and on where there had been successful implementation in applicable contexts. As more accessible and affordable wearable devices (for example, the Oculus Rift™) were becoming commercially available, these authors described a project they undertook. It pursued questions surrounding acceptability and practicality, which quickly needed to be addressed if sustainable lines of inquiry surrounding the role of Head-Mounted Displays (HMDs) and VRTs, as well as the impact these could have for this specific population, were to be developed. In Reed et al. [6], the deficits shown in children with autism spectrum disorder were targeted through the application of information systems and technologies to teach social skills - some of these used an information systems and technologies-based approach as a resource-efficient alternative to common forms of instruction. Their literature review aimed to determine the quantity of empirical studies that used an information systems and technologies-based social skills intervention, to explore the features of the social skills targeted in these studies, and to analyze the number of these studies that reported on the reliability of the independent and dependent variables. Although participation in out-of-school activities is related to improved outcomes for neuro-typical children in a number of areas, including higher functioning in school and psycho-social development, parents of youth with autism spectrum disorder often report that their children have difficulty with such activities [13]. Another growing concern for all individuals with autism spectrum disorder was employment, and having the skills to live independently [2]. A program was therefore designed by Diener et al. [13] to tap into technical talent and use information systems and technologies to facilitate personal and social skills, as well as to address gaps in the vocational preparation of children on the autism spectrum. These authors thus developed an information systems and technologies-based summer and after-school program, which taught youth with autism spectrum disorder software skills that enabled them to create three-dimensional designs. Cramer, Hirano, Tentori, Yeganyan and Hayes [16] indicated that vSked was a collaborative and interactive assistive technology for children with autism, which combined an interactive token-based reward system, choice boards and visual schedules in an integrated classroom system, used collectively. For many decades, eye-tracking has been used for investigating gaze behavior in the general population [7]. Recent studies have been extending its use to individuals with ASD. Such studies have focused on the processing of socially salient stimuli. The goal of the chapter by Nkabinde [5] was to explore the effective use of IS and technologies to assist individuals with ASD. In 1985, the Picture Exchange Communication System (PECS) was developed for children with limited abilities to express themselves verbally. The idea was that, by pointing at a picture, the children could communicate what they wanted. PECS has since been modernized through the development of applications for the iPad. Children had a choice of a wider variety of communication options by simply touching a screen, which facilitated the process, providing a breadth of choice that had never been available before.
The chapter additionally discussed how parents and teachers could be helped to understand how using iPad technologies improves communication for children with ASD.

The use of assistive information systems and technologies was widely recommended by the Division of Early Childhood of the Council for Exceptional Children as an appropriate intervention strategy, especially for use with children with autism spectrum disorder [9]. Tseng and Do [10] indicated that, although there is no cure for autism, with appropriate aid children with ASD can progressively learn and maintain a good quality of life. There is therefore a demand for information systems and technologies applications that make it easier for children to deal with autism spectrum disorder throughout their lives. Although various information systems and technologies designs have already been developed and used for early intervention and treatment, these are not common in the long-term training of children with autism spectrum disorder. The work by Tseng and Do [10] presented Facial Expression Wonderland (FEW), a novel design prototype for training children with autism spectrum disorder, based on progressive levels of training for a given background context. The prototype was designed to improve the abilities of children with autism spectrum disorder in facial expression recognition. The work also discussed how IS and technologies could facilitate life for youngsters with ASD. According to Vélez-Coto et al. [17], people with low-functioning autism spectrum disorder, as well as other disabilities, often find it difficult to understand the symbols traditionally used in educational materials during the learning process. Technology-based interventions, especially for low-functioning autism, are becoming increasingly common; they help children with cognitive disabilities to perform academic tasks, train skills for working with visual signifiers and concepts, and improve their abilities and knowledge in this regard.

4 Research Methodologies

Almost two decades ago, the study by Sherer et al. [18] was designed to compare the efficacy of 'self' versus 'other' video-modelling interventions in terms of enhancing the conversation skills of five children with ASD, whose ages ranged from four to eleven. They were taught how to answer a number of conversational questions under both self- and other-video-modelled conditions. The results reported in Sherer et al. [18] were evaluated by combining multiple baseline and alternating treatments designs. In terms of methodology, Vélez-Coto et al. [17, p. 25] implemented a pre-test/post-test design for testing "SIGUEME. Seventy-four children with low-functioning" autism spectrum disorder were trained to use SIGUEME over 25 sessions and compared with 28 children who had not received any such intervention. In the study reported on by Bereznak, Ayres, Mechling and Alexander [19], a multiple probe design across behaviors, replicated across participants, was used to evaluate the effectiveness of their intervention. Three male high school students with ASD participated in the study reported on by Bereznak et al. [19, p. 269]. To increase daily living and vocational independence, skills were taught using video self-prompting and mobile technology via an iPhone. Specifically, how to use a washing machine and a copy machine, as well as how to make noodles, were taught. The
study introduced a novel approach to the use of instructional video, in that two of the three students were ultimately able to learn self-prompting with the iPhone and taught themselves the targeted skills. Maintenance probes were also performed, and the iPhones needed "to be returned to all" three participants for two out of three behaviors in order to return to criterion levels. The study by Wright, Diener, Dunn and Wright [20] used a qualitative design to examine intergenerational family relationships, which were facilitated by employing a freeware three-dimensional design program intervention, Google SketchUp™. Seven high-functioning boys with ASD, aged between eight and seventeen, participated in workshops. The interdisciplinary investigating team capitalized on the boys' strengths in terms of visual-spatial skills, structuring the computer workshops to facilitate social interaction and skill development. Qualitative analysis involved the thematic analysis of the transcripts from the focus groups with grandparents and parents. The chapter by Newbutt et al. [15, p. 221] reported, in a descriptive, illustrative and rich manner, the process that was engaged in to work with participants across a range of autism spectrum disorders (with 'range' referring to low- to high-functioning autism spectrum disorder with a wide range of intelligence quotients) for assessing and measuring the acceptability of, and their experiences with, HMD VRTs (ecological validity, immersion, presence and any negative effects). As this was "sometimes a neglected aspect of this type of research", these authors discussed the ethical approaches taken by the research team when "using HMDs with this population" in some detail. Roelfsema et al. [21, p. 734] asked schools "to provide the number of children enrolled," who had a clinical diagnosis of Autism Spectrum Conditions (ASCs) and/or two "control neurodevelopmental conditions. Prevalence was evaluated" through negative binomial regression, with adjustments made for non-response and for the size of the schools. Many first-hand accounts from persons diagnosed with an ASD highlighted the challenges inherent in processing, in real-time, complex, high-speed "and unpredictable social information such as facial expressions". In their paper, Madsen, El Kaliouby, Goodwin and Picard [22, p. 19] described a new technology meant to help people analyze, capture "and reflect on a set of social-emotional signals communicated by" facial and head movements during live social interaction with everyday social companions. Madsen et al. [22, p. 19] also described their development of new hardware combining a miniature camera, which was "connected to an ultra-mobile PC together with custom software developed" for tracking, capturing, interpreting and intuitively presenting various interpretations of the facial and head movements, for example, "presenting that there is a high probability the person looks 'confused'".
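
To illustrate the kind of prevalence model referred to above (this is not the actual analysis of Roelfsema et al. [21]), a negative binomial regression of per-school case counts can be fitted with school enrolment as the exposure term; the sketch below, in Python with statsmodels, uses made-up data and hypothetical column names such as region and enrolled.

# Hypothetical sketch: negative binomial regression of ASC case counts per school,
# with enrolment as exposure, as one might estimate and compare regional prevalence.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up example data: one row per school.
schools = pd.DataFrame({
    "region":   ["A", "A", "B", "B", "C", "C"],
    "enrolled": [450, 600, 380, 520, 410, 700],
    "cases":    [9, 14, 3, 5, 2, 4],
})

# Dummy-code the region (region A as reference) and add an intercept.
X = sm.add_constant(pd.get_dummies(schools["region"], drop_first=True).astype(float))

# Negative binomial GLM with enrolment as the exposure (log offset),
# so exponentiated coefficients are prevalence rate ratios between regions.
model = sm.GLM(schools["cases"], X,
               family=sm.families.NegativeBinomial(),
               exposure=schools["enrolled"])
result = model.fit()

print(result.summary())
print("Rate ratios vs. region A:", np.exp(result.params).round(2))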

5 Results

Please note that, since this article essentially consists of a literature review, none of the author's 'own' results and/or indications of the author's own work are provided.

The results obtained by Vélez-Coto et al. [17] showed a statistically significant improvement for the experimental group in terms of their attention, as well as a significant change in association and categorization, and in interaction. Recent results reported by Baio et al. [23] for 2014 showed the overall prevalence of autism spectrum disorder across eleven sites to be 16.8 per 1,000 children aged eight years. A more localized study reported on by Roelfsema et al. [21, p. 734] found that the prevalence estimate of ASCs in Eindhoven was "229 per 10,000, significantly higher than in Haarlem (84 per 10,000) and Utrecht (57 per 10,000)," while "the prevalence for the control conditions were similar in all regions." Sherer et al. [18, p. 140] indicated that three out of the five children performed at 100% accuracy levels at post-treatment. "Results indicated no overall difference in rate of task acquisition between the two conditions, implying that", in general, participants who had successfully learned from video "learned equally as well via both treatment approaches." Further anecdotal evidence suggested that children who had been "successful with video treatment had higher visual learning skills than" participants who had been unsuccessful with this approach. The two key themes that emerged in the study reported on by Wright et al. [20] were reframing expectations, in terms of creating a safe environment and parental efficacy, as well as building intergenerational bridges among children, grandparents, parents and siblings. These results also indicated that information systems and technologies could build on the strong points of children with autism spectrum disorder, promoting the social engagement of these children with their families. The results reported by Reed et al. [6, p. 1003] indicated that the majority of their studies relied on a DVD or video for delivering the intervention (modelling or feedback), were "conducted in school settings, and" targeted "more than one social skill. The most common social skill addressed was" initiating conversations, "followed by play skills. All but one study included standardized assessment before treatment;" none of the studies, however, used "a published social skills assessment." Similar to previous research, reporting the reliability of dependent variables was common; reporting the reliability of independent variables, however, was infrequent. In the study reported on by Bereznak, Ayres, Mechling and Alexander [19], results indicated that all three participants increased their performance across all behaviors through an increased percentage of steps performed independently. Although some failures occurred, the results discussed in the chapter by Stasolla et al. [14] were fairly positive. The paper by Madsen et al. [22] described new information systems and technologies, along with the results from a succession of pilot studies conducted with children diagnosed with autism spectrum disorder, who used these information systems and technologies in their peer-group settings, contributing to further development via their feedback. In their paper, Cramer et al. [16, p. 1] presented the results of a study on three "deployments of vSked over the course of a year in two autism classrooms." These results demonstrated that vSked could promote student independence and reduce both the quantity of teacher-initiated prompting and the time required for transitioning from one activity to the next, while encouraging consistency and predictability. The preliminary results produced by Diener et al. [13] indicated that a focus on the interests, strengths and talents of youth with autism spectrum disorder, instead of on
remediating deficits, can make a difference in three domains: enhanced social engagement with their peers, increased confidence and self-esteem, as well as vocational aspirations and exploration. Finally, Weiss et al. [24] offered results from two usability studies, which evaluated the use of collaborative technologies designed to support children with ASD in learning social competence skills through information systems and technology-delivered cognitive-behavioral therapy.

6 Conclusions

Like those of Vélez-Coto et al. [17], the author's conclusions are that many of these IS and technologies were effective instruments for improving attention, categorization and interaction in children with autism spectrum disorder. These were also useful and powerful tools for teachers and parents to increase children's motivation and autonomy. Based on their preliminary study, the chapter by Newbutt et al. [15] concluded with how these authors thought head-mounted displays and virtual reality technologies might be used in future studies to help enable autism spectrum disorder populations. Many of the authors cited here were primarily concerned with addressing some of the challenges that the ASD population faces on a daily basis, and the contexts in which VRT can be applied, focusing on moving research from laboratories to real-life, beneficial contexts.

References 1. Chandler, D.: Opening new worlds for those with autism: technology is creating great new possibilities for those on every part of the spectrum. Inst. Electr. Electron. Eng. Pulse 7(4), 43–46 (2016) 2. Lofland, K.B.: The use of technology in the treatment of autism. In: Cardon, T.A. (ed.) Technology and the Treatment of Children with Autism Spectrum Disorder, pp. 27–35. Springer, Heidelberg (2016) 3. Chen, W.: Multitouch tabletop technology for people with autism spectrum disorder: a review of the literature. Procedia Comput. Sci. 14, 198–207 (2012) 4. Acab, A., Muotri, A.: The use of induced pluripotent stem cell technology to advance autism research and treatment. Neurotherapeutics 12(3), 534–545 (2015) 5. Nkabinde, Z.P.: Information and computer technology for individuals with autism. In: Silton, N.R. (ed.) Innovative Technologies to Benefit Children on the Autism, pp. 71–85. IGI Global, Hershey (2014) 6. Reed, F.D.D.G., Hyman, S.R., Hirst, J.M.: Applications of technology to teach social skills to children with autism. Res. Autism Spectr. Disord. 5(3), 1003–1010 (2011) 7. Boraston, Z., Blakemore, S.: The application of eye-tracking technology in the study of autism. J. Physiol. 581(3), 893–898 (2007) 8. Rubenstein, E., Wiggins, L., Schieve, L., Bradley, C., DiGuiseppi, C., Moody, E., Pandey, J., Pretzel, R., Howard, A., Olshan, A., Pence, B.: Associations between parental broader autism phenotype, child autism spectrum disorder phenotype in the Study to Explore Early Development. Autism, 29 January 2018

9. Fountain, C., Zhang, Y., Kissin, D., Schieve, L., Jamieson, D., Rice, C., Bearman, P.: Association between assisted reproductive technology conception and autism in California, 1997–2007. Am. J. Public Health 105(5), 963–971 (2015) 10. Tseng, R.-Y., Do, E.Y.-L.: Facial Expression Wonderland (FEW): a novel design prototype of Information and Computer Technology (ICT) for children with Autism Spectrum Disorder (ASD). In: Proceedings of the 1st International Health Informatics Symposium (IHI 2010), Arlington (2010) 11. Shane, H.C., Laubscher, E.H., Schlosser, R.W., Flynn, S., Sorce, J.F., Abramson, J.: Applying technology to visually support language and communication in individuals with autism spectrum disorders. J. Autism Dev. Disord. 42(6), 1228–1235 (2012) 12. Odom, S.L., Thompson, J.L., Hedges, S., Boyd, B.A., Dykstra, J.R., Duda, M.A., Szidon, K. L., Smith, L.E., Bord, A.: Technology-aided interventions and instruction for adolescents with autism spectrum disorder. J. Autism Dev. Disord. 45(12), 3805–3819 (2015) 13. Diener, M.L., Wright, C.A., Wright, S.D., Anderson, L.L.: Tapping into technical talent: using technology to facilitate personal, social, and vocational skills in youth with Autism Spectrum Disorder (ASD). In: Cardon, T.A. (ed.) Technology and the Treatment of Children with Autism Spectrum Disorder, pp. 97–112. Springer, Heidelberg (2016) 14. Stasolla, F., Boccasini, A., Perilli, V.: Assistive technology-based programs to support adaptive behaviors by children with autism spectrum disorders: a literature overview. In: Kats, Y. (ed.) Supporting the Education of Children with Autism Spectrum Disorders, pp. 140–159. IGI Global, Hershey (2017) 15. Newbutt, N., Sung, C., Kuo, H.J., Leahy, M.J.: The acceptance, challenges, and future applications of wearable technology and virtual reality to support people with autism spectrum disorders. In: Brooks, A., Brahnam, S., Kapralos, B., Jain, L. (eds.) Recent Advances in Technologies for Inclusive Well-Being, pp. 221–241. Springer, Cham (2017) 16. Cramer, M., Hirano, S.H., Tentori, M., Yeganyan, M.T., Hayes, G.R.: Classroom-based assistive technology: collective use of interactive visual schedules by students with autism. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver (2011) 17. Vélez-Coto, M., Rodríguez-Fórtiz, M., Rodriguez-Almendros, M., Cabrera-Cuevas, M., Rodríguez-Domínguez, C., Ruiz-López, T., Burgos-Pulido, Á., Garrido-Jiménez, I., MartosPérez, J.: SIGUEME: Technology-based intervention for low-functioning autism to train skills to work with visual signifiers and concepts. Res. Dev. Disabil. 64, 25–36 (2017) 18. Sherer, M., Pierce, K.L., Paredes, S., Kisacky, K.L., Ingersoll, B., Schreibman, L.: Enhancing conversation skills in children with autism via video technology: Which is better, “self” or “other” as a model? Behav. Modif. 25(1), 140–158 (2001) 19. Bereznak, S., Ayres, K.M., Mechling, L.C., Alexander, J.L.: Video self-prompting and mobile technology to increase daily living and vocational independence for students with autism spectrum disorders. J. Dev. Phys. Disabil. 24(3), 269–285 (2012) 20. Wright, C., Diener, M.L., Dunn, L., Wright, S.D.: SketchUp™: a technology tool to facilitate intergenerational family relationships for children with autism spectrum disorders (ASD). Fam. Consum. Sci. Res. J. 40(2), 135–149 (2011) 21. 
Roelfsema, M., Hoekstra, R., Allison, C., Wheelwright, S., Brayne, C., Matthews, F., Baron-Cohen, S.: Are autism spectrum conditions more prevalent in an information-technology region? a school-based study of three regions in the Netherlands. J. Autism Dev. Disord. 42(5), 734–739 (2012) 22. Madsen, M., El Kaliouby, R., Goodwin, M., Picard, R.: Technology for just-in-time in-situ learning of facial affect for persons diagnosed with an autism spectrum disorder. In: Proceedings of the 10th International SIGACCESS Conference on Computers and Accessibility, New York (2008)

23. Baio, J., Wiggins, L., Christensen, D.L., Maenner, M.J., Daniels, J., Warren, Z., Kurzius-Spencer, M., Zahorodny, W., Rosenberg, C.R., White, T., Durkin, M.S.: Prevalence of autism spectrum disorder among children aged 8 years - autism and developmental disabilities monitoring network, 11 sites, United States, 2014. MMWR Surveill. Summ. 67(6), 28 (2018) 24. Weiss, P.L., Gal, E., Zancanaro, M., Giusti, L., Cobb, S., Millen, L., Hawkins, T., Glover, T., Sanassy, D., Eden, S.: Usability of technology supported social competence training for children on the autism spectrum. In: International Conference on Virtual Rehabilitation, Zurich (2011)

Technologies to Inspire Education in Science, Engineering and Technology Through Community Engagement in South Africa

Patricia Gouws and Leila Goosen(&)

University of South Africa, Pretoria 0003, Gauteng, South Africa {GouwsPM,GooseL}@UNISA.ac.za

Abstract. The purpose of this study was to explore emerging technologies in community engagement, inspiring students' interest in Science, Engineering and Technology in South Africa. The article introduces the Inspired towards Science, Engineering and Technology project, before reviewing literature related to educators using emerging technologies in community engagement. It considers teaching practices and curricula, as well as students' experiences and assessment. The study is located within a theoretical framing, clarifying community engagement issues in an emerging technologies context. The explanation of the research methodology for the study attends to the importance of interpretation for qualitative designs and considers issues of reliability and validity for quantitative designs. Quantitative results and qualitative responses are discussed, providing insight into participants' demographics. Results are also connected to the literature. The authors make suggestions about the implications of the results for universities. The conclusions drawn provide guidance to researchers and practitioners beyond the primary domain, and the authors reflect on how the results contribute to the research area.

Keywords: Community engagement · Computer courses · LEGO

1 Introduction

Inspired towards Science, Engineering and Technology (I-SET) is a community engagement project of the College of Science, Engineering and Technology (CSET) at the University of South Africa (UNISA), aimed at inspiring young people's and their communities' interest in Science, Engineering and Technology through activities involving robotics. Comparable to what was reported on by Dahlberg, Barnes, Buch and Rorrer [1], the I-SET project also sought viable strategies for broadening participation in Information Systems (IS) and technologies, with similarities to that of Breedt and Pieterse [2, p. 17], to improve these students' confidence in using computers and/or, like Millen and Patterson [3], to stimulate this through community engagement activities. Based on the objectives of the I-SET project detailed on the next page, the topic of this article is novel, relevant and timely, and should be of interest to various stakeholders at the Europe Middle East and North Africa conference on Information Systems and Technologies to support Learning (EMENA-ISTL) 2018.

In line with these project objectives, the purpose of the study reported in this article was thus to explore, against the background of the I-SET project as a community engagement activity, research questions on how educators can use LEGO to:
1. motivate their students through creative activities [4]?
2. instil a spirit of curiosity in their students?
3. increase students' perception of the usefulness of programming activities?
and/or help their students to:
4. develop positive attitudes?
5. see the relevance of programming activities to their daily lives?
6. experience pleasure/fun from participating in these activities?

2 Review of the Literature

In line with what was detailed by Giannakos, Jaccheri and Proto [4, p. 103] on the lessons learned from the case of Norway in terms of teaching Computer Science (CS) to students through creativity, the objectives "of the project reported in this" article included:
• In terms of Community Engagement, educators aim to participate in and host events inspiring young people's interest and participation in science, engineering and technology. They present face-to-face training for coaches and teams, and host and participate in the First Lego League (FLL) North Gauteng Championship, the FLL National Championship, and the World Robotics Olympiad.
• Regarding Community Development, the project uses Web 2.0 technologies to create and deliver open educational resources for use by the communities.
• With regard to Research, plans are in place to identify pertinent community engagement research questions in the project.
The paper by Cook [5, p. 130] described the challenges involved in the generation of code for the LEGO® Mindstorms™ Robot Command eXplorer (RCX) microcomputer controller. The compiler described there improved on the widely used Not-Quite C (NQC) compiler by adding "new data types, procedures, unions, and a" mathematics library. In their experience report, Cantoni, Marchiori, Faré, Botturi and Bolchini [6, p. 187] presented a case study and systematic methodology based on lessons learned from their use of Real Time Web (RTW) - an innovative team methodology that adopts a playful approach for effectively and collaboratively eliciting and plastically representing communication requirements by extending experiences with LEGO Serious Play (LSP). The basic tenet of LSP is that "LEGO bricks are simple to use and provide ready-made, powerful and multi-purpose symbolic pieces, known to most people and used in different cultures." While McGill [7] investigated the influences on students' motivation of learning to program with personal robots, McWhorter and O'Connor [8] explored whether LEGO® Mindstorms® motivated students in first year CS classes. In a comparable
context, Wiesner and Brinda [9] studied how robots foster the learning of basic concepts in Informatics. In their paper, Wagner et al. [10, p. 185] described how they used "a set of participatory technologies", while Buisman and Van Eekelen [11] researched gamification in educational software development. Finally, Karoulis [12, p. 1] presented the results from a joint usability study concerning the LEGO programming environment 'RoboLab', which "unveiled some deficiencies of the interface", as well as some limitations of the methods employed. "LEGO has already performed numerous evaluations on various usability aspects, aimed" at enhancing "the overall usability and utility of the product." That study focused "on a more academic parameter; namely, the combination of" two expert-based methods, "a cognitive graphical walkthrough and a heuristic evaluation … on the same software piece, as well as on their combined results."

3 Research Methodology

3.1 Theoretical and Conceptual Frameworks

A paper by Le Dantec [13] picked up on recent developments to examine public support for, and participation in, community engagement. According to de Beer [14], the interpretation of communities' participation, as well as the extent of empowerment that takes place in community engagement projects, must, however, be constantly reviewed and reflected upon. In their paper, Taylor and Cheverst [15, p. 218] presented their experiences of using an approach illustrated by their attempts to design and understand development "with the participation of communities" and inspired by problem-based methodologies. Researchers in fields related to Human Computer Interaction (HCI) are beginning to "shift from studying technology use in uncommon or exotic communities to designing and deploying technology interventions into those same settings." Jaeger, Bertot, Kavanaugh, Viselli and Nassar [16] stated that, in trying to address significant community needs while faced with dwindling resources, many higher education institutions have formed a system of mutually beneficial partnerships at local, regional and national levels. As an illustration of addressing such a situation, Dahlberg et al. [1, p. 1] reported on a community engagement project with a mission to find viable strategies for broadening and promoting the empowerment and capability of communities, including, for example, "women, under-represented minorities and persons with disabilities", to participate in Computer Science. Nam and Bishop [17, p. 371] described the nature of another example of such a partnership by presenting "an early set of findings" from an on-going collaboration with a community informatics researcher from the University of Illinois, who joined an arts, culture and communication academy in a health information campaign in a neighbourhood in Chicago. Koradia and Seth [18, p. 278] explored the role of an answering machine system at a radio station in India that serves the "needs of their surrounding communities." Their engagement with these communities was by way of providing them with local content of interest. Communities today face "complex demands. It has become a necessity to
negotiate between stakeholder objectives, the expectations of citizens, and the demands of planning." A paper by Chaytor [19, p. 226] described and demonstrated a creative way of implementing Service Learning while assisting an urban community to improve their "visibility through empowering them to participate in the information age." Finally, Ganoe, Robinson, Horning, Xie and Carroll [20] described an iterative design for "a location-sensitive mobile application for community engagement and its use at two consecutive community-oriented" events. Initial analysis of mobile awareness and participation in these community-oriented activities was based on personal status and blog posts.
As part of the I-SET initiative, a training event was organised on campus, with the aim of introducing potential and existing coaches, mentors and leaders of LEGO robotics and/or First Lego League (FLL) teams to the range of issues that are important to the coaching or mentoring of a robotics and/or FLL team, including:
1. Where does LEGO Robotics fit into education?
2. The issues and reality related to what a team is, setting up and possible team building initiatives, as well as marketing and promoting their teams, and finding sponsorship options
3. What is needed to teach in terms of building and programming of the robots
4. The equipment requirements around the LEGO To Go Box: What's in it and where do they get it?
5. Teaching students about the importance and focus of research, teaching students how to do research and how to get students to do research
6. Competition opportunities with regard to the FLL and the World Robotics Olympics in South Africa, and the FLL Competition registration and website.
Any parents, educators or persons who had a particular involvement in the FLL and/or were interested in starting or continuing robotics training for a LEGO robotics team in a specific school or community were invited to attend the event. In terms of ethical concerns, the contact details of the project leader for this research study were provided to all potential participants, in case they had any queries regarding the research. Potential participants were informed as to what they would be expected to do, what information would be required and how long their participation would take - their participation did not require more than 20 min of their time. By signing the letter of consent, they understood that their participation in this research was voluntary, that their responses would be treated in a confidential manner and that their privacy with regard to anonymity as a human respondent would be ensured, where appropriate (e.g. by using coded names of participants and/or their institutions). As research participants, they were free to withdraw from the research at any time without any negative or undesirable consequences to themselves, and they would at all times be fully informed about the research process. They were offered no significant incentives to be participants in this study. They would not be placed at risk or harmed in any way, e.g. no responses would be used to assess them, their students/child(ren) and/or their schools/institutions. Although everyone who attended the event as described above was invited to participate in this research project, and, as advised by McGill [7], notice of the survey was given to everyone who attended, disappointingly, only a relatively small number of
responses were obtained; thus, the conclusions drawn based on these responses cannot necessarily be viewed with statistical confidence. With similarities to that of Breedt and Pieterse [2, p. 17], the study that this article reports on surveyed participants "in relation to the amount by which" their students were, e.g., motivated, encouraged, etc. Similar to that of Buisman and van Eekelen [11], this article reports on a case study, as an instance of a meeting used to collect data.

4 Quantitative Results

The overwhelming majority of participants (83%) were female (Table 1).

Table 1. Participants' ages.
25–34   35–44   45–54   Older than 55
17%     25%     33%     25%

Although participants' ages were spread fairly evenly across the options provided, they leaned towards older persons, with no-one below the age of 25. Table 2 shows that almost half of the participants are involved as teachers, with equal numbers of coaches and supporters, and only a small number of parents.

Table 2. I am involved as a…
Parent   Teacher   Coach   Supporter
8%       42%       25%     25%

As specified in Table 3, almost half of the participants have no educational qualifications. Of those who indicated that they did have such qualifications, however, more than a third of participants completed a Higher Education Diploma (HED).

Table 3. Participants' educational qualifications.
None   HED   BEd (Honours)   Other
42%    33%   8%              17%

Although a third of participants have no academic qualifications, as specified in Table 4, almost half of them completed some form of Bachelor's degree. A quarter of participants have post-graduate level academic qualifications. Please note that the information in Tables 3 and 4 is not mutually exclusive - someone who has a BEd degree would be classified under the 'Other' section for Table 3, and under the B degree for Table 4. Although there might be some overlap, the 42% of participants who indicated no educational qualifications in Table 3 is not exactly the same group who indicated having a B degree in Table 4.

Table 4. Participants' academic qualifications.
None   B degree   Honours degree   M degree
33%    42%        17%              8%

Table 5 shows that the number of years that participants have been involved with First LEGO League is spread fairly evenly over the options provided: a quarter each have one, two or more than four years of experience, while less than 10% have three years' experience. Less than 20% of participants were novices.

Table 5. Number of years' experience with First LEGO League.
Novice   1 year   2 years   3 years   More than 4 years
17%      25%      25%       8%        25%

5 Discussion of Some Qualitative Results

This section presents some of the details from the qualitative part of the questionnaire used. Note that the headings presented reflect the questions that were asked.

Why are you involved in First Lego League? Almost half of the participants indicated that they are involved in First Lego League in order to educate the "wonderful" students, to teach them core values and to extend their potential. One participant simply loves Lego, others find it "fun and educational" and "an interesting and challenging league", while another has an "interest in robots and electronics". One parent has a "school kid". Last, but not least, one participant indicated that she worked for the College Deanery Marketing Team, and thus got involved "automatically".

What do you do to motivate the students in your First Lego League team? One of the participants indicated that this shouldn't be a problem, while another believes in self-motivation for these students. Two more participants referred to fun items being used and having students relate to young, fun (male) coaches. Yet another brings something new to every meeting - even if it's just sharing the manual information. Finally, one participant makes use of club tournaments, while another relies on students researching together and sharing their hardships.

What do you do to help students in your First Lego League team develop positive attitudes? One of the concepts mentioned most often in response to this question related to the respondents having "a positive attitude" themselves and being positive when speaking to students, especially while encouraging and praising older students to do better if they fail the first time. Again, fun, group discussions and simply choosing to be there for students were mentioned. Sometimes one of the respondents would "invite someone they don't know to motivate them for 10 min".

How do you help students in your First Lego League team to set achievable aims and goals? At least one participant encouraged students to set high goals. Four more of the respondents revealed that they encourage and help students to break their main exercises down into smaller tasks and steps, setting parameters and adding challenges in a step-by-step fashion. Others made use of teaching building exercises, posters and/or examples from expert programmers.

Please describe one of the ways in which you increase your students' perception of the usefulness of First Lego League. Some of the participants chose to talk or give presentations about the usefulness of the tasks, while others introduced students to people with jobs in programming, so that the students can see the connections. While one participant achieved this by teaching globally, another pointed to the development of higher and 3D level thinking. Finally, one of the participants indicated that she was known for being professional, and some students pay attention to anything she does.

Please describe one of the things that you do in your team that helps students to see the relevance of their First Lego League activities to their daily lives. Although one participant was not able to answer this question, as she did not have a team yet, others again mentioned giving presentations themselves, or inviting speakers, such as an engineer, to speak to students on different topics relevant to their everyday lives, problem solving and/or how related activities could become their jobs.

Please describe one of the things that you do in your First Lego League team that helps students to experience pleasure from participating in these activities. Students were reminded that these are toys and it can be so much fun! They were given compliments and exposed to sharing behaviour. Students also took turns in leading the teams, so that they all feel responsible. Participants let students take part in knock-out robot games and mentioned informing students about robots and related careers.

Please describe one of the things that you do in your First Lego League team to instil a spirit of curiosity in your students. Some of the participants allow students to solve problems, to build in their free time and/or provide them with additional access to the information systems and technology centre, so that they can conduct research on their own team discussions. Respondents also use displays for their students to explore or emphasize the importance of listening to what students want. Finally, some participants introduce students to topics of interest that they have not heard about or use a reward system.

6 Conclusions

This article started by introducing the I-SET project and its associated objectives. Similar to that of Wiesner and Brinda [9], the aims of this research project, specifically in terms of community engagement, included coming to conclusions regarding inspiring students' and their communities' interest and participation in science, engineering and technology when these students used educational robots. The purpose of this study was to explore, against the background of the First Lego League as a community engagement activity, research questions on how information systems and technologies educators could use LEGO to:
1. motivate their students?
2. instil a spirit of curiosity in their students?
3. increase students' perception of the usefulness of programming activities?
and/or help their students to:
4. develop positive attitudes?
5. see the relevance of programming activities to their daily lives?
6. experience pleasure/fun from participating in these activities?

In an example of pedagogical research that specifically speaks to the third research question set for this study, Uludag, Karakus and Turner [21, p. 186] came to the conclusion, based on years of presenting similar courses, that the type of computer courses described in this article should "be taught in terms of practical domains" and/or contexts that increase students' perception of the usefulness of programming activities. As was described in the theoretical and conceptual frameworks section of this article, the community engagement activities as outlined are also in line with what was described by Goosen [22], in that these provided a structure through which Computer Science educators, students and their communities could collaborate to help each other and contribute knowledge and skills to their mutual benefit. With regard to issues relating to the execution of the research methodology and design, along with McGill [7], the authors concluded that adapting a survey to the applicable research environment was important for ensuring usable results. Results obtained in this study were similar to those found in studies by McWhorter and O'Connor [8], with especially the qualitative data indicating that respondents felt that robots made a novel contribution towards inspiring students' and their communities' motivation by sparking their interest in Science, Engineering and Technology in South Africa. In relation to especially the first research question around motivation, like Dahlberg et al. [1], the authors of this article have come to the conclusion, based on the results presented here, that the students in this study enjoyed these kinds of exercises and were engaged in debating the effectiveness of the solutions they had arrived at. Also relating to the first research question, as well as the fifth one on relevance, like the study reported on by McGill [7, p. 10] on the influences on students' motivation of learning to program with robots, overall, the results from the study reported on in this article related to participants indicating that they helped their "students to see the relevance of their First Lego League activities to their daily lives". Uludag et al. [21] pointed out that this could be due to students' familiarity and comfort in understanding and using ICTs for teaching and learning. Concerning both the first and last research questions that had been set for this study, as indicated by Wiesner and Brinda [9], fun was mentioned as a satisfying factor for students by many of the participants and played an essential role in keeping students' motivation alive.

Goosen [22] pointed out the importance of supplying communities at grass-roots level with a network to exchange ideas and share suggestions for improved implementation, where small discussion groups engage with the implications of, e.g., renewing their curricula for Computer Science educators' professional teaching practice. Finally, and in line with what was suggested by Havenga and Mentz [23], the authors conclude that participation by more students from all communities is needed in fields related to information systems and technologies, as opposed to only a few gifted (pale male) students from select privileged communities.

References 1. Dahlberg, T., Barnes, T., Buch, K., Rorrer, A.: The STARS alliance: viable strategies for broadening participation in computing. Trans. Comput. Educ. 11(3), 25 (2011). Article 18 2. Breedt, H., Pieterse, V.: Student confidence in using computers: the influence of parental adoption of technology. In: Proceedings of Second Computer Science Education Research Conference (CSERC 2012), Wrocław, Poland (2012) 3. Millen, D.R., Patterson, J.F.: Stimulating social engagement in a community network. In: Proceedings of the 2002 ACM conference on Computer Supported Cooperative Work (CSCW 2002), New York, NY, USA (2002) 4. Giannakos, M.N., Jaccheri, L., Proto, R.: Teaching computer science to young children through creativity: lessons learned from the case of Norway. In: Proceedings of the 3rd Computer Science Education Research Conference (CSERC 2013), Heerlen, The Netherlands (2013) 5. Cook, R.P.: Mostly C+ challenges in LEGO® RCX code generation. In: Proceedings of the 44th Annual Southeast Regional Conference (ACM-SE 44), New York, NY, USA (2006) 6. Cantoni, L., Marchiori, E., Faré, M., Botturi, L., Bolchini, D.: A systematic methodology to use LEGO bricks in web communication design. In: Proceedings of the 27th ACM International Conference on Design of Communication (SIGDOC 2009), New York, NY, USA (2009) 7. McGill, M.M.: Learning to program with personal robots: influences on student motivation. Trans. Comput. Educ. 12(1), 32 (2012) 8. McWhorter, W.I., O’Connor, B.C.: Do LEGO® Mindstorms® motivate students in CS1? In: Proceedings of the 40th ACM Technical Symposium on Computer Science Education (SIGCSE 2009), New York, NY, USA (2009) 9. Wiesner, B., Brinda, T.: How do robots foster the learning of basic concepts in informatics? In: Proceedings of the 14th Annual ACM SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE 2009), New York (2009) 10. Wagner, I., Basile, M., Ehrenstrasser, L., Maquil, V., Terrin, J.-J., Wagner, M.: Supporting community engagement in the city: urban planning in the MR-tent. In: Proceedings of the fourth international conference on Communities and Technologies (C&T 2009), New York (2009) 11. Buisman, A.L.D., van Eekelen, M.C.J.D.: Gamification in educational software development. In: Proceedings of the Computer Science Education Research Conference (CSERC 2014), Berlin, Germany (2014) 12. Karoulis, A.: Evaluating the LEGO-RoboLab interface with experts. Comput. Entertain 4(2), p. Article 6, April 2006

13. Le Dantec, C.: Participation and publics: supporting community engagement. In: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI 2012), New York, NY, USA (2012) 14. De Beer, F.: An asset-based approach to community engagement: Some challenges. Progressio 33(2), 16–29 (2011) 15. Taylor, T., Cheverst, T.: Creating a rural community display with local engagement. In: Proceedings of the 8th ACM Conference on Designing Interactive Systems (DIS 2010), New York, NY, USA (2010) 16. Jaeger, P.T., Bertot, J.C., Kavanaugh, A., Viselli, T., Nassar, D.: Panel proposal online community networks, e-government, and community-sourcing actions. In: Proceedings of the 13th Annual International Conference on Digital Government Research (DGR 2012), New York, NY, USA (2012) 17. Nam, C., Peterson Bishop, A.: This is the real me: a community informatics researcher joins the barrio arts, culture, and communication academy in a health information campaign. In: Proceedings of the iConference, New York (2011) 18. Koradia, Z., Seth, A.: PhonePeti: exploring the role of an answering machine system in a community radio station in India. In: Proceedings of the Fifth International Conference on Information and Communication Technologies and Development (ICTD 2012), New York (2012) 19. Chaytor, L.: Urban empowerment: a successful example of service learning. In: Proceedings of the 4th conference on Information technology curriculum (CITC4 2003), New York, NY, USA (2003) 20. Ganoe, C.H., Robinson, H.R., Horning, M.A., Xie, X., Carroll, J.M.: Mobile awareness and participation in community-oriented activities. In: Proceedings of the 1st International Conference and Exhibition on Computing for Geospatial Research (COM.Geo 2010), New York (2010) 21. Uludag, S., Karakus, M., Turner, S.W.: Implementing IT0/CS0 with scratch, app inventor for android, and lego mindstorms. In: Proceedings of the 2011 Conference on Information Technology Education, New York, NY, USA (2011) 22. Goosen, L.: Criteria and Guidelines for the Selection and Implementation of a First Programming Language in High Schools. North West University, Potchefstroom Campus (2004) 23. Havenga, M., Mentz, E.: The school subject Information Technology: a South African perspective. In: Proceedings of the 2009 Annual Conference of the Southern African Computer Lecturers’ Association, New York (2009)

Promoting Pro-environmental Practices About Trees in Children Using Infrared Thermography Technology

Maria Eduarda Ferreira, João Crisóstomo, and Rui Pitarma(&)

Polytechnic Institute of Guarda, Research Unit for Inland Development, Rua Dr. Francisco Sá Carneiro, Nº 50, 6300-559 Guarda, Portugal {eroque,rpitarma}@ipg.pt, [email protected]

Abstract. Humankind is concerned with sustainability issues, of which science education must be aware. Teaching science in this globalised world requires teaching strategies that prepare children for environmental citizenship. The teaching of tools for decision-making regarding pro-environment practices about trees is deeply linked to the process of building a sustainable society. Under this approach, a study was developed with a group of Portuguese children attending the 3rd year of elementary school. It concerned the didactic potential of thermograms obtained from infrared technology. The thermograms were applied in conceptual learning, with the intention of linking attitudinal behaviour to tree protection. This is a qualitative action research approach. The results obtained so far reveal the potential of this technology as a promising didactic resource in the construction of pro-environmental attitudes in children. They also show the need to consider its application in a broader way, specifically in the teaching of sciences for environmental citizenship.

Keywords: Infrared thermography · Teaching sciences · Trees · Pro-environmental · Children

1 Introduction

Unbalanced environmental relations have severe consequences for the ecosphere. Over the years, human practices have contributed to compromising the present and the future of the planet. A significant challenge for formal education is to involve children in identifying the causes and, consequently, in finding solutions to the current serious environmental problems. One of the troubling problems is the deforestation of the biosphere. The development of reviewability, thinking and proactive citizenship constitutes a stimulating educational challenge. It includes not only identification but also the search for answers and solutions to these disturbing problems. The fact that primary education in Portugal is compulsory from an early age can play a crucial role. The child should own the ecocentric idea of being committed to nature. That is, they must understand that they are an integral part, as are all living beings, of the ecosystems that make up the ecosphere. Consequently, they cannot have a utilitarian conception of nature. The appropriation of this conception is linked to the realisation that their attitudes and practices can seriously interfere with environmental equilibrium. If they
are not the proper ones, they can contribute to destroying the existing dependencies (biotic and abiotic), compromising the future of the ecosphere. In the perspective of Barrett et al. [1], an environmentally literate individual understands the existence of links between nature and human societies. According to Roth [2], there is a causal relationship between environmental literacy and the environmental impacts associated with behaviour. From this perspective, the teacher plays a crucial role in this pro-environment learning. Several investigations have shown the relevance of teaching based on real contexts [3–7] for the development of meaningful learning. Moreover, in primary education, children acquire the bases of scientific, technological and cultural knowledge that will allow them to understand nature and its synergy with society. The "Study of the Environment" curriculum of Portuguese primary education states, "Students will deepen their knowledge of Nature and Society. It will be the teachers' responsibility to provide them with the necessary tools and techniques so that they can construct their own knowledge in a systematised way" ([8], p. 102). The Portuguese national curriculum for primary education highlights the role of education in science in preparing individuals to understand and follow debates on scientific and technological issues and their social implications [9]. Therefore, the "Study of the Environment" area has a fundamental role in developing the skills to respond to current complex environmental problems. During the 4 years of primary education, Portuguese children learn science in the curriculum area of "Study of the Environment". Science education in the constructivist approach requires the teacher to create, in their pedagogical practices, learning environments that pay attention to and value the socio-cultural context. It is through the environment (social and cultural practices) that the child constructs their conceptions. Salmon [10] emphasises that one of the fundamental principles of this approach is based on the valuation of individuals' prior conceptions, since these are based on their previous experiences, which, in turn, condition their interpretation of information. School curricula should provide for the construction and reconstruction of scientific knowledge through meaningful learning [11–13]. It is not a question of whether the children's previous ideas are right or wrong; what really matters is that, from them, the teacher decodes and reconstructs these ideas by giving them meaning. According to Harlen [14], all children have ideas/explanations about what they observe in their daily lives. Children build beliefs about natural phenomena even before they start school. In some cases, the ideas created before school remain in their memories. Sometimes these ideas are contrary to what is taught to them. They seem to be resistant to change. Classic methodological interventions cannot change the way they think [15]. The teaching of science linked to real environment scenarios was a challenge. It meant the development of propositions to learn from and about trees. This study contributed to implementing mandatory formal education in children's environmental literacy. The final object is to substitute life and nature as a whole (biocentrism and ecocentrism) for the dominant focus on man (anthropocentrism) [16]. Each teaching process is idiosyncratic, that is, it is unique because of the set of features, temperament, character, etc. distinctive and proper to each educational group. Therefore, this study resulted in a teaching/learning process about the tree in the
perspective of a fragile living being in need of protection. Of course, this is an idiosyncratic action of great complexity when primary school students are at Piaget’s concrete operations stage [17]. Felipe [18] considers that the school should provide children with an understanding of the world appropriate to their developmental stage. The teaching process underlies a set of actions in which the teacher is involved; considering the nature of the content/subject and the psycho-pedagogical characteristics of the students, the teacher makes choices of methodology, strategy and didactic resources.

The thermal images (thermograms) obtained with infrared thermography technology (IRT) are a tool to observe particularities of the “hidden world” of living beings. If the didactic resource “image” facilitates the observation of this “hidden world”, it is a relevant pedagogical resource in children’s science teaching, in particular for the abstraction of complex and complicated concepts. Thermography is a generic term for several techniques that allow the observer to visualise the surface temperature of objects. The thermogram is the result of a large and complex interaction between the heating source, the material, the condition of the surface and its irregularities. The evaluation of an object’s surface temperature is based on the radiation it emits in the infrared range whenever it is above absolute zero temperature [19–21]. The temperature distribution in the tree results from the thermal balance between metabolic heat generation and the exchanges with the outside environment. Internally, this heat is transferred by conduction through the structures of the tree and by convection through the sap flow. In general, animals produce more metabolic heat than trees [22]. The application of IRT to trees is possible by reading the parameters emissivity and reflected apparent temperature (which varies depending upon the angle between the camera and the tree surface), conditioned by the direction of the radiation received from the environment and the sun [23]. The application of thermography requires a strong thermal contrast, that is, there must be a difference between the irradiative power of the environment and that of the object under analysis [24].

Based on the theoretical context described above, this study is part of an investigation of IRT applied to tree monitoring (Project TreeM-FCT2016-Application No. 23831). The project is taking place at the Technology School of a higher education institution. In this study, since temperature is a physiological characteristic of the tree, it is intended to teach that there is a biological similarity between trees and animals; in other words, the tree is a living being like the pet. It is sought that this conceptual learning of the living being (the concept of biological similitude) constitutes the link to the attitudinal learning of protection/preservation of the tree, with an affective meaning similar to that given to the domestic pet.

1.1 Issues and Objectives

This research was carried out as a pedagogical intervention. The starting point was the assumption that didactic resources and their didactic exploration influence the success of students’ learning. The curricular content to teach was “The living being: the tree”.


We adopted as a research problem the following questions: (i) Do the thermograms obtained by IRT have didactic potentialities in learning the concept of the living being (the biological similitude between the tree and the pet)? (ii) Will the learning of the concept of biological similarity between the tree and the pet, shown in the thermograms obtained by IRT, have didactic potentialities in the attitudinal learning of protection/preservation in relation to the tree, with the same affective significance as for the domestic pet? Thus, to provide answers to the questions we raised, the following learning objectives were defined: to know the students’ previous knowledge about the concept of biological similarity between the living beings tree and pet; to construct, validate and apply didactic resources with thermograms obtained by IRT; and to validate a didactic exploration script.

2 Materials and Methods

2.1 Infrared Thermography Technology

The thermographic camera used was the FLIR model T1030sc. The thermograms obtained have a resolution of 1024 × 768 (HD, 786,432 pixels). The lens used is 28°, with a FOV (field of view) of 28° × 21° (36 mm) and an f-number of 1.2. The screen is 800 × 480 pixels. Atmospheric conditions (temperature and relative humidity) were measured with a FLIR MR176 thermo-hygrometer. The software used to process the thermograms was FLIR Tools+ and ResearchIR Max 4. The color palette used in all three cases was LAVA. The living beings observed were: a tree of a species autochthonous to the Iberian Peninsula, Quercus pyrenaica Willd (Fig. 1-a), whose thermograms were obtained after a few days without rain to avoid interference with temperature due to excess water; a small plant, Hibiscus L. (Fig. 1-b); and a dog, a domestic pet (Fig. 1-c).
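To make the role of the camera’s emissivity and reflected apparent temperature settings concrete, the following simplified radiometric relation (a standard textbook formulation consistent with [19–21, 24], neglecting atmospheric attenuation and the camera’s finite spectral band, and not a description of the FLIR software’s internal algorithm) illustrates how an object’s temperature is inferred from the detected infrared radiation:

W_{\mathrm{tot}} \approx \varepsilon\,\sigma\,T_{\mathrm{obj}}^{4} + (1-\varepsilon)\,\sigma\,T_{\mathrm{refl}}^{4}
\qquad\Longrightarrow\qquad
T_{\mathrm{obj}} \approx \left(\frac{W_{\mathrm{tot}} - (1-\varepsilon)\,\sigma\,T_{\mathrm{refl}}^{4}}{\varepsilon\,\sigma}\right)^{1/4}

where \varepsilon is the surface emissivity, \sigma the Stefan–Boltzmann constant, T_{\mathrm{refl}} the reflected apparent temperature and all temperatures are absolute (kelvin); this is also why any object above absolute zero emits measurable infrared radiation.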


Fig. 1. (a) Tree thermogram (Quercus pyrenaica Willd); (b) Little plant thermogram (Hibiscus L.); (c) Dog Thermogram.

2.2 Pedagogical Intervention

The epistemological basis of this study is framed in socio-constructivism. It was developed as an action-research (AR) approach [10] under a qualitative methodology [25, 26]. The AR methodology creates a constant interaction between theory and practice and thus results in guiding action to solve problems arising from the teaching-learning process. Consequently, it makes it possible to understand, improve and reformulate practices; it also makes it possible to work with real entities in a small-scale intervention and to present a detailed analysis of the effects of this intervention.

The participants of the study were: the authors of this article, who acted not only as researchers but also took the role of direct observers and participants; a master’s degree student in primary education undertaking an internship; the classroom teacher; and 26 students of the 3rd year of primary education. The group of students belongs to a primary school in an inland town in central Portugal, where the curricular internship (supervised teaching practice) took place. The group comprised 26 children (12 female; 14 male), aged between 7 and 11 years. The pedagogical-didactic intervention was carried out by articulating two educational places: the classroom and the technology laboratory of the School of Technology of the higher education institution of the locality to which the primary school belongs. The main techniques of data collection were direct and participant observation, student-written records and photographic records. The investigation takes place sequentially in three phases (I, II, III).

(i) PHASE I - Pre-action. Objectives: analyze the curricular framework of the concepts; establish a sequence for the didactic exploration; construct didactic scripts to register the children’s conceptions.

(ii) PHASE II - Action. This was carried out by applying a pedagogical intervention organized sequentially in five sessions. 1st: classroom session, identified as “What I already know about living beings”. Objective: know the children’s conceptions. Operationalization: application of the didactic script. 2nd: classroom session to prepare the students for the visit to the technology lab of the School of Technology. Two challenges were posed: “Let’s explore … is the tree a living being?” and “Let’s explore … is the dog a living being?” Objectives: know the conceptions about the biological similarity between the tree and the pet; know the conceptions, prior to schooling, about the need for protection/preservation of the tree compared with the affection given to pets; motivate for discovery. Operationalization: application of the two didactic scripts built for this purpose (concept registration). 3rd: session carried out in the technology lab of the School of Technology.


Objectives: learn science using IRT in the technology lab; know the thermography machine; understand that the IRT machine shows temperature, which is a characteristic of all living beings; observe the application of the IRT machine to objects; observe the application of IRT to plants at the beginning of growth; analyze the differences between a living being and a non-living being using thermograms; compare photos of trees and animals with their respective thermograms; discuss the biological similarity between the tree and the animals. Operationalization: a practical session was carried out as an interactive dialogue between the teacher responsible for the laboratory and the group of students.

(iii) PHASE III - Post-action. Operationalization: two sessions were conducted in the classroom after the field trip. The first session posed the following challenge: “Let’s answer … With infrared thermography, I observed and learned …”. The didactic script built for this purpose was applied and the children’s conceptions were registered. In the second session, the children wrote a free composition on “My ideas about the trees”. Objectives: verify and consolidate the children’s learning; promote reflection on the need for pro-tree practices. Finally, the operationalisation of the didactic intervention was analysed and discussed.

3 Results and Discussion

At this stage of the research, the first moment (pre-action) and the second moment (action phase) of the empirical part have been concluded.

PHASE I - Pre-action. From the analysis of the four-year curriculum of the Study of the Environment area, we found that there is a curriculum block entitled “Living beings of their environment” [8]. It includes concepts about the tree: living being, care, life cycle, diversity and food chains. According to the teacher’s class planning, these contents had already been taught at the time of the pedagogical intervention. Thus, the structure and sequence of the didactic exploration followed the plan shown in Fig. 2.

Fig. 2. Sequence of didactic exploration.

PHASE II - Action. Script 1 was applied one month before the field trip to the technology lab. The pedagogical options for the script construction were taken from the curriculum of the 1st, 2nd and 3rd years of schooling [8] and from the planning of the teaching/learning process with this class. The formative evaluation [27] provided by this script is based on the principles of socio-constructivism, that is, it is focused on improving learning. The analysis of the children’s conception records (n = 26) showed that most of them (92.3%) correctly identified living beings in the photograph, although only 61.5% knew the concept of living being; 73% of the children identified trees as living beings and were able to identify non-living beings; 15.4% of the children identified only people as living beings; 57.6% of the children reported that plants should be protected like animals and
42.3% attributed this to the fact that plants are important to humans. Only 4% of the children referred to the need to protect trees like any other living beings in nature (an ecocentric conception of the tree as a living being). These results point to a learning of the elements of the concept (the behaviorist conception of learning described by Shepard [28]) and not of the relationships between them. It is therefore concluded that the majority of these students did not achieve meaningful learning. Most of these children do not show significant learning regarding the concept of biological similarity between living beings (tree and animal) associated with the attitudinal learning of protection in relation to the tree. These children do not reveal an understanding of the meaning of the living being concept applied to the tree. According to [29], meaningful learning favors responses to problems, enabling commitment and responsibility. Learning a concept means understanding, interpreting and being able to relate the acquired knowledge. According to the theory of socio-constructivist learning [30], learning is conditioned by prior conceptions and by the sociocultural context; it is an active process of construction and attribution of meanings, and it is sensitive to motivating and challenging educational contexts. In this case, the scientific curricular concepts had already been taught to these children, and the records made by the children showed the coexistence of previous conceptions and scientific conclusions, which confirms the current constructivist perspective on learning. From the point of view of Gil-Pérez [31], effective meaningful learning means that students have to actively and emotionally engage in (re)building their knowledge, but it is necessary for the teacher to know and recognize the relevance of the students’ conceptions. From this perspective, scripts 2 and 3 were constructed in order to choose strategies and build the most appropriate didactic resources. Scripts 2 and 3 differ from script 1: they are more focused on the tree (a living being to be urgently protected) and on the biological similarity between living beings (tree and animal, in particular pets). The concept records are essentially


linked to the teaching and learning process resulting from the visit to the lab. The overall analysis of these records clearly highlights the children’s desire to learn about trees in the technology lab. The children were actively involved in the (re)construction of their conceptions. An example of one of the records is: “(…) because I will know more about trees and I will verify if my answers are correct”. A guided didactic dialogue was carried out in the lab. The key questions were: “Is this the first time you have come to learn science in a technology lab? What did you learn in this lab? Have you ever seen this machine?” (show and explain the thermography machine); “Can this machine show us a characteristic that is the same in all living things?” (observe the production of thermograms; explore the various thermograms to find out where the difference lies between a living being and a non-living being, and the biological similarity between the tree, the plant and the pet); “Have you ever thought that the tree has a temperature like the animal?” At the end of the inspection/discovery, the children were asked to express their ideas about: “The tree should be protected like all living things. Do you want to give your opinion? What can we all do to protect the trees?” It was observed that giving the children time to express their opinions enhanced the interactivity of the dialogue, allowed the researchers to perceive the children’s degree of understanding of the concept of biological similitude from the perspective of environmental literacy, and made it possible to create challenges experienced with IRT support. According to Vygotsky [32], social interaction and dialogue are basic elements in the development of cognitive processes.

4 Conclusions

In this study, it was found that the applied methodology and its operationalisation using the didactic resource (IRT and its thermograms) gave these children the opportunity to (re)construct conceptions by observing, experiencing and discovering. They approached scientific knowledge using advanced technology in the technology lab. This facilitated the construction of the meaning of biological similitude between tree and animal, as well as the discussion of their own attitudes (and those of others) in daily life towards the tree, and they were able to recommend possible solutions. The pedagogical interest of the registration scripts applied before the field trip was verified; they were valuable as a guide for the teaching process in the lab. In short, they were elements of formative evaluation. This learning is fully interconnected with the teaching practice using IRT. Although this research is being developed with primary school children, it is expected that the didactic sequence developed using thermograms may be applied at other levels of education. It is also expected that this study will originate practices of science teaching that promote conceptual meaningful learning linked to values and attitudes of pro-environment citizenship. Of course, we acknowledge the limits and potentialities of this study, and it should be emphasized that any teaching strategy is always only a means by which meaningful learning can be achieved. However, as Assmann [33] argues, education plays a decisive role in the reorientation of humanity; it develops the necessary social sensitivity and must carry a vision of being in the world. At the moment, this research is in the post-action phase. It is part of a broader


investigation to accurately evaluate the impact of this technology as a didactic resource in science learning. It is considered a potential tool to prepare and motivate students for pro-environmental practices.

Acknowledgements. This research is framed within the project “TreeM – Advanced Monitoring & Maintenance of Trees”, N.º 023831, 02/SAICT/2016, co-financed by CENTRO 2020 and FCT, Portugal 2020 and EU-FEDER structural funds.

References

1. Barrett, G.W., Peles, J.D., Odum, E.P.: Transcending processes and the level-of-organization concept. BioScience 47(8), 531–535 (1997)
2. Roth, C.E.: Environmental Literacy: Its Roots, Evolution, and Directions in the 1990s. ERIC Clearinghouse for Science, Mathematics and Environmental Education, Columbus, OH (1992)
3. Ferreira, M.E., Porteiro, A.C., Pitarma, R.: Enhancing children’s success in science learning: an experience of science teaching in teacher primary school training. J. Educ. Pract. 6(8), 24–31 (2015)
4. Ferreira, E., Porteiro, C., Pitarma, R.: Teaching science in primary education in an engineering laboratory. In: 7th Annual International Conference on Education and New Learning Technologies (EDULEARN15), pp. 2313–2320. IATED Academy, Barcelona, Spain, 6–8 July 2015. ISBN: 978-84-606-8243-1, ISSN: 2340-1117
5. Ferreira, M.E., Cruz, C., Pitarma, R.: Teaching ecology to children of preschool education to instill environmentally friendly behaviour. Int. J. Environ. Sci. Educ. 11(12), 5619–5632 (2016)
6. Martins, I.: Inovar o ensino para promover a aprendizagem das ciências no 1.º ciclo. Noesis 66, 30–33 (2006)
7. Sá, J.: Renovar as práticas no 1º ciclo pela via das ciências da natureza, 2ª edn. Porto Editora, Porto (2002)
8. ME-Ministério da Educação de Portugal: Organização Curricular e Programas: Ensino Básico – 1º Ciclo, 4ª edn. ME/DEB, Lisboa (2004)
9. Reis, P.: Ciência e Educação: que relação? Interacções 3, 160–187 (2006)
10. Salmon, G.: E-moderating: The Key to Teaching and Learning Online. Kogan Page, London (2000)
11. Coll, C.: Os professores e a concepção construtivista. Edições ASA, Porto (2001)
12. Hodson, D.: Teaching and Learning Science: Towards a Personalized Approach. Open University Press, Buckingham/Philadelphia (1998)
13. Savery, J.R., Duffy, T.M.: Problem-based learning: an instructional model and its constructivist framework. In: Wilson, B. (ed.) Constructivist Learning Environments: Case Studies in Instructional Design, pp. 135–148. Educational Technology Publications, Englewood Cliffs (1995)
14. Harlen, W.: The Teaching of Science in Primary Schools. David Fulton Publishers, London (2000)
15. Driver, R., Squires, A., Rushworth, P., Wood-Robinson, V.: Making Sense of Secondary Science: Research into Children’s Ideas. RoutledgeFalmer, London (2001)


16. Costa, F.S., Gonçalves, A.B.: Educação ambiental e cidadania: os desafios da escola de hoje. In: Actas dos Ateliers do Vº Congresso Português de Sociologia, Sociedades Contemporâneas: Reflexividade e Acção, Atelier: Ambiente, pp. 33–40. APS Publicações, Lisboa (2004)
17. Piaget, J., Inhelder, B.: A Psicologia da Criança. Edições ASA, Porto (1997)
18. Felipe, J.: O desenvolvimento infantil na perspectiva sociointeracionista: Piaget, Vygotsky, Wallon. In: Craidy, C.M., Kaercher, G.E.P.S. (eds.) Educação Infantil: pra que te quero?, pp. 27–37. Artmed, Porto Alegre (2001)
19. Crisóstomo, J., Pitarma, R., Jorge, L.: Emissividade de Amostras de Pinus pinaster – Contribuição para Avaliação por Termografia IV. In: Proceedings ICEUBI (2013)
20. Crisóstomo, J., Pitarma, R., Jorge, L.: Determinação da Emissividade de Materiais com Recurso a Software de Imagem. In: Proceedings CISTI (2015)
21. Crisóstomo, J., Pitarma, R., Jorge, L.: Analysis of materials emissivity based on image software. In: Proceedings WCIST 2016, pp. 749–757 (2016)
22. Monteith, J.L., Unsworth, M.H.: Principles of Environmental Physics, 3rd edn. Academic Press, London (2007)
23. National Center for Preservation Technology and Training. http://ncptt.nps.gov/wp-content/uploads/2008-06.pdf
24. Holst, G.: Common Sense Approach to Thermal Imaging. SPIE Optical Engineering Press (2000)
25. Latorre, A.: La investigación-acción. Conocer y cambiar la práctica educativa, 4th edn. Editorial Graó, Barcelona (2007)
26. Bogdan, R., Biklen, S.: Investigação Qualitativa em Educação: Uma Introdução à Teoria e aos Métodos. Porto Editora, Porto (2013)
27. Gipps, C., Stobart, G.: Alternative assessment. In: Kellaghan, T., Stufflebeam, D. (eds.) International Handbook of Educational Evaluation, pp. 549–576. Kluwer, Dordrecht (2003)
28. Shepard, L.: The role of classroom assessment in teaching and learning. In: Richardson, V. (ed.) Handbook of Research on Teaching, 4th edn. American Educational Research Association/Macmillan, New York (2001)
29. Novak, J.D.: Aprender, criar e utilizar o conhecimento: Mapas Conceituais como Ferramentas de Facilitação nas Escolas e Empresas. Plátano Edições Técnicas, Lisboa (2000)
30. Shepard, L.: The role of classroom assessment in learning culture. Educ. Res. 29(7), 4–14 (2000)
31. Gil-Pérez, D., Guisásola, J., Moreno, A., Cachapuz, A., Pessoa de Carvalho, A.M., Martínez Torregrosa, J., Salinas, J., Valdés, P., González, E., Gené Duch, A., Dumas-Carré, A., Tricárico, H., Gallego, R.: Defending constructivism in science education. Sci. Educ. 11(6), 557–571 (2002)
32. Vygotsky, L.S.: A formação social da mente. Martins Fontes, Rio de Janeiro (1996)
33. Assmann, H.: Reencantar a educação: rumo à sociedade aprendente, vol. 1, 5th edn. Vozes, Petrópolis (2001)

Emerging Technologies and Learning Innovation in the New Learning Ecosystem

Helene Fournier1, Heather Molyneaux2, and Rita Kop3

1 National Research Council Canada, 100 des Aboiteaux Street, Suite 1100, Moncton, NB E1A 7R1, Canada
[email protected]
2 National Research Council Canada, 46 Dineen Drive, Fredericton, NB E3B 9W4, Canada
[email protected]
3 Yorkville University, Yorkville Landing, Suite 102, 100 Woodside Lane, Fredericton, NB E3C 2R9, Canada
[email protected]

Abstract. This paper highlights a decade of research by the National Research Council in the area of Personal Learning Environments, including MOOCs and learning in networked environments. The value of data analytics, algorithms, and machine learning is explored in more depth, as well as challenges in using personal learning data to automate the learning process, the use of personal learning data in educational data mining (EDM), and important ethics and privacy issues around networked learning environments.

Keywords: Personal Learning Environments · Data analytics · Algorithms · Machine learning · Ethics and privacy

1 Introduction

The National Research Council has been conducting research on emerging technologies and learning innovation since 2008, starting with Personal Learning Environments (PLEs), connectivist-type MOOCs (cMOOCs) and, more recently, new learning ecosystems. A decade of research has identified important gaps, especially around the types of support mechanisms required by learners to be successful in these new open and accessible learning environments. Researchers at the NRC have contributed important findings which highlight some of the challenges in the research and analysis process, especially as significant amounts of both quantitative and qualitative data are involved. The NRC’s contributions to the field span over a decade with the publication of important findings related to Big Data and Educational Data Mining (EDM), ethics and privacy issues in networked environments, and the use of personal learning data to feed into the research and development process.



2 New Learning Ecosystem Landscape

The proliferation of Information and Communications Technologies in recent years has changed the educational landscape, creating a plethora of new opportunities for learning. New learning technologies are emerging outside formal education, and academics and technologists are experimenting with these in formal and informal settings. Personal Learning Environments (PLEs), including Massive Open Online Courses (MOOCs), are part of the new learning ecosystem landscape, offering a wide range of open and accessible learning opportunities to learners across the world. Novel technologies have prompted a new era of information abundance, far beyond the era of information scarcity and inaccessibility [1]. Social media now make it possible to communicate across networks on a global scale, outside the traditional classroom bound by brick walls; communication on such a global scale would have been unimaginable not long ago. Data and data storage have evolved under the influence of emerging technologies: instead of capturing data and storing it in a database, we now deal with large data streams stored in the cloud, which might be represented and visualized using algorithms and machine learning. This presents interesting opportunities to learn from data, revealing hidden insights, but important challenges as well.

3 Opportunities and Challenges Around Data

More than 70% of the Web is now user-generated and distributed via personal presence sites such as Facebook and YouTube, in addition to micro-blogging sites such as Twitter [2]. The exponential growth in the use of social media such as blogs and the ease of use of video-sharing sites have facilitated both the creation and sharing process for the common end-user. Another challenge, but also an opportunity, is that data and data storage have also evolved under the influence of emerging technologies. Data streams are increasingly stored in the cloud, rather than in databases, and the data might be represented and visualized through the use of algorithms and machine learning. Software and algorithms are shaped by social, political and economic interests that might influence their value for education and learning [3]. A critical reflection is thus warranted, in light of the complexities around data analytics, including the ethics and values, the ambiguities and tensions of culture and politics, and even the context in which data is collected, which for the most part is not accounted for [3]. Data analytics must move beyond simplistic premises which reduce complex problems into technical, knowable, and measurable parameters that can be solved through technical calculation. One of the problems already highlighted in the development of algorithms is the judgements made by researchers and developers that could introduce researcher biases into the tool, which clearly could affect the quality of the recommendation or search results [4].


4 Next Generation Learning Environments

At the NRC, the design and development of next generation learning includes data-driven visualizations of the trace data learners have left behind in their online learning activities and machine-learning techniques to personalize the learning experience. Other development efforts have focused on personal and context-aware information about the learner to help counter issues of human bias in collecting and analyzing data, and in interpreting results. New recommendation systems now rely on artificial intelligence and techniques which take into account user learning styles, preferences, prior experience and knowledge to better predict and anticipate the needs of learners, and to act more efficiently in response to learner behaviors [5]. The application of various intelligent techniques from data mining and machine learning represents a more recent trend in attempting to study and model users’ context-sensitive preferences. Contextual information (e.g., a user’s current mood or location) is widely used by recommendation systems for mobile devices to search for and present the best resources to the users [6]. Social tags and resource metadata semantics (e.g., for deriving meaning from single words or text) are also used to enhance recommendation systems. Social networks like Facebook, Twitter, and LinkedIn are rich with detailed contextual information describing an individual’s preferences and relationships. This contextual information can make recommendation more feasible and effective, for example by extracting user social tags from external social networks while the collaborative tagging system is used as supplemental information [6].
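As a deliberately minimal illustration of the recommendation mechanics discussed above, the sketch below implements one basic item-based collaborative-filtering step over a toy learner–resource matrix using cosine similarity. It is not code from the NRC systems or from the cited surveys [5, 6]; the data, names and scoring rule are hypothetical and are shown only to make the idea tangible.

import numpy as np

# Hypothetical learner-by-resource interaction matrix
# (rows: learners, columns: learning resources; 0 = not yet used).
ratings = np.array([
    [5, 0, 3, 0],
    [4, 2, 0, 1],
    [0, 5, 4, 0],
    [1, 0, 0, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two rating vectors (0 if either is all zeros)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

# Item-item similarity: how alike two resources are, judged by the
# learners who interacted with them.
n_items = ratings.shape[1]
item_sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                      for j in range(n_items)]
                     for i in range(n_items)])

def recommend(learner: int, top_n: int = 2) -> list:
    """Score unseen resources by their similarity to resources the learner
    already used, weighted by the learner's past ratings, and return the
    indices of the top-N candidates."""
    past = ratings[learner]
    scores = item_sim @ past          # aggregate similarity-weighted evidence
    scores[past > 0] = -np.inf        # never re-recommend already-used resources
    return np.argsort(scores)[::-1][:top_n].tolist()

print(recommend(learner=0))           # e.g. resource indices suggested to learner 0

In practice such a score would be only one signal among the contextual ones mentioned above (mood, location, social tags), but it shows how connections between different types of usage data can drive a recommendation.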

5 Challenges and Concerns Around New Learning Ecosystems

Learning Analytics (LA) and Educational Data Mining (EDM) are emerging fields that make use of end-user data to enhance education and learning. Visualization techniques and algorithms are used to parse and filter information and data streams. LA and EDM also make visible and clarify aspects of learning or learning preferences, in order to support people in the management of their lifelong learning. For instance, the incorporation of intelligent and context-aware information about the user in the recommendation process allows for better prediction and for anticipating users’ needs, and it improves system responsiveness and the visualization of progress along the learning journey [5]. Prinsloo and Slade [7] argue that since the emergence of learning analytics in 2011, the field has not only matured, but also become more nuanced as fears and realities of ethical implications around data collection, analysis, and use of student data have come to the fore. Technological advances have also given rise to increasing concerns around pervasive surveillance, with a growing consensus that the future of higher education will be digital, distributed, and data-driven. Our own research in LA and EDM has pointed out major challenges in using technology to analyze learning and using predictive analytics and visualizations to advance new learning ecosystems, as well as ethical concerns related to privacy and ownership of massive amounts of data [8–10]. The development of algorithms and


other data-driven systems in education should include a critical reflection on the implications of what these systems are in fact replacing and whether the replacement is positive or negative. It is thus important to be informed about the power of learning analytics, the techniques and the kinds of findings that can be derived from the data, but also their limits. The literature underscores the need for further research around consent, potential conflicts between students’ concerns, their right to opt out from the collection and analysis of their data, and more clarity around the central question of “who benefits?” from the analytics [7]. Munoz, Smith, and Patil refer to a report by the Executive Office of the President of the US which highlights benefits, but also addresses concerns regarding the potential harm inherent in the use of big data [7]. The US report recognizes that if “these technologies [algorithmic systems] are not implemented with care, they can also perpetuate, exacerbate, or mask harmful discrimination” [7, p. 54]. Recommendations point to the need for investing in research to mitigate against algorithmic discrimination, including the development and use of robust and transparent algorithms, algorithmic auditing, improvements of data science “fluency”, and the role of government and the private sector in setting codes of practice around data use [7]. New frameworks, codes of practice, and conceptual mappings of the ethical implications of analytics (including big data analytics and learning analytics) provide guidelines for the collection, analysis, and use of personal data. New guidelines address the need to describe the scope and nature of the imposition; the quality of the data and the automation of the decisions relating to the collected data; the risk of negative unintended consequences; having the person whose data are collected agree (with an opt-out option) to the collection and analysis; the nature and scope of the oversight (if any); and the security of the collected data [7]. These issues have been highlighted in research as well [10]. The literature also cites frameworks from the UK Government and Cabinet Office which require data scientists to clarify “tricky issues” as part of a “Privacy Impact Assessment”, including reviewing the extent to which the benefits of the project outweigh the risks to privacy and negative unintended consequences; the steps taken to minimize risks and ensure correct interpretation; and the extent to which the opinions of the person whose data are collected were considered [7]. The basis and scope of authority of the educational data scientists who are building data-driven systems is currently being called into question. The ethical implications and concerns over data ownership, data protection and privacy are multifaceted and require further consideration and investigation. Williamson argues that learning analytics of the future will be based on and driven essentially by algorithms and machine learning, and we therefore have to consider how algorithms “reinforce, maintain, or even reshape visions of the social world, knowledge, and encounters with information” [11, p. 4]. Regulatory frameworks, accountability, and transparency will be essential elements in frameworks that promote ethical learning analytics [7, 12, 13].
The discussion of ethical implications around learning analytics adds an additional layer and a richer understanding of how analytics can be used to increase the effectiveness and appropriateness of teaching, learning, and student support strategies in economically viable and ethical ways. However, the practical implementations of that understanding remain largely incomplete, and thus fertile ground for sustained research.


New learning ecosystems must be designed to take these elements into consideration, as well as the place of communication and dialogue between participants in creating quality learning experiences, and the importance of the presence and engagement of knowledgeable others as vital in extending the ideas, critical analysis, and thinking of participants in a learning setting [14, 15]. Research in the area of EDM also highlights the importance of affective dimensions in learning, in sentiment analysis [16], in creating affective knowledge [17], and of keeping humans in the loop for the filtering and aggregation of information and for Socratic questioning, which research has demonstrated is better done through human mediation [15]. The complexity of introducing serendipity into the information aggregation process as an alternative to human mediation includes support for random or chance occurrences in information searches and discovery [18], the use of algorithm-based platforms to elevate levels of ‘serendipity’ in the information stream in order to enhance thinking and critical analysis [19, 20], as well as the use of collaborative filtering (CF) methods in recommender systems for making connections between different types of data [21]. The use of AI and automation in learning platforms has been successful in simulating interactions with a critical, knowledgeable human, interactions that are essential in creating quality learning experiences, but there are still important gaps to address.

6 Conclusion and Discussion

This paper highlights important considerations for developing new learning ecosystems that rely on massive amounts of personal data to create quality learning experiences. Several issues have been raised around the content of data-driven systems, who influences this content, and the value they add to the educational process itself. The affordances and effectiveness of new data-driven learning environments come with important limits as well, including issues around automated systems; systems that do not consider the end-user (or learner) in the information discovery, capture, and analysis process; and the ethics of moving from a learning environment characterized by human communication to an environment that includes technical elements over which the learner may have little or no control. New frameworks, codes of practice, and conceptual mappings of the ethical implications around data analytics provide guidelines on the scope and authority of the educational data scientists who are building data-driven systems, but practical implementations in new learning ecosystems are still incomplete. These gaps provide fertile ground for further research.

References

1. Weller, M.: A pedagogy of abundance. Span. J. Pedagog. 249, 223–236 (2011)
2. World Stats: Internet growth statistics: Today’s road to e-commerce and global trade internet technology reports, and the “Global Village” became a reality. https://www.internetworldstats.com/emarketing.htm. Accessed 15 May 2018


3. Fenwick, T.: Professional responsibility in a future of data analytics. In: Williamson, B. (ed.) Coding/Learning, Software and Digital Data in Education. University of Stirling, Stirling (2015)
4. Hardt, M.: How big data is unfair: understanding sources of unfairness in data driven decision making. https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de. Accessed 15 May 2018
5. Verbert, K., Manouselis, N., Ochoa, X., Wolpers, M., Drachsler, H., Bosnic, I., Duval, E.: Context-aware recommender systems for learning: a survey and future challenges. IEEE Trans. Learn. Technol. 5(4), 318–335 (2012). https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6189308. Accessed 16 May 2018
6. Hu, L., Qiuli Tong, Z.D., Liu, Y.: Context-aware recommendation of learning resources using rules engine. In: IEEE 13th International Conference on Advanced Learning Technologies (2013). https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6601899. Accessed 16 May 2018
7. Prinsloo, P., Slade, S.: Ethics and learning analytics: charting the (un)charted. In: Lang, Siemens, Wise, Gašević (eds.) Handbook of Learning Analytics, 1st edn. (2017). https://solaresearch.org/wp-content/uploads/2017/05/hla17.pdf. Accessed 16 May 2018
8. Fournier, H., Kop, R., Sitlia, H.: The value of learning analytics to networked learning on a personal learning environment. In: 1st International Conference on Learning Analytics and Knowledge, Paper 14, Banff, Canada (2011)
9. Kop, R., Fournier, H., Durand, G.: Challenges to research in Massive Open Online Courses. MERLOT J. Online Learn. Teach. 10(1) (2014)
10. Kop, R., Fournier, H., Durand, G.: A critical perspective on learning analytics and educational data mining. In: Lang, Siemens, Wise, Gašević (eds.) Handbook of Learning Analytics, 1st edn., pp. 319–326 (2017). https://solaresearch.org/hla-17/hla17-chapter27/. Accessed 30 May 2018
11. Williamson, B.: Computing brains: learning algorithms and neurocomputation in the smart city. Inf. Commun. Soc. 20(1), 81–99. https://doi.org/10.1080/1369118x.2016.1181194. Accessed 30 May 2018
12. Prinsloo, P.: Fleeing from Frankenstein and meeting Kafka on the way: algorithmic decision-making in higher education. Presentation at NUI, Galway, 22 September 2016. https://www.slideshare.net/prinsp/feeling-from-frankenstein-and-meeting-kafka-on-the-wayalgorithmic-decisionmaking-in-higher-education. Accessed 31 July 2018
13. Taneja, H.: The need for algorithmic accountability. TechCrunch, 8 September 2016. https://techcrunch.com/2016/09/08/the-need-for-algorithmic-accountability/. Accessed 28 Aug 2018
14. Jones, C., Dirckinck-Holmfeld, L., Lindström, B.: A relational, indirect, meso-level approach to CSCL design in the next decade. Int. J. Comput. Support. Collab. Learn. 1(1), 35–56 (2006)
15. Bates, T.: Two design models for online collaborative learning: same or different? Online Learning and Distance Education Resources, 28 November 2014. https://www.tonybates.ca/2014/11/28/two-design-models-for-online-collaborative-learning-same-or-different/. Accessed 30 May 2018
16. Wen, M., Yang, D., Rosé, C.: Sentiment analysis in MOOC discussion forums: what does it tell us? In: Proceedings of 2012 Educational Data Mining, pp. 1–8 (2012)
17. Cambria, E.: Affective computing and sentiment analysis. IEEE Intell. Syst. 31(2), 102–107 (2016). https://doi.org/10.1109/MIS.2016.31
18. Björneborn, L.: Three key affordances of serendipity. J. Doc. 73(5), 1053–1081 (2017)


19. Kop, R.: The unexpected connection: serendipity and human mediation in networked learning. Educ. Technol. Soc. 15(2), 2–11 (2012). https://pdfs.semanticscholar.org/2513/2614cbd6733047129e5945a5784d5ede7ef2.pdf. Accessed 30 May 2018
20. Gritton, J.: Of serendipity, free association and aimless browsing: do they lead to serendipitous learning? https://www.researchgate.net/publication/242402926_Of_serendipity_free_association_and_aimless_browsing_do_they_lead_to_serendipitous_learning. Accessed 4 Sep 2018
21. Lu, Q., Chen, T., Zhang, W., Yang, D., Yu, Y.: Serendipitous personalized ranking for Top-N recommendation. In: IEEE/WIC/ACM International Conferences on Web Intelligence & Intelligent Agent Technology, vol. 1, pp. 258–265 (2012). https://www.cs.cmu.edu/~diyiy/docs/wi12.pdf. Accessed 30 May 2018

Exploring the Acceptance of Mobile Technology Application for Enhancing Teaching and Learning at the College of Business Education in Tanzania

Godfrey Mwandosya1, Calkin Suero Montero2, and Esther-Rosinner Mbise1

1 ICT and Mathematics Department, College of Business Education, Dar es Salaam, Tanzania
[email protected], [email protected]
2 School of Computing, University of Eastern Finland, Joensuu, Finland
[email protected]

Abstract. Advancements in mobile technologies have brought forth substantial opportunities for enhancing teaching and learning environments in higher education. Based on the technology infrastructure available and teachers’ mobile technology use, this study applies the technology acceptance model (TAM) to investigate the teachers’ usage and acceptance of the student academic register information system (SARIS) application in Tanzania for enhancing their teaching. In this work, the TAM instrument is used in the form of an administered questionnaire to survey the views and perceptions of 50 teachers from the College of Business Education (CBE). The empirical results from the TAM instrument show positive perceived ease of use (PEOU), perceived usefulness (PU) and attitude towards the use of SARIS (ATSU). The usage behavior (UB) of the CBE teachers of the SARIS application confirms its acceptability. However, some of the teachers indicated discontent towards SARIS for not being involved in the requirements design.

Keywords: Mobile technology · CBE Tanzania · Teaching and learning enhancement · Innovative teaching · Emerging economies

1 Introduction

Mobile technologies have emerged as a widespread solution for enhancing the performance of different areas of society including education, and specifically teaching and learning in higher education [1, 2]. A number of studies have highlighted the usage of mobile technologies in a bid to improve teaching and learning in higher education [3–8]. Applying technologies such as electronic learning (e-learning) can improve teaching and learning and counteract some of the educational challenges that higher education institutions (HEIs) in emerging economies are facing [9].



HEIs in emerging economies such as Tanzania face many challenges that hinder the success of teaching and learning. These challenges include poor teaching and learning infrastructure in classrooms (e.g. whiteboards, local area networks, chairs, tables, internet access, etc.), insufficient physical space for classrooms, high enrollment of students causing difficulty in managing classes, lack of sufficient assistance to teachers, and limited educational resources (books, manuals, etc.), to mention just a few. A study by Tedre et al. [9] adds to these challenges by highlighting the lack of computer laboratories, ICT equipment, system administration, and funding. Another challenge comes from the teachers themselves lacking the willingness to use technology in their teaching. For instance, Lim et al. [10] and Oparaocha et al. [11] point out that teachers are not using technology for instructional purposes because they prefer traditional methods instead. Teachers in HEIs directly feel the pressure of these challenges, as they are the ones interacting with the students on a daily basis. Their duty, among many, is to make sure teaching and learning are as successful as possible [12]. Given that the teachers in HEIs are the experts in their different fields and that students rely on their expertise to acquire skills and knowledge, the use of technology in teaching and learning contexts by the teachers is important [13, 14]. A number of studies have reported on the enhancement of teaching and learning activities in higher education using mobile devices. They showed how wireless communication and network technologies (WiTEC) enhanced teaching and learning so that teachers and students could easily concentrate on teaching and learning continuously [15]. WiTEC enabled teachers and students to engage in innovative learning and teaching activities seamlessly. Similarly, Mtega et al. [1] showed extensively how mobile wireless technologies benefit teachers and students in higher education through the access, sharing and exchange of learning content in higher education institutions. In Tanzania, there is a gradual introduction of mobile technologies to support learning in HEIs [1, 16], including learning management systems (LMS) such as Moodle, the short message service (SMS) and the student academic register information system (SARIS) application. Most of the teachers (95%) at the College of Business Education (CBE), Tanzania, own at least one mobile device, including smartphones, tablets, iPads, and other similar items [17]. This represents an opportunity for the application of mobile technologies specifically for teaching. However, it is important to understand how the technology solutions put forward are accepted and used in order to make their deployment more impactful. That is, simple access to hardware and software seldom leads to widespread usage of technologies in classrooms as intended and, as such, teachers should be trained in technology use [18]. Some studies on mobile technologies have incorporated the technology acceptance model (TAM) in assessing users’ attitude, usage behavior and intention to use mobile technologies [1, 12, 19, 20]. In our work, we use the TAM model to understand how the SARIS system is accepted as a tool for enhancing teaching and learning at CBE. Therefore, this study aims at investigating the acceptance of the SARIS application by teachers of CBE according to TAM.
Specifically, the study seeks to answer the following research question: What are the characteristics of the SARIS application, in terms of perceived usefulness and perceived ease of use, that influence the CBE teachers’


attitude and usage behavior towards the acceptance of the application for enhancing teaching and learning? The implication of our study is that the information obtained provides a basis for the proper intake of technology, rather than deploying technology that is not suitable for the environment. The paper is organized into six sections: Sect. 2 is the literature review, Sect. 3 describes the methodology used, Sect. 4 describes the findings, Sect. 5 provides a discussion of the findings and Sect. 6 presents the conclusion and future studies.

2 Literature Review

Research in emerging economies points out that although teachers possess one or more mobile devices and are aware of ICT capabilities, they still do not utilize ICT as a pedagogical tool to enhance teaching and learning [19]. The effective outcome of a technology used for improving or enhancing a certain undertaking depends largely on the willingness of the individual to embrace that kind of technology [21] and on their competencies in using the technology effectively for teaching and learning [11]. Also, involving teachers in the design processes of the technological solution and investigating their requirements gives them ownership of the technology [17]. In Tanzania, a study by Tedre et al. [9] outlined the ways in which e-learning has developed in Tanzanian HEIs and how ICT has facilitated this development. Presently, e-learning is slowly being overtaken by mobile learning (m-learning) using wireless technologies and mobile devices. This means teaching and learning can be done anywhere, anytime [22]. The introduction of mobile technologies and mobile devices (for example, smartphones) is an important asset that enables teaching and learning at its best. The use of mobile phones in teaching and learning has been reported by [1]; specifically, the use of an SMS platform by teachers and students enhanced teaching and learning at the Sokoine University of Agriculture (SUA) [1]. For CBE, it is worth noting that although teachers have at least one or more mobile devices, the mobile technologies associated with enhancing teaching and learning are not yet fully realized [17]. The reason is that teachers at CBE used their mobile devices mostly for social media, blogs and other related applications [17] until the introduction of SARIS. Theories and models attempt to explore and explain the factors that cause users, for example, to accept, reject or continue using ICT or mobile technologies for enhancing teaching and learning. The TAM [23] is widely recognized as a highly useful model for measuring how users accept or reject the idea of introducing a new technology to be used in, for example, higher education. Davis [23] proposed the TAM for explaining and predicting user acceptance of an information technology (IT) by identifying two key beliefs: perceived usefulness and perceived ease of use. Davis defines perceived usefulness as “the degree to which a person believes that using a particular system would enhance his or her job performance,” and perceived ease of use as “the degree to which a person believes that using a particular system would be free of effort.” Hsiao and Yang [24] describe TAM as the most influential model for testing an information system. A study by Munguatosha et al. [25] used the TAM to underpin the development of a social networked learning model for developing countries in Africa. The TAM helps to assess how potential users of a particular


technology come to accept and use it. The model explains the key determinant factors of TAM, including the perceived ease of use of new learning media, the perceived usefulness in education, and user readiness to use new learning media. We used an extended TAM to assess the acceptance and usefulness of the SARIS application for enhancing teaching and learning through the usage behavior of the teaching staff at CBE. The model is used to assess perceived ease of use, perceived usefulness, attitude towards use, and usage behavior of SARIS at CBE (see Fig. 1).

(Model constructs: User involvement in requirements’ design (UI); Perceived Usefulness (PU); Perceived Ease of Use (PEOU); Attitude towards SARIS’ Use (ATSU); Usage Behavior (UB); Acceptance of SARIS.)

Fig. 1. Extended technology acceptance model (TAM) to assess users’ acceptance of SARIS

3 Methodology

3.1 Research Design

This study was conducted at the College of Business Education in Dar es Salaam city, Tanzania. CBE is one of the higher education institutions offering courses from certificate level to master’s level, and it strategically has campuses in four major cities of Tanzania. A case study strategy was used in this study; according to Saunders et al. [26], the case study strategy is most often used in explanatory and exploratory research.

3.2 The Instrument

This study assessed the acceptance of the existing SARIS application by the CBE teachers ahead of the further introduction of mobile technologies for enhancing teaching and learning at CBE. An administered survey questionnaire was developed to get the views of the teachers at CBE on their SARIS use and acceptance. The questionnaire comprised two sections: the first was about the background information of the participants and the second contained 12 statements. Each item, except for the demographic part, was measured on a five-point Likert scale with 1 = strongly disagree and 5 = strongly agree. An example statement is “Using SARIS have improved my work”.

3.3 Participants and Procedures

A purposive sample of 50 teachers was selected for the study due to their expertise in different mobile applications and willingness to participate. Each questionnaire was sent to participants with an attached cover letter describing the objectives of the survey


and guaranteeing respondent privacy. Participants were informed of their right to withdraw from the study at any time if they so wished.

3.4 Data Analysis

The analysis of the data was done using the IBM Statistical Package for the Social Sciences (SPSS), version 23. The tools used were descriptive statistics.
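Purely as an illustration of the kind of descriptive computation involved (the actual analysis was done in SPSS), the short sketch below shows how percentage agreement per questionnaire statement, of the sort reported in the Results section, could be derived from five-point Likert responses; the column names and data are invented and are not the CBE dataset.

import pandas as pd

# Hypothetical five-point Likert responses (1 = strongly disagree ... 5 = strongly agree),
# one row per teacher, one column per TAM questionnaire statement.
responses = pd.DataFrame({
    "PU: SARIS improved my work":      [5, 4, 4, 3, 5, 4],
    "PEOU: SARIS is easy to use":      [3, 2, 4, 3, 2, 4],
    "ATSU: working with SARIS is fun": [4, 5, 4, 4, 3, 5],
    "UB: I access SARIS anywhere":     [4, 4, 2, 5, 4, 3],
})

# Share of respondents who agree or strongly agree (rating >= 4) per statement,
# mirroring the percentage-agreement figures reported in Sect. 4.
percent_agree = (responses >= 4).mean().mul(100).round(1)

# Mean rating per statement as an additional descriptive summary.
mean_rating = responses.mean().round(2)

summary = pd.DataFrame({"% agree": percent_agree, "mean rating": mean_rating})
print(summary)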

4 Results

4.1 Background Details

The open-ended questionnaire had questions exploring topics ranging from the teachers’ background to their acceptance of the SARIS application. The education background of the teachers in the sample was as follows: 4 (8%) teachers with a PhD, 34 (68%) with a master’s degree, and 12 (24%) with a bachelor degree. Those holding a bachelor degree were tutors (see Table 1).

Table 1. Education background

Education level            No. of teachers    Percentage
Ph.D.                      4                  8%
Masters                    34                 68%
Bachelor                   12                 24%
Total number of teachers   50

4.2 Statements Results

The study first sought to explore whether teachers were involved in providing requirements so that the application would fulfil their needs prior to its deployment in their HEI. The responses showed that 70% of the teachers were not involved in the requirements design of SARIS (see Fig. 2). According to the teachers, the SARIS application was installed first and teachers were then trained on how to use it. It was observed that teachers were urged to stop the manual processing of examinations, so every teacher was required to learn SARIS as quickly as possible and use it in all educational activities. According to the teachers’ responses, perceived usefulness (PU) was found to be high, as 45 teachers (90%) agreed that SARIS has enhanced their effectiveness and 36 teachers (72%) agreed that SARIS has improved their work. However, from the perceived ease of use (PEOU) perspective, only 31 teachers (62%) responded that their interaction with SARIS is clear and unambiguous, whereas fewer than half agreed that the system is easy to use and that it is easy to get educational information through it (see Fig. 3).



Fig. 2. Responses of the teachers on the involvement in requirements design of SARIS

(Statements rated in Fig. 3, from strongly disagree to strongly agree: “Using SARIS have improved my work”; “Using SARIS have enhanced my effectiveness”; “Using SARIS have increased my productivity”; “My interaction with SARIS is clear and unambiguous”; “I find it easy to get educational information through SARIS”; “I find SARIS easy to use”.)

Fig. 3. Summary of CBE teacher’s responses regarding SARIS’ PU and PEOU

Regarding attitude towards SARIS use (ATSU) and usage behaviour (UB), 39 teachers (78%) agreed that working with SARIS is fun, and 33 teachers (66%) agreed that they can access SARIS anywhere and at any time, without necessarily having to be on campus (see Fig. 4). It is important to note that the 50 teachers participating in this work were already very familiar with technology use through their mobile devices, and the results to some extent reflect this, showing how they have taken up the technology despite it being new to them.


Fig. 4. Teachers' responses from the ATSU and UB perspectives

5 Discussion

This study assessed the acceptance of the existing SARIS application by CBE teachers as a basis for the further introduction of mobile technologies to enhance teaching and learning at the institution. The characteristics of SARIS usage were assessed through TAM in terms of perceived usefulness, perceived ease of use, attitude towards its use, and usage behaviour. The results show that, in general, teachers at CBE accept SARIS as a tool to enhance their teaching. As highlighted by Rienties et al. [18], technology usage increases teachers' skills and confidence so that they are able to balance and integrate technology within their pedagogical design and discipline. However, teachers reported that they were not involved in the design of SARIS, even though they eventually continued to use it as mandated by the institution. From this exploratory work, it appears that under administrative mandate and with continual usage of the technology, teachers at CBE became accustomed to it and were able to adapt. It may be that the CBE teachers reported accepting SARIS because it is the only information system they were instructed to use for uploading coursework and examination results. According to the teachers, some functionalities of SARIS, such as students' invoices, are not useful to them in their teaching practices and could have been left out. Other functionalities, such as an improved timetable interface, could be useful but are not available. These issues could have been identified if the teachers had been involved in the requirements design of the system.


A limitation of our work is that the study was conducted only at CBE, where SARIS is deployed and used. Other HEIs in Tanzania, such as the Dar es Salaam Institute of Technology, the Institute of Finance Management, and the Tumaini University Dar es Salaam College, have reportedly abandoned the SARIS application; instead, they use an online management information system (OSIM) (http://osim.tudarco.ac.tz) after noting the problems of SARIS. For example, SARIS shows a number of issues, such as accessibility problems and missing student records and coursework, which could be due to a lack of understanding of how the system works. In the end, many users prefer to go back to manual data reporting for confirmation and corrections. Therefore, this study highlights the need to involve teachers as stakeholders in HEIs when designing technological solutions to the problems they face in their teaching.

6 Conclusion and Future Studies

Our study has highlighted the importance of involving stakeholders in the process of introducing a new technology. Although the CBE teachers have largely accepted SARIS according to the results obtained with the TAM instrument, they have also shown the ability to provide suggestions for improving it. Most teachers confirmed that they were not involved in the requirements definition of SARIS prior to its deployment. It is recommended to involve key stakeholders, i.e. teachers, in gathering requirements for technology solutions [17] in order to achieve a smooth and successful introduction of new technologies. In addition, requirements definition should start before the design of education-related systems so that the application effectively enhances teaching and learning in HEIs. An interesting approach to consider is design science research (DSR), which advocates solving real-world problems in context through artifacts [27]. In our case, an artifact is an information system or a mobile education tool for enhancing teaching. We also recommend a more rigorous use and testing of hypotheses through the TAM model for more comprehensive outcomes.

References

1. Mtega, W.P., Bernard, R., Sanare, R.: Using mobile phones for teaching and learning purposes in higher learning institutions: the case of Sokoine University of Agriculture in Tanzania. In: 5th UbuntuNet Alliance Annual Conference, Dar es Salaam (2012)
2. Herrington, J., Herrington, A., Mantei, J., Olney, I., Ferry, B.: New Technologies, New Pedagogies: Mobile Learning in Higher Education. University of Wollongong, Wollongong (2009)
3. Kim, S.H., Mims, C., Holmes, K.P.: An introduction to current trends and benefits of mobile wireless technology use in higher education. Assoc. Adv. Comput. Educ. J. 14(1), 77–100 (2006)
4. Kukulska-Hulme, A.: How should the higher education workforce adapt to advancements in technology for teaching and learning? Internet High. Educ. 15(4), 247–254 (2012)
5. Lonn, S., Teasley, S.D.: Podcasting in higher education: what are the implications for teaching and learning? Internet High. Educ. 12, 88–92 (2009)


6. Kirkwood, A., Price, L.: Examining some assumptions and limitations of research on the effects of emerging technologies for teaching and learning in higher education. Br. J. Educ. Technol. 44(4), 536–543 (2013)
7. Kukulska-Hulme, A., Viberg, O.: Mobile collaborative language learning: state of the art. Br. J. Educ. Technol. 49(2), 1–12 (2017)
8. Kirkwood, A., Price, L.: Technology-enhanced learning and teaching in higher education: what is 'enhanced' and how do we know? A critical literature review. Learn. Media Technol. 39(1), 6–36 (2014)
9. Tedre, M., Ngumbuke, F., Kemppainen, J.: Infrastructure, human capacity, and high hopes: a decade of development of e-learning in a Tanzanian HEI. Redefining Digit. Divid. High. Educ. 7(1), 1–15 (2010)
10. Lim, C.P., Chai, C.S., Churchill, D.: A framework for developing pre-service teachers' competencies in using technologies to enhance teaching and learning. Educ. Media Int. 48(2), 69–83 (2011)
11. Oparaocha, G.O., Pokidko, D.H.: Educating the 21st-century learners: are educators using appropriate learning models for honing skills in the mobile age? J. Entrep. Educ. 20(2), 1–15 (2017)
12. Teo, T., Ursavaş, Ö.F., Bahçekapili, E.: Efficiency of the technology acceptance model to explain pre-service teachers' intention to use technology: a Turkish study. Campus-Wide Inf. Syst. 28(2), 93–101 (2011)
13. Baran, E.: A review of research on mobile learning in teacher education. Educ. Technol. Soc. 17(4), 17–32 (2014)
14. Hooker, M., Mwiyeri, E., Verma, A.: ICT competency framework for teachers (ICT-CFT): contextualization and piloting in Nigeria and Tanzania: synthesis report, UNESCO (2011)
15. Liu, T.C., Wang, H.Y., Liang, J.K., Chan, T.W., Ko, H.W., Yang, J.C.: Wireless and mobile technologies to enhance teaching and learning. J. Comput. Assist. Learn. 19, 371–382 (2003)
16. Mahenge, M.P., Sanga, C.: ICT for e-learning in three higher education institutions in Tanzania. Knowl. Manag. E-Learning 8(1), 200–212 (2016)
17. Mwandosya, G.I., Suero Montero, C.: Towards a mobile education tool for higher education teachers: a user requirements definition. In: Proceedings of the 2017 IEEE Science Technology & Innovation Africa Conference, Cape Town (2017)
18. Rienties, B., Brouwer, N., Lygo-Baker, S.: The effects of online professional development on teachers' beliefs and intentions towards learning facilitation and technology. Teach. Teach. Educ. 29, 122–131 (2013)
19. Ndibalema, P.: Teachers' attitudes towards the use of information communication technology (ICT) as a pedagogical tool in secondary schools in Tanzania: the case of Kondoa District. Int. J. Educ. Res. 2(2), 1–16 (2014)
20. Teo, T., Lee, C.B., Chai, C.S., Wong, S.L.: Assessing the intention to use technology among pre-service teachers in Singapore and Malaysia: a multigroup invariance analysis of the technology acceptance model (TAM). Comput. Educ. 53, 1000–1009 (2009)
21. Ertmer, P.A., Ottenbreit-Leftwich, A.T., Sadik, O., Sendurur, E., Sendurur, P.: Teacher beliefs and technology integration practices: a critical relationship. Comput. Educ. 59, 423–435 (2012)
22. Oyelere, S.S., Suhonen, J., Shonola, S.A., Joy, M.S.: Discovering students mobile learning experience in higher education in Nigeria (2016)
23. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–339 (1989)
24. Hsiao, C.H., Yang, C.: The intellectual development of the technology acceptance model: a co-citation analysis. Int. J. Inf. Manag. 31, 128–136 (2011)


25. Maleko Munguatosha, G., Birevu Muyinda, P., Thaddeus Lubega, J.: A social networked learning adoption model for higher education institutions in developing countries. On Horiz. 19(4), 307–320 (2011)
26. Saunders, M., Lewis, P., Thornhill, A.: Research Methods for Business Students, 5th edn. Prentice Hall, Essex (2009)
27. Johannesson, P., Perjons, E.: An Introduction to Design Science. Springer International Publishing Switzerland, Stockholm (2014)

Analysis of Atmospheric Monitoring Data Through Micro-meteorological Stations, as a Crowdsourcing Tool for Technology Integration

Maritza Aguirre-Munizaga1, Katty Lagos-Ortiz1, Vanessa Vergara-Lozano1, Karina Real-Avilés1, Mitchell Vásquez-Bermudez1, Andrea Sinche-Guzmán1, and José Hernández-Rosas2,3

1 Faculty of Agricultural Science, School of Computer Engineering, Universidad Agraria del Ecuador, Av. 25 de Julio y Pio Jaramillo, P.O. Box 09-04-100, Guayaquil, Ecuador
{maguirre,klagos,vvergara,kreal,mvasquez,asinche}@uagraria.edu.ec
2 Faculty of Agricultural Science, School of Environmental Engineering, Universidad Agraria del Ecuador, Av. 25 de Julio y Pio Jaramillo, P.O. Box 09-04-100, Guayaquil, Ecuador
[email protected]
3 Escuela de Biología, Facultad de Ciencias, Universidad Central de Venezuela, Caracas, Venezuela

Abstract. This paper highlights the use of information technologies for monitoring atmospheric variables, oriented to research in areas such as agriculture, forecasting, and climate indexes. Emphasis is placed on the use of free tools that allow spatial analysis of the monitored data and help to provide free access to information for different studies and investigations. This research uses as a case study approximately six months of data from the meteorological stations of the Agrarian University of Ecuador, located in Guayaquil and Milagro. As part of this study, the current situation of the generated data is presented as a starting point for the project "Platform for real-time atmospheric data monitoring of the network of meteorological stations of Agrarian University of Ecuador, Guayaquil and Milagro headquarters."

Keywords: Crowdsourcing · Meteorology · Opendata · Meteorological stations · Vigilance

1 Introduction

In Ecuador, there are data about conventional stations since approximately 1990, as recorded on the website of the Meteorological Service of Ecuador [1].



Ecuadorian institutions operate weather stations in different parts of the country, dedicated to monitoring these parameters in order to obtain real-time information for meteorological surveillance and warning systems. In 2015, the Polytechnic School of Chimborazo (ESPOCH), in partnership with the National Institute for Energy Efficiency and Renewable Energy (INER), implemented nine automatic meteorological stations in the Central-Andean region of Ecuador. These stations have electronic sensors for global and diffuse solar radiation, air and soil temperature, wind speed and direction, rainfall, and barometric pressure; the information is recorded in a database stored on a server and displayed on the Internet, and transmission is performed over an RS-232 serial link to a GPRS modem, which allows a history of variables to be built for determining the solar, wind and geothermal potential of the Central Andean Zone of Ecuador. At the national level, INAMHI has the largest number of meteorological stations; however, its real-time data are available only at a cost and after a written request, and the other institutions that obtain the information use it privately, usually for their own research purposes. What makes obtaining meteorological information a cumbersome and bureaucratic process for external people is often a visible unwillingness to transmit data that are not accessible to the general public, as well as a lack of dedication to data-collection responsibilities, as mentioned by the author of "Meteorological data acquisition in Ecuador, South America: problems and solutions" [2]. Nowadays, there are many collaborative tools to share information freely. Crowdsourcing is an approach that uses teamwork methods to share data for different purposes. The term crowdsourcing has many definitions depending on the authors and contexts; however, Estellés and Gonzalez analyzed more than 40 definitions and proposed an inclusive one in which crowdsourcing is a type of activity that invites a group of individuals to perform a free and voluntary task with shared benefits. In this context, the research "Crowdsourcing roles, methods and tools for data-intensive disaster management" [3] can be cited, in which georeferencing technologies, the use of mobile phones and crowdsourcing methods allow people to participate in various ways in disaster and emergency management, in order to contribute to the study of the limits and advantages of these approaches. In the same way, other researchers have used crowdsourcing methods for atmospheric data, testing the quality of these data and their possible applications in the context of weather and climate [4]. Such platforms have been used for climate change analysis [5], agriculture [6], and public health [7], among others. The Agrarian University of Ecuador has maintained, since 1992, the mission of training agricultural and environmental professionals at the highest level, which is why a research project was launched to implement a collaborative open platform based on crowdsourcing, with atmospheric data from stations installed on its properties, providing complete access to information in accordance with the Organic Law of Transparency and Public Information Access in force in Ecuador.


2 Related Work

The approach of this research mainly takes as a reference the concept presented in [4], "Crowdsourcing for climate and atmospheric sciences: current status and future potential", which states that crowdsourcing is now focused on incorporating distributed networks of portable sensors that can be activated and maintained through the traditional crowdsourcing protocol, as an open call for participation, as well as on the reuse of data from large pre-existing sensor networks. It is therefore suggested that the data generated by the platform be used by meteorologists and other researchers in the area, while establishing the possibility of adding nodes and sensors to the same platform, whether commercial or designed in academia. Agriculture has also benefited from this technology: in Tanzania there were a number of problems, including a limited number of workers and a limited capacity to acquire, solve and synthesize related issues and solutions from different actors, which is why a framework for an agronomic advisory service system (a crowdsourcing platform called 'Ushaurikilimo') was proposed. The proposed structure relies on farmers, who can report any problem, and a researcher can respond after reassignment; this process is called crowdsourcing space [8]. Atmospheric science has been slow to take advantage of crowdsourcing technology, yet developments have reached the point where the benefits of these approaches simply cannot be ignored, because they have potentially far-reaching consequences for the way measurements in these sciences are collected and used [9]. Such is the case that accurate observations of air temperature in urban areas are essential for planning energy demand and for assessing the effect of high temperatures on human health [5]. This innovative technology is already a valuable tool for involving the public and, if appropriate procedures for validation and quality control are adopted and implemented, it has excellent potential to provide a useful source of high temporal and spatial resolution, real-time data, especially in regions where there are currently few observations, which adds value to science, technology and also society [4].

3 Technology Architecture

At the present time, the Agrarian University of Ecuador maintains two installed meteorological stations, in Guayaquil and Milagro; each station measures temperature, humidity, rainfall and barometric pressure.

3.1 Technology Used

This section describes the existing environment for monitoring meteorological data, through a set of alternatives that are assessed to meet the identified needs. In the current data transfer pattern, the data logger is directly connected to the micro-meteorological station; through the model proposed in Fig. 1, the aim is to transfer the data through a web server that acts as a link between the stations in Milagro and Guayaquil and, as the first phase of the project, to provide access to these data in real time via FTP. Implementation and start-up were carried out based on the minimum network method approved by the OMM (World Meteorological Organization). With this architecture, a system is available for transferring information from the places where it is collected to any other site in the area, with means to estimate the amount of data taken from the stations and avoid uncertainty. Monitoring of the stations is done through remote access software and with in-situ visits to the places where the equipment is installed.

3.2 Proposed Architecture

The information captured by the sensors of the meteorological stations is intended to be processed by the web application, which remotely monitors the stations using GPRS (General Packet Radio Service) to access the data collected by the data loggers. GPRS is a packet-oriented mobile data service used in 2G and 3G cellular communications within the Global System for Mobile Communications (GSM) [10]; it provides stable, reliable communication with real-time data transmission and reception. For data transmission, a GPRS modem is used in each meteorological station [11]. Figure 1 presents the proposed GPRS-based architecture for connecting the Guayaquil and Milagro meteorological stations with the data server via FTP, in order to obtain data from the datalogger.

Fig. 1. Architecture using Raspberry

The Guayaquil and Milagro meteorological stations are connected directly to their respective dataloggers, to each of which a Raspberry Pi device is attached; the Raspberry Pi receives data from the datalogger and automatically sends them to the GPRS modem, to be transmitted to the server over the mobile network via FTP, at the application layer of the OSI (Open Systems Interconnection) model.
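As a rough sketch of this acquisition chain (not the authors' actual implementation; the serial port, FTP host, credentials and file naming below are placeholders), a Raspberry Pi could read one record from the datalogger over a serial link and append it to a daily file on the central server via FTP:

```python
import ftplib
import io
import serial  # pyserial (third-party)
from datetime import datetime

SERIAL_PORT = "/dev/ttyUSB0"       # placeholder: datalogger serial interface
FTP_HOST = "data.example.edu.ec"   # placeholder: central data server
FTP_USER, FTP_PASS = "station", "secret"

def read_record() -> str:
    """Read one measurement line (e.g. temperature, humidity, rainfall, pressure)."""
    with serial.Serial(SERIAL_PORT, baudrate=9600, timeout=5) as port:
        return port.readline().decode("ascii", errors="replace").strip()

def upload_record(record: str) -> None:
    """Append the record to a daily file on the FTP server (sent over the GPRS uplink)."""
    filename = f"guayaquil_{datetime.utcnow():%Y%m%d}.csv"   # placeholder naming scheme
    with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
        ftp.storlines(f"APPE {filename}", io.BytesIO(record.encode() + b"\r\n"))

if __name__ == "__main__":
    upload_record(read_record())
```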

3.3 Results

The results generated by the monitoring platform are available at the project website (http://meteorologiauae.uagraria.edu.ec/), which is structured in three parts in its first phase: data downloading, statistical data generation and general information. It is important to mention that the graphics displayed within the tool allow users to analyze the parameters in a friendly way, as is the case of the wind rose provided within the statistics option and the heat index available in the WMS service.

Variables Included in the Monitoring. The micro-meteorological station used has sensors that monitor several variables at short intervals and also estimates, in real time, other variables from those already measured. These are maximum, minimum and average temperature, instantaneous and accumulated precipitation, relative humidity, atmospheric pressure, dew point, direction and speed of wind gusts, and wind speed. Over time, the collection of large volumes of climate information allows climate forecasts to be made, alerts to be created and information to be provided for the activities that people perform, from agricultural activity, through tourism and recreation, to day-to-day routines.

Transmission and Download Speed. The theoretical transmission speeds of GPRS range between 44 and 171.2 kbps. In our project, the transmission and reception rate over the mobile link corresponds to approximately 45 kbps. Figure 2 shows the data download options.

Statistics with the Analysis of Atmospheric Data. Comparison of temperatures: this graph provides a control tool in which the monthly fluctuation of both minimum and maximum temperatures is identified, the average value of each is determined, and it is verified which days of the month fall outside the determined range, given by an allowance of (μ ± 2σ). Monthly humidity: this graphic identifies, month by month, the humidity fluctuation during the winter season in Ecuador, verifying which month has had the greatest amount of humidity and which month has been the driest during this period.
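As a simple illustration of the (μ ± 2σ) control check mentioned above (the daily temperature values are invented, not data from the platform), days whose maxima fall outside the allowed band can be flagged as follows:

```python
import statistics

# Hypothetical daily maximum temperatures (degrees C) for part of one month.
daily_max = [31.2, 30.8, 32.1, 31.5, 35.9, 30.4, 31.0, 29.8, 36.4, 31.7,
             30.9, 31.3, 32.0, 30.6, 31.1, 30.2, 31.8, 30.7, 24.9, 31.4]

mu = statistics.mean(daily_max)
sigma = statistics.pstdev(daily_max)
low, high = mu - 2 * sigma, mu + 2 * sigma

outliers = [(day + 1, t) for day, t in enumerate(daily_max)
            if not (low <= t <= high)]

print(f"mean = {mu:.1f} C, allowed band = [{low:.1f}, {high:.1f}] C")
print("days outside the band:", outliers)
```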



Fig. 2. Data download options

4 Conclusions and Future Work

This work provides an in-depth review of the critical issues and trends of meteorological data in Ecuador, the challenges faced when reasoning and making decisions with real-time crowdsourced data (such as information overload, "noise", disinformation, bias, and trust), core technologies such as the Sensor Web Enablement standards and Open GeoSMS, as well as some outstanding examples of projects implemented worldwide. Notable research on network data collection and meteorological data dissemination is carried out through the data platform for data analysis, considering that once all meteorological information is available, it can be turned into different types of products, such as maps and seasonal statistical analyses of various variables for climate monitoring and social conditions [12]. Having more data before making final decisions is a critical factor in these systems, so in the future we expect to build a physical architecture like the one proposed in [13], "A Cloud Computing Based Framework for Storage and Processing of Meteorological Data". Because of the role played by the UAE in supporting agriculture in Ecuador, in the future it is expected to provide the farming community of the Coast with sufficient support in terms of climate information, enabling better technological management of this human activity.


References

1. INAMHI: La Meteorología en el Ecuador, Instituto Nacional de Meteorología e Hidrología
2. Trapasso, L.M.: Meteorological data acquisition in Ecuador, South America: problems and solutions. GeoJournal 12, 89–94 (1986)
3. Poblet, M., García-Cuesta, E., Casanovas, P.: Crowdsourcing roles, methods and tools for data-intensive disaster management. Inf. Syst. Front. 1–17 (2017)
4. Muller, C.L., Chapman, L., Johnston, S., Kidd, C., Illingworth, S., Foody, G., Overeem, A., Leigh, R.R.: Crowdsourcing for climate and atmospheric sciences: current status and future potential. Int. J. Climatol. 35(11), 3185–3203 (2015)
5. Overeem, A., Robinson, J.C., Leijnse, H., Steeneveld, G.J., Horn, B.K.P., Uijlenhoet, R.: Crowdsourcing urban air temperatures from smartphone battery temperatures. Geophys. Res. Lett. 40, 4081–4085 (2013)
6. Van Etten, J.: Crowdsourcing crop improvement in sub-Saharan Africa: a proposal for a scalable and inclusive approach to food security. IDS Bull. 42, 102–110 (2011)
7. Brabham, D.C., Ribisl, K.M., Kirchner, T.R., Bernhardt, J.M.: Crowdsourcing applications for public health. Am. J. Prev. Med. 46, 179–187 (2014)
8. Sanga, C.A., Phillipo, J., Mlozi, M.R.S., Haug, R., Tumbo, S.D.: Crowdsourcing platform 'Ushaurikilimo' enabling questions answering between farmers, extension agents and researchers. Int. J. Instr. Technol. Distance Learn. 13, 19–28 (2016)
9. Chapman, L., Bell, C., Bell, S.: Can the crowdsourcing data paradigm take atmospheric science to a new level? A case study of the urban heat island of London quantified using Netatmo weather stations. Int. J. Climatol. 37, 3597–3605 (2017)
10. Al-Ali, A.R., Zualkernan, I., Aloul, F.: A mobile GPRS-sensors array for air pollution monitoring. IEEE Sens. J. 10, 1666–1671 (2010)
11. Parvez, S.H., Saha, J.K., Hossain, M.J., Hussain, H., Shuchi, N.Z., Islam, A., Hasan, M., Paul, B.: A novel design and implementation of electronic weather station and weather data transmission system using GSM network. WSEAS Trans. Circ. Syst. 15, 21–34 (2016)
12. IRI: Climate: Analysis, Monitoring and Forecasts. http://iridl.ldeo.columbia.edu/maproom/?bbox=bb%3A-90%3A-60%3A-30%3A15%3Abb
13. Aguirre-Munizaga, M., Gomez, R., Aviles, M., Vasquez, M., Recalde-Coronel, G.: A cloud computing based framework for storage and processing of meteorological data, pp. 90–101 (2016)

Infra SEN: Intelligent Information System for Real Time Monitoring of Distributed Infrastructures and Equipments in Rural Areas

Bala Moussa Biaye1, Khalifa Gaye1, Cherif Ahmed Tidiane Aidara1, Amadou Coulibaly2, and Serigne Diagne1

1 Laboratory of Computer Science and Engineering for Innovation (LI3), Assane Seck University of Ziguinchor, Diabir, BP 523 Ziguinchor, Senegal
{b.biaye3299,c.aidara3345}@zig.univ.sn, {kgaye,sdiagne}@univ-zig.sn
2 Laboratory of Sciences of the Engineer, Computer Science and Imaging (Icube – UMR 7357), National Institute of Applied Sciences of Strasbourg, University of Strasbourg, CNRS, Strasbourg, France
[email protected]

Abstract. The management of infrastructures and equipments (e.g. hydraulic, solar, sanitary, educational) disseminated throughout the country, particularly in rural areas that are generally difficult to access, is a major challenge for the technical services of the State. The remote monitoring system proposed in this paper aims to offer organizations in charge of managing infrastructures and equipments a platform that allows them to find out in real time how the equipments are working and to detect any failures. This paper presents a general framework for the Infra SEN approach to real-time monitoring of remote installations. We use the FMECA method to identify vulnerable equipments and analyze their failure effects and criticality. The proposed solution has been applied to the hydraulic installation at Niamone, in the department of Bignona, Ziguinchor region, Senegal.

Keywords: Database · GIS · Connected objects · Remote monitoring · FMECA

1 Introduction

As part of the development of rural areas, the State of Senegal, through national agencies and with the help of NGOs, has launched in rural areas a vast equipment program in various domains such as hydraulic infrastructures, schools, health and energy. However, once these infrastructures and equipments are installed, they unfortunately do not benefit from effective monitoring, despite the huge budgets invested in their implementation. This paper proposes an intelligent information system for remote monitoring of distributed hydraulic infrastructures and equipments. The proposed system consists of a central server for processing measurements, connected to an acquisition unit that monitors a set of sensors. In Sect. 2, we present the state of the art of remote monitoring techniques and failure detection in hydraulic installations. In Sect. 3, the paper presents the architecture of the Infra SEN platform. In Sects. 4 and 5 we successively present the approach used and the implementation of the platform, as well as the first results obtained. Finally, we outline some prospects for the development of the Infra SEN project.

2 State of the Art and Positioning

The exploitation of drinking water distribution networks around the world suffers from numerous failures that can arise in arbitrary places that are difficult to determine. In addition to the enormous economic losses linked to faults, there is also the risk of epidemics caused by leaks, which constitute a great danger to public health. A study conducted by the International Association of Water Distribution (IAWD) shows that the amount of water lost through distribution networks is between 20 and 30% of total production. This has led network operators to look for more efficient ways of detecting these leaks in record time. In the field of leak detection, there are several methods and techniques. Currently used detectors can be classified into two main categories:

• acoustic noise-based detectors, which require the operator to move around to locate the exact position of the leaks;
• acoustic correlation-based detectors, which allow remote leak detection and give the location of the leak with great precision.

Acoustic correlation detectors are widely used to detect leaks. Indeed, this technique is the subject of several works and implementations [1–5]. It is used for leak detection by Osama [1]. In the work of Miloud [2], the same method was used to implement a leak detection algorithm for distribution networks on the TMS320C6201 processor. The National Directorate of Drinking Water and Sanitation of Haiti [3], in its document entitled "control of water loss leak detection", used the acoustic correlation method for the precise location of leaks. In the cited works, formulas, algorithms and architectures have been proposed for the detection of leaks, but they apply only to metal pipes. The acoustic correlation method, although effective, has its limits: it becomes problematic in the case of plastic pipes [4]. Acoustic leak detection equipment was designed primarily for small-diameter metal pipes; the signals emitted by leaks in plastic pipes have acoustic characteristics that are substantially different from those produced by leaks in metal pipes, and materials such as HDPE or PVC strongly absorb vibrations. A recent study conducted by the Canadian Institute for Research on Construction (IRC) and funded by the American Water Works Research Foundation found that leaks in plastic pipes can be detected using acoustic techniques, but with many difficulties.

2.1 Positioning of Our Contribution

Since in Senegal plastic pipes are the most widely used in water distribution, it would be more effective to use non-acoustic techniques for leak detection.


Leaks in plastic tubes can also be detected using non-acoustic techniques such as tracer gas, infrared imaging and radar. However, the use of these techniques is still very limited and their effectiveness is not as well established as in the case of acoustic methods [4].

3 Infra SEN Platform Architecture

The general objective of our work is to design the Infra SEN platform for remote monitoring of distributed infrastructures and equipments. These equipments are instrumented with sensors capable of acquiring measurements, which are recorded by an acquisition system. Since the installations to be monitored are positioned in different localities (distributed), we exploit the spatial analysis capabilities of the ArcGIS software to find the locality of the defective equipment. Thus, maintenance teams can be informed about the locality where they will have to intervene to restore the proper functioning of the installation. Equipment likely to fail is connected to one or more sensors that deliver information on its operation. This information dynamically feeds the Infra SEN database, which is linked to the ArcGIS database. Using the ArcGIS spatial analyst, we can reference and map all equipments, including their failures (Fig. 1).

Fig. 1. Core architecture of the SIGI platform

4 Description of Our Approach

We use a technique based on the measurement of the water flow rate. In this method, we calculate the variation of the flow rate between two measurements and the Linear Leakage Index (LLI). A good approximation of this index is obtained by measuring the minimum night flow (usually between 1 am and 4 am, after deduction of heavy nocturnal consumers [5]). It is calculated as follows:

LLI = volume lost in distribution (m3/day) / length of the pipeline (km)    (1)

volume lost in distribution (water loss) = volume put into distribution − volume consumed    (2)
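As a small numeric illustration of Eqs. (1) and (2) (the volumes and pipeline length are invented for the example), the LLI of a sector could be computed as:

```python
def linear_leakage_index(volume_distributed_m3: float,
                         volume_consumed_m3: float,
                         pipeline_length_km: float) -> float:
    """LLI = volume lost in distribution (m3/day) / pipeline length (km)."""
    volume_lost = volume_distributed_m3 - volume_consumed_m3   # Eq. (2)
    return volume_lost / pipeline_length_km                    # Eq. (1)

# Hypothetical daily figures for one sector of the network.
lli = linear_leakage_index(volume_distributed_m3=1200.0,
                           volume_consumed_m3=950.0,
                           pipeline_length_km=18.5)
print(f"LLI = {lli:.1f} m3/day/km")   # about 13.5 m3/day/km
```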

Areas with significant leakage can be determined by the step-test method, which consists of subdividing the sector and then measuring the flow rate. In recent years, there has been a tendency to permanently install flow sensors connected to the system; the flow values thus transmitted are automatically analyzed, which allows leaks to be detected.

4.1 Infra SEN Module for Failure Detection Related to Leak Flow

The objective is to obtain a measured flow rate (M) equal to its normal flow rate (C). If M is not equal to C, we have M = C − e, with e the difference between the measured signal and the normal flow rate. To establish this algorithm, we rely on the straight-line (linearity) method. When two quantities are such that the variations of one are proportional to the variations of the other, the values y of one are expressed in terms of the values x of the other by a relation of the type y = ax + b, where a and b are two real numbers. Given a straight line, we consider on this line a fixed point, the final measurement Mf(xf, yf), and an arbitrary point Mi(xi, yi). According to the properties of similar triangles, the quotient (yf − yi)/(xf − xi) does not depend on the point Mi chosen on the line, so this quotient is equal to a constant. This constant a is called the steering coefficient (or slope) of the line:

(yf − yi)/(xf − xi) = a    (3)

The steering coefficient gives the direction of the line. In our practical case, the steering coefficient indicates the water flow rate. In fact, in both cases, y varies by the same quantity Δy = aΔx; we therefore write a = Δy/Δx. Three cases are possible. If a > 0, then y increases when x increases (the function is increasing), and y increases all the more rapidly as a is large: the air valves are poorly closed and the air that enters increases the output flow. If a < 0, then y decreases when x increases (the function is decreasing), and y decreases all the more rapidly as the absolute value of a is large: there is a leak in the network, which explains why the output flow is lower than the input flow. If a = 0, then y is constant, so the flow measured by the different sensors is the same.
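A minimal sketch of this slope-based diagnosis is given below (the tolerance and flow readings are invented; in the real platform the analyzed values are those transmitted by the flow sensors over GPRS):

```python
def diagnose(flow_in: float, flow_out: float, tolerance: float = 0.5) -> str:
    """Classify a pipe section from two flow measurements (m3/h).

    The slope a = dy/dx reduces here to the difference between the
    output and input flow rates measured at the two ends of the section.
    """
    a = flow_out - flow_in
    if a > tolerance:
        return "air intake suspected (output flow higher than input flow)"
    if a < -tolerance:
        return "leak suspected (output flow lower than input flow)"
    return "normal operation (flows equal within tolerance)"

print(diagnose(flow_in=42.0, flow_out=38.2))   # -> leak suspected ...
```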

5 Deployment of the Infra SEN Platform

Lacking physical installations, we used the Scilab numerical analysis software to simulate measurement acquisition; in the case of a physical installation, the procedure and algorithms remain unchanged. Scilab is a free multi-domain simulation software that provides a graphical platform and a set of libraries allowing the modeling, simulation, implementation and control of systems in different areas of application. To simulate, we need a description of the program; in its execution, we respected the various stages of operation of the acquisition central. The time step is managed by multithreaded programming. We used the function rand(n, m), which automatically generates values simulating the sensor measurement outputs as a function of time, where n is the measurement time and m the measurement output of the sensors. The sleep(z) function is used to manage the time step, putting the program to sleep for a desired time z. Measurements are made every day; the total number of experiments covers 100 days, and every hour the sensors send measurements to the acquisition central. The result is shown in Fig. 2.

Fig. 2. Sensor measurements versus the time
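The simulation loop described above can be sketched as follows (in Python rather than Scilab, and with an accelerated time step so the example runs quickly; the 100 days and hourly cadence follow the description, everything else is illustrative):

```python
import random
import time

DAYS = 100            # total number of simulated days
HOURS_PER_DAY = 24    # one measurement frame sent per hour
TIME_STEP_S = 0.01    # accelerated "hour" so the sketch runs in ~25 s

def sensor_outputs() -> list[float]:
    """Simulate one round of sensor readings (analogous to rand(n, m))."""
    return [random.random() for _ in range(4)]   # e.g. 4 sensors on the station

acquired = []
for day in range(DAYS):
    for hour in range(HOURS_PER_DAY):
        acquired.append((day, hour, sensor_outputs()))
        time.sleep(TIME_STEP_S)                  # stands in for sleep(z)

print(f"{len(acquired)} hourly measurement frames acquired over {DAYS} days")
```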

6 Conclusions and Perspectives

We propose a real-time remote monitoring system for equipments based on a GIS. This system significantly improves the quality of service and reduces the wasted time and costs related to equipment maintenance. However, while the proposed system allows remote monitoring of the equipments, it does not yet solve the problem of remote maintenance, which we have not discussed here. The application to the monitoring of hydraulic equipment in the municipality of Niamone, in the department of Bignona, validated the mapping and algorithmic aspects of failure detection. Future work should allow us to perform full-scale tests for the whole territory. Based on the Infra SEN project approach, many remote monitoring applications may be possible in various sectors such as health, education, and renewable energies, including solar panels.

References

1. Hunaidi, O.: Leaks detection in water pipes, constructive solution n°40. Institute for Research on Construction, Canadian National Research Council, 6 p. (2000)
2. Bentoumi, M., Chikouche, D., Bouamar, M., Khelfa, A.: Implementation for real time a leak water detection algorithm distribution networks on the TMS320C6201 processor using acoustic correlation. In: 4th International Conference on Computer Integrated Manufacturing, CIP 2007, 3–4 November 2007
3. National Directorate of Drinking Water and Sanitation of Haiti (NDDWS): «Loss control of water - search for leaks», Version 23, September 2013
4. Gao, Y., Brennan, M.J., Liu, Y., Almeida, F.C.L., Joseph, F.: Improving the shape of the cross-correlation function for leak detection in a plastic water distribution pipe using acoustic signals. Appl. Acoust. 127, 24–33 (2017)
5. Almeida, F.C.L., Brennan, M.J., Joseph, P.F., Gao, Y., Paschoalini, A.T.: The effects of resonances on time delay estimation for water leak detection in plastic pipes. J. Sound Vibr. 420, 315–329 (2018)

Product-BPAS, A Software Tool for Designing Innovative and Modular Products for Agriculture and Crafts

Chérif Ahmed Tidiane Aidara1, Bala Moussa Biaye1, Serigne Diagne1, Khalifa Gaye1, and Amadou Coulibaly1,2

1 Laboratory of Computer Science and Engineering for Innovation (LI3), Assane Seck University of Ziguinchor, BP 523 Diabir, Ziguinchor, Senegal
{c.aidara3345,b.biaye3299}@zig.univ.sn, {sdiagne,kgaye}@univ-zig.sn, [email protected]
2 Laboratory of Sciences of the Engineer, Computer Science and Imaging (Icube – UMR 7357), National Institute of Applied Sciences of Strasbourg, University of Strasbourg, CNRS, Strasbourg, France

Abstract. Farming equipment plays a very important role in the performance of farmers in African countries. In Senegal in particular, the design of tools is generally done in a traditional way. Very often these tools suffer from reliability problems and lack the modularity that would facilitate their maintenance. Ergonomic problems are also encountered in adapting farming tools to the size, age, and morphology of users. In this paper, we propose a general methodology for designing modular products. This methodology is then implemented in Product-BPAS, a software tool aimed at helping designers and at evaluating the behavioral performance of products in the preliminary design phases.

Keywords: UML modeling · Software engineering · Databases · Product-BPAS · Early design · Design methodology · Semantic modeling · Modularity

1 Introduction

The manufacture of a product requires consideration of several factors that will shape the final product. Indeed, the complexity of design lies in the fact that products must be more reliable, resistant, easy to assemble and disassemble, etc. It is in this sense that it is stated in [1] that the designer must opt for product solutions that are simple to manufacture, ergonomic, very reliable, safe, easy to maintain, and have an overall life-cycle cost that is attractive to the consumer. Being more and more complex, products imply an interaction between several actors [2, 3]. Managing all the factors in the design process requires taking the environment into account and assessing the performance of the future product or system [4, 5]. For example, we plan to carry out a scientific and technical study of existing agricultural tools in order to propose innovative solutions that take modularity into account. The article successively presents in Sect. 2 the state of the art, in Sect. 3 the object-oriented modeling of complex products, in Sect. 4 the evaluation of the modularity M of a product, and then in Sects. 5 and 6 the Product-BPAS software architecture and its implementation.

2 State of the Art

From a bibliographical study [6–8], we note that in the African and Senegalese context, forging and carpentry are two artisanal domains strictly linked to agricultural tools (manual and harnessed). On the one hand, they play an important role in the maintenance and manufacture of harnessed cultivation equipment, and on the other hand, they participate in the production of small-scale agricultural tools such as the "kadiandou" and the "hilaire". Most blacksmiths work on farm equipment. Therefore, if the blacksmith and carpenter trades are an integral part of the craft industry and the tools used by farmers are made by these artisans, there is a clear link between agriculture and crafts. Moreover, according to Homam Issa in [9], the design process is done in five stages in most cases [10, 11]. In addition, there are several tool design models, among which V design, spiral design, unified methods, agile methods, and more; among the modeling methodologies we note UML, the SysML formalism, DSM matrices, etc. [12]. Being the result of the decomposition of a product or a system, modularity helps solve several aspects of product manufacturing. In fact, several studies have been carried out in this direction [13, 14], and approaches allowing the modularity of systems to be increased and their links to be simplified have been proposed; we note generative grammars for classifying components, or the use of the concept of "holons" to model connections between structure and functions. In most of these works, the authors present a structural and functional modularity of the products or systems in order to arrive at a final product with a more flexible configuration. Our goal is to evaluate the modularity of a product family or a range of hand tools. In this paper we propose an object-oriented modeling of complex products (product classes, component classes, link classes), an evaluation of the modularity of a product, and the architecture and implementation of Product-BPAS.

3 Object-Oriented Modeling of Complex Products

In Product-BPAS, we use UML and X-CDSM matrices to arrive at a fine-grained modeling of the product. We therefore consider product classes, component classes, and link classes.

3.1 Classes of Products and Components

Modularity is defined by the grouping of modules, each satisfying a function well defined in the specifications. Modules (at the product level) are groupings of components (functional subassemblies); subassemblies are sets of non-functional parts, and parts are the single elements of a subassembly. In the figure below, the product is represented as a product class: each instance represents a product with its different components and the links between the components (Fig. 1).


Fig. 1. Product class/component class.

3.2 Classes of Links

A connection between two components can be physical or functional. Indeed one can have interactions of friction, torsion, and support between two components as illustrated in the following figure (Fig. 2).

Fig. 2. Class of links.

3.3 Semantic Matrix

The semantic matrix shown in Fig. 3 allows the component and link classes of the product to be related. The different components Ci, with their links (Ci, Cj), appear in the cells of the matrix; the diagonal cells allow the characterization of each component (name, size, …). The last line of the matrix gives the number of links. The semantic matrix makes it possible to construct the product's link graph, which is used to identify the connected components needed for calculating the modularity.

Fig. 3. Semantic matrix.

4 Evaluation of the Modularity M of a Product

Starting from the conceptual design, the work in [2] defined a field of eligible solutions (instances, SS (Space of Solutions), classes of solutions) respecting certain requirements and constraints, as illustrated in Fig. 4. We consider a solution instance in the field of eligible solutions to highlight the modular aspect (Fig. 5).

Fig. 4. SS and DES


Fig. 5. Representation of a product instance by a graph G (C, L)

A solution instance will be represented by a graph G(C, L), with C the set of components and L the set of links. In this graph G(C, L), we have C = {X1, X2, X3, X4} and L = {{X1, X2}, {X1, X3}, {X2, X4}, {X4, X2}}. Moreover, to find the connected components, we first build the adjacency matrix of the graph G, denoted M, and the matrix of the transitive closure of the same graph, denoted M*.

M =
       X1  X2  X3  X4
 X1     0   1   1   0
 X2     1   0   0   1
 X3     1   0   0   0
 X4     0   1   0   0





The adjacency matrix is obtained by noting:
• 1 if there is a relationship between two components;
• 0 otherwise.

Thus we can determine the matrix of the transitive closure, denoted M*:

M* = I + M + M² + M³    (1)

where I is the identity matrix (1 on the main diagonal and 0 elsewhere) and powers of the adjacency matrix are taken up to M^(n−1); we stop at M³, since n = 4 for this graph.


So:

From the matrix of the transitive closure, we deduce the following connected components: {X1}, {X2}, {X3} and {X4}, because the lines are not alike, which is why each line is considered as a connected component in itself. To calculate the modularity of an instance, we use an analogy with the cyclomatic number, which determines the complexity of a (computer) program by counting the number of paths. The cyclomatic number is defined by the following equation:

M = E − N + 2P    (2)

with
M = cyclomatic complexity
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components of the graph

cyclomatic complexity the number of edges of the graph the number of nodes of the graph the number of connected nodes of the graph

In software engineering, simple code with a low cyclomatic number is theoretically considered easier to read, test and maintain. It is in this same logic that our approach defines product modularity with the same equation (Eq. 2), where
M = the modularity of the product
E = the number of links of the graph
N = the number of components of the graph
P = the number of connected components of the graph

To illustrate, we take two graphs representing two product instances and determine which of the two products is the more modular. If we consider a first graph G(C, L) in Fig. 6, the connected components of the graph G are {X1}, {X2}, {X3}.

Fig. 6. Graph G(C, L):{X1}, {X2}, {X3}


Then, consider a second graph G' = (C', L') whose connected components are {X1, X2, X3} (graph G'(C', L'): {X1, X2, X3}).
• For the graph G, we compute its modularity M with Eq. (2).

• For the graph G’ we will have:

We obtain M' < M, so the second solution is theoretically the more modular.
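For concreteness, a small sketch of this modularity computation is given below (the component and link sets are invented to mirror the two example graphs; this illustrates Eq. (2) and is not the Product-BPAS code itself):

```python
def connected_components(components, links):
    """Count connected components of the link graph with a small union-find."""
    parent = {c: c for c in components}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for a, b in links:
        parent[find(a)] = find(b)
    return len({find(c) for c in components})

def modularity(components, links):
    """M = E - N + 2P (Eq. 2): links, components, connected components."""
    e, n = len(links), len(components)
    p = connected_components(components, links)
    return e - n + 2 * p

# Two hypothetical product instances: G has three isolated components,
# G' has the same three components connected by two links.
g  = (["X1", "X2", "X3"], [])
g2 = (["X1", "X2", "X3"], [("X1", "X2"), ("X2", "X3")])

print("M  =", modularity(*g))    # 0 - 3 + 2*3 = 3
print("M' =", modularity(*g2))   # 2 - 3 + 2*1 = 1  -> lower, i.e. more modular
```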

5 Product-BPAS Software

The Product-Behavioral Performance Assessment System (BPAS) features are provided by three main modules (Fig. 7):
• Design Review
• Semantic enrichment
• Behavioral Performance Engineering

The Design Review module allows the creation of new projects; it facilitates the import of templates (BOM files (text/Excel table)), the input of components via a graphical interface plus a viewer (eDrawings, DWG TrueView, Design Review, …), files in neutral CAD formats (STEP, IGES, …), and files in native formats (SolidWorks, CATIA, …).

Fig. 7. Product-BPAS architecture


The Semantic enrichment module creates the X-CDSM matrix, which is built from the nomenclature retrieved by the preceding module in order to perform a consistency check of the links and components. The Behavioral Performance Engineering module evaluates the behavioral performance of the product, such as its modularity M; it is in this sense that the graph of the product can be drawn (Fig. 7).

6 Implementation of Product-BPAS

Product-BPAS is a software tool for designers that allows them to analyze modularity during the design of innovative products. We first implemented the model import module, with the input of the components via a graphical interface and additional software to visualize the 3D models of the products (here, we use the eDrawings viewer). By coupling visualization with the import of BOM files, we build the list of components of the product with the links between the components (Fig. 8).

Fig. 8. Design review module

The list of entered components is stored in an object-oriented database. We then build the semantic matrix, which allows the product link graph to be defined. The graph thus obtained serves as a support for evaluating the modularity by calculating the cyclomatic number. This procedure is illustrated in Fig. 9, which highlights the product in the software: it gives the number of components of the product, its subcomponents, and the number of connections, in order to evaluate the modularity M. Figure 10 shows the graph corresponding to the components and links of the product; once traced, we determine the number of connected components, shown here in color.


Fig. 9. Semantic enrichment module and performance engineering

Fig. 10. Graph development to determine modularity

Figure 11 shows the exact level of modularity of the product. Thus, after evaluating two or more products, we can determine which one is the most modular and select it.


Fig. 11. Evaluation of modularity

7 Conclusions and Perspectives

In Senegal, local manufacturing of innovative and reliable agricultural tools is an excellent opportunity to promote crafts, especially for the blacksmith and carpenter trades. An evaluation tool such as this could help improve the design of craft products; in addition, these products could be exported to the sub-region. This work will facilitate the use and maneuverability of agricultural tools. In our future work, we plan to carry out an ergonomic tooling study to alleviate the hard work of farmers and minimize the risk of injury.

References

1. Menye, John the Baptist: Validation of maintainability and design availability of a multi-component system. Laval University (2009). http://www.exercicescorriges.com/i_107544.pdf
2. Diagne, S., Coulibaly, A., De Beuvron, F.: Towards a conceptual design for mechatronic product's family development. In: Proceedings of the 2014 International Conference on Innovative Design and Manufacturing (ICIDM), pp. 94–99. IEEE (2014). https://doi.org/10.1109/IDAM.2014.6912677
3. Casner, D., Houssin, R., Knittel, D., Renaud, J.: An approach to design and optimization of mechatronic systems based on multidisciplinary optimization and based on the feedback of experiences. In: 21st French Mechanics Congress, August 26–30, 2013, Bordeaux, France (2013). http://documents.irevues.inist.fr/handle/2042/52520
4. Casner, D., Renaud, J., Knittel, D.: Design of mechatronic systems by topological optimization. In: 12th AIP-PRIMECA National Conference, AIP-Priméca (2011). https://hal.archives-ouvertes.fr/hal-00843025/
5. Coulibaly, A., De Bertrand De Beuvron, F., Renaud, J.: Maintainability assessment at early design stage using advanced CAD systems. In: Proceedings of IDMME-Virtual Concept, pp. 20–22 (2010)


6. UNESCO and International Trade Center (ITC): Final Report of the International Symposium on Crafts and the International Market: Trade and Customs Codification (1997)
7. Alioune, B.: Artisans without borders. Market and Organizations, 2006/1 No. 1, pp. 121–152 (2006). https://doi.org/10.3917/maorg.001.0121
8. Sophie, B., Claude, F.: Presentation: society and handicrafts. From theory to economic reality. Market and Organizations, 2006/1 No. 1, pp. 13–16. https://doi.org/10.3917/maorg.001.0013
9. Issa, H.: Contributions to the Design of Configurable Products in Advanced CAD Systems. Thesis dissertation, December 2015
10. Hubka, V., Ernst Eder, W.: Design Science: Introduction to the Needs, Scope and Organization of Engineering Design Knowledge. Springer, London (2012)
11. Pahl, G., Beitz, W.: Engineering design: a systematic approach. NASA STI/Recon Technical Report 89: 47350 (1988)
12. Diagne, S.: Conceptual semantic modeling for the behavioral performance engineering of complex products. Doctoral thesis, University of Strasbourg, Graduate School of Mathematics, Information and Engineering Sciences (MSII ED 269), INSA of Strasbourg, Laboratory of Design Engineering (LGeCo EA 3938), Engineering Sciences, Computer and Mechanical Engineering, July 2015
13. Baldwin, C., Clark, K.: Modularity in the design of complex engineering systems. In: Braha, D., Minai, A.A., Bar-Yam, Y. (eds.) Complex Engineered Systems: Science Meets Technology, pp. 175–205. Springer, Berlin (2006)
14. Houssin, R., Coulibaly, A.: An approach to problem solving for safety in innovative design process. Comput. Ind. 62(4), 398–406 (2011)

An Efficient Next Hop Selection Scheme for Enhancing Routing Performance in VANETs Driss Abada1(B) , Abdellah Massaq2 , and Abdellah Boulouz1 1

Laboratory LABSIV, Faculty of Sciences, Ibn Zohr University, Agadir, Morocco [email protected] 2 Laboratory OSCARS, National School of Applied sciences, Cadi Ayyad University, Marrakech, Morocco

Abstract. Link stability is one of the key parameters for routing in vehicular ad hoc networks (VANETs). Estimating this metric based only on mobility parameters, while disregarding the impact of the wireless fading channel, may degrade routing performance. In this paper, taking channel fading into consideration, we propose to first estimate the effective communication range, within which the received signal power remains above a given threshold, and then use it to measure link stability. This new estimated stability metric is combined with the link received signal quality to select the potential next hop, so that the paths selected from vehicles to the Internet gateway are more reliable. The proposed approach is compared with an existing stability routing protocol based only on mobility parameters. The results show a significant improvement in routing performance in terms of throughput and overhead.

Keywords: VANET · Routing · Link stability · Fading · RSS · Mobility

1 Introduction

Making efficient vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications in VANETs plays an important role in Internet access applications. IEEE 802.11p [1–3], also known as the Wireless Access in Vehicular Environments (WAVE) protocol, is an enhancement to the 802.11 physical layer (PHY) and medium access control (MAC) that makes inter-vehicular communication more efficient in VANETs. This standard is used as the groundwork for Dedicated Short Range Communication (DSRC). It operates in the 5.9 GHz band and supports both V2V and V2I communications. The maximum data rate supported by this standard is 27 Mbps, and mobility is supported up to 200 km/h, making it suitable for VANET applications in highway scenarios. IEEE 802.11p provides a short-range radio communication of approximately 300 m.

The dynamic topology in VANETs may be caused by the mobility of nodes as well as by fading of the wireless link. Hop count-based routing protocols select the shortest path in terms of number of hops. However, these protocols do not typically select a route with a sufficient lifetime to maintain the longest possible duration of communication with the gateway, which makes existing routing protocols originally designed for MANETs unsuitable for VANETs. For that reason, many routing protocols use a metric characterizing link stability to choose the most stable route in the network. On the one hand, some protocols [5,6] are based only on the mobility features given by GPS, such as location, speed and direction, together with a fixed value of the transmission range, to measure link lifetime; they ignore the impact of fading and of the received signal quality. As a consequence, the selected routes suffer from continuous packet losses and increased bandwidth consumption at the PHY, MAC and network layers. On the other hand, several recent contributions [7,8] focus on inter-vehicle channel propagation models, such as the Shadowing model and the Rician and Rayleigh distributions. They take both mobility and fading into consideration to estimate link stability, but those models are not the most appropriate to simulate communication in VANETs.

In this work, we have modified the contention-based forwarding scheme of [6] to take into account the channel fading and the received signal quality of a link. We have integrated into the relay selection scheme two important features: link stability and link quality in terms of received signal. Link stability is measured using a routing metric called the effective link expiration time. This metric is computed from the vehicles' mobility information and the effective communication range, which is estimated accurately by taking the fading channel statistics into account instead of using a fixed transmission range. Next, the effective link expiration time is combined with the link received signal quality (LRSQ) in order to select potential relays in the network. In this paper, a fuzzy logic system is used to estimate the link received signal quality.

The rest of the paper is structured as follows. Our proposed approach is detailed in Sect. 2. In Sect. 3, we discuss the performance of our protocol. Finally, we give the conclusion and future work directions in Sect. 4.

2 Route to the RSU Discovery

The RSU1 route discovery and maintenance procedures adopted in our routing protocol are similar to those of the routing protocol proposed in [6]. The RSUs deployed along the road are used as gateways between a wired network (the Internet) and the VANET, and each node in the VANET that wants to connect to the Internet must initially find a route to an RSU. Each RSU periodically broadcasts an advertisement message in its restricted geographic zone to inform the vehicles of its existence. If a vehicle does not receive any advertisement message from an RSU or from its neighbors, a reactive discovery is executed. In this case, an RSU solicitation message is broadcast by exactly the same mechanism as the RSU advertisement message until it is received by an RSU or by any vehicle that is already aware of a route to an RSU.

1 A Road Side Unit is a fixed gateway used to connect VANETs to a wired network.

2.1 Effective Communication Range

The radio communication range is a key parameter of how long two vehicles can stay connected. On the one hand, direct communication over a large distance allows vehicles to transmit messages from source to destination in a smaller number of hops; as a consequence, channel access contention increases, the transmission rate is low and many MAC retransmissions are introduced. On the other hand, relaying data over a short range occupies more bandwidth but reduces interfering traffic and channel access waiting time compared to direct communication. The Nakagami-m distribution seems to be the most suitable model for communication in VANET networks [9]. Assume that the received power, denoted P, is a random variable that follows a Nakagami distribution under the fading channel model in vehicular environments. The probability density function (PDF) of the received signal power x, for a given average power Ωd at distance d, can be expressed as:

$$f_P(x) := \frac{m^m x^{m-1}}{\Gamma(m)\,\Omega_d^m}\,\exp\!\left(-\frac{m x}{\Omega_d}\right), \quad x \ge 0. \qquad (1)$$

where Γ(·) is the gamma function and m denotes the fading parameter. The probability, noted Pr, that a signal x is successfully received is deduced from the probability that the packet's received power is stronger than the reception threshold Pth:

$$\Pr\{P \ge P_{th}\} := \int_{P_{th}}^{+\infty} f_P(x)\,dx = \int_{P_{th}}^{+\infty} \frac{m^m x^{m-1}}{\Gamma(m)\,\Omega_d^m}\,\exp\!\left(-\frac{m x}{\Omega_d}\right) dx \qquad (2)$$

Moreover, if m is a positive integer, we can pass from the continuous to the discrete domain, and the probability can be written as:

$$\Pr\{P \ge P_{th}\} := \exp\!\left(-\frac{m P_{th}}{\Omega_d}\right) \sum_{k=0}^{m-1} \frac{1}{k!}\left(\frac{m P_{th}}{\Omega_d}\right)^{k}. \qquad (3)$$

In this paper, we assume that all vehicles have the same constant transmission power Pt. For the path loss, we adopt a quadratic path loss according to the Friis model (path loss exponent = 2). Therefore, in the absence of interference, the average received power Ωd at a distance d and the reception threshold Pth, which should on average be detected at a distance equal to the maximum communication range R, are:

$$\Omega_d := K P_t d^{-2} \quad \text{and} \quad P_{th} := K P_t R^{-2} \qquad (4)$$

where K is a constant, $K = \frac{G_t G_r \lambda^2}{(4\pi)^2 L}$, λ is the wavelength of the transmission, Gt and Gr are the transmitter and receiver antenna gains respectively, and L is the path loss factor, usually set to 1. Substituting Ωd and Pth, the final expression of formula (3) becomes:

$$\Pr\{d, R\} := \exp\!\left(-m\left(\frac{d}{R}\right)^{2}\right) \times \sum_{k=0}^{m-1} \frac{1}{k!}\left(m\left(\frac{d}{R}\right)^{2}\right)^{k}. \qquad (5)$$


We define the effective communication range (ECR), noted Re, as the expected value of the wireless communication range R, which can be derived as follows:

$$R_e := E[R] = \int_{0}^{+\infty} \left(1 - F_P(x)\right) dx \qquad (6)$$

where FP(·) represents the cumulative distribution function (CDF), which can be written as:

$$F_P(x) := 1 - \Pr\{P \ge P_{th}\} \qquad (7)$$

Due to the mobility of nodes, the relative distance d varies over time; consequently, the reception probability varies with node movement. To account for this random variation, we replace d in (5) with a continuous random variable Z, which represents the distance between the sender and the receiver. Starting from formula (5), the effective communication range can then be written as:

$$R_e := \int_{0}^{+\infty} \exp\!\left(-m\left(\frac{z}{R}\right)^{2}\right) \times \sum_{k=0}^{m-1} \frac{1}{k!}\left(m\left(\frac{z}{R}\right)^{2}\right)^{k} dz. \qquad (8)$$

Note that the value of Re can easily be determined numerically once the values of the parameters m and R are known.
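As an illustration (not part of the original protocol implementation, which was carried out in NS-2), the following Python sketch evaluates the reception probability of Eq. (5) and approximates the effective range Re of Eq. (8) by numerical integration; the values of m and R used below are assumptions for the example.

```python
# Illustrative sketch: numerical evaluation of Eq. (5) and Eq. (8) for a
# Nakagami-m channel. The values of m and R are example assumptions.
import math
from scipy.integrate import quad

def reception_probability(d, R, m):
    """Eq. (5): probability of successful reception over distance d,
    for an integer fading parameter m and nominal range R."""
    x = m * (d / R) ** 2
    return math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(m))

def effective_range(R, m):
    """Eq. (8): expected communication range; the integrand decays quickly,
    so the integral is truncated at a few multiples of R."""
    value, _ = quad(lambda z: reception_probability(z, R, m), 0.0, 10.0 * R)
    return value

if __name__ == "__main__":
    R, m = 300.0, 3   # assumed nominal 802.11p range (m) and fading parameter
    print(reception_probability(150.0, R, m))   # Pr{d = 150 m}
    print(effective_range(R, m))                # effective range Re
```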

2.2 Relay Selection Metrics

2.2.1 Effective Link Stability
The radio communication range and the mobility parameters are the main factors used to measure how long two vehicles can remain connected. In this paper, we consider a routing metric called the effective link expiration time. This metric is defined as the time duration during which two vehicles at each end of a link remain within each other's effective communication range in the VANET; in other words, the link lifetime during which the received signal power is above an acceptable threshold. Assuming that all vehicles are equipped with GPS devices, each vehicle knows its position, speed and direction through the GPS system [6]. Therefore, we can predict the period of time during which the connection can be maintained between two vehicles. We consider two vehicles i and j identified by coordinates (xi, yi) and (xj, yj), moving with speeds vi and vj in directions θi and θj with respect to the x-axis, respectively. To measure the effective link expiration time, denoted Te, we modified the formula in [6] that is often used to measure the link expiration time, as follows:

$$T_e := \frac{\sqrt{(a^2 + c^2)R_e^2 - (ad - bc)^2} - (ab + cd)}{a^2 + c^2} \qquad (9)$$

where a := vi cos θi − vj cos θj, b := xi − xj, c := vi sin θi − vj sin θj, d := yi − yj.
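For clarity, a minimal Python sketch of Eq. (9) is given below; the helper name and the handling of the degenerate cases are ours, not the paper's.

```python
# Hypothetical helper: Eq. (9) evaluated from the GPS data of two vehicles
# i and j, with the effective range Re in place of a fixed transmission range.
import math

def effective_link_expiration_time(xi, yi, vi, thi, xj, yj, vj, thj, Re):
    """Positions (x, y) in metres, speeds v in m/s, directions th in radians."""
    a = vi * math.cos(thi) - vj * math.cos(thj)
    b = xi - xj
    c = vi * math.sin(thi) - vj * math.sin(thj)
    d = yi - yj
    if a == 0.0 and c == 0.0:        # identical velocity vectors: link never expires
        return float("inf")
    disc = (a * a + c * c) * Re * Re - (a * d - b * c) ** 2
    if disc < 0.0:                   # vehicles never come within range Re
        return 0.0
    return (math.sqrt(disc) - (a * b + c * d)) / (a * a + c * c)
```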


Here, instead of using a fixed value of the communication range R, we replace it with the estimated value of the effective communication range. We assume that the effective communication range is symmetric (i.e. the value of Re estimated at node j from i is the same as the value of Re measured at node i from j, Reij = Reji). Since the values used in the selection mechanism must lie between 0 and 1, we use a new function, called the effective stability function and noted Se, that depends on the effective link expiration time. We take advantage of an exponential function proposed in [6] that satisfies this criterion. The effective stability function is defined as:

$$S_e := 1 - \exp\!\left(-\frac{T_e}{a}\right) \qquad (10)$$

where a is a constant that defines the rate at which the function rises. Note that the longer Te is, the closer the result of this function is to 1; conversely, the smaller Te is, the closer this function is to 0. Upon reception of a message, vehicles compute the minimum effective link expiration time and consider it as the lifetime of the route from them to the gateway.

2.2.2 Effective Horizontal Distance Rate
Using the effective link stability as routing metric ensures that the selected route is the most stable and least faded one, which certainly improves network performance, especially in terms of data throughput. However, such a path might have more hops than the shortest one. When packet relaying involves more hops, since the radio channel is shared among neighboring nodes, medium access contention, interference, congestion and packet collisions increase. Therefore, path length should also be considered when selecting a suitable path based on stability. For this purpose, another metric is incorporated, called the effective horizontal distance rate. It quantifies the progress that the advertisement message has made in the direction opposite to the movement, so that the farthest vehicle which received the message successfully has a higher probability of retransmitting than nearby nodes. As a consequence, integrating this metric allows vehicles to select the shortest path in terms of hops. The effective horizontal distance rate, noted De, between two vehicles i and j is defined as:

$$D_e := \frac{\min\!\left(d_{RSS} \cdot \cos(\theta_{ij}),\, R_e\right)}{R_e} \qquad (11)$$

where dRSS is the inter-vehicle distance calculated from the average received signal strength, θij := θi − θj is the relative direction angle, and Re is the effective transmission range of the vehicles.

2.2.3 Link Received Signal Quality
As explained previously, our relay selection mechanism aims to improve relay selection by considering the received signal strength (RSS). The relative speed between sender and receiver is one metric, among others, that is well correlated with the RSS: the faster the source vehicle moves towards the receiver, the faster the link RSS increases; similarly, the faster the source vehicle moves away from the receiver, the faster the link RSS declines [10]. For this purpose, we take advantage of a fuzzy logic system [11] in which the received signal strength and the mobility speed factors are the fuzzy inputs and the link received signal quality (LRSQ) is the fuzzy output. Upon reception of an advertisement message from the previous node, each vehicle measures the RSS of the packet and calculates the RSS factor (RSSF) and the mobility factor (MF) as shown in the following formulas:

$$RSSF := 1 - \frac{RSS_{th}}{RSS} \qquad (12)$$

$$MF := 1 - \frac{|v_{rel}|}{v_{max}} \qquad (13)$$

where RSSth is the received signal strength threshold, vrel is the relative speed between receiver and source, and vmax is the maximum speed in the network. We assume that all vehicles are equipped with GPS devices and keep their speed constant during the link lifetime prediction. The input fuzzy variables RSSF and MF are classified into three levels. This grouping strategy gives more clues on the weakness or strength of the input variables and helps to generate more accurate output data.

$$RSS\ Level = \begin{cases} Good & \text{if } RSSF > 80\% \\ Medium & \text{if } 40\% < RSSF \le 80\% \\ Bad & \text{if } 10\% < RSSF \le 40\% \end{cases} \qquad (14)$$

$$Mobility = \begin{cases} Fast & \text{if } MF \le 25\% \\ Medium & \text{if } 25\% < MF \le 75\% \\ Slow & \text{if } MF > 75\% \end{cases} \qquad (15)$$

The linguistic value of the link received signal quality depends on the mobility (whether the sender and receiver move towards or away from each other) and on the RSS level. Once the fuzzy values of MF (Mobility Factor) and RSSF (RSS Factor) have been calculated, the receiver vehicle uses the IF/THEN rules2 to derive the linguistic value of LRSQ. Table 1 maps each linguistic value of LRSQ to minimal and maximal numerical values. Upon reception of a message, and after deciding whether LRSQ is very good, good, medium, low or very low, the receiver determines the values Min and Max according to Table 1 and then computes a coefficient of RSS (CRSS), which is integrated into the relay selection as a selection metric. We define CRSS as follows:

$$CRSS := rand(Min, Max) \qquad (16)$$

The rand(A, B) function generates a random value between A and B in order to reduce the probability that two or more candidates reply at the same time, and thus to reduce packet duplication, which is a major problem of CBF.

2 The fuzzy rules are given in Table 1 of our recent previous work [1].


Table 1. Numerical values corresponding to LRSQ

LRSQ | Very low | Low  | Medium | Good | Very good
Min  | 0.01     | 0.21 | 0.41   | 0.61 | 0.81
Max  | 0.2      | 0.4  | 0.6    | 0.8  | 0.99
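The following Python sketch illustrates how Eqs. (12)–(16) and Table 1 fit together; note that the IF/THEN rule table used here is an assumption of ours, since the paper defers the actual fuzzy rules to the authors' previous work [1].

```python
# Illustrative sketch of the LRSQ/CRSS computation (Sect. 2.2.3).
# The RULES mapping below is hypothetical; the real fuzzy rules are in [1].
import random

LRSQ_RANGES = {                     # Table 1
    "very low": (0.01, 0.2), "low": (0.21, 0.4), "medium": (0.41, 0.6),
    "good": (0.61, 0.8), "very good": (0.81, 0.99),
}

def rss_level(rssf):                # Eq. (14); values below 10% are treated as Bad here
    return "good" if rssf > 0.8 else "medium" if rssf > 0.4 else "bad"

def mobility_level(mf):             # Eq. (15)
    return "fast" if mf <= 0.25 else "medium" if mf <= 0.75 else "slow"

RULES = {("good", "slow"): "very good", ("good", "medium"): "good",
         ("good", "fast"): "medium", ("medium", "slow"): "good",
         ("medium", "medium"): "medium", ("medium", "fast"): "low",
         ("bad", "slow"): "medium", ("bad", "medium"): "low",
         ("bad", "fast"): "very low"}

def crss(rss, rss_th, v_rel, v_max):
    rssf = 1.0 - rss_th / rss       # Eq. (12)
    mf = 1.0 - abs(v_rel) / v_max   # Eq. (13)
    lrsq = RULES[(rss_level(rssf), mobility_level(mf))]
    lo, hi = LRSQ_RANGES[lrsq]
    return random.uniform(lo, hi)   # Eq. (16)
```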

2.3 Next Hop Selection and Re-broadcasting Scheme

In this paper, the selection of the next hop is performed by means of contention, by adapting contention-based forwarding (CBF) to our proposed approach. Several approaches [12] have been proposed to improve contention-based forwarding in VANETs, especially for routing and emergency message dissemination. Routing in VANETs plays a crucial role in network performance [13,14]. As discussed previously, contention-based forwarding is more suitable for VANETs than other types of routing approaches such as topology-based routing or position-based forwarding. CBF is a timer-based approach that lets receiving nodes implicitly and independently participate in the relay selection procedure. All receiving nodes are considered as candidates and do not forward the received message immediately; instead, they postpone their broadcast by a given timer and enter a contention phase. The first receiving node whose timer expires immediately broadcasts the message, and any node overhearing that transmission cancels its own timer and does not forward. As a consequence, only a specific number of nodes in the network are allowed to forward the message, which reduces overhead and collisions and prevents broadcast storms in the network. In order to set the waiting time of each node, we have replaced the default contention parameter of CBF by a new function that satisfies our criteria. The replacement function, denoted F, is constructed from the three routing metrics explained previously, as follows:

$$F = \left(\alpha \times S_e + (1 - \alpha) \times D_e\right) \times CRSS \qquad (17)$$

where α is a factor selected in [0, 1] to give more weight to one metric or the other. For the contention, each node, when it receives an advertisement message, computes F and sets its timer t(·) using the following formula:

$$t(F) = WT_{max} \times (1 - F) \qquad (18)$$

where WTmax is the maximum waiting time. Note that the higher the value of F, the smaller the waiting time; thus the node has a higher chance of becoming the potential relay.
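A compact sketch of the complete relay-scoring chain (Eqs. (10), (11), (17) and (18)) is given below; this is our own illustration, and the default values of a, α and WTmax are assumptions, not values taken from the paper.

```python
# Minimal sketch of the contention timer: each receiver scores itself with F
# and waits t(F) before re-broadcasting; better candidates wait less.
import math

def stability(Te, a=10.0):                          # Eq. (10), a is assumed
    return 1.0 - math.exp(-Te / a)

def horizontal_distance_rate(d_rss, theta_ij, Re):  # Eq. (11)
    return min(d_rss * math.cos(theta_ij), Re) / Re

def contention_delay(Te, d_rss, theta_ij, Re, crss,
                     alpha=0.5, wt_max=0.05):       # alpha, WTmax assumed
    F = (alpha * stability(Te) +
         (1.0 - alpha) * horizontal_distance_rate(d_rss, theta_ij, Re)) * crss
    return wt_max * (1.0 - F)                       # Eq. (18)
```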

3 Simulation and Results

To evaluate the performance of our proposed approach, we have implemented our routing protocol in the network simulator NS-2.33 and compared it with the protocol recently developed in [6] for connecting VANETs to the Internet. We have performed simulations in order to evaluate our proposed approach in terms of throughput and overhead by investigating the impact of varying the mobility of nodes and the number of vehicles on the road.

3.1 Simulation Environment

Paper [4] gives insights into some measurements of the IEEE 802.11p MAC and physical layers using NS-2. The data rate is fixed to 6 Mbit/s. Using MOVE [4] and SUMO [4], we created a highway scenario of 8000 m with two lanes. The simulation period in this work is 460 s, and we wait for 100 s after the beginning of the simulation as a warm-up period. All vehicles move from one end of the highway to the other in the same direction, and 10 vehicles are selected randomly to send CBR data at a rate of 20 packets/s to a node that is part of the wired network and connected to all the base stations. To simulate the protocols, we scheduled the RSU to broadcast the advertisement message every 5 s in a predefined broadcast geographic zone, taken as a circle with a radius of 1000 m; the message is broadcast in the direction opposite to the movement of the nodes. All simulation environment parameters are the same as those used in our previous work [1].

3.2 Simulation Results

3.2.1 Varying Number of Vehicles
Firstly, we compare the performance of the routing protocols by changing the number of nodes in the network. The maximum speed of vehicles is fixed at 30 m/s and the number of vehicular sources is fixed at 10 vehicles. The simulation results for network throughput and overhead are shown in Fig. 1. It can be seen that the network performance of all routing protocols decreases as the number of vehicles on the road increases. This degradation is due to the increased interference and congestion when the number of vehicles grows. As shown in the figures, our enhanced approach performs better in terms of throughput and control overhead than the protocol proposed in [6]. This is due to considering the impact of fading statistics on link stability. Moreover, combining link stability with the RSS-based distance rate metric in the proposed next hop selection scheme allows vehicles to connect to the RSU through the most stable, shortest and strongest route, which reduces route failures and increases network throughput. As shown in the figure, integrating LRSQ as a relay selection metric significantly improves network performance, because packets arrive at the destination with sufficient reception power, which makes the route more reliable. Consequently, the network performance, especially in terms of throughput, is enhanced by reducing packet loss.

Fig. 1. Throughput and overhead comparison under different numbers of vehicles: (a) throughput; (b) normalized routing overhead

3.2.2 Varying Maximum Speed
Secondly, we fixed the number of nodes at 200 vehicles and the number of vehicular sources at 10, in order to evaluate the performance of the routing protocols with increasing maximum speed. Figure 2 illustrates the network throughput and the normalized routing overhead under varying maximum speed. For all routing protocols, performance decreases with increasing vehicle mobility. As shown, the protocol proposed in [6] is less efficient than our scheme, because the former selects paths composed of links with longer lifetimes, but those paths may include more fading; as a consequence, the cost increases in terms of packet loss and control routing overhead. As shown in the figure, our scheme with RSS yields a significant improvement in network performance. The reason behind this enhancement is the selective processing of signals: only the strongest received signals are processed at the routing layer, which significantly improves network performance, especially in terms of throughput.

Fig. 2. Throughput and overhead comparison under different maximum speeds: (a) throughput; (b) normalized routing overhead

4 Conclusion and Future Work

In this paper, we have enhanced a relay selection scheme by taking into account stability, fading, path length and the quality of the received signal. Simulation results show that our enhanced relay selection scheme achieves better performance than a protocol based only on mobility, over a range of network performance measures. In future work, we plan to evaluate our approach in more realistic scenarios (highway and also city) by increasing the speed of vehicles, the number of vehicles and the number of vehicular sources. Despite its easy deployment and low cost, IEEE 802.11p technology provides a very limited communication range, which requires installing many RSUs along the road to connect VANETs to the Internet, especially in long highway and urban scenarios. In the future, we plan to couple our approach based on IEEE 802.11p with another wireless technology offering large coverage areas, such as WiMAX, Long Term Evolution (LTE) or 802.11ad (5G), using mobile and fixed gateways in a heterogeneous network.

References

1. Abada, D., Massaq, A., Boulouz, A.: Improving routing performances to provide internet connectivity in VANETs over IEEE 802.11p. IJACSA 8(4), 545–553 (2017)
2. Xie, Y., et al.: The modeling and cross-layer optimization of 802.11p VANET unicast. IEEE Access 6, 171–186 (2017). https://doi.org/10.1109/ACCESS.2017.2761788
3. Shaik, S., et al.: An efficient cross layer routing protocol for safety message dissemination in VANETs with reduced routing cost and delay using IEEE 802.11p. Wirel. Pers. Commun. 100(4), 1765–1774 (2018)
4. Abada, D., Massaq, A., Boulouz, A.: Connecting VANETs to Internet over IEEE 802.11p in a Nakagami fading channel. In: International Conference WITS. IEEE, Fez, Morocco (2017)
5. Ding, Z., Ren, P., Du, Q.: Mobility based routing protocol with MAC collision improvement in vehicular ad hoc networks. Cornell University Library (2018)
6. Benslimane, A., Barghi, S., Assi, C.: An efficient routing protocol for connecting vehicular networks to the Internet. Pervasive Mob. Comput. 7(1), 98–113 (2010)
7. Yadav, A., Singh, Y.N., Singh, R.R.: Improving routing performance in AODV with link prediction in mobile ad hoc networks. Wirel. Pers. Commun. 83(1), 603–618 (2015)
8. Chen, S., Jones, H., Jayalath, D.: Effective link operation duration: a new routing metric for mobile ad hoc networks. In: International Conference SPCS (2007)
9. Bhoyroo, M., et al.: Performance evaluation of Nakagami model for vehicular communication networks in developing countries. In: EmergiTech, Balaclava (2016)
10. Benslimane, A., Taleb, T., Sivaraj, R.: Dynamic clustering-based adaptive mobile gateway management in integrated VANET-3G heterogeneous wireless networks. IEEE J. Sel. Areas Commun. 29, 559–570 (2011)
11. Jadhav, R.S., Dongre, M.M.: Performance enhancement of VANETs using fuzzy logic and network coding. IJACEN 5, 39–42 (2017)
12. Rajendran, R.: The evaluation of GeoNetworking forwarding in vehicular ad-hoc networks. Master Thesis in Embedded and Intelligent Systems, November 2013


13. Cheng, J.: Routing in Internet of vehicles: a review. IEEE Trans. Intell. Transp. Syst. 16(5), 2339–2352 (2015)
14. Kaur, H., Meenakshi: Analysis of VANET geographic routing protocols on real city map. In: 2nd IEEE International Conference RTEICT, Bangalore (2017)

Comparative Performance Study of QoS Downlink Scheduling Algorithms in LTE System for M2M Communications

Mariyam Ouaissa1(&), Abdallah Rhattoy2, and Mohamed Lahmer2

1 Research Team ISIC, High School of Technology, LMMI Laboratory, ENSAM, Moulay-Ismail University, Meknes, Morocco
[email protected]
2 Department of Computer Engineering, High School of Technology, Moulay-Ismail University, Meknes, Morocco
[email protected], [email protected]

Abstract. The introduction of Machine to Machine (M2M) communications in future cellular networks will cause considerable degradation in the performance of existing traditional Human to Human (H2H) applications. With ever increasing data traffic, resource allocation and traffic management while maintaining quality of service are a major challenge in terms of terminal fairness and throughput. In this paper we are interested in the allocation of radio resources in the downlink of the Long Term Evolution (LTE) network, including a comparative study of six scheduling algorithms: Proportional Fair (PF), Exponential Proportional Fair (EXP/PF), Maximum Largest Weighted Delay First (MLWDF), Frame Level Scheduler (FLS), Exponential Rule (EXP-RULE) and Logarithmic Rule (LOG-RULE). We consider real-time flows (video and VoIP) in terms of throughput, goodput, fairness index and spectral efficiency, in order to have a clearer view of the quality of experience provided by these algorithms.

Keywords: M2M · H2H · LTE · Scheduling algorithms · Resource allocation · Fairness · Goodput · Spectral efficiency

1 Introduction

Machine to Machine (M2M) applications [1], also known as Machine Type Communication (MTC) applications [2], refer to automated applications involving communication between machines without the need for human intervention. A feature common to most M2M applications is that they generate much more signaling traffic than data traffic. In addition, M2M applications generally generate small data packets. The key objective for supporting M2M communications is to establish the conditions that allow a device to exchange information with an application over a communications network as efficiently as possible [3, 4].

The new generations of mobile broadband networks, including Long Term Evolution (LTE) and LTE-Advanced, seem to be the most suitable networks to support M2M applications of different kinds. Nevertheless, activating M2M applications in the network is never an easy task. Network operators need to update their networks to support these types of applications without disrupting traditional Human to Human (H2H) applications. Numerous challenges confront today's mobile network operators given the heterogeneity of M2M communication scenarios. Among the main challenges are the heterogeneity of the traffic patterns and the diversity of the Quality of Service (QoS) requirements: certain real-time applications have high requirements, especially with regard to latency and reliability, while other applications are time-tolerant and have low priorities. The communication network must therefore offer several levels of QoS to the different MTC devices (MTCDs). Resource management while maintaining quality of service has been extensively addressed in the literature, where several techniques have been proposed [5, 6]. In this work, we plan to reuse the downlink scheduling algorithms currently used in LTE, namely Proportional Fair (PF), Exponential Proportional Fair (EXP/PF), Maximum Largest Weighted Delay First (MLWDF), Frame Level Scheduler (FLS), Exponential Rule (EXP-RULE) and Logarithmic Rule (LOG-RULE), for a hybrid M2M and H2H communication scenario.

The rest of the paper is organized as follows: in Sect. 2, we present the downlink scheduling algorithms in the LTE network. In Sect. 3 we describe the evaluation of the performance of the considered algorithms, analyze the simulation results and indicate the best algorithm in terms of four metrics: throughput, goodput, fairness and spectral efficiency.

2 Scheduling Algorithms

In this paper we are interested in an important task of the Evolved NodeB (eNodeB) in the LTE network architecture: Radio Resource Management (RRM). Its goal is to accept or reject requests for connection to the network while ensuring an optimal distribution of radio resources between User Equipments (UEs). It consists mainly of two elements, Admission Control (AC) and Packet Scheduling (PS). In this work we focus on PS, which performs an efficient allocation of radio resources in both directions, uplink and downlink; here we consider the downlink direction [7, 8]. The purpose of radio resource allocation algorithms is to improve system performance by increasing spectral efficiency and network fairness. It is therefore essential to find a compromise between efficiency (increase in throughput) and fairness between users. The main purpose of this type of algorithm is to maximize the overall system throughput. Several algorithms use this approach, such as PF, EXP/PF, MLWDF, FLS, EXP-RULE and LOG-RULE [9].

2.1 Proportional Fair

PF is an opportunistic scheduling algorithm. This type of algorithm uses infinite queues, which are used in the case of non-real-time traffic. Its purpose is to maximize the overall throughput of the system by increasing the throughput of each user while at the same time trying to ensure fairness between users. The objective function representing the PF algorithm is:

$$a = \frac{d_i(t)}{\overline{d_i}} \qquad (1)$$

where $d_i(t)$ is the rate corresponding to the CQI of user i and $\overline{d_i}$ is the maximum rate supported by the Resource Blocks (RBs).

2.2 Exponential Proportional Fair

This is an improvement of the PF algorithm that supports real-time (multimedia) flows; it prioritizes real-time flows over the others. A user k is designated for scheduling according to the following relation:

$$k = \max_i \left[ a_i \exp\!\left(\frac{a_i W_i(t) - X}{1 + \sqrt{X}}\right) \frac{d_i(t)}{\overline{d_i}} \right] \qquad (2)$$

$$X = \frac{1}{N} \sum_i a_i W_i(t) \qquad (3)$$

where $W_i(t)$ is the delay tolerated by the flow and $a_i$ is a strictly positive parameter for all i.

2.3 Maximum Largest Weighted Delay First

M-LWDF is one of the algorithms that consider time limits. This type of algorithm deals with the delays in the arrival and delivery of packets and is designed primarily to handle real-time flows (multimedia and VoIP). If a packet exceeds the tolerated delay values, it is removed from the list of flows to schedule, which significantly degrades the QoS. This algorithm supports flows with different QoS requirements; it weights packet delays using knowledge of the channel state. At time t, the algorithm chooses a user k for scheduling via the formula:

$$k = \max_i \left[ a_i \frac{d_i(t)}{\overline{d_i}} W_i(t) \right] \qquad (4)$$

This is practically the same formula as in the EXP/PF algorithm, except that:

$$a_i = -\frac{\log(p_i)}{T_i} \qquad (5)$$

where $p_i$ is the probability that the delay is not respected and $T_i$ is the delay that user i can tolerate.
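As an illustration only (the evaluation below is carried out with LTE-Sim, not with this code), the following Python sketch shows how the per-user metrics of Eqs. (1)–(5) could be computed; `ref_rate` stands for the denominator of Eq. (1), and all variable names are ours.

```python
# Illustrative per-user scheduling metrics for PF, EXP/PF and M-LWDF.
import math

def a_coefficient(p_i, t_i):                      # Eq. (5)
    return -math.log(p_i) / t_i

def pf_metric(rate, ref_rate):                    # Eq. (1)
    return rate / ref_rate

def mlwdf_metric(rate, ref_rate, hol_delay, a_i): # Eq. (4)
    return a_i * hol_delay * rate / ref_rate

def exp_pf_metric(rate, ref_rate, hol_delay, a_i, weighted_delays):
    # Eqs. (2)-(3); weighted_delays holds a_i * W_i(t) of the active flows.
    X = sum(weighted_delays) / len(weighted_delays)
    return a_i * math.exp((a_i * hol_delay - X) / (1.0 + math.sqrt(X))) * rate / ref_rate

# At each TTI, the scheduler assigns the resource block to the user whose
# metric is the largest.
```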

2.4 Frame Level Scheduler

FLS is a multi-class scheduling algorithm; this class of scheduler considers the classes of the queued flows in order to execute the appropriate scheduling policy for each class. The type of flow (real time or non-real time) is a fundamental parameter for this type of algorithm: before making the resource allocation decision, the service type must be inspected so that the appropriate resource blocks are allocated for the transmission. On the other hand, in spite of the prioritization of real-time flows, the classical flows are not neglected or removed from the queue in the event of congestion. FLS is a scheduler that considers quality of service and is mainly used for real-time communications in the LTE network. Its scheduling scheme is divided into two levels, which interact with each other to allow the dynamic allocation of resource blocks to users. This two-level modeling provides a compromise between system throughput and fairness. In the following formula, the volume of data ui(k) for the ith flow in the kth LTE frame is the discrete-time convolution of the queue level qi(k) with the impulse response hi(k) of the linear filter used:

$$u_i(k) = h_i(k) * q_i(k) \qquad (6)$$
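A minimal sketch of Eq. (6) is shown below; the filter coefficients and queue levels are example values of ours, not values from the paper.

```python
# Discrete-time convolution of Eq. (6): data quota u_i(k) per LTE frame.
import numpy as np

h_i = np.array([0.5, 0.3, 0.2])      # assumed impulse response of the filter
q_i = np.array([10, 12, 8, 15, 9])   # queue level per frame (example values)

u_i = np.convolve(h_i, q_i)          # u_i(k) = h_i(k) * q_i(k)
print(u_i[:len(q_i)])                # volume of data transmitted per frame
```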

2.5 Exponential RULE

The exponential scheduler is considered an enhancement of the EXP/PF scheduler and uses essentially the same parameters; the spectral efficiency of the user is one of the optimization parameters, set according to the needs of the network. The EXP rule provides quality of service guarantees over a wireless link. EXP-RULE selects a single user per queue to receive the service at each scheduling instant. EXP-RULE is based on the following expression:

$$w^{EXP\text{-}rule}_{i,k} = b_i \exp\!\left(\frac{a_i D_{HOL,i}}{c + \sqrt{\frac{1}{N_{rt}} \sum_i D_{HOL,i}}}\right) \cdot C_i^k \qquad (7)$$

where Nrt represents the number of active real-time flows, DHOL,i represents the head-of-line delay of the packet in the queue and Cik is the spectral efficiency of user i for flow k. For optimal results, the parameters ai, bi and c are defined as follows:

$$a_i \in \left[\frac{5}{0.99\,\tau_i},\ \frac{10}{0.99\,\tau_i}\right],\quad b_i = \frac{1}{E[C_i]},\quad c = 1 \qquad (8)$$

where τi is the delay threshold of flow i.

2.6 Logarithmic RULE

The LOG rule is a scheduler that ensures a balance in the quality of service parameters in terms of average delay. This algorithm is based on the same parameters as the EXP-RULE scheduler; however, the scheduler metric is computed from the logarithm of the flow delay. The LOG rule is represented as follows:

$$w^{LOG\text{-}rule}_{i,k} = b_i \log\!\left(c + a_i D_{HOL,i}\right) \cdot C_i^k \qquad (9)$$

where DHOL,i represents the head-of-line delay of the packet in the queue and Cik is the spectral efficiency of user i for flow k. For optimal results, the parameters ai, bi and c are defined as follows:

$$a_i = \frac{5}{0.99\,\tau_i},\quad b_i = \frac{1}{E[C_i]},\quad c = 1.1 \qquad (10)$$
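For clarity, the two rules of Eqs. (7)–(10) are sketched below in Python; this is an illustrative reading of the formulas (the study itself uses LTE-Sim), and all names are ours.

```python
# Illustrative EXP-RULE and LOG-RULE metrics.
import math

def exp_rule_metric(i, k, hol_delays, spectral_eff, a, b, c=1.0):
    """Eq. (7): hol_delays lists D_HOL of the active real-time flows,
    spectral_eff[i][k] is C_i^k, a and b are the per-flow parameters."""
    avg_hol = sum(hol_delays) / len(hol_delays)
    return b[i] * math.exp(a[i] * hol_delays[i] / (c + math.sqrt(avg_hol))) * spectral_eff[i][k]

def log_rule_metric(i, k, hol_delays, spectral_eff, a, b, c=1.1):
    """Eq. (9): logarithmic weighting of the head-of-line delay."""
    return b[i] * math.log(c + a[i] * hol_delays[i]) * spectral_eff[i][k]
```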

3 Performance Evaluation

In this section, the main scheduling algorithms used in the LTE network are analyzed to evaluate the QoS of a network using the open source simulator LTE-Sim [10]. In our simulation, we considered the case of a single cell with interference, with a radius of 1 km, in which a set of M2M users chosen in the range [20–100] and a set of H2H users fixed at 30 are uniformly distributed and mobile within the cell. Each user receives a video and a VoIP flow. The purpose of this simulation is to evaluate the performance of the LTE network under high congestion. Our evaluation is based on the schedulers implemented in the base stations of the LTE network, namely PF, MLWDF, EXP/PF, FLS, EXP-RULE and LOG-RULE, through the measurement of throughput, goodput, fairness index and spectral efficiency [11, 12]. The simulation parameters are listed in Table 1.

As shown in Fig. 1(a), the FLS algorithm offers the largest throughput for the video flow, followed by EXP-RULE. The EXP/PF, MLWDF and LOG-RULE algorithms are very comparable in terms of throughput, which starts to decrease from 80 devices. For the PF algorithm, the bit rate also decreases in the same way as for the other algorithms, but from 40 devices, and its value is much lower. The decrease of throughput is due to the lack of enough resource blocks to serve all the users. Figure 1(b) shows an increase in packet rate for the VoIP flow for all scheduling algorithms. The latter have the same behavior for a number of users between 20 and 80 devices; from 80 devices, the FLS algorithm begins to show a decrease in bit rate.

Figure 2 shows the goodput, which is the average rate of successful data transmission over a communication channel; this metric measures only the useful data rate on the network, ignoring all headers. In Fig. 2(a), the FLS algorithm, followed by EXP-RULE, offers better performance in terms of goodput for the video flow compared to PF, MLWDF, EXP/PF and LOG-RULE, which have almost the same values, with the goodput decreasing with the number of users in the cell. We also note that PF is the least efficient with regard to this metric. Video traffic requires a very high data rate, whereas for VoIP users, it can be seen in Fig. 2(b) that the goodput is low for all schedulers, due to the low rate required by VoIP.


Table 1. Simulation parameters

Simulation parameter   | Value
Simulation duration    | 120 s
Flow duration          | 120 s
Frame structure        | FDD
Mobility               | Fixed for M2M; 3 km/h for H2H
Cell radius            | 1 km
System bandwidth       | 20 MHz
RB bandwidth           | 180 kHz
Time slot              | 0.5 ms
Scheduling time (TTI)  | 1 ms
Number of RBs          | 100
Maximum delay          | 0.1 s
Video bit rate         | 242 kbps
VoIP bit rate          | 8.4 kbps
Number of H2H users    | 30
Number of M2M users    | 20 to 100

Fig. 1. Average throughput

The fairness index is obtained by considering the rate achieved by each flow at the end of each simulation. Figure 3(a) shows the fairness computed for the video flow; it can be noted that the fairness rates of EXP/PF, MLWDF and LOG-RULE are almost similar and degrade with the density of the cell, reaching almost 50% fairness between the different users. Figure 3(b) shows that the algorithms have an almost similar fairness for the VoIP flow, ranging from 67% down to 63%, due to the increase in devices using the VoIP flow in the same cell.


Fig. 2. Average goodput

Fig. 3. Video fairness index

As shown in Fig. 4, the MLWDF algorithm has a better spectral efficiency compared to that measured for the other algorithms; the spectral efficiency is defined as the maximum user rate divided by the bandwidth of the channel. This efficiency decreases for all the algorithms as the number of users increases.


Fig. 4. Spectral efficiency

4 Conclusion

In this paper, we evaluated the performance of downlink schedulers proposed in the literature for the LTE network in the context of M2M communications. We presented a comparative analysis of six radio resource allocation algorithms in the LTE network for a hybrid M2M and H2H scenario. We focused on the impact of each resource allocation algorithm on video and VoIP applications under overloaded conditions. In order to establish the impact of these resource allocation algorithms on the quality of service of the various applications in the LTE network, simulations were carried out in terms of throughput, goodput, fairness index and spectral efficiency. We can conclude that the FLS and EXP-RULE downlink schedulers achieve better performance for real-time services than the others, due to their high throughput, goodput, fairness and spectral efficiency. This could be an opportunity for future research, where new algorithms and mathematical models could be developed.

References

1. Kim, J., Lee, J., Kim, J., Yun, J.: M2M service platforms: survey, issues, and enabling technologies. IEEE Commun. Surv. Tutor. 16(1), 61–76 (2014)
2. Taleb, T., Kunz, A.: Machine type communications in 3GPP networks: potential, challenges and solutions. IEEE Commun. Mag. 50(3), 178–184 (2012)
3. Biral, A., Centenaro, M., Zanella, A., Vangelista, L., Zorzi, M.: The challenges of M2M massive access in wireless cellular networks. Digit. Commun. Netw. 1, 1–19 (2015)
4. Ghavimi, F., Chen, H.H.: M2M communications in 3GPP LTE/LTE-A networks: architectures, service requirements, challenges, and applications. IEEE Commun. Surv. Tutor. 17, 525–549 (2014)
5. Zheng, K., Hu, F., Xiang, W., Dohler, M., Wang, W.: Radio resource allocation in LTE-A cellular networks with M2M communications. IEEE Commun. Mag. 50, 184–192 (2012)
6. Fritze, G.: SAE: The Core Network for LTE (2012)


7. Coupechoux, M., Martins, P.: Vers les systèmes radio mobiles de 4e génération - de l'UMTS au LTE (2013)
8. Bouguen, Y., Hardouin, E., Wolff, F.X.: LTE et les réseaux 4G (2012)
9. Monikandan, S.B., Sivasubramanian, A., Babu, S.P.K.: A review of MAC scheduling algorithms in LTE system. Int. J. Adv. Sci. Eng. Inf. Technol. 3, 1056–1068 (2017)
10. Piro, G., Grieco, L.A., Boggia, G., Capozzi, F., Camarda, P.: Simulating LTE cellular systems: an open source framework. IEEE Trans. Veh. Technol. 60(2), 498–513 (2010)
11. Fouziya, S.S., Nakkeeran, R.: Study of downlink scheduling algorithms in LTE networks. J. Netw. 9(12), 3381 (2014)
12. Sahibzada, A.M., Khan, F., Ali, M., Khan, G.M., Faqir, Z.Y.: Fairness evaluation of scheduling algorithms for dense M2M implementations. In: IEEE WCNC 2014 - Workshop on IoT Communications and Technologies (2014)

Uberisation Business Model Based on Blockchain for Implementation Decentralized Application for Lease/Rent Lodging

Saleh Hadi1(&), Alexandrov Dmitry2, and Dzhonov Azamat3

1 National Research University Higher School of Economics (NRU HSE), Vladimir State University named after Alexander and Nikolay Stoletov, Vladimir, Russia
[email protected]
2 National Research University Higher School of Economics (NRU HSE), Bauman Moscow State Technical University (Bauman MSTU), Moscow, Russia
[email protected]
3 National Research University Higher School of Economics (NRU HSE), Moscow, Russia
[email protected]

Abstract. Currently, there are digital services that provide transactions between private contractors and their customers, where a third party provides information, safety and security of transactions. This article is devoted to the introduction of blockchain technology into the business model of the 'People-to-People Economy' (P2PE). As a result of this work, an application for the short-term lease/rent of lodgings was implemented. The paper gives an overview of the domain, the choice of an architectural solution and a description of the application implementation.

Keywords: Decentralized application (Dapp) · Blockchain · Uberisation · People-to-People Economy (P2PE) · Smart-contracts · Ethereum

1 Introduction

Blockchain is a linked list of blocks containing information. A blockchain network unites a set of participants (miners), each of which holds a copy of the data. A new block is created in the process of mining. The mechanism for choosing the miner that will create a new block is described by the blockchain consensus (proof-of-work, proof-of-stake, etc.). Blockchain ensures the security of funds through the signing of transactions by the wallet owner. Data in the blockchain is distributed among all miners. Mining is the process of creating a new block of information; the other participants of the network verify the newly created block. A full review of blockchain technology is presented in [1–5]. Smart contracts are code sections that are executed during the mining process when a miner checks transactions. Based on data from a blockchain and smart contracts, it is possible to implement the business logic of an application [6, 7].


Uberisation is a transition to an economic system where contractors conduct transactions directly with customers through information and telecommunication technologies (the People-to-People Economy, P2PE). A service conducts all stages of a transaction, from providing communication between a contractor and a customer to receiving payment. More material about P2PE and Uberisation is available in [8–12]. Services such as Uber, Airbnb, Tudou and TaskRabbit act as a third party in transactions and receive a fee from each transaction. There is a demand for a solution that would reduce the cost of transactions and increase security. Blockchain is one of the most dynamically developing technologies in the world, and it is being adopted in all fields of activity at the level of information technologies and information security. This work describes the introduction of blockchain into the current model of the P2PE services market. Blockchain makes it possible to get rid of the third party and make the business model more transparent for all participants.

2 Project Description

2.1 Problem Statement

It was decided to implement a blockchain-based application modeled on the existing applications. The application field is lodging sharing for a short period of time between private contractors and their customers. The main tasks to be solved in the implementation of the prototype are as follows:
(1) Choose a software architecture that provides decentralization and secure transactions without involving a third party; the chosen architectural solution should be applicable to other P2PE service fields;
(2) Perform a comparative analysis and select available methods and tools to implement the application;
(3) Learn about the implementation and deployment of smart contracts;
(4) Implement the business logic of the application so that it provides a full P2PE service;
(5) Implement the client application;
(6) Draw conclusions on how blockchain improves the P2PE services market.

2.2 Application Architecture Solution

Criteria for choosing a software architectural solution:
(1) Lack of a central point of decision making;
(2) Data should be distributed, which can ensure safety and immutability;
(3) Fast data write speed;
(4) Fast data access speed;
(5) No additional user requirements (full node client, data store, etc.).


OpenBazaar is an example of a fully decentralized application. The application works on the principle of a peer-to-peer network between sellers and buyers; therefore, no one can disable or block a network member [13]. However, OpenBazaar has drawbacks: each seller acts as a network node, and the data is stored directly on the seller's computer. Ready-made blockchain cores are used to avoid each client acting as a network node. A blockchain core provides the ability to use a network of miners to develop decentralized applications. Blockchain can provide an opportunity to get rid of the third party and move to a P2P services market without intermediaries. All transaction details can be described as mathematical rules and implemented in smart contracts. Thus, there is a blockchain core consisting of miners who execute the business logic implemented in the blockchain, and clients.

The first architectural solution is the client-blockchain model (Fig. 1). All data is stored in the blockchain. The client application sends requests to the blockchain and calls smart contracts in which the business logic is implemented. The approach is completely decentralized. However, it has significant drawbacks. First, all writes to the blockchain are slow; this may be acceptable for transferring funds, but it takes too long for applications with a rich data flow. Second, not all data can be placed in the blockchain, for example photos. Third, data in the blockchain is immutable: the blockchain keeps all previous entries, so when users of the application change or add even minor information, new entries are inserted into the blockchain, which leads to the growth of data, and miners have to spend their disk space on insignificant data. Finally, smart contracts cannot trigger themselves; it is impossible to implement code that will be called automatically when a certain result is reached.

Fig. 1. Architectural solution with storing data in blockchain

The second architectural solution is a hybrid network (Fig. 2). All additional application data is stored in a centralized storage outside the blockchain (off-chain). Data that is important for the business logic is stored in the blockchain. The integrity of off-chain data is ensured by storing hashes of the entities in the blockchain. This reduces costs without losing the benefits of blockchain technology. Business logic can still be implemented in smart contracts and can use data not only from the blockchain but also from the off-chain storage. Transfer and receipt of blockchain data occur through REST requests [14]. There is still no central decision point, but communication with the blockchain takes place through a centralized server. The advantages of this platform solution are:
(1) Not all data is stored in the client application (as it is in OpenBazaar);
(2) The server is responsible for handling communication with the blockchain: the client application sends a request to the server, and the server processes the execution of the transaction on the blockchain, so the client does not need to control blockchain transaction processing;
(3) The ability to implement client applications for other platforms (REST API) (Table 1).

Fig. 2. Architectural solution with storing data in blockchain and off-chain
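The integrity check at the heart of the hybrid solution can be illustrated with a short Python sketch; the function names wrapping the smart-contract calls are hypothetical and the record fields are example values, not the application's actual data model.

```python
# Illustrative sketch: only a hash of each off-chain record is kept on-chain,
# so a client can verify that the centralized storage has not tampered with
# the data. store_hash_on_chain / fetch_hash_from_chain would wrap the
# smart-contract calls and are not shown here.
import hashlib
import json

def record_digest(record: dict) -> str:
    """Canonical SHA-256 digest of an off-chain record (e.g. a lodging offer)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_record(record: dict, on_chain_digest: str) -> bool:
    """True if the off-chain record still matches the digest stored on-chain."""
    return record_digest(record) == on_chain_digest

offer = {"id": 42, "owner": "0xExampleAddress", "price_per_night": 0.05}
digest = record_digest(offer)        # would be written on-chain at creation time
assert verify_record(offer, digest)  # later integrity check on the client side
```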


Table 1. Comparison of architectures

Architecture                                    | Lack of a central point of decision making | Data is distributed | Fast writing speed | Fast reading speed | No additional user requirements
Client - Blockchain (P2P network)               | +                                          | +                   | –                  | +                  | –
Client - Blockchain/Off-chain (Hybrid network)  | +                                          | *                   | *                  | +                  | +

Note: An asterisk (*) indicates partial satisfaction of a requirement, a plus (+) full satisfaction, and a minus (–) complete non-satisfaction.

2.3 Tools and Implementation

Based on the described architecture (Fig. 2), a minimal viable product was implemented (Fig. 3). The main use case of lodging renting is represented in Fig. 4. Ethereum was used as the blockchain core, since it has a large network of participants and allows the implementation of smart contracts; the Ethereum Ropsten test network was used. The Solidity programming language was used to implement the smart contracts. The Remix IDE and the Metamask wallet were used to deploy the smart contracts, and the Infura service was used as a node for Ethereum. Smart contracts are executed when a transaction with data is sent to the blockchain; if a smart contract function does not cause a blockchain entry, it does not require a fee for its execution. At the moment, the payment of all transactions for writing data to the blockchain is made from a special account, and users only pay for the transfer of funds. Below is an excerpt of the Solidity smart contract code in which data is written to and read from the blockchain.

Fig. 3. The application use-case diagram


Fig. 4. Business Process Model and Notation (BPMN) diagram – lodging renting use-case


The Node.js software platform was used as the backend; Node.js has high performance, an active community and is supported by large companies. The JSON format was used for transferring data. The web3.js module was used for backend communication with the Ethereum blockchain, and the Ethereumjs-tx module was used for transaction signing. A client signs a transfer transaction with a private key, which is stored in the client application. MySQL was used as the database management system (off-chain storage). The HTTPS protocol was used to provide a secure connection between the backend and the client application. The Android operating system was chosen as the client platform, using the standard Android software development kit (SDK) and the Model-View-ViewModel (MVVM) architectural pattern.

3 Discussion of Results

Based on the work done, we can conclude that blockchain can be introduced into the current model of the P2PE services market for the following reasons:
(1) It provides secure, non-anonymous transfers of funds between two wallets;
(2) Data provided from the shared blockchain registry is secure;
(3) Cryptocurrency transactions reduce costs such as financial institution fees (e.g. Visa, MasterCard, 3%), foreign transaction fees (3%) and currency conversion fees (1%); a client pays only a small fee for transaction execution.

The architectural solution described in this paper can be used in other areas where there is a market for P2PE services (car renting, etc.). The results of the project show how blockchain technology can be applied to reduce the cost of transactions and increase the level of confidence in data. However, at the moment blockchain technology is still maturing. It is not yet possible to implement fully autonomous smart contracts that call themselves, and existing blockchain protocols cannot provide fast operation with client applications. In addition, the use of blockchain and cryptocurrencies as a liquid currency requires legal regulation, which is currently taking shape in different countries. In the future, the application can be modified to be completely decentralized. A video of the application usage experience is available at [15].

References

1. Watanabe, H., Fujimura, S., Nakadaira, A., Miyazaki, Y., Akutsu, A., Kishigami, J.J.: Blockchain contract: a complete consensus using blockchain. In: IEEE 4th Global Conference on Consumer Electronics (GCCE), pp. 577–578 (2015)
2. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2009)
3. Underwood, S.: Blockchain beyond bitcoin. Commun. ACM 59(11), 15–17 (2016)
4. Puthal, D., Malik, N., Mohanty, S.P., Kougianos, E., Yang, C.: The blockchain as a decentralized security framework. IEEE Consum. Electron. Mag. 7(2), 18–21 (2018)
5. Zyskind, G., Nathan, O.: Decentralizing privacy: using blockchain to protect personal data. In: Security and Privacy Workshops (SPW), pp. 180–184 (2015)


6. Biswas, K., Muthukkumarasamy, V.: Securing smart cities using blockchain technology. In: High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pp. 1392–1393 (2016)
7. Kosba, A., Miller, A., Shi, E., Wen, Z., Papamanthou, C.: Hawk: the blockchain model of cryptography and privacy-preserving smart contracts. In: IEEE Symposium on Security and Privacy (SP), pp. 839–858 (2016)
8. Nurvala, J.P.: 'Uberisation' is the future of the digitalised labour market. Eur. View 14(2), 231–239 (2015)
9. Uberisation of economic, the theory of the correct "Uber". https://pavlyuts.ru/posts/360
10. Zervas, G., Proserpio, D., Byers, W.: The rise of the sharing economy: estimating the impact of Airbnb on the hotel industry. J. Market. Res. 54(5), 687–705 (2017)
11. Daunorienė, A., Drakšaitė, A., Snieška, V.: Evaluating sustainability of sharing economy business models. Procedia-Soc. Behav. Sci. 213, 836–841 (2015)
12. Rauch, D., Schleicher, D.: Like Uber, but for local government law: the future of local regulation of the sharing economy. Ohio St. LJ 76, 901 (2015)
13. Raval, S.: Decentralized Applications: Harnessing Bitcoin's Blockchain Technology. O'Reilly Media Inc., Sebastopol (2016)
14. Saleh, H.M., Dzhonov, A.T.: Design of decentralized applications based on Blockchain technology. In: International Conference: Science Today: Reality and Prospects, pp. 61–63 (2018)
15. Decentralized application for lease/rent lodging based on blockchain. https://youtu.be/sTAZFgZdj04

New Failure Detection Approach for Real Time for Hydraulic Networks Using the Non-acoustic Method

Bala Moussa Biaye1(&), Cherif Ahmed Tidiane Aidara1, Amadou Coulibaly1,2, Khalifa Gaye1,2, Serigne Diagne1,2, and Edouard Ngor Sarr3

1 Laboratory of Computer Science and Engineering for Innovation (LI3), Assane Seck University of Ziguinchor, BP 523 Diabir, Senegal
{b.biaye3299,c.aidara3345}@zig.univ.sn, [email protected], {kgaye,sdiagne}@univ-zig.sn
2 Laboratory of Sciences of the Engineer, Computer Science and Imaging (Icube – UMR 7357), National Institute of Applied Sciences of Strasbourg, University of Strasbourg, CNRS, Strasbourg, France
3 Laboratory TIC-SI UCAO-SI, Dakar-Senegal, University of Thies, Thies, Senegal
[email protected]

Abstract. The management policy for infrastructure and equipment (e.g. hydraulic, solar, sanitary, educational, etc.) disseminated throughout the country, particularly in rural areas that are generally difficult to access, is a major challenge for the technical services of the State. The Infra-SEN intelligent Geographic Information System proposed in this paper aims to offer organizations in charge of the management of infrastructure and equipment a platform that allows them to find out in real time how the equipment is working and to detect any failures. The present study is a contribution to analyzing the conditions of remote monitoring of hydraulic equipment. We use the Failure Mode, Effects and Criticality Analysis (FMECA) method in order to identify vulnerable equipment and analyze its failure effects and their criticality. The proposed solution has been applied to the hydraulic installation at Niamone, in the department of Bignona, Ziguinchor region, Senegal.

Keywords: Database · GIS · Connected objects · Measurements acquisition system

1 Introduction

As part of the development of rural areas, the State of Senegal, through national agencies and with the help of NGOs, has launched in rural areas a vast equipment program in various domains such as hydraulic infrastructure, schools, health and energy. However, once these infrastructures and equipment are installed, they unfortunately do not benefit from effective monitoring despite the huge budgets invested in their


implementation. This paper proposes an intelligent geographic information system for remote monitoring of distributed hydraulic infrastructure and equipment. The proposed system consists of a central server for processing measurements, connected to an acquisition unit that monitors a set of sensors. Section 2 reviews the state of the art of remote monitoring techniques and failure detection in hydraulic installations. Section 3 presents the architecture of the Infra-SEN platform. Sections 4 and 5 successively present the approach used and the implementation of the platform, as well as the first results obtained. Finally, we outline some prospects for the development of the Infra-SEN platform.

2 State of the Art and Positioning

2.1 Remote Monitoring of Hydraulic Networks

By remote monitoring we mean monitoring via a telecommunications network. Many methods and solutions have been proposed in the literature [1–9]. Blindu's thesis [1] develops a model of an infrastructure management assistance tool, in particular for the drinking water network of the city of Chisinau, Moldova (1200 km of pipeline). This work has two components, namely the diagnostic aspect and the decision support aspect. The methodology developed in this work uses different tools and methods: temporal databases, spatial analysis and GIS, cognitive reasoning, hydraulic modeling of flows, etc. Guépié's work [2] deals with the sequential remote monitoring of the water distribution network. The objective of this work is to study the problem of drinking water safety by monitoring the distribution network from the water tower to private residences. The proposed approach is based on observation of the residual chlorine concentrations provided by the sensor network. A criterion based on minimizing the probability of missed detection, subject to a bound on the false alarm rate, is used, and a suboptimal detection algorithm is designed; theoretical analysis and simulation results are provided. In order to avoid water wastage, Isenmann et al. [5] worked on the evaluation of the discharge from the overflow of a pumping station by measuring water heights. This work describes a calculation method that establishes the relationship between the water level above the base of a pump station overflow pipe and the flow discharged. The height/flow-rate tables constructed can then be implemented in transmitters or interpolated for post-processing. Karima et al. [3] propose an approach to pre-localize physical losses on a drinking water distribution network by optimizing the hydraulic model via an evolutionary algorithm, in order to pre-localize areas with a high leakage flow rate. Their approach is based on solving the FAVAD (Fixed and Variable Area Discharge) equation by optimizing its parameters (coefficients and exponent of the emitter) through Genetic Algorithms (GA) coupled to an interfaced hydraulic model. As for Cheifetz's work, [4] proposes a greedy algorithm for positioning quality sensors on a large water distribution network. This approach uses a large number of contaminations, simulated by hydraulic modeling software, and iteratively selects the best positions according to


a criterion to be optimized. The method is evaluated for the deployment of multi-parameter sensors measuring chlorine, temperature, pressure and conductivity on the network of the Water Authority of Île-de-France (Sedif), the largest French drinking water distribution network. In all the cases presented above, remote monitoring is not used as a warning system but aims to adapt the treatment according to the values of the measured parameters [10]. This only makes it possible to detect failures without necessarily geolocating the faulty equipment. Only in the case of a central system, where all the equipment is on a single site, is the location of the failed equipment already known. In our case, we are interested in equipment distributed over the territory, hence the need to geolocate it with a GIS when failures occur (a multi-site remote monitoring system).

2.2 The Detection of Failures Related to Leaks

The exploitation of drinking water distribution networks around the world suffers from numerous failures that can arise at arbitrary places that are difficult to determine. In addition to the enormous economic losses linked to these faults, there is also the risk of epidemics caused by leaks, which constitute a great danger to public health. A study conducted by the International Association of Water Distribution (IAWD) shows that the amount of water lost through distribution networks is between 20 and 30% of total production. This has led network operators to look for more efficient ways to detect these leaks in record time. In the field of leak detection, there are several methods and techniques. Currently used detectors can be classified into two main categories:

• acoustic noise-based detectors, which require the operator to move around to locate the exact position of the leaks;
• acoustic correlation-based detectors, which allow remote leak detection and give the location of the leak with great precision.

Acoustic correlation detectors are widely used to detect leaks. Indeed, this technique is the subject of several works and implementations [11–15]. It is used for leak detection by Hunaidi [11]. In the work of Bentoumi [12], the same method was used to implement a leak detection algorithm for distribution networks on the TMS320C6201 processor. The National Directorate of Drinking Water and Sanitation of Haiti [13], in its document entitled "control of water loss - leak detection", used the acoustic correlation method for the precise location of leaks. In the works cited, formulas, algorithms and architectures have been proposed for the detection of leaks. But these works apply only to metal pipes. The acoustic correlation method, although effective, has its limits: it becomes problematic in the case of plastic pipes [16]. Acoustic leak detection equipment was designed primarily for small-diameter metal pipes. However, the signals emitted by leaks in plastic pipes have acoustic characteristics that are substantially different from those produced by leaks in metal pipes. Materials such as HDPE or PVC strongly absorb vibrations. A recent study conducted by the Canadian Institute for Research on Construction (IRC) and funded by the American Water Works Research Foundation found that leaks in plastic pipes can be detected using acoustic techniques, but with many difficulties.


2.3 Positioning of Our Contribution

Since plastic pipes are the most widely used for water mains in Senegal, it would be more effective to use non-acoustic techniques for leak detection. Leaks in plastic pipes can also be detected using non-acoustic techniques such as tracer gas, infrared imaging and radar. However, the use of these techniques is still very limited and their effectiveness is not as well established as that of acoustic methods [16].

3 Description of Our Approach

We use a technique based on measuring the water flow rate. In this method, we calculate the variation of the flow rate between two measurements and the Linear Leakage Index (LLI). A good approximation of this index is obtained by measuring the minimum night flow (usually between 1 am and 4 am, after deduction of heavy nocturnal consumers [13]). It is calculated as follows:

LLI = volume lost in distribution (m³/day) / length of the pipeline (km)    (1)

Volume lost in distribution (water loss) = volume put into distribution − volume consumed    (2)
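As an illustration of Eqs. (1) and (2), the short Python sketch below computes the LLI from daily volumes and compares it with the guide values listed next; the function name, thresholds and sample numbers are assumptions of this example, not part of the platform.

# Hypothetical sketch of the LLI computation from Eqs. (1) and (2).
# Function names, thresholds and sample numbers are illustrative only.

GUIDE_LLI = {"rural": 2.0, "peri-urban": 5.0, "urban": 10.0}  # m³/day/km

def linear_leakage_index(volume_distributed, volume_consumed, pipeline_length_km):
    """LLI = (volume put into distribution - volume consumed) / pipeline length."""
    volume_lost = volume_distributed - volume_consumed          # Eq. (2), m³/day
    return volume_lost / pipeline_length_km                     # Eq. (1), m³/day/km

if __name__ == "__main__":
    lli = linear_leakage_index(volume_distributed=120.0,  # m³/day put into distribution
                               volume_consumed=95.0,      # m³/day consumed
                               pipeline_length_km=8.0)
    zone = "rural"
    print(f"LLI = {lli:.2f} m³/day/km "
          f"({'above' if lli > GUIDE_LLI[zone] else 'within'} the {zone} guide value)")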

We will retain the following guide values:

• rural area: LLI ≈ 2 m³/day/km;
• peri-urban area: LLI ≈ 5 m³/day/km;
• urban area: LLI ≈ 10 m³/day/km.

Areas with significant leakage can be determined by the step-testing method. This consists of subdividing the sector and then measuring the flow rate. Sector-by-sector balances require a lot of work because they are done at night. In recent years, there has been a tendency to permanently install flow sensors connected to the system. The flow-rate values thus transmitted are automatically analyzed, which allows the leaks to be detected. For the implementation of the flow-rate processing algorithm (Fig. 1) applied to the values measured by the sensors, we rely on the linearity method. The primary state quantity whose value we wish to control is the flow-rate level. The objective is to obtain a measured flow rate (M) equal to the normal flow rate (C). If M is not equal to C, we have M = C − e, where e is the difference between the measurement signal and the normal flow rate. To establish this algorithm, we rely on the method of the straight line of linearity. When two quantities are such that the variations of one are proportional to the variations of the other, the values y of one are expressed as a function of the values x of the other by a relation of the type y = ax + b, where a and b are two real numbers. On a given straight line, we consider a fixed point, the final measurement Mf (xf, yf), and an arbitrary point Mi (xi, yi). By the properties of similar triangles, the quotient (yf − yi)/(xf − xi) does not depend on the point Mi


Fig. 1. Algorithm of measurement processing to detect leaks


chosen on the line, so this quotient is equal to a constant. This constant a is called the steering coefficient (or slope) of the line:

(yf − yi) / (xf − xi) = a    (3)

The steering coefficient gives the direction of the line. In our practical case, the steering coefficient indicates the water flow rate. In fact, in both cases, y varies by the same quantity Δy = aΔx. We retain the notation a = Δy/Δx. Three cases are possible:

If a > 0, then y increases when x increases (the function is increasing), and y increases all the more rapidly as a is large. The air valves are poorly closed and the air that enters increases the output flow rate.
If a < 0, then y decreases when x increases (the function is decreasing), and y decreases all the more rapidly as the absolute value of a is large. There is a leak in the network, which explains why the output flow rate is lower than the input flow rate.
If a = 0, then y is constant, so the flow rate measured by the different sensors is the same.
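To make the three cases concrete, here is a minimal Python sketch of the slope test described above; it assumes x stands for the upstream (input) flow rate and y for the downstream (output) flow rate, and the function names and tolerance are illustrative only.

# Minimal sketch of the slope-based check described above.
# Names and the tolerance value are illustrative assumptions, not part of the paper.

def slope(x_i, y_i, x_f, y_f):
    """Steering coefficient a = (yf - yi) / (xf - xi), Eq. (3)."""
    return (y_f - y_i) / (x_f - x_i)

def classify(a, tolerance=1e-6):
    """Interpret the sign of the slope as in the three cases above."""
    if a > tolerance:
        return "output flow higher than input: air entering (air valves poorly closed)"
    if a < -tolerance:
        return "output flow lower than input: probable leak in the network"
    return "stable flow: no anomaly detected"

if __name__ == "__main__":
    # x = input (upstream) flow rate, y = output (downstream) flow rate, in m³/h
    a = slope(x_i=42.0, y_i=41.2, x_f=44.0, y_f=41.9)
    print(f"a = {a:.3f} -> {classify(a)}")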

4 Functioning of the System

For data acquisition by the sensors, we use classical sensors that send their messages to the monitoring center via the acquisition unit (Fig. 2). For each installation, we target a critical piece of equipment that will be connected to an intelligent sensor. In our example, it is the submerged pump. For this object, the technology of the Internet of connected objects is used to communicate with the central remote monitoring station.

Fig. 2. Operation of the Infra-SEN platform


4.1 Measurement Acquisition by the Acquisition Unit

The acquisition unit runs a program that requires the implementation of subprograms, or functions. It is the arrangement of these functions with respect to one another, in a succession of phases, that constitutes the program. Each function has associated parameters. Figure 3 shows the flowchart of the basic program, the minimum necessary for the operation of the acquisition unit.

Fig. 3. Algorithm for taking measurements, memorizing and transferring data.
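A minimal sketch of such a basic acquisition program is given below; the succession of phases (measure, memorize, transfer) follows the flowchart of Fig. 3, but the function names, sampling period and number of rounds are assumptions of this sketch.

# Hypothetical sketch of the basic acquisition-unit loop of Fig. 3.
# read_sensor(), store_locally() and send_to_server() stand in for the real
# driver, local memory and transmission functions of the platform.
import random, time

def read_sensor(sensor_id):
    # Placeholder: in a real deployment this would query the physical sensor.
    return random.uniform(0.0, 100.0)

def store_locally(storage, sensor_id, value, timestamp):
    storage.append((sensor_id, value, timestamp))

def send_to_server(batch):
    # Placeholder: transfer the memorized measurements to the monitoring center.
    print(f"transferring {len(batch)} measurements")

def acquisition_loop(sensor_ids, period_s=3600, rounds=3):
    """Succession of phases (Fig. 3): measure each sensor, memorize, then transfer."""
    storage = []
    for _ in range(rounds):              # bounded here so the sketch terminates
        now = time.time()
        batch = []
        for sid in sensor_ids:
            value = read_sensor(sid)
            store_locally(storage, sid, value, now)
            batch.append((sid, value, now))
        send_to_server(batch)            # transfer phase
        time.sleep(period_s)             # wait for the next execution step

if __name__ == "__main__":
    acquisition_loop(["sensor1", "sensor2"], period_s=1, rounds=2)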


4.2 Sensor Control by Algorithm

For our experiment, we were able to associate a minimum of local intelligence with the basic program of the acquisition unit. It involves setting up an algorithm implemented for the surveillance of the sensors deployed in the field, in addition to equipment monitoring. This algorithm involves several sensors installed on a given piece of equipment. The choice to use several sensors is justified in particular by the need to overcome the possible malfunction of one of them. This algorithm (Fig. 4) follows the triggering phase and begins with the incrementation of a time counter, which is compared with the duration of the execution step of the phase 1 program (Fig. 3). If, between two runs of the program, a sensor does not provide any information, one can suspect a failure of the corresponding sensor. For global monitoring of the network sensors, we adopt the global network detection rule:

T = Max{Cj}, 1 ≤ j ≤ cn    (4)

where T is the total failure rate of the sensors, cn is the total number of sensors, and Cj is the probability of finding a faulty sensor in the network.

Fig. 4. Algorithm for detecting sensor inactivity or failure
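The following sketch illustrates, under assumed names and values, the kind of check described by Fig. 4 and Eq. (4): a per-sensor timer compared with the execution step, plus the global rate T.

# Illustrative sketch of the sensor-surveillance check of Fig. 4 and Eq. (4).
# Field names, the step duration and the sample values are assumptions.

def detect_silent_sensors(last_report_time, now, step_duration):
    """Flag sensors that sent nothing during the last execution step."""
    return [sid for sid, t in last_report_time.items() if now - t > step_duration]

def global_failure_rate(failure_probabilities):
    """T = Max{Cj}, 1 <= j <= cn, over the per-sensor failure probabilities Cj."""
    return max(failure_probabilities)

if __name__ == "__main__":
    last_report = {"s1": 100.0, "s2": 40.0, "s3": 95.0}   # last report timestamps (s)
    silent = detect_silent_sensors(last_report, now=130.0, step_duration=60.0)
    print("suspected faulty sensors:", silent)            # -> ['s2']
    print("global failure rate T =", global_failure_rate([0.02, 0.10, 0.05]))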


5 Deployment of the Infra-SEN Platform

In the absence of a physical installation, we used the Scilab numerical analysis software to simulate the acquisition of measurements. In the case of a physical installation, the procedure and the algorithms remain unchanged. Scilab is a free multi-domain simulation software package that provides a graphical platform and a set of libraries allowing the modeling, simulation, implementation and control of systems in different areas of application (Fig. 5).

Fig. 5. Sensor measurements versus time

To run the simulation, we need a description of the program. In the execution of this program, we have respected the various stages of operation of the acquisition unit. The time step is managed by the multithreaded programming technique. We used the function rand(n, m), which automatically generates values that simulate the sensor measurement outputs as a function of time, where n is the measurement time and m the measurement output of the sensors. The thread function sleep(z) is used to manage the time step; it puts the program to sleep for a desired time z. Measurements are made every day, and the total number of experiments is 100 days. Every hour the sensors send measurements to the acquisition unit. We obtain the results of Table 1.
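The sketch below reproduces this simulation setup in Python rather than Scilab: randomly generated hourly readings for four sensors, with a short sleep standing in for the time step; value ranges, file name and sleep duration are assumptions of this sketch.

# Python sketch of the simulation described above (the paper uses Scilab's rand(n, m)
# and a sleep-based time step); ranges, file name and sleep duration are assumptions.
import csv, random, time

HOURS_PER_DAY = 24
SENSORS = 4

def simulate_day():
    """One day of hourly readings for each sensor, as in Table 1."""
    rows = []
    for hour in range(1, HOURS_PER_DAY + 1):
        readings = [random.uniform(0.0, 100.0) for _ in range(SENSORS)]
        rows.append([hour] + readings)
        time.sleep(0.01)          # stands in for the real one-hour time step
    return rows

if __name__ == "__main__":
    with open("measurements.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hour"] + [f"sensor{i+1}" for i in range(SENSORS)])
        for day in range(100):    # total number of experiments: 100 days
            writer.writerows(simulate_day())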


Table 1. Results of measurement values by the sensors during a day

Time (hour)  Sensor1     Sensor2     Sensor3     Sensor4
1            43.11733    72.174381   52.641283   74.444567
2            61.453848   47.685359   52.973941   22.695036
3            92.589621   63.930574   92.917561   68.369308
4            9.9381728   99.638653   97.654303   93.650726
5            42.805786   15.747883   62.25464    50.530174
6            94.31831    53.506937   98.225833   25.248146
7            3.2739527   21.290646   75.429888   68.188398
8            92.132671   55.914506   54.547881   28.363682
9            94.490244   43.04966    72.86016    14.094857
10           90.070699   2.2805485   2.5259695   67.591096
11           80.943161   57.614598   40.251685   45.126776
12           2.5195429   71.491304   9.8313199   75.430292
13           0.1964506   93.21636    26.086253   13.702143
14           50.752213   12.326993   36.363423   66.082405
15           40.76043    28.655522   17.466178   38.900542
16           84.080461   1.2479957   92.341395   70.018205
17           50.172657   57.694048   76.051409   91.680057
18           91.287808   39.386961   56.402041   21.229
19           44.357295   68.885837   37.970652   26.978331
20           59.83784    97.023218   87.762262   31.998894
21           77.418426   85.157643   82.174258   2.3218025
22           79.220083   33.933045   67.870581   72.654473
23           55.046049   87.725318   8.2200981   15.340586
24           40.850437   11.314025   25.527314   23.552638

Since the flow is never stable, owing to consumers' water needs, we observe variations in the curves. The generated values are saved to a file and retrieved by a Java program. After executing the code, the program displays all the measurements, including those that indicate a leak (Fig. 6).

Fig. 6. Zoom on the measured values and leaks detected in the Infra-SEN application


These values are also saved in the measurement table of the Infra-SEN database. Once failures are detected, ArcGIS functions are triggered automatically to produce maps of the equipment presenting a failure.

6 Conclusions and Perspectives

We propose a real-time, GIS-based remote monitoring system for equipment. This system is efficient and requires fewer resources than those found in the literature. It significantly improves the quality of service and reduces the time wasted and the costs related to equipment maintenance. However, while the proposed system allows remote monitoring of the equipment, it does not yet solve the problem of remote maintenance, which we have not discussed here. The application to the monitoring of hydraulic equipment in the municipality of Niamone, in the department of Bignona, validated the mapping and algorithmic aspects of failure detection. Future work should allow us to perform full-scale tests for the whole territory. Based on the Infra-SEN project approach, many remote monitoring applications may be possible in various sectors such as health, education, and renewable energies, including solar panels.

References

1. Blindu, I.: Help tool for the diagnosis of the drinking water network for the city of Chisinau by spatial and temporal analysis of hydraulic dysfunctions. Ph.D. thesis, Jean Monnet University of Saint-Etienne (2004)
2. Guépié, B.K.: Sequential detection of transient signals: application to the monitoring of a drinking water network. Ph.D. thesis, University of Technology of Troyes (2013)
3. Karima, S., Abdelhamid, S., Moula, Z.: Pre-localization approach of leaks on a water distribution network by optimization of the hydraulic model using an evolutionary algorithm. In: Proceedings of the 3rd EWaS International Conference on "Insights on the Water-Energy-Food Nexus", Lefkada Island, Greece, 27–30 June 2018, vol. 2, no. 11, p. 588 (2018)
4. Cheifetz, N., Sandraz, A.-C., Feliers, C., Gilbert, D., Piller, O., Heim, V.: A greedy algorithm for quality sensor placement on a large-scale water distribution network. TSM 2017(11), 55–63 (2017)
5. Isenmann, G., Bellahcen, S., Vazquez, J., Dufresne, M., Joannis, C., Mose, R.: Evaluation of the discharge in an overflow pipe of a pumping station from the measurement of water depths. TSM 2016(1–2), 71–83 (2016)
6. Butterfield, J.D., Meyers, G., Meruane, V., Collins, R.P., Beck, S.B.M.: Experimental investigation into techniques to predict leak shapes in water distribution systems using vibration measurements. J. Hydroinf. 20(4), 815–828 (2018)
7. Aslam, H., Kaur, M., Sasi, S., Mortula, Md.M., Yehia, S., Ali, T.: Detection of leaks in water distribution system using non-destructive techniques. In: 8th International Conference on Future Environment and Energy (ICFEE 2018), Earth and Environmental Science, vol. 150, p. 012004 (2018)


8. Seyoum, S., Alfonso, L., van Andel, S.J., Koole, W., Groenewegen, A., van de Giesen, N.: A Shazam-like household water leakage detection method. Procedia Eng. 186, 452–459 (2017)
9. Butterfield, J.D., Krynkin, A., Collins, R.P., Beck, S.B.M.: Experimental investigation into vibro-acoustic emission signal processing techniques to quantify leak flow rate in plastic water distribution pipes. Appl. Acoust. 119, 146–155 (2017)
10. Dary, P.: Remote monitoring in heart failure: feasibility and results of a limited 14-day follow-up of 83 patients. Eur. Res. Telemed. 3, 125–132 (2014)
11. Hunaidi, O.: Leak detection in water pipes. Constructive solution no. 40, Institute for Research in Construction, Canadian National Research Council, 6 p. (2000)
12. Bentoumi, M., Chikouche, D., Bouamar, M., Khelfa, A.: Real-time implementation of a water leak detection algorithm for distribution networks on the TMS320C6201 processor using acoustic correlation. In: 4th International Conference on Computer Integrated Manufacturing, CIP 2007, 03–04 November 2007 (2007)
13. National Directorate of Drinking Water and Sanitation of Haiti (NDDWS): Control of water loss - leak detection, Version 23, September 2013
14. Almeida, F.C.L., Brennan, M.J., Joseph, P.F., Gao, Y., Paschoalini, A.T.: The effects of resonances on time delay estimation for water leak detection in plastic pipes. J. Sound Vibr. 420, 315–329 (2018)
15. Gao, Y., Brennan, M.J., Liu, Y., Almeida, F.C.L., Joseph, P.F.: Improving the shape of the cross-correlation function for leak detection in a plastic water distribution pipe using acoustic signals. Appl. Acoust. 127, 24–33 (2017)
16. Hunaidi, O.: Acoustic strategy of leaks on water distribution pipes. Constructive solution no. 79, Canadian Institute for Research on Construction, Canadian National Research Council (2012)

Fault-Tolerant Communication for IoT Networks

Abdelghani Boudaa1 and Hocine Belouadah2

1 Université Med BOUDIAF, BP 166, 28000 M'sila, Algeria
[email protected]
2 École Normale Supérieure de Bou Saada, 28001 Bou Saada, Algeria
[email protected]

Abstract. The Internet of Things (IoT) is a large number of diversified nodes. Nodes are provided with operations that are carried out on data and communicate with each other. These things are applicable to applications such as smart health, smart vehicles, etc., and make our daily activity smarter. One of the important problems related to IoT is fault-tolerant and energy-efficient communication. We therefore designed a protocol that provides fault-tolerant communication using a reservation-based protocol.

Keywords: IoT · Single-hop networks · Permutation routing · Parallel communications · Reservation-based protocol · Energy-efficiency · Fault-tolerance

1 Introduction

The Internet of Things is a set of things which can be physical devices, people, animals, etc. IoT can be imagined as a wireless network, IoT(T, n) for short, in which T denotes the thing nodes in the network and n denotes the data items to be distributed among them. The problem of permutation routing arises when each node in the network needs to receive information (items) from other nodes. More precisely, a node cannot decide or perform its task because the information that allows it to know what to do or decide is located in the memory spaces of other nodes. Each node has to send what it has in its local memory to allow its neighbours to progress. Thus, the nodes permute their information between them to solve the problem, while minimizing the total number of retransmissions [1]. We refer the reader to Fig. 1 for an illustration of the permutation routing problem with T = 8 nodes and n = 32 items. For simplicity, for each item we only indicate its destination node. As an example, node S(1) initially stores n/T = 4 items destined to nodes S(3), S(5), S(6) and S(8). A solution for an application such as permutation routing in IoT should take into account the constraints (energy efficiency, fault tolerance) of these heterogeneous nodes [7, 8]. Nakano et al. [5] have designed a solution using the DAMA

Fig. 1. Permutation routing with T = 8 and n = 32

protocol (Demand Assignment Multiple Access) [4], which avoids collisions and bandwidth waste by reserving k channels. The solution is both energy-efficient and fault-tolerant, with the restriction that each node is the sender and recipient of exactly n/T items. In this paper, we are interested in designing a fault-tolerant reservation-based DAMA protocol for permutation routing in single-hop IoT networks.

2 Related Works

In the literature, there are few published works about fault tolerance in the permutation routing problem. Datta and Zomaya [2] have shown that the energy-efficient permutation routing problem can be solved in 2n/k + (T/k)² + T + 2k² slots, with each station awake for at most 6n/T + 2T/k + 8k slots. In 2005, Datta [3] solved the fault-tolerant energy-efficient permutation routing problem in 2n/k + (T/k)² + T/k + 3T/2 + 2k² − k slots, with each node awake for at most 4nfi/T + 2n/T + 2T/k + T/k + T/2 + 3k slots, where fi is the number of faulty nodes in a group of T/k nodes.

3 Contributions

We present a protocol for permutation routing in an IoT(T, n) network that works well regardless of faulty nodes. Unlike the work of [3], our protocol runs faster and the nodes may have an unequal number of items with random destination addresses. Our protocol works correctly when there are faulty nodes in the IoT(T, n) network. We found that, in the presence of fi faulty nodes per group of T/k nodes, the fault-tolerant permutation routing problem can be solved in k + k² + fa + T/k + (2T/k − 3fi)MAX(Mi) time slots, and all the thing nodes remain awake for the same number of time slots.

Outline of the Paper: The rest of the paper is organized as follows: Sect. 4 presents the preliminaries. We propose an overview of our routing protocol in Sect. 5. Next, we present our permutation routing protocol in Sect. 6. Section 7 details the simulation results. Finally, our conclusions are given in Sect. 8.


4 Preliminaries

We consider a network of things with T thing nodes and n items, IoT(T, n) for short. In addition, the things ti, 1 ≤ i ≤ T, have different memory capacities; each thing ti has Mi items in its local memory. The sum of all Mi is equal to n, i.e., ∑_{i=1}^{T} Mi = n. We refer the reader to Fig. 2(a), depicting an IoT(8, 32) network before the permutation routing, and Fig. 2(b), presenting the same network after the permutation routing, where each thing node has its items and can proceed to execute its task [6].

Fig. 2. An example of an IoT network before (a) and after (b) permutation routing

Let LT = {t1, t2, .., tT} be the list of T thing nodes in IoT(T, n) and let Lm = {M1, M2, ..., MT} be the set of memory spaces that hold the items of the thing nodes, with Mi the memory space of thing ti, for each 1 ≤ i ≤ T. We denote by MIN(Mi) (respectively MAX(Mi)) the smallest (respectively the largest) memory space for items that a thing node has in the IoT(T, n) network.

5 An Overview of Our Protocol

Our protocol uses O(n/k) memory on IoT(T, n). We assume that the permutation on IoT(T, n), where each thing has a memory space of O(n/k), is possible when:

k ≤ √T    (1)

Our protocol consists of three steps: grouping thing nodes, broadcasting items to groups, and broadcasting items to the final destination.

First Step: We divide the T things ti, 1 ≤ i ≤ T, into k groups with T/k nodes each, each containing at least k nodes. Unlike the previous work of [6], our selection method depends on two parameters at a time: the maximum memory spaces MAX(Mi) and the cardinality of the groups.

Second Step: Each node ti determines all working and faulty nodes in its group, and gets all agent thing nodes. Each group has k agent nodes. The agents in G(j) receive nm items from G(m), 1 ≤ m ≤ k, composed of a set {Mi}, i ∈ {(m − 1)T/k + 1, .., mT/k}. In each group G(m), and in parallel, the T/k nodes use the channel C(m) to transfer all their items to their respective destination groups.


Lemma 5.1: Let Mi, i ∈ {1, 2, .., T}, be the memory space values of a group, with the condition k = √T; then the largest group size S(lg) satisfies:

S(lg) ≤ (T/k) MAX(Mi)    (2)

Proof: The best grouping for the largest group is when it contains only MAX(Mi), for example, so we can write:

n = ∑_{i=1}^{T/k} MAX(Mi) + ∑_{i=(T/k)+1}^{T} Mi

S(lg) = n − ∑_{i=(T/k)+1}^{T} Mi ≤ n − ∑_{i=(T/k)+1}^{T} MAX(Mi) ≤ (T/k) MAX(Mi)

so the largest group satisfies S(lg) ≤ (T/k) MAX(Mi).

Lemma 5.2: Let Mi, i ∈ {1, 2, .., T}, be the memory space values of a group, with the condition k = √T; then the smallest group size S(ls) satisfies:

S(ls) ≥ (T/k) MIN(Mi)    (3)

Proof: The worst grouping for the smallest group is when it contains only MIN(Mi), so we can write:

n = ∑_{i=1}^{T/k} MIN(Mi) + ∑_{i=(T/k)+1}^{T} Mi

S(ls) = n − ∑_{i=(T/k)+1}^{T} Mi ≥ n − ∑_{i=(T/k)+1}^{T} MIN(Mi) ≥ (T/k) MIN(Mi)

Therefore the smallest group satisfies S(ls) ≥ (T/k) MIN(Mi). With condition (2), an agent will use in the worst case O(n/k) memory space, because in each group the number of items is at most (T/k) MAX(Mi).

Third Step: We again assign channel C(i) to group G(i) and route at most (T/k) MAX(Mi) items to their correct destination nodes within G(i). This routing is done in parallel in all the groups. All the items destined for the faulty nodes can be dropped before they are sent to their final destinations.

6 Permutation Routing Protocol on IoT(T, n)

Our protocol consists of three steps: grouping thing nodes, broadcasting items to their group, and broadcasting to the correct thing node.


6.1 Grouping Thing Nodes

We divide the T nodes into k groups, Ln = {G(1), G(2), .., G(k)}. Each group must have at least k nodes, otherwise the condition k ≤ √T is not verified. We use the grouping Algorithm 1 to achieve the grouping. This step is done locally, and we can see that the overhead involved in assigning each node ti to a group G(j) does not take any time slot.

Algorithm 1. Grouping thing nodes
INPUT: set Lm = {M1, M2, .., MT}
OUTPUT: LG = {G(1), G(2), .., G(k)}
Ls ← Lm; Ln ← LG
while Ls ≠ φ do
  if |G(MIN(Ln))| < T/k then   {if the cardinality of G(MIN(Ln)) < T/k, i.e., G(MIN(Ln)) is not full}
    G(MIN(Ln)) ← MAX(Ls)
    Ls ← Ls − MAX(Ls)
  else
    if Ln ≠ φ then
      Ln ← Ln − G(MIN(Ln))   {remove the minimum G(i) from Ln}
    else
      i ← 1   {add the remaining things to each group}
      G(i) ← MAX(Ls)
      Ls ← Ls − MAX(Ls)
      i ← i + 1
    end if
  end if
end while
return (LG)
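The following Python sketch mirrors the spirit of Algorithm 1: the largest remaining memory space is repeatedly assigned to the currently smallest group that is not yet full; variable names follow the pseudocode, the handling of leftover items is simplified to keep groups balanced, and the sample input is illustrative.

# Python sketch in the spirit of Algorithm 1 (grouping thing nodes): the largest
# remaining memory space goes to the currently smallest group that is not yet full.
# The leftover-items branch is simplified; the sample input is illustrative only.

def group_things(memory_spaces, k):
    T = len(memory_spaces)
    full_size = T // k
    groups = [[] for _ in range(k)]                 # LG = {G(1), .., G(k)}
    ls = sorted(memory_spaces, reverse=True)        # Ls, processed as MAX(Ls) first
    open_groups = list(range(k))                    # Ln, groups that are not yet full
    for m in ls:
        if open_groups:
            # G(MIN(Ln)): the open group with the smallest current total
            g = min(open_groups, key=lambda i: sum(groups[i]))
            groups[g].append(m)
            if len(groups[g]) >= full_size:
                open_groups.remove(g)               # Ln <- Ln - G(MIN(Ln))
        else:
            # all groups are full: spread the remaining things over the groups
            g = min(range(k), key=lambda i: sum(groups[i]))
            groups[g].append(m)
    return groups

if __name__ == "__main__":
    # IoT(8, 32)-like example: 8 things with unequal memory spaces, k = 2 channels
    print(group_things([2, 3, 3, 4, 3, 4, 6, 7], k=2))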

6.2 Broadcast Items to Its Group

We denote the jth thing node in group G(i) by tj(G(i)), 1 ≤ j ≤ T/k, and the jth agent thing node in group G(i) by taj(G(i)), 1 ≤ j ≤ k. There are three principal tasks in this procedure, as follows:

Task 2.1. We determine all fi failed nodes in each group, and all k agents, of G(i), 1 ≤ i ≤ k. This task consists of two subtasks:

Subtask 2.1.1. First, each node in G(i) broadcasts its ID one after another to all nodes in its group. This broadcast is done in parallel using channel C(i), 1 ≤ i ≤ k. Every node knows the slot in which to broadcast its ID, but the faulty nodes do not broadcast. Finally, each thing node assigns a correct serial number to itself among all the working thing nodes in group G(i).


We use Algorithm 2 (Faulty_nodes) for this subtask. This task takes T/k slots and each thing node remains awake for T/k slots.

Algorithm 2. Faulty_nodes(ti(G(j)))   {do in parallel for each G(j) on channel C(j)}
OUTPUT: fi: number of faulty nodes
if Time_to_Broadcast(ti(G(j))) then
  Thing ti(G(j)) broadcasts ID(ti(G(j))) on C(j)
else
  if Thing ti(G(j)) does not receive ID(tm(G(j))) then
    fi ← fi + 1
  end if
end if
return fi

Function Time_to_Broadcast: ti(G(j))
  d ← i − 1
  return (d)
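As a rough illustration of Algorithm 2, the sketch below assumes that node i of a group is expected to announce its ID in slot i − 1 on the group's channel, and counts every silent slot as a faulty node; all names and inputs are assumptions of this example.

# Rough sketch of Algorithm 2: node i of a group announces its ID in slot i - 1;
# any silent slot is counted as a faulty node. Names and inputs are assumptions.

def detect_faulty_nodes(ids_heard_per_slot, group_size):
    """ids_heard_per_slot maps slot d = i - 1 to the ID heard (or None if silent)."""
    faulty = 0
    for slot in range(group_size):                    # one slot per node in the group
        if ids_heard_per_slot.get(slot) is None:      # nothing received in this slot
            faulty += 1
    return faulty

if __name__ == "__main__":
    heard = {0: "t1", 1: None, 2: "t3", 3: "t4"}      # node t2's slot stayed silent
    print("faulty nodes in group:", detect_faulty_nodes(heard, group_size=4))  # -> 1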

Subtask 2.1.2. We select k thing node agents for each group. The role of the k agents taj(G(i)), 1 ≤ j ≤ k, is to receive in their local memory spaces all the nm items of the different groups that have destination addresses in group G(i). We must choose, as agents, the k nodes that have the maximum memory spaces for items in Lm. This choice therefore leads to performing the least number of broadcast rounds in each group. Let t1(G(j)), t2(G(j)), .., tT/k(G(j)) be the thing nodes in G(j), where ta1(G(j)), ta2(G(j)), .., tak(G(j)) represent the agents among the T/k nodes. Note that this selection can be done locally in each thing node and does not require any time slots.

Task 2.2. We have to ensure that no items are sent to faulty agents in G(i). All nodes in the same group G(i) have complete information about all the working nodes, faulty nodes and agents in it. However, they do not know the IDs of the working agents of the other groups. So, by applying Algorithm 3 (Select_Well_Agent), each agent taj(G(i)), 1 ≤ j ≤ k, in G(i) broadcasts its ID to all nodes in the whole network. We use one channel, C(1), to send the IDs of the agents. When agent taj(G(i)) broadcasts its ID, all the nodes listen to these broadcasts. If the nodes do not receive the ID of agent tap(G(m)), it means the agent is faulty, so we increment fa, the number of faulty agents in the whole IoT(T, n) network. Next, the nodes of G(m) choose another agent for this group and we add one time slot for the next broadcast; but if the total number of working nodes is less than k, then all the nodes in the network may reinitialize the nodes in the whole network and restart the permutation routing protocol. This task takes on average k + fa/k slots to complete for each group, i.e., k² + fa slots overall, and each node remains awake for k² + fa slots.


Algorithm 3. Select_Well_Agent(tj(G(i)))   {use one channel C(1) for all thing nodes in IoT(T, n)}
OUTPUT: fa: number of faulty agents; fi: number of faulty nodes
if Time_to_Broadcast(tj(G(i))) AND tj(G(i)) is agent tap(G(m)) then
  Agent thing tj(G(i)) broadcasts its ID on channel C(1)
else
  if Thing node tj(G(i)) does not receive the ID of agent tap(G(m)) then
    fa ← fa + 1; fi ← fi + 1
    Select a new agent tap(G(m)) for G(m)
    if a new agent exists then
      Add one time slot for broadcasting the new agent
    else
      RESTART the permutation routing protocol
    end if
  end if
end if
return fa, fi

Function Time_to_Broadcast: tap(G(m)), fa
  d ← (m − 1) ∗ k + p − 1 + fa   {add fa slots to the time}
  return (d)

Task 2.3. In this task, we transmit the items to the agents of the destination groups. We need to specify the exact slots in which ti will transmit its items, in order to avoid collisions and subsequent item losses. Each node ti knows the list of memory spaces. Its start time is the sum of these memory spaces, ∑_{p=1}^{i−1} Mtp slots. Therefore the process takes at most T/k and at least T/k − fi periods, if fi faulty thing nodes exist. Hence, each group transmits at most (T/k − fi) MAX(Mti) = (T/k − (∑_{i=1}^{k} fi)/k) MAX(Mti) items. We use Algorithm 4 (Transmit_items_to_Agents) to perform this task. Each thing node has to remain awake for at most MAX(Mti) slots to transmit its items, and overall (T/k − fi) MAX(Mti) slots for transmission or reception.

6.3 Broadcast to the Correct Thing Node

At this point, all agents taj(G(i)), 1 ≤ j ≤ k, in G(i) hold at most S(i) ≤ (T/k) MAX(Mi) items with destinations in G(i). Each agent in G(i) has complete information about the faulty nodes, so all the items destined for the fi faulty nodes can be dropped before they are sent to their correct destinations. The main concern is managing the broadcast on each channel C(i), because a node may be the destination of different agents taj(G(i)) at the same time. Therefore, the first task of this procedure plays a channel reservation role. The second task is to broadcast the items in parallel for each group G(i) on channel C(i).


Algorithm 4. Transmit_items_to_Agents(ti(G(j)))   {do in parallel for each G(j) on channel C(j)}
INPUT: set L = {Mt1, Mt2, .., MtT/k−fi} in G(j)
if Time_to_Broadcast(ti(G(j))) then
  for s ← 1, s ≤ Mti do
    Thing ti(G(j)) broadcasts items to taj(G(m))
  end for
else
  if ti(G(j)) is the agent taj(G(m)) then
    Agent taj(G(m)) copies the items into its local memory
  else
    Thing ti(G(j)) drops the items   {not destined for it}
  end if
end if

Function Time_to_Broadcast: ti(G(j)), Mti
  d ← 0
  for h ← 1, h < i, h++ do
    d ← d + Mth   {d = |Mt1| + |Mt2| + .. + |Mti−1|}
  end for
  return (d)
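A quick sketch of the Time_to_Broadcast rule used in Algorithm 4: node i starts transmitting after the cumulative memory spaces of the nodes before it, which keeps the transmissions within a group collision-free on its channel; the values below are illustrative.

# Sketch of the Time_to_Broadcast rule of Algorithm 4: node i starts after the
# cumulative memory spaces of nodes 1..i-1 in its group. Values are illustrative.

def time_to_broadcast(memory_spaces, i):
    """Start slot d = M_t1 + M_t2 + ... + M_t(i-1) for the i-th node (1-indexed)."""
    return sum(memory_spaces[: i - 1])

if __name__ == "__main__":
    group_memory = [4, 2, 5, 3]                 # M_t1..M_t4 for one group
    for i in range(1, len(group_memory) + 1):
        start = time_to_broadcast(group_memory, i)
        print(f"t{i} transmits its {group_memory[i-1]} items in slots "
              f"{start}..{start + group_memory[i-1] - 1}")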

Task 3.1. First, we determine the list Lp of the memory spaces Ma1, Ma2, .., Mak of the items in the agent nodes ta1, .., tak. Next, we can compute the exact time to broadcast them. We note that k broadcast rounds suffice to fill the list Lp = {Ma1, Ma2, .., Mak}. Now, a simple addition allows each agent taj(G(i)), 1 ≤ j ≤ k, to know the exact moment, |Ma1| + |Ma2| + .. + |Maj−1| slots, at which to broadcast. The details of the procedure are given in Algorithm 5 (Time_to_send). This task takes overall k time slots and all the agent nodes remain awake for k time slots.

Task 3.2. In this last task, we transmit the items of the agents taj(G(i)), 1 ≤ j ≤ k, at their corresponding times, one by one. The details of this procedure are given in Algorithm 6 (Broadcast_to_final_destinations). This task takes (|Ma1| + |Ma2| + .. + |Mak|) ≤ (T/k − fi) MAX(Mi) − fi MAX(Mi) = (T/k − 2fi) MAX(Mi) time slots to transmit/receive items (since this is the maximum number of items held as agent thing nodes), and all the thing nodes remain awake for at most (T/k − 2fi) MAX(Mi) time slots.

The number of time slots for completion and the maximum awake time slots for all protocol execution steps are shown in Table 1. Therefore, all steps of our fault-tolerant permutation routing protocol take overall k + k² + fa + T/k + (2T/k − 3fi) MAX(Mi) time slots, and all the thing nodes remain awake for the same number of time slots.


Algorithm 5. Procedure Time_to_send(taj(G(i)))   {do in parallel for each G(i) on channel C(i)}
OUTPUT: set Lp = {} of the memory spaces of the k agents
Procedure Time_to_send: tj(G(i)), Lp
if Time_to_Broadcast(tj(G(i))) AND tj(G(i)) is agent taj(G(i)) then
  Lp ← Lp + {Maj}   {put {Maj} in the set Lp}
  Agent taj(G(i)) broadcasts Maj in group G(i)
else
  Agent taj(G(i)) receives Mam on channel C(i)
  Lp ← Lp + {Mam}   {put {Mam} in the set Lp}
end if
End Procedure

Function Time_to_Broadcast: taj(G(i))
  d ← j − 1
  return (d)

Table 1. The maximum number of time slots for completion and awake time.

Step    Task       Max. completion time slots    Max. awake time slots
Step 2  Task 2.1   T/k                           T/k
        Task 2.2   k² + fa                       k² + fa
        Task 2.3   (T/k − fi) MAX(Mi)            (T/k − fi) MAX(Mi)
Step 3  Task 3.1   k                             k
        Task 3.2   (T/k − 2fi) MAX(Mi)           (T/k − 2fi) MAX(Mi)

7 Simulation Results

We have simulated our protocol using Python on a laptop with an Intel(R) Core(TM) i3 processor, 2.50 GHz, with 4 GB of memory. In our simulations, we compare the performance of our protocol with the theoretically proved bounds, and we compare it with another state-of-the-art protocol [3] (Datta 2005). This is shown in Fig. 3. The total number of things is 100 and we vary the number of MAX(Mi) items, respectively MIN(Mi) items, to be routed from 35 to 105 items, respectively from 5 to 80 items, with two values of the number of channels k (3 and 6). We assume that 20% of all things are faulty and fa = 0 for faulty agent nodes. For our protocol, MAX_broadcast = k + k² + T/k + 2(T/k) MAX(Mi), MIN_broadcast = k + k² + T/k + (2T/k − 3fi) MIN(Mi) and Random_broadcast = k + k² + T/k + (2T/k − 3fi) Mi, with fi = 20% of T/k and MIN(Mi) ≤ Mi ≤ MAX(Mi). These results confirm our theoretical analysis of the number of completion time slots. It is also clear that our protocol is more efficient in terms of completion time than [3].
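For reference, a small Python snippet reproducing the three bounds quoted above (the paper's simulation is itself written in Python); the parameter values follow the experiment description, and everything else is an assumption of this sketch.

# Sketch of the completion-time bounds quoted above; parameter values follow the
# experiment description (T = 100 things, 20% faulty per group), the rest is assumed.

def slots(T, k, M, fi):
    """k + k^2 + T/k + (2T/k - 3*fi) * M completion slots (with fa = 0)."""
    return k + k**2 + T / k + (2 * T / k - 3 * fi) * M

if __name__ == "__main__":
    T = 100
    for k in (3, 6):
        fi = 0.2 * (T / k)                    # 20% of the T/k nodes of a group are faulty
        max_b = slots(T, k, M=105, fi=0)      # MAX_broadcast uses MAX(Mi) and no drops
        min_b = slots(T, k, M=5, fi=fi)       # MIN_broadcast uses MIN(Mi)
        rnd_b = slots(T, k, M=50, fi=fi)      # Random_broadcast: MIN(Mi) <= Mi <= MAX(Mi)
        print(f"k={k}: MAX={max_b:.0f}, MIN={min_b:.0f}, Random={rnd_b:.0f} slots")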


Algorithm 6. Broadcast_to_final_destinations(ti(G(i)))   {do in parallel for each G(i) on channel C(i)}
INPUT: set Lp = {Ma1, Ma2, .., Mak} in G(i)   {Lp is calculated in Algorithm 5}
if Time_to_Broadcast(tan(G(i))) AND tj(G(i)) is agent tap(G(i)) then
  for s ← 1, s ≤ Man do
    Agent tap(G(i)) sends items to the final thing node
  end for
else
  if Thing node tj(G(i)) is the final destination then
    Thing node tj(G(i)) copies the items into its local memory
  else
    Thing node tj(G(i)) drops the items
  end if
end if

Function Time_to_Broadcast: tj(G(i)), Man
  d ← 0
  for h ← 1, h < n, h++ do
    d ← d + Mah   {d = |Ma1| + |Ma2| + .. + |Man−1|}
  end for
  return (d)

Fig. 3. Number of time slots for completion with k = 3 and k = 6

8 Conclusion

We have presented a simple fault-tolerant and energy-efficient reservation-based DAMA protocol for permutation routing in single-hop wireless networks of things. Our protocol performs well when the number of channels satisfies the condition k ≤ √T. Our protocol checks this condition and, if it is violated, resets the entire network.


We have shown that the fault-tolerant permutation routing problem can be solved in k + k² + fa + T/k + (2T/k − 3fi) MAX(Mi) slots, and that each station needs to be awake for at most k + k² + fa + T/k + (2T/k − 3fi) MAX(Mi) slots, where fi is the average number of faulty things in each group and fa is the number of faulty agents.

References

1. Radhakrishnan, A., Madhav, M.L.: A survey on efficient broadcast protocol for the Internet of Things. IJECS 5, 18838–18842 (2016). https://doi.org/10.18535/ijecs/v5i11.21
2. Datta, A., Zomaya, A.Y.: New energy-efficient permutation routing protocol for single-hop radio networks. In: Proceedings of the 8th International Computing and Combinatorics Conference (COCOON 2002). LNCS, vol. 2387, pp. 249–258 (2002). https://doi.org/10.1007/3-540-45655-4_28
3. Datta, A.: A fault-tolerant protocol for energy-efficient permutation routing in wireless networks. IEEE Trans. Comput. 54, 1409–1421 (2005). https://doi.org/10.1109/ipdps.2003.1213175
4. Fine, M., Tobagi, F.A.: Demand assignment multiple access schemes in broadcast bus local area networks. IEEE Trans. Comput. 33, 1130–1159 (1984). https://doi.org/10.1109/TC.1984.1676391
5. Nakano, K., Olariu, S., Schwing, J.L.: Broadcast-efficient protocols for mobile radio networks. IEEE Trans. Parallel Distrib. Syst. 10, 1276–1289 (1999). https://doi.org/10.1109/71.819949
6. Lakhlef, H., Bouabdallah, B., Raynal, M., Bourgeois, J.: Agent-based broadcast protocols for wireless heterogeneous node networks. Comput. Commun. 115, 51–63 (2018). https://doi.org/10.1016/j.comcom.2017.10.020
7. Gubbi, J., Krishnakumar, K., Buyya, R., Palaniswami, M.: Internet of Things (IoT): a vision, architectural elements, and future directions. J. Futur. Gener. Comput. Syst. 29(7), 1645–1660 (2013). https://doi.org/10.1016/j.future.2013.01.010
8. Iova, O., Theoleyre, F., Noel, T.: Using multiparent routing in RPL to increase the stability and the lifetime of the network. Elsevier Ad Hoc Netw. 19, 45–62 (2015). https://doi.org/10.1016/j.adhoc.2015.01.020

Emergency Navigation Approach Using Wireless Sensor Networks and Cloud Computing

Najla Alnabhan1, Nadia Al-Aboody2, and Hamed Al-Rawishidy2

1 Department of Computer Science, King Saud University, Riyadh, Saudi Arabia
[email protected]
2 Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
{Nadia.Al-Aboody,Hamed.Al-Raweshidy}@brunel.ac.uk

Abstract. Emergencies can happen at any time and anywhere. Governments around the world try to ensure public and private organizations' preparedness for all types of potential emergencies. They usually rely on implementing autonomous systems to deal with unpredictable emergency scenarios. This paper proposes an adaptive emergency evacuation approach based on a wireless sensor network integrated with the cloud. The proposed approach maximizes the safety of the obtained paths by adapting to the characteristics of the hazard, evacuees' behavior, and environmental conditions. It also employs an on-demand cloudification algorithm that improves the evacuation accuracy and efficiency for critical cases. It mainly handles the important evacuation issue when people are blocked in a safe, dead-end area of a building. Simulation results show an improved safety and evacuation efficiency by an average of 98% over the existing time-based and single-metric emergency evacuation approaches.

Keywords: Wireless sensor network · Clouds · Emergency navigation

1 Introduction

An emergency is a situation or condition that causes hazard to an environment, life, company, community, or property. Emergency management (EM) is vital for any organization today. It aims to create plans by which communities reduce their vulnerability to hazards and cope with disasters. It does not avert or eliminate the threats; instead, it focuses on creating plans to decrease the effect of disasters. Emergencies can be caused by several intentional or unintentional, natural or man-made acts. In most cases, emergencies are unpredictable in terms of occurrence, scope, impact, and intensity, which significantly increases their impact on safety, property, economy, infrastructure, and environment. Therefore, emergency planning, preparedness, and evacuation are quite important for safeguarding national security and the economy, to control the hazard and to provide autonomous evacuation solutions during an emergency. Emergency navigation (EN) concentrates on combining mathematical models or algorithms with the underlying sensing, communication, and distributed, real-time


computation to guide evacuees to safety in a built environment. Wireless sensor networks (WSNs) have been widely employed for environmental monitoring and control. Using WSNs, the deployed sensors collect and report results to a central repository. WSNs have been recently integrated with other communication and intelligent technologies, such as cloud computing, smartphones, and robots, in order to implement systems with more powerful, advanced, and accurate solutions. Such integration allows the efficient utilization of WSNs’ advantages and overcomes almost all WSNs’ limitations, including limited processing power, limited communication, and low accuracy for localized decisions [1–3]. The idea of integrating WSNs with cloud computing (CC) is quite promising. A typical WSN consists of a large number of low-cost, low-power, multifunctional, and resource-constrained sensor nodes. Cloud services are a powerful, flexible, and costeffective framework that provides real-time data to users with vast quality and coverage. A cloud typically consists of hardware, networks, services, storage, and interfaces that enable the delivery of computing as a service. Clouds are designed with the flexibility to withstand harsh environmental conditions. Integrating CC with WSNs allows virtualization, which facilitates the shifting of data from WSNs to a cloud. Accordingly, it also allows cost-efficient applications and service provisioning in WSNs. Using the cloud, all WSNs’ resources can be virtualized and provided as services to third parties depending on their demands. However, integration should be well-designed and modeled in order to provide efficient, robust, and scalable infrastructure for several critical applications, including emergency management. In an evacuation and emergency management context, the advantages of using a WSN include real-time oversight of complex first responses, advanced alerts, in-field data collection, communication, aggregation, collaborative processing and analysis, and configuration-dependent actuation. On the other hand, CC can provide complex/remote data and situation analysis, on-demand centralized processing, and high-performance, wide-range communication. As a result, while WSNs can be implemented as part of a short-time first-response system for rescue evacuations, CC helps in making accurate, informed, and centralized decisions as a second-response system for rescue and reconstructive evacuations. CC also offers more advanced services to be provided in the service plane for local authorities and agencies [3–6]. The trade-off between centralized decisions made remotely by the cloud and localized, distributed decisions calculated by sensor nodes is important. Centralized decisions are generally expensive in terms of time and communication costs, but they could minimize damage and fatalities, especially when localized decisions lack in making proper evacuation decisions. This trade-off is affected by many factors, including timing, intensity (or perception) of the hazard, evacuees’ behavior, and environmental conditions, all of which are considered in tackling time-critical evacuation tasks. Figure 1 shows a typical architecture for integrating WSNs with CC for emergency and evacuation management. The figure shows that WSNs act on the base plane, where low-cost sensing nodes are densely deployed in the targeted area. Sensing nodes collect and transmit data to control nodes. 
Control nodes are more capable than sensor nodes, as they have higher computation and communication capabilities. However, they are usually deployed less densely than sensor nodes to minimize the cost and


communication overhead. Connection to the cloud gateway on the middle plane is done through control nodes to tackle complex computations or provide remote information. The cloud plane is connected to the upper service (or control) plane through one or more additional gateways.

Fig. 1. A typical architecture of integrating WSNs and CC for emergency and evacuation management.

In this paper, an adaptive emergency evacuation approach that integrates cloud computing with wireless sensor networks in order to improve evacuation accuracy and efficiency is proposed. Our approach is designed to perform localized, autonomous navigation by calculating the best evacuation paths in a distributed manner using two types of sensor nodes. In addition to distributed path finding, sensor nodes identify the occurrences of a common evacuation problem that happens when evacuees are directed to safe, but dead-end areas of a building. These areas are characterized as safe because they are far from the incident, but they are also far from the exit. Eventually, these areas become no longer safe, especially if the incident is intense. When such a situation is identified, our approach employs cloudification to efficiently and carefully handle this problem. In this paper we also study and compare the performance of the proposed localized WSN-based evacuation approach to its cloudified version in terms of number of survivors, evacuation time, and efficiency. We also compare the performance of our two approaches to one of the existing, widely used evacuation approaches that relies on a distance metric to find the shortest path to the closest exit. Simulation shows both proposed approaches have improved evacuation efficiency and accuracy. The rest of this paper is organized as follows: The next section presents some of the existing related work in the literature. In Sect. 3, the design for the proposed approaches is presented. In Sect. 4, the analysis and simulation results are discussed. Finally, we conclude the paper with Sect. 5 on future work.


2 Related Work

Many emergency management and evacuation systems based on WSNs have been proposed in the literature. Reference [7] presented various types of disaster detection and management approaches using WSNs. It discusses how WSNs can be employed to provide early warning systems for natural disasters, which helps in minimizing the loss of life and property. It discusses many types of natural hazards in the Himalayan state of Uttarakhand. This area is prone to many types of disasters, including flash floods, forest fires, landslides, and earthquakes. For indoor emergency management systems, reference [8] implements a dynamic emergency system, which uses the already-installed sensors and cameras, supplemented with task-oriented sensors, in a Distributed/Wireless Sensor Network (DSN/WSN) architecture. It aims at gaining knowledge of the building's environmental conditions, such as fires. Then, it uses the collected information to dynamically generate the best paths to evacuate the premises. Path generation considers both the data provided by sensors and the nature of the emergency. Performance evaluation shows that this approach avoids and minimizes potential risk and maximizes the number of possible evacuation paths, which improves safety and evacuation time. A novel monitoring and rescue system based on a WSN is presented in [9], which combines environment monitoring, information transmission, and emergency localization. It uses a fast localization technique for searching and rescuing. Their approach performs an adaptive error correction to improve the localization accuracy of a WSN with sparse anchors. It employs the received signal strength indication (RSSI). Reference [10] proposes an autonomous community architecture for constructing a special group of routers. Each group represents a community, which includes a main route for the transmission of emergency information and a barrier which protects the transmission of emergency information from the influence of normal sensing-information transmission. Results indicate an improved real-time performance for dynamically changing situations. An approach called CANS, a Congestion-Adaptive and small-stretch emergency Navigation algorithm with WSNs, is presented in [11]. It leverages the idea of the level set method to track the evolution of the exit and the boundary of the hazardous area. In this approach, evacuees in the area near the hazard are evacuated through mildly congested but longer paths, while evacuees distant from the hazard avoid unnecessary detours. CANS also considers the situation of emergency dynamics by incorporating a local yet simple status-updating scheme. An interesting feature of their approach is that it does not require location information, nor does it rely on any particular communication model. Its effectiveness was validated through experiments and simulations.

3 The Proposed Approach

In this section, the conceptual model and the design of the proposed solution, called the adaptive real-time clouded wireless sensor network-based (ARTC-WSN) emergency navigation approach, are presented. We first model the evacuation area. Then, we show the conceptual model representing the overall behavior of the approach and discuss the design considerations and the approach design in detail.


3.1 Modeling the Evacuation Area

To model the underlying evacuation area, a building model similar to the one described in [1] has been used. A typical floor building model for the proposed approach includes two types of wireless nodes in order to sense and process the information needed to locally calculate safe paths for the evacuees.

a. Sensor nodes (SNs) are used to sense the presence of a hazard (e.g., fire, gas) and to detect the presence of evacuees in their vicinity. In other words, these nodes combine hazard and motion sensing. SNs communicate with their neighboring decision nodes to transmit the sensed data.
b. Decision nodes (DNs) act as routers that execute the ARTC-WSN approach in order to calculate the best evacuation path to guide the evacuees in the nearby area. DNs are also able to communicate with the cloud in specific conditions, especially in the case of safe, dead-end areas, high personal risks, or detected fatalities.

An assumption is made that DNs are connected to installed path signs (i.e., LCDs) in order to show the calculated evacuation directions to the evacuees. Figure 2 shows an exemplary model for the underlying area, which corresponds to the bottom-most plane in Fig. 1. The evacuation area has been divided into zones covered by SNs and DNs. SNs and DNs were deployed with alignment distances (Stepx, Stepy) and (Stepdx, Stepdy), respectively. Each zone has been covered by four DNs and at least four SNs to provide full sensing coverage of the building. The gray lines (grid) represent walls that block evacuees' movements from one area (zone or room) to another. Moving from one zone to another must be through the specified door of the zone. Two exits were suggested in this exemplary model, one at the top right corner and the other at the bottom left corner of the building. Different values of the environmental variables described above were examined and studied in the simulation.

Fig. 2. A Graphical representation of an exemplary building model where the ARTC-WSN evacuation approach could be employed.
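The deployment described above can be prototyped in a few lines of Python. The sketch below is only illustrative (the paper gives no code): the rectangular-grid placement and the alignment steps Stepx, Stepy, Stepdx, Stepdy used as arguments are assumptions consistent with Fig. 2, not the authors' exact layout.

```python
def deploy_nodes(width, height, step_x, step_y, step_dx, step_dy):
    """Place sensor nodes (SNs) and decision nodes (DNs) on regular grids
    over a rectangular evacuation area (illustrative layout only)."""
    # SNs aligned every (step_x, step_y) metres
    sns = [(x, y) for x in range(0, width + 1, step_x)
                  for y in range(0, height + 1, step_y)]
    # DNs aligned every (step_dx, step_dy) metres; four DNs bound each zone
    dns = [(x, y) for x in range(0, width + 1, step_dx)
                  for y in range(0, height + 1, step_dy)]
    return sns, dns

# Example: a 100 x 100 m area comparable to the small scenario of Sect. 4
sensor_nodes, decision_nodes = deploy_nodes(100, 100, 10, 10, 20, 20)
```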

3.2 ARTC-WSN Conceptual Model

The ARTC-WSN emergency evacuation approach is triggered when a hazard is detected. As illustrated in Fig. 3, sensor nodes periodically collect and report data on hazard source, intensity, and evacuees’ movements to the DNs. Consequently, DNs gather and combine data received from the nearby sensor nodes with the information provided by the cloud. Then, DNs employ the ARTC-WSN approach to locally find evacuation paths.

Fig. 3. Conceptual model representing the overall behavior of ARTC-WSN

The calculation of evacuation paths at DNs is done in a distributed manner through the following steps, shown in Fig. 3: Step 1. At time t, each DN (di) evaluates its nearby paths in order to calculate its safety metric, S(i,t), as follows:

WayOutIndicator(di) = distance(di, Exit) × ExitFactor    (1)

RiskIndicator(di) = distance(di, Incident) × RiskFactor × Intensity    (2)

S(i,t) = RiskIndicator(i) / WayOutIndicator(i)    (3)

where the exit factor is a scalar number in the range [1, 2] calculated as (1 + 1/number of exits), distance(di, Exit) represents the distance between the decision node di and its nearest exit, distance(di, Incident) represents the distance between di and the incident, and the intensity corresponds to the spreading out of the incident over time between any point in time t and its previous point t−1. Step 2. Each decision node di exchanges its safety metric S(i,t) with its neighboring decision nodes. Step 3. Each decision node di finds its best neighboring decision node, say dj, among its neighbors by comparing its safety metric Si with the safety metric of all neighboring decision nodes. The best neighboring decision node dj of any node di is the one that has the highest safety metric, including the node di itself. Step 4. At time t, each decision node di adjusts its controllable path signs to point toward the best decision node dj. In addition, each decision node cooperatively assesses the intensity of the hazard, personal risk, evacuation paths, and exits with its neighbors. Step 5. If high personal risk is detected, meaning the incident is closer or has reached a nearby area where an evacuee is present, decision node di communicates with the cloud to request help and update evacuation matrices. Step 6. If a dead-end area is detected, decision node di communicates with the cloud to perform reverse routing.
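A minimal Python sketch of Steps 1-3, using Eqs. (1)-(3), is given below. It is not the authors' implementation (the simulator in Sect. 4 is written in MATLAB); names such as risk_factor, neighbors, and the use of Euclidean distance are illustrative assumptions.

```python
from math import dist

def way_out_indicator(d, exits, exit_factor):
    # Eq. (1): distance from DN d to its nearest exit, scaled by the exit factor
    return min(dist(d, e) for e in exits) * exit_factor

def risk_indicator(d, incident, risk_factor, intensity):
    # Eq. (2): distance from DN d to the incident, scaled by risk factor and intensity
    return dist(d, incident) * risk_factor * intensity

def safety_metric(d, exits, incident, risk_factor, intensity):
    exit_factor = 1 + 1 / len(exits)              # scalar in the range [1, 2]
    # Eq. (3): safety grows with the distance from the hazard and shrinks
    # as the distance to the nearest exit grows
    return (risk_indicator(d, incident, risk_factor, intensity)
            / way_out_indicator(d, exits, exit_factor))

def best_neighbor(i, position, neighbors, exits, incident, risk_factor, intensity):
    # Steps 2-3: each DN compares its own metric with its neighbors' metrics
    candidates = [i] + list(neighbors[i])
    return max(candidates,
               key=lambda j: safety_metric(position[j], exits, incident,
                                           risk_factor, intensity))
```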

3.3 Cloudification Phase

In our model, all decision nodes periodically report important information to the cloud about the hazard and the progress of the evacuation. When high personal risk is detected, either by the cloud or locally by a decision node, the cloud acts either to rescue people in that area or to adjust evacuation metrics at specific decision nodes. The cloud also plays another important role in evacuating people from safe dead-end areas detected by any decision node di. Safe dead-end conditions occur when evacuees cannot be moved from their current location toward the exit because all other nearby areas are considered by the surrounding decision nodes to be more dangerous than their current location. These safe dead-end areas are safe at that moment, say at time t; however, the evacuees are, in fact, considered endangered because they have not been evacuated or reached the exit yet. Because DNs are performing in a distributed, localized manner without human interaction, they cannot optimally solve this problem, which sometimes requires a global view of the whole evacuation area and communication with some authorities. In this situation, the cloud server executes the reverse routing phase of our approach (ARTC-RR), which attempts to find a route from the safest exit to the dead-end point, where the evacuees are located, in order to find the best (shortest and safest) evacuation path for people in such areas. Figure 3 shows the interoperability between the ARTC-WSN phase and the cloudification phase.


ARTC-RR path finding is a fast, greedy algorithm that executes the following steps: Step 1: When a dead-end point is detected by a decision node di, it communicates with the cloud to request help in updating evacuation matrices and executing reverse routing. Step 2: The cloud locates the exit nearest to the dead-end with the highest safety metric as the starting point of the reverse routing. Step 3: Given the safety metric Sj and the location of all decision nodes j, the cloud adds as the next hop the decision node that is closer to the located dead-end area and has the highest safety among the other alternatives. Step 4: ARTC-RR keeps adding the safest next hop towards the dead-end. Step 5: It terminates when the dead-end area is reached. Accordingly, all DNs along this path adjust their path signs based on the calculated reverse path. An important characteristic provided by ARTC-RR is that the evacuation matrices and calculated paths provided by the cloud cannot be overwritten locally by DNs. This characteristic avoids recreating dead-end areas and leading evacuees to these areas. When changes are needed locally, DNs communicate with the cloud to get updates, if any. This guarantees the avoidance of any possible conflict between distributed decisions calculated cooperatively by DNs and global decisions calculated remotely by the cloud. Another interesting characteristic of this approach is that it performs cloudification only on demand, when high personal risk and dead-end points are detected, which eliminates the communication cost and delay incurred by centralization in normal situations.
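The greedy construction of the reverse path can be sketched as follows. This is a simplified reading of Steps 1-5 above under the assumptions of Euclidean distances and a known neighbor table; the early exit when no progress is possible is an added safeguard, not part of the paper.

```python
from math import dist

def reverse_route(dead_end, exit_dns, safety, position, neighbors):
    """Greedy sketch of the cloud-side reverse routing: start from the
    exit-side DN with the highest safety metric and keep adding the
    safest neighbor that moves closer to the dead-end."""
    current = max(exit_dns, key=lambda e: safety[e])        # Step 2
    path = [current]
    while current != dead_end:                              # Step 5: stop at the dead-end
        closer = [j for j in neighbors[current]
                  if dist(position[j], position[dead_end])
                  < dist(position[current], position[dead_end])]
        if not closer:          # safeguard added here; not described in the paper
            return None
        current = max(closer, key=lambda j: safety[j])      # Steps 3-4
        path.append(current)
    return list(reversed(path))  # signs along the path point from dead-end to exit
```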

4 Simulation Setup and Results In order to study and analyze the performance of the proposed approaches, we used our event-driven simulator, presented in [12]. It was implemented using MATLAB R2012a. This section presents the design and implementation of the simulation experiment. It discusses the different simulation scenarios, parameters, and performance factors. It also analyzes the most significant performance results in terms of survival percentage, evacuation time, and number of fatalities.

4.1 Simulation Design and Setup

A number of simulation variables are considered here in a way that mimics real-life problems, including the location and intensity of the hazard, the number of evacuees, the evacuation area, and the exits. The results presented in this paper represent an average of 30 simulation runs with different levels of randomness. Hazard location has a substantial impact on the performance of emergency navigation algorithms. A well-designed evacuation approach is one that predicts the path safety with respect to hazard location. Hazard intensity is also important for any evacuation approach in order to distinguish between different forms and intensities of hazards.


In this simulation, four different intensity values were considered to represent the intensity of incident changes (3, 5, 7, and 9) in order to assess the behavior of the proposed approaches under hazards with minor and major impacts. An intensity of 9, for example, means the hazard expands by 9 units (i.e., meters) per unit of time (i.e., seconds) in all directions. The number of evacuees allows a comparison of the performance of different approaches under different evacuee densities. For the evacuation area, the performance of the proposed approach was studied in small, moderate, and large evacuation areas. More specifically, the performance was evaluated for a small evacuation area of 100 × 100 m, a moderate area of 200 × 200 m, and a large area of 300 × 300 m. For exit availability, as illustrated in the previous section, the proposed area was assumed to mimic the model in [1], which had two exits. In addition, another model was examined in experiment 3 to analyze the behavior of the proposed algorithm in situations where only one exit was available. Based on the above-described scenarios and simulation variables, the performance of the proposed approach is compared to one of the evacuation approaches presented in [13], which is a modified Dijkstra's shortest path (DSP) algorithm with time and distance metrics. The performance was examined in different experiments to investigate the potential improvements offered by the proposed algorithms. We describe these averaged results in terms of different performance metrics, including: (1) the overall survival rate (or percentage of survivals); (2) the evacuation time, which is the time taken to evacuate the entire civilian population from the hazardous area and locate them in safe zones where help could be provided; and (3) civilian casualties or fatalities (the number of dead civilians).

4.2 Simulation Results

Figures 4, 5, 6, and 7 illustrate the average evacuation time for the different evacuation areas with hazard intensities of 3, 5, 7, and 9, respectively. Figure 8 shows the average number of fatalities for small, moderate, and large-scale evacuation areas for different hazard intensities. As shown in the figures, our approaches maintained the lowest death rate compared to the DSP. This behavior indicates that the performance of the proposed approaches is stable across different evacuation areas. The results also show that when the hazard intensity was high and the area was large, the DSP had the highest death rate compared to the ARTC-WSN and the ARTC-RR.

Fig. 4. Comparison of average evacuation time for different evacuation areas when hazard intensity = 3.

Fig. 5. Comparison of average evacuation time in different evacuation areas when hazard intensity = 5.


Fig. 6. Comparison of average evacuation time in different evacuation areas when hazard intensity = 7.


Fig. 7. Comparison of average evacuation time in different evacuation areas when hazard intensity = 9.

Fig. 8. Comparison of number of fatalities of different approaches in different evacuation areas when number of evacuees = 300.

In experiment 4, the behavior of the three approaches was tested for evacuee densities of 100, 300, and 500 evacuees when the hazard was randomly located with an intensity of 3. The results show that the ARTC-RR had the highest performance in terms of saving the largest number of evacuees, as seen in Table 1. However, with a larger number of evacuees, a longer time was required to navigate all evacuees to the exit, as shown in Fig. 9. The ARTC-WSN also had a high survival rate, reaching 97%, and an acceptable evacuation time compared to the DSP, which had the lowest survival rate and the highest death rate. Figure 10 illustrates the death rate of the three approaches in experiment 4.

Table 1. The average percentage of survivals when the number of evacuees ranged between 100 and 500.

Number of evacuees | ARTC-WSN | ARTC-RR | Dijkstra
100 | 96% | 97% | 91%
300 | 97% | 98% | 93%
500 | 97% | 97% | 94%


Fig. 9. Comparison of evacuation time when number of evacuees ranged between 100–500.

Fig. 10. Comparison of number of fatalities when number of evacuees ranged between 100–500.

Experiment 5 was designed to study the performance of the proposed approaches when a smaller number of exits was available or when one or more of the exits was blocked by a hazard. The evacuation area considered was 100 × 100 m and was occupied by 300 evacuees. In such severe circumstances, the highest priority is to evacuate the civilians with the minimum death rate. Under these circumstances, the simulation showed that the ARTC-RR had the highest survival rate and, hence, the lowest death rate, as shown in Table 2 and Figs. 11 and 12. The ARTC-RR achieved good performance because it gives priority to the safety of the path over the speed of evacuation. The DSP had the same fixed evacuation time and the highest death rate because it gives priority to the evacuation time over the safety of the path.

Table 2. The average percentage of survivals for different numbers of exits.

Exit availability | ARTC-WSN | ARTC-RR | Dijkstra
Two exits | 96% | 97% | 91%
One exit | 79% | 86% | 81%

Fig. 11. Comparison of evacuation time for different exit availabilities.

Fig. 12. Comparison of death ratio for different exit availabilities for experiment 5.


To conclude, in comparison with the DSP, our proposed approaches had overall higher survival rates as a result of their ability to tailor paths to evacuees with respect to the safety of the path, leading them farthest from the hazard. This reflects that the use of cloud communication in severe cases, where evacuees are trapped in safe dead-ends, has a positive impact on the performance of the algorithm, because cloud-centralized reverse routing can generate a safe distance between evacuees and the spreading hazard. Although the proposed approaches took a slightly longer time to evacuate all individuals, they consider the safety of the evacuation paths as higher in priority than the speed of the evacuation process. Thus, the proposed approaches might guide an evacuee along longer paths to avoid zones at higher risk. The main aim of the proposed approaches is to find the best (safest) path available, not the fastest path, unlike the DSP. Therefore, the DSP has a higher death rate mainly because it does not adapt to real-time changes in the hazard location and always directs evacuees to the nearest exit, searching for the fastest path with no prior hazard calculation.

5 Conclusions and Future Work This paper proposed simulation-based, real-time routing algorithms to increase the survival rate of an evacuation process. We employed a localized WSN-based solution to perform navigation and to predict safe dead-end problems. In addition, this paper proposed a cloud-based approach to address this problem remotely in a centralized manner in order to find the optimal evacuation paths for civilians. Moreover, a fire model was used to predict the hazard spread, and the calculation of safe routes was based on the initial distribution of evacuees, the distance from the hazard, the distance to the exit, and the intensity of the hazard. These factors differentiate our approaches from the existing algorithms, including the DSP, which normally calculate the fastest path regardless of the safety of the path. Our experimental results show that the proposed algorithms outperformed the existing well-known DSP approach with time and distance metrics in terms of survival rate and evacuation efficiency. In the future we plan to study the performance of our approaches with multiple hazards occurring in the building or evacuation area. Acknowledgment. The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through the Research Project No R5-16-01-0.

References 1. Akinwande, O.J., Bi, H., Gelenbe, E.: Managing crowds in hazards with dynamic grouping. IEEE Access 3, 1060–1070 (2015) 2. Barrenetxea, G., Ingelrest, F., Schaefer, G., Vetterli, M.: Wireless sensor networks for environmental monitoring: the sensorscope experience. In: 2008 IEEE International Zurich Seminar on Communications, pp. 98–101, March 2008 3. Wang, W., Lee, K., Murray, D.: Integrating sensors with the cloud using dynamic proxies. In: Proceedings of IEEE 23rd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), pp. 1466–1471 (2012)


4. Perumal, B., Rajasekaran, M.P., Ramalingam, H.M.: WSN integrated cloud for automated telemedicine (ATM) based e-healthcare applications. In: Proceedings of the 4th International Conference on Bioinformatics and Biomedical Technology (IPCBEE 2012), vol. 29, pp. 166–170, February 2012 5. Ahmed, K., Gregory, M.: Integrating wireless sensor networks with cloud computing. In: 2011 Seventh International Conference on Mobile Ad-hoc and Sensor Networks, Beijing, pp. 364–366 (2011) 6. Qiu, M., Ming, Z., Wang, J., Yang, L.T., Xiang, Y.: Enabling cloud computing in emergency management systems. IEEE Cloud Comput. 1(4), 60–67 (2014) 7. Pant, D., Verma, S., Dhuliya, P.: A study on disaster detection and management using WSN in Himalayan region of Uttarakhand. In: 2017 3rd International Conference on Advances in Computing, Communication & Automation (ICACCA) (Fall), Dehradun, pp. 1–6 (2017) 8. Munoz, J.A., Calero, V., Marin, I., Chavez, P., Perez, R.: Adaptive evacuation management system based on monitoring techniques. IEEE Lat. Am. Trans. 13(11), 3621–3626 (2015) 9. Lu, M., Zhao, X., Huang, Y.: Fast localization for emergency monitoring and rescue in disaster scenarios based on WSN. In: 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, pp. 1–6 (2016) 10. Wei, F., Zhang, X.: Autonomous community architecture for emergency information’s transmission. In: 2015 Sixth International Conference on Intelligent Systems Design and Engineering Applications (ISDEA), Guiyang, pp. 167–170 (2015) 11. Wang, C., Lin, H., Jiang, H.: CANS: towards congestion-adaptive and small stretch emergency navigation with wireless sensor networks. IEEE Trans. Mob. Comput. 15(5), 1077–1089 (2016) 12. Al-Nabhan, N., Al-Aboody, N., Rawishidy, H.: Adaptive wireless sensor network and cloudbased approaches for emergency navigation. In: Proceedings of IEEE LCN 2017, Singapore, October 2017 13. Bi, H., Gelenbe, E.: Cloud enabled emergency navigation using faster-than-real-time simulation. In: 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), pp. 475–480, March 2015

Security Scheme for IoT Environments in Smart Grids Sebastián Cruz-Duarte, Marco Sastoque-Mahecha, Elvis Gaona-García, and Paulo Gaona-García Faculty of Engineering, Universidad Distrital Francisco José de Caldas, Bogotá, D.C., Colombia {scruzd,mssastoquem}@correo.udistrital.edu.co, {egaona,pagaonag}@udistrital.edu.co

Abstract. The following paper proposes a security scheme applied to Smart Grids, using different security mechanisms to comply with confidentiality, authentication, and integrity aspects in a grid implemented with Raspberry Pi 3 nodes. The study presents the evaluation of different encryption modes to establish the final parameters in the construction of a security scheme, satisfying the requirements specified in NTC 6079 for smart grid infrastructure, based on a metric comparison developed over various performance criteria.

Keywords: Cyber security · Confidentiality · Integrity · Authentication · Smart grids · IoT · Encryption

1 Introduction As for IoT and Smart Grid technologies, security is a key factor due to the type of information handled and the existing vulnerabilities in the communication protocols. New concepts that have appeared in the world of information in recent years, such as the Internet of Things (IoT) and smart grids in general, entail the redefinition of standards for the coupling of new functions associated with the world of information and communication. This set of standards includes security in data transmission, as it plays a fundamental and basic role in the acceptance of new technological approaches. Smart Grids aim to improve the capabilities of today's grids by seeking to improve three fundamental aspects: capacity, reliability, and efficiency. However, this integration creates a new set of vulnerabilities caused by cyber intrusion and corruption, which can lead to devastating physical effects and large economic losses. Based on the set of technical standards described in NTC 6079 [1], three basic security principles are defined to be considered when designing a security system in electricity distribution networks, namely: confidentiality, integrity, and authentication. Based on these requirements, the following study presents the development of a security scheme implemented in Python that is applicable to IoT and Smart Grid environments based on the TCP/IP model as a case study. The system was implemented on a communication environment composed of several Raspberry Pi 3 devices with a Quad Core


1.2 GHz Broadcom BCM2837 64 bit CPU (ARM v7 rev 4 [v71]) processor, 1 GB RAM memory, and Raspbian - Linux Raspberry 4.4.38-v7+ operating system. The rest of the article is organized as follows. Section 2 presents related research on the implementation of security mechanisms on IoT environments. Section 3 presents the methodology proposed for its development. Section 4 presents the security scheme defined for our study. Section 5 presents the results obtained. Section 6 analyzes the results obtained. Finally, Sect. 7 presents the conclusions and future research.

2 Related Research Several works have been carried out to determine the advantages and disadvantages of different information encryption algorithms. Khalid, Rihan, and Osman [2] propose a comparative study of the AES and DES encryption algorithms on MAC and Windows platforms. The results show that the AES algorithm achieved an average data processing rate of 27.76 Kb/sec on the Windows platform and 31.65 Kb/sec on the MAC platform; this performance was higher than that obtained with the DES algorithm, whose data processing rates were approximately 10.13 Kb/sec and 31.65 Kb/sec, respectively. Rani and Mittal [3] seek to enhance the performance of the AES algorithm by using artificial intelligence techniques to adapt it to different development approaches within security systems. Laue et al. [4] focus on the use of a CSRNG to highlight the advantages of AES over other algorithms. Mahajan and Sachdeva [5] follow the same idea, as they determined the superior performance potential of the AES algorithm over the DES and RSA algorithms through various simulation processes; this hypothesis was also developed by Alahmadi et al. [6]; Liu and Baas [7]; Masoumi and Hadi [8]; Liu, Xu, and Yuan [9]; and Hun, Hee, and Hong [10]. Sarika et al. [11] identified fundamental characteristics in the design of security systems: confidentiality, integrity, authentication, non-repudiation, and anonymity. Based on these security requirements, there are studies describing different attack models, such as those conducted by Biryukov and Khovratovich [12] and Eder-Neuhauser et al. [13]. In these studies, the authors analyze different models of security breaches which serve as a basis for preparing countermeasures against attacks on Smart Grids.

3 Methodology The research methodology is based on an experimental and applied method. The process comprises a first stage of study of variables, in which the main qualitative and quantitative variables for the metric evaluation of the security scheme are established based on the defined requirements. The next stage is the choice of an experimental design and of the components of the system, with experimental replication of the proposed scheme on the case study. After this comes a stage of data collection and processing, to identify, through a parametric analysis of the defined variables, the final design of the system and its different parameters, with a successive evaluation of the proposed scheme's performance. Finally, an analysis of the experimental results is presented and compared to similar approaches.


Before presenting the scheme, the general description of the operation used to carry out the development of the security scheme is given below.

3.1 General Description of the Scheme

The proposed scheme aims to meet the requirements of confidentiality, authentication, and integrity of information in a grid where the sending of messages is essential for its functioning. The scheme is based on a client-server architecture, where each node is represented by a Raspberry Pi 3. This is done to centralize the distribution of keys for the encryption and decryption of messages between the nodes of the grid, as shown in Fig. 1.

Fig. 1. General architecture of the proposed security scheme.

In order to determine the validity of this communication model, the following series of stages is proposed: (1) IP connectivity verification. Initially, the IP address of the client sending the request is verified against a whitelist on the server, which specifies each IP address authorized to make encryption key requests to the server. All connections from IP addresses that are not in the list are rejected. (2) Implementation of encryption algorithms. The confidentiality of the information sent and received by each node in the grid is provided by applying the AES symmetric encryption algorithm to encrypt and decrypt the messages sent between the nodes, and by using digital signatures and encryption through the asymmetric RSA algorithm. (3) Communications channel encryption. This is solved through the application of TLS/SSL with a security certificate that allows all the data passing through the communications channel to be encrypted. (4) Verification of the integrity of the communications channel. In order to comply with the data integrity requirement, HMAC was applied;


this code works with a message authentication code (MAC) to ensure, by means of a hash function and a key, that the message sent has not been intercepted or modified on the way to its destination. Finally, once the previous work phases have been completed, all activity is recorded in a log file that can be found on both the client and the server.

3.2 Message Flow in the System

Figure 2 shows the message flow in the system between the client and the server.

Fig. 2. Message flow in the security scheme.

(1) The client sends a request to the server for the two keys: the key of the symmetric AES algorithm, required to encrypt the message, and the HMAC key, used to verify the integrity of the sent message. This request message is digitally signed by the client. (2) The server verifies that the IP from which the request is being made is in its whitelist. If the IP is verified, the system flow continues. Otherwise, the request is rejected and the connection is closed. (3) The server verifies the digital signature of the client request message. (4) The server encrypts the requested keys with the client's public key through the RSA asymmetric algorithm and sends them to the client in a message that is digitally signed. (5) The client verifies the digital signature of the message sent by the server. (6) The client decrypts the received message containing the keys with its private key using the RSA algorithm, encrypts the data to be sent with the AES key received, and generates the message authentication code (MAC) through HMAC with the second key received. (7) Based on the encrypted message and the MAC generated in the previous step, a message that is digitally signed is created and sent to the server. (8) The server verifies the digital signature of the message sent by the client.


(9) The server verifies the integrity of the message received through the MAC generated by the client and finally decrypts the sent message. Each step described above is recorded in a log file located on both the server and the client. In case the verification of the digital signature of any of the messages fails, the connection is closed.

4 Proposed Security Scheme

4.1 Authentication

One of the ways to guarantee the authentication of an entity is through digital signatures and a whitelist, which consists of a registry of entities that have access to a service, privilege, or other resource. It is possible to ensure that each client in the grid is authenticated by the server and vice versa by applying these two methods. In most cases, digital signatures are used to ensure the integrity and authentication of messages, and their applications range from email to bank funds transfers [18]. RSA is an asymmetric cryptographic system, the first of its kind, named after its creators Ronald Rivest, Adi Shamir, and Leonard Adleman [19]. Within the system, each node in the grid has its own RSA key pair, and the client knows the public key of the server and vice versa. With its own private key, each sender signs the message to be sent, and the receiver, knowing the sender's public key, the message signature, and the signed message, verifies the message received, as specified in RFC 8017 [20]. Figure 3 shows the process of signature and verification.

Fig. 3. Signature scheme and verification of digital signatures.

A whitelist is a cyber security concept used to ensure the authentication of users or entities and to control access to services. The proposed system implements a whitelist of IP addresses that have permission and are authorized to make requests for security keys and send messages to the server. This whitelist is encrypted and used every time the server receives a message.
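A minimal sketch of the signature and whitelist checks in Python is shown below. The paper states a Python implementation but does not name the cryptographic library; PyCryptodome and the listed IP addresses are assumptions for illustration only.

```python
from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256

WHITELIST = {"192.168.1.10", "192.168.1.11"}   # authorized client IPs (example values)

def is_authorized(client_ip: str) -> bool:
    # Reject every connection whose source IP is not registered in the whitelist
    return client_ip in WHITELIST

def sign(message: bytes, private_key: RSA.RsaKey) -> bytes:
    # RSASSA-PKCS1 v1.5 signature over the SHA-256 digest of the message
    return pkcs1_15.new(private_key).sign(SHA256.new(message))

def verify(message: bytes, signature: bytes, public_key: RSA.RsaKey) -> bool:
    try:
        pkcs1_15.new(public_key).verify(SHA256.new(message), signature)
        return True
    except (ValueError, TypeError):
        return False

# Example usage
key = RSA.generate(2048)
sig = sign(b"key request", key)
assert verify(b"key request", sig, key.publickey())
```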

4.2 Integrity

Integrity in cyber security is the ability to guarantee the flow of information without alterations, so that the structure of the messages received is exactly the same as that of the messages sent. For this purpose, hash functions are used to generate authentication codes [14]. HMAC is a message authentication code obtained by means of a cryptographic hash function and a secret key [15]; for the specific case of this study, the selected hash function is SHA-256, one of the most recommended because of its performance [16] (see Fig. 4).

Fig. 4. Transformation scheme with SHA 256.
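An HMAC-SHA256 generation and verification pair could look like the following sketch, again assuming PyCryptodome; key handling and message framing are simplified for illustration.

```python
from Crypto.Hash import HMAC, SHA256

def make_mac(key: bytes, message: bytes) -> str:
    # MAC = HMAC-SHA256(key, message), sent alongside the encrypted message
    return HMAC.new(key, message, digestmod=SHA256).hexdigest()

def check_mac(key: bytes, message: bytes, received_mac: str) -> bool:
    # The receiver recomputes the MAC and compares it with the received one
    h = HMAC.new(key, message, digestmod=SHA256)
    try:
        h.hexverify(received_mac)
        return True
    except ValueError:
        return False
```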

4.3 Confidentiality

This security requirement is met by applying the TLS/SSL protocol and the AES and RSA algorithms. The TLS/SSL protocol is a basic element of the confidentiality service in security systems [17]. SSL certificates are used to ensure the main characteristics of confidentiality in the transmission of information. In the proposed security scheme, AES is used with a 256-bit key to encrypt and decrypt the messages that the nodes of the grid send to each other. Hybrid cryptography is used to distribute the AES and HMAC keys from the server to the clients. Its operation is described below: (1) The server encrypts the AES key by means of the RSA algorithm with the client's public key. (2) The server sends the encrypted key to the client. (3) The client receives the key and decrypts it through RSA with its private key. (4) Finally, the client encrypts the message to be sent through AES with the key it received from the server. This is intended to ensure that the distribution of the AES algorithm key is secure and confidential.
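The hybrid key distribution and the AES encryption of the payload, as described in steps (1)-(4), can be sketched as follows. This is not the authors' code: the PyCryptodome library, OAEP padding for the RSA key wrapping, and the 16-byte IV handling are assumptions, since the paper does not specify them.

```python
import os
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP

# --- Server side: wrap a fresh 256-bit AES key for one client ---------------
def wrap_aes_key(aes_key: bytes, client_public_key: RSA.RsaKey) -> bytes:
    # Step 1: encrypt the symmetric key with the client's RSA public key
    return PKCS1_OAEP.new(client_public_key).encrypt(aes_key)

# --- Client side: unwrap the key and encrypt the payload with AES-OFB -------
def unwrap_aes_key(wrapped_key: bytes, client_private_key: RSA.RsaKey) -> bytes:
    # Step 3: recover the symmetric key with the client's RSA private key
    return PKCS1_OAEP.new(client_private_key).decrypt(wrapped_key)

def encrypt_message(aes_key: bytes, plaintext: bytes):
    # Step 4: AES-256 in OFB mode, the mode retained in Sects. 5 and 6
    iv = os.urandom(16)
    ciphertext = AES.new(aes_key, AES.MODE_OFB, iv=iv).encrypt(plaintext)
    return iv, ciphertext

def decrypt_message(aes_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    return AES.new(aes_key, AES.MODE_OFB, iv=iv).decrypt(ciphertext)

# Example round trip
client_key = RSA.generate(2048)
aes_key = os.urandom(32)                                  # 256-bit AES key
wrapped = wrap_aes_key(aes_key, client_key.publickey())
iv, ct = encrypt_message(unwrap_aes_key(wrapped, client_key), b"meter reading: 42 kWh")
assert decrypt_message(aes_key, iv, ct) == b"meter reading: 42 kWh"
```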


5 Results Obtained

5.1 Performance in RSA and AES

The choice of the encryption algorithm to meet the needs of the security scheme is based on the performance comparison between AES and RSA. Figure 5 shows the difference between the AES and RSA algorithms in terms of performance. Since AES is much faster, it is the selected algorithm.

Fig. 5. AES symmetric encryption OFB vs. asymmetric encryption RSA (encryption process).

To define the AES algorithm scheme to be used, six operating modes (ECB, CFB, CBC, OFB, CTR, and OPENPGP) are compared (see Fig. 6) within a range of values from 20 bytes to 400 bytes. Figure 6 shows that the OFB encryption mode has the highest performance for the studied range. Equation (1) defines the behavior of the OFB mode in relation to the data length, which is linear:

t = 2.08 × 10^−6 b + 2.18 × 10^−6    (1)

where t represents the time in seconds of message encryption and b represents the size in bytes of the encrypted data. Following the procedure used for encryption, the analysis of the decryption process is done for the range from 20 to 400 bytes. According to the graph that represents the behavior of the algorithm (see Fig. 7), the behavior is similar to that observed in the encryption process, so the OFB mode has the highest performance for small message lengths.
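A simple timing harness of the kind used to produce Figs. 6 and 7 could be written as below. It is a sketch, not the authors' benchmark: PyCryptodome is assumed, the OPENPGP mode is omitted for brevity, and the iteration count is arbitrary.

```python
import os
import time
from Crypto.Cipher import AES

KEY = os.urandom(32)            # 256-bit key, as in the proposed scheme

def new_cipher(name: str):
    if name == "ECB":
        return AES.new(KEY, AES.MODE_ECB)
    if name == "CTR":
        return AES.new(KEY, AES.MODE_CTR, nonce=os.urandom(8))
    mode = {"CBC": AES.MODE_CBC, "CFB": AES.MODE_CFB, "OFB": AES.MODE_OFB}[name]
    return AES.new(KEY, mode, iv=os.urandom(16))

def avg_encrypt_time(name: str, size: int, iterations: int = 2000) -> float:
    data = os.urandom(size)
    if name in ("ECB", "CBC"):                    # block modes need padded input
        data = data + b"\x00" * (-len(data) % 16)
    start = time.perf_counter()
    for _ in range(iterations):
        new_cipher(name).encrypt(data)
    return (time.perf_counter() - start) / iterations

for size in (20, 100, 200, 400):                  # message lengths studied in the paper
    print(size, {m: avg_encrypt_time(m, size) for m in ("ECB", "CBC", "CFB", "OFB", "CTR")})
```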


Fig. 6. AES encryption modes for small data lengths

Fig. 7. AES decryption modes for small data lengths


Equation (2) describes the decryption process for small data lengths using the OFB mode:

t = 4.5 × 10^−6 b − 3.01 × 10^−5    (2)

Equation (3) describes the behavior in terms of speed of the combined process of encrypting and decrypting the information:

t = 4.15 × 10^−6 b − 4.31 × 10^−6    (3)

6 Analysis of the Results According to the parameterization of the encryption system for the confidentiality section within the designed security scheme, the AES encryption algorithm is the symmetric algorithm responsible for meeting the security needs based on its significantly superior performance compared to RSA. As for AES, a higher efficiency in the encryption process is determined with the use of the OFB mode; the approximate performance is 511,688.9127 bytes/sec. for message lengths of less than 100 bytes (see Fig. 8), while in the decryption process its performance is 517,570.3913 bytes/sec. for messages of the same length. When sending messages of longer length, the ECB mode has shorter processing times than those obtained with the OFB method. However, due to the security system requirements, the ECB mode is ignored.

Fig. 8. Behavior of the AES-OFB encryption and decryption process

278

S. Cruz-Duarte et al.

Using the security scheme proposed through the analysis of the unitary specifications of confidentiality, integrity, and authentication, it is possible to determine the design parameters of the security system: the AES encryption algorithm in OFB mode, the SHA-256 hash function, the use of the SSL protocol, and a security certificate with digital signature. The system behavior for the total process of data processing and transfer is constant because the processing times for small information lengths are practically insignificant compared to the data transfer time between the system nodes. The final average performance obtained for the encryption and decryption process with the AES algorithm in OFB mode is approximately 511,688.9127 bytes/sec. and 517,570.3913 bytes/sec., respectively. As for the transmission of data between the nodes of the grid, including all the processes specified within the study, the performance of the system is 213.43 bytes/sec., which is determined mainly by the data transmission times. Within the grid architecture scheme in which the security system is developed, the processing times are very short, so for small data ranges the information transmission time is not significantly altered; its behavior is constant due mainly to the data transfer time, as shown in Fig. 9.

Fig. 9. Behavior of the system when sending information in relation to the change in the length of data sent

The proposed design considers the development of the security scheme in an environment where messages of no more than 100 bytes are sent. Thus, the estimated processing time based on the function determined in (3) is approximately 3.88 × 10^−4 sec. with the use of the OFB encryption mode in AES, which presents a higher


performance than RSA encryption in the tests performed. For the message lengths specified within the context of this paper, the final observed performance of the security scheme, including confidentiality, integrity, authentication, and data transfer services, is approximately 213.43 bytes/sec. However, this performance is determined to a large extent by the characteristics of the information channel and the infrastructure of the grid. The performance in terms of encryption and decryption of information is represented by an approximate value of 257,306.4239 bytes/sec.

6.1 Comparison to Other Designs

Few papers have used similar approaches in which different security mechanisms are combined; however, there are some studies that can be compared in related aspects, such as [21] and [22], where AES is studied. In [21] an AES implementation is applied with a final performance of around 434,000 bytes/sec for the encryption process and 482,000 bytes/sec for the decryption process. Therefore, the implemented security scheme is approximately 17.74% and 7.05% faster for the encryption and decryption processes, respectively, using the AES OFB mode. Compared to the experimental results shown in [22], our scheme is significantly faster using a similar model for AES data encryption. However, the experimental scenarios are distinct because the software platforms used in that paper are Windows and Mac.

7 Conclusions and Future Research The experimental results of this paper show that the message flow performance in the case study is not significantly affected by the proposed security scheme; therefore, it is suitable for smart grid and IoT environments. The results also show that AES is the most efficient of the compared cipher algorithms and that its OFB mode is faster than the other modes in execution time for data lengths of no more than 100 bytes. Finally, the security scheme satisfies the security requirements specified in NTC 6079 (integrity, authentication and confidentiality) through the implementation of SSL certificates, symmetric and asymmetric encryption algorithms, hash functions and digital signatures. As future research, the scope of the proposed scheme should be extended to include the security principle of availability in order to meet the need to protect information in the face of the imminent growth in the use of IoT technologies. Another proposed future study concerns the size of the data processed and transmitted within the case study of this paper, in order to expand the application scenarios to which the proposed security scheme can be adapted. Acknowledgment. This work was supported by COLCIENCIAS with the project entitled “Low and Medium capacity battery charger with low current THD, high power factor and high efficiency for electric vehicles” and GITUD research group.


References 1. NTC 6079: Requisitos para sistemas de infraestructura de medición avanzada (ami) en redes de distribución de energía eléctrica. ICONTEC 2. Rihan, S.D., Khalid, A., Osman, S.E.F.: A performance comparison of encryption algorithms AES and DES. Int. J. Eng. Res. Technol. IJERT 4(12), 151–154 (2015) 3. Rani, H.M.S., Mittal, D.H., Director, S.: A compound algorithm using neural and AES for encryption and compare it with RSA and existing AES. J. Netw. Commun. Emerg. Technol. JNCET 3(1) (2015) 4. Laue, R., Kelm, O., Schipp, S., Shoufan, A., Huss, S.A.: Compact AES-based architecture for symmetric encryption, hash function, and random number generation. In: International Conference Field Programmable Logic and Applications, FPL 2007, pp. 480–484 (2007) 5. Mahajan, P., Sachdeva, A.: A study of encryption algorithms AES, DES and RSA for security. Glob. J. Comput. Sci. Technol. (2013) 6. Alahmadi, A., Abdelhakim, M., Ren, J., Li, T.: Defense against primary user emulation attacks in cognitive radio networks using advanced encryption standard. IEEE Trans. Inf. Forensics Secur. 9(5), 772–781 (2014) 7. Liu, B., Baas, B.M.: Parallel AES encryption engines for many-core processor arrays. IEEE Trans. Comput. 62(3), 536–547 (2013) 8. Masoumi, M., Rezayati, M.H.: Novel approach to protect advanced encryption standard algorithm implementation against differential electromagnetic and power analysis. IEEE Trans. Inf. Forensics Secur. 10(2), 256–265 (2015) 9. Liu, Q., Xu, Z., Yuan, Y.: High throughput and secure advanced encryption standard on field programmable gate array with fine pipelining and enhanced key expansion. IET Comput. Digit Tech. 9(3), 175–184 (2015) 10. Baek, C.H., Cheon, J.H., Hong, H.: White-box AES implementation revisited. J. Commun. Netw. 18(3), 273–287 (2016) 11. Sarika, S., Pravin, A., Vijayakumar, A., Selvamani, K.: Security issues in mobile ad hoc networks. Procedia Comput. Sci. 92, 329–335 (2016) 12. Biryukov, A., Khovratovich, D.: Related-key cryptanalysis of the full AES-192 and AES256. In: International Conference on the Theory and Application of Cryptology and Information Security, pp. 1–18 (2009) 13. Eder-Neuhauser, P., Zseby, T., Fabini, J., Vormayr, G.: Cyber attack models for smart grid environments. Sustain. Energy Grids Netw. 12, 10–29 (2017) 14. Krawczyk, H., Canetti, R., Bellare, M.: HMAC: keyed-hashing for message authentication (1997) 15. Bellare, M., Canetti, R., Krawczyk, H.: Keying hash functions for message authentication. In: Annual International Cryptology Conference, pp. 1–15 (1996) 16. Yung, M., Lin, D., Liu, P.: Information Security and Cryptology: 4th International Conference, Inscrypt 2008, Beijing, China, 14–17 December 2008, Revised Selected Papers. Springer Science & Business Media (2009) 17. Rescorla, E.: SSL and TLS: Designing and Building Secure Systems, vol. 1. AddisonWesley, Reading (2001) 18. Fei, P., Shui-Sheng, Q., Min, L.: A secure digital signature algorithm based on elliptic curve and chaotic mappings. Circ. Syst. Signal Process. 24(5), 585–597 (2005) 19. Somsuk, K., Thammawongsa, N.: Applying d-RSA with login system to speed up decryption process in client side. In: IEEE 3rd International Conference on Engineering Technologies and Social Sciences (ICETSS), pp. 1–5 (2017)


20. Moriarty, K., Kalisky, B., Jonsson, J., Rusch, A.: PKCS #1: RSA Cryptography Specifications Version 2.2. RFC 8017 (2016) 21. Mahajan, P., Sachdeva, A.: A study of encryption AES, DES, and RSA for security. Global J. Comput. Sci. Technol. Netw., Web Secur. (2013) 22. Khalid, A.: A performance comparison of encryption algorithms AES and DES. Int. J. Eng. Res. Technol. (2015)

Dynamic Airspace Sectorization Problem Using Hybrid Genetic Algorithm Mohammed Gabli1,2(B) , El Miloud Jaara1,2 , and El Bekkaye Mermri1,3 1

2

Faculty of Science, University Mohammed Premier, Oujda, Morocco [email protected] Department of Computer Science, Laboratory of Research in Computer Science (LARI), Oujda, Morocco 3 Department of Mathematics, Laboratory of Arithmetic, Scientific Computing and Applications (LACSA), Oujda, Morocco

Abstract. In this paper, we are interested in a dynamic airspace sectorization problem (ASP) with constraints. The objective is to minimize the coordination workload between adjacent sectors and to balance the workload across the sectors. We modeled this problem in the form of a multi-objective optimization problem that can be transformed into a mono-objective problem with dynamic weights between the objective functions. To solve the ASP problem we used a hybrid genetic algorithm. The proposed model is illustrated by a numerical example from a real-life problem.

Keywords: Airspace sectorization · Hybrid genetic algorithm · Multi-objective optimization

1 Introduction

Sectorization is a fundamental architectural feature of the Air Traffic Control (ATC) system. The duties of ATC are to provide safe, regular, and efficient air traffic in the airspace under consideration. To carry out these duties the airspace is divided into a number of sectors, each of which is assigned to a team of controllers. Each sector has a certain capacity depending on several factors (see [1] for instance): the ATC system, the controller's experience, traffic characteristics, scenarios (overflights, climbing, descending, military activity, ...), etc. The sector capacity can be defined in several ways. In this paper we consider the sector capacity as the maximum number of aircraft which can be served during a certain time period (in Europe, during one hour [2]). Generally, modern jet aircraft do not enable pilots to resolve conflicts themselves because of their high speed and their ability to fly in bad visibility [3,4]. Therefore pilots must be helped by air traffic controllers. Controllers of a given sector have many tasks which induce a workload. There are three kinds of workload (see [5,6], for instance): the monitoring workload, the conflict workload, and the coordination


workload; the first two workloads occur inside the sector, and the third one between a sector and its adjacent sectors. In the literature, research concerning airspace sectorization is structured around (i) several approaches, see for instance [7–12], (ii) several methods, see for instance [6,7,13–16], (iii) different frequencies [3,10,15,17], (iv) different dimensions [3,7,8,10,11,13,14] and (v) some constraints [3,7]. In this paper, we are interested in a dynamic airspace sectorization problem (ASP) with constraints. The objective is to minimize the coordination workload between adjacent sectors and to balance the workload across the sectors. The second objective can be transformed into minimizing the standard deviation of the traffic inside sectors. We modeled this problem in the form of a multi-objective optimization problem that can be transformed into a mono-objective problem with a trade-off parameter between the objective functions. To solve the ASP problem we used a hybrid genetic algorithm (GA). The paper is outlined as follows. In Sect. 2, we describe the problem and we present its formulation. In Sect. 3, we propose a GA approach using a dynamic trade-off parameter. In Sect. 4, we give an application of our approach to some problems, then we present the obtained numerical results. Finally, in Sect. 5 we give some concluding remarks.

2 Problem Statement and Model Presentation

Consider an airspace A of a country or a region to be divided into n small volume units (elementary sectors) denoted by x1, x2, ..., xn. The set of these units is denoted by E. A sector S is obtained by joining some of these elementary units. We denote by P a partition of the set E; for example, a partition P of cardinality m can be defined as follows:

P = {Sj | Sj is a sector, 1 ≤ j ≤ m, ⋃_{j=1}^{m} Sj = E and ⋂_{j=1}^{m} Sj = ∅}

We denote by Π the set of all possible partitions of the set E. For example, if E = {x1, x2, x3}, the set of all partitions is given by: Π = {P1, P2, P3, P4, P5} where P1 = {{x1}, {x2}, {x3}}, P2 = {{x1, x2}, {x3}}, P3 = {{x1}, {x2, x3}}, P4 = {{x1, x3}, {x2}} and P5 = {{x1, x2, x3}}. In this paper, we study the dynamic airspace sectorization problem in 2D. The objective is to find, during a certain time period Δt, the partition P which minimizes the coordination workload between adjacent sectors and minimizes the standard deviation of the traffic inside sectors without exceeding the capacity. The number of sectors is not known in advance. The determination of the optimal partition P gives us the number of sectors during a time period Δt. Our model considers the two following constraints: – Connectivity constraint. A sector cannot be fragmented. Figure 1a shows a solution which is not feasible.


– Minimum stay time constraint. An aircraft has to stay at least a given amount of time in each sector it crosses. This constraint ensures the controller has enough time to control the aircraft, see Fig. 1b.

Fig. 1. Constraints examples: (a) connectivity constraint; (b) minimum stay time constraint.

2.1 Basic Model

We denote by f(X, Δt) the number of overflights (traffic) inside the sector X during the time period Δt. This traffic induces the monitoring workload and the conflict workload. We denote by g(X, Y, Δt) the function which measures the coordination workload between sectors X and Y during Δt, and we denote by cap(Δt) the capacity of a sector during Δt. Since we wish to minimize the coordination workload between adjacent sectors and minimize the standard deviation of the traffic inside sectors without exceeding the capacity, the problem can be expressed as:

Minimize_{P∈Π} √( (1/n) Σ_{X∈P} (f(X, Δt) − μ)² ),
Minimize_{P∈Π} Σ_{X∈P} Σ_{Y∈P, Y≠X} g(X, Y, Δt),    (1)

subject to:

f(X, Δt) ≤ cap(Δt), X ∈ P,    (2)
Sector connectivity constraint,    (3)
Minimum stay time constraint,    (4)

where μ = (1/n) Σ_{X∈P} f(X, Δt) and n = |P| is the cardinality of the partition P. Whenever a solution P violates any of the constraints (2) or (4), a penalty is applied to the objective functions.
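A direct Python reading of the two objective values of problem (1) and of the penalty rule could look like the sketch below. The data structures (traffic, coordination, stay_ok) and the penalty constant are illustrative assumptions, not part of the paper.

```python
from statistics import pstdev

PENALTY = 1e6     # large constant added when a constraint is violated (assumed value)

def objectives(partition, traffic, coordination, cap, stay_ok):
    """Compute the two objective values of problem (1) for a candidate partition.
    traffic[S] = f(S, dt); coordination[(S, T)] = g(S, T, dt); stay_ok(S)
    checks the minimum stay time constraint. Names are illustrative."""
    loads = [traffic[s] for s in partition]
    f1 = pstdev(loads)                                  # standard deviation of traffic
    f2 = sum(coordination.get((s, t), 0.0)
             for s in partition for t in partition if s != t)
    # Constraints (2) and (4): penalize capacity or stay-time violations
    if any(traffic[s] > cap for s in partition) or not all(stay_ok(s) for s in partition):
        f1 += PENALTY
        f2 += PENALTY
    return f1, f2
```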

2.2 Improved Model: Dynamic Weights

We transform the multi-objective problem (1) into a mono-objective one as follows:

Minimize_{P∈Π} w1 √( (1/n) Σ_{X∈P} (f(X, Δt) − μ)² ) + w2 Σ_{X∈P} Σ_{Y∈P, Y≠X} g(X, Y, Δt),    (5)

subject to the constraints (2), (3) and (4), where the weights w1 and w2 are positive values satisfying

0 ≤ w1 ≤ 1, 0 ≤ w2 ≤ 1 and w1 + w2 = 1.    (6)

To balance the objective function weights, we use dynamic weights based on the genetic algorithm method; see our papers [18,19]. Then the problem becomes:

Minimize_{P∈Π} w1(k) √( (1/n) Σ_{X∈P} (f(X, Δt) − μ)² ) + w2(k) Σ_{X∈P} Σ_{Y∈P, Y≠X} g(X, Y, Δt),
with |w1(k) √( (1/n) Σ_{X∈P} (f(X, Δt) − μ)² ) − w2(k) Σ_{X∈P} Σ_{Y∈P, Y≠X} g(X, Y, Δt)| < ε,    (7)

subject to the constraints (2), (3) and (4), where ε is a positive number in the vicinity of 0, k is an iteration step of the genetic algorithm, and w1(k) and w2(k) are dynamic weights satisfying w1(k) + w2(k) = 1.

3 Genetic Algorithm Approach and Hybridization

Consider the problem (7) presented in Sect. 2. In this section, we use genetic algorithms (GAs) as they are described in [20] and [21].

3.1 GA Approach

Chromosome Encoding. Let n = |E| be the number of volume units (VUs). Each VU is characterized by its adjacent VUs and the time that an aircraft has to stay in this VU. We introduce a sequence of n digits, where each digit is an integer taking values between 1 and n. If the digit in position j takes a value k (dj = k), that means VUj is assigned to sector k. For example, if n = 13, the code 4;3;1;7;1;7;4;7;7;3;4;11;11 means that VU1 is assigned to sector 4, VU2 to sector 3, ..., VU13 to sector 11. We see that there are 5 sectors: {VU1, VU7, VU11}, {VU2, VU10}, {VU3, VU5}, {VU4, VU6, VU8, VU9} and {VU12, VU13}. With this encoding method, it is certain that the intersection of sectors is empty, and the union of sectors is the set E. In the initial population of the GA, each chromosome is generated randomly as follows: Step 0. In the first position of the chromosome, we generate randomly an integer (gene) g1 from the set {1, ..., n}. So, VU1 is assigned to sector g1.


Step 1. In position i, 1 < i ≤ n, we generate a random gene gi from the set {1, ..., n}. Step 2 (correction). In position i, 1 < i ≤ n, for each gj, 1 ≤ j < i, if gi = gj and VUi does not have any adjacent VU among the VUs in sector gj, then we generate a new random gene. Step 3. We repeat Step 2 until a new gene gi is obtained, or until an assignment of VUi to a sector containing adjacent VUs is found. Step 4. We repeat Steps 1, 2 and 3 until all genes of the chromosome are generated. With this construction of the chromosome we guarantee the connectivity constraint. To satisfy the minimum stay time constraint, we calculate, in each sector, the time that an aircraft remains inside. If this time is insufficient, we penalize the objective function. Crossover and Mutation Operators. Figure 2 presents the crossover and mutation operators. For example, for the first chromosome obtained after crossover, if the gene (volume unit) number 10 (VU10) cannot belong to the sector containing VU4, VU6 and VU8 (connectivity constraint), then we use the previous correction (Steps 2 and 3 defined in the previous subsection).


Fig. 2. Example of crossover and mutation operators.
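The random chromosome generation with the connectivity correction (Steps 0-4 above) can be sketched in Python as follows; the adjacency table and the 0-based indexing are illustrative choices (the authors' implementation is in Java).

```python
import random

def random_chromosome(n, adjacent):
    """Generate one chromosome (VU -> sector assignment) following Steps 0-4,
    with the correction that preserves the connectivity constraint.
    adjacent[i] is the set of VUs adjacent to VU i (0-based indices)."""
    genes = [random.randint(1, n)]                      # Step 0: assign VU 1
    for i in range(1, n):                               # Steps 1-4
        while True:
            g = random.randint(1, n)                    # Step 1: random sector
            members = [j for j in range(i) if genes[j] == g]
            # Step 2: accept a reused sector only if VU i touches one of its VUs
            if not members or any(j in adjacent[i] for j in members):
                genes.append(g)
                break                                   # Step 3: stop repeating
    return genes

# Tiny example: 4 VUs on a line (VU k adjacent to k-1 and k+1)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(random_chromosome(4, adj))
```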

3.2 Hybridization Approach

For genetic algorithms, local search (LS) is often used to improve the solutions and intensify the search. In this paper we exploit this hybridization as follows: – We take the best solution given by the GA method; – We set this solution as the initial configuration of the LS; – We apply the LS method to this configuration.

3.3 GA and Dynamic Weights

Consider the optimization model (7); it is of the form h = w1 f + w2 g. To balance the objective function weights, we consider dynamic weights based on the genetic algorithm method as presented in [18,19]. Then, in each iteration k of the GA we take:

w1(k) = |f2(yk−1)| / (|f1(yk−1)| + |f2(yk−1)|)    (8)

and

w2(k) = |f1(yk−1)| / (|f1(yk−1)| + |f2(yk−1)|)    (9)

4 4.1

Application Data Description

In this paper, we consider the ASP problem in the airspace of Morocco. Data refer to the year 2014. We followed the traffic throughout the year. By analyzing these data, we decided to take Δt = 12 h (remarkable difference between day and night) and we took cap(Δt) = 80. Here, we consider that each airport is a volume unit. 4.2

Computational Results

The algorithms were coded in JAVA programming language. In the hybrid GA approaches we have used three selection methods: roulette wheel, scaling and sharing. The parameters of GA are set as follows: crossover probability pc = 0.5, mutation probability pm = 0.01, population size ps = 20, and maximum number

288

M. Gabli et al.

of generations 500. In the sharing selection method, the threshold of dissimilarity between two parents is taken as σs = ps/2, and α = 1. Several experiments are done. Figure 3, on the left, presents the Moroccan airspace sectorization in the day and Fig. 3, on the right, shows the Moroccan airspace sectorization at night. We found four sectors in the first case and three sectors in the second case.

x x x

x

x x

x

x

x

x

x

x x x

Fig. 3. Moroccan airspace sectorization in the day (left) and at night (right)

5

Conclusion

In this paper we have considered a dynamic airspace sectorization problem with constraints. The problem, which is formulated as optimizing two objectives functions f and g subjected to some constraints, is transformed into mono-objective optimization problem: w1 f + w2 g, where w1 and w2 are positive weights of the two objectives functions. We note that the choice of these weights is not an easy task for both decision maker and system analyser. In order to solve the formulated problem we have proposed a hybrid GA approach using a dynamic weights which varie at each iteration of the GA. The obtained results show the performance of our method. In future research, we will take into count psychological factors of workload and other constraints such as safety constraint and convexity constraint.

References 1. Babic, O., Kristic, T.: Airspace daily operational sectorization by fuzzy logic. Fuzzy Sets Syst. 116, 49–64 (2000) 2. Allignol, C.: Planification de trajectoires pour l’optimisation du trafic aerien. Ph.D. thesis, INP Toulouse (2011)

Dynamic Airspace Sectorization Problem Using GA

289

3. Kumar, K.: ART1 neural networks for air space sectoring. Int. J. Comput. Appl. 37, 20–24 (2012) 4. Riley, V., Chatterji, G., Johnson, W., Mogford, R., Kopardekar, P., Sieira, E., Landing, M., Lawton, G.: Pilot Perceptions of Airspace Complexity. Part 2, Digital Avionics Systems Conference DASC 04. IEEE (2004) 5. Trandac, H., Baptiste, P., Duong, V.: Optimized sectorization of airspace with constraints. In: The 5th USA/Europe ATM R&D Seminar, Budapest, pp. 23–27 (2003) 6. Delahaye, D., Schoenauer, M., Alliot, J.M.: Airspace sectoring by evolutionary computation. In: IEEE International Congress on Evolutionary Computation (1998) 7. Delahaye, D., Puechmorel, S.: 3D airspace sectoring by evolutionary computation, real world applications. In: GECCO 06 Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 1637–1644 (2006) 8. Trandac, H., Baptiste, P., Duong, V.: Airspace sectorization with constraints. RAIRO Oper. Res. 39(2), 105–122 (2005) 9. Yousefi, A., Donohue, G.L., Qureshi, K.M.: Investigation of en route metrics for model validation and airspace design using the total airport and airspace modeler. In: The 5th EUROCONTROL/FAA ATM R&D Conference, Budapest (2003) 10. Bichot, C.E., Durand, N.: A tool to design functional airspace blocks. In: Proceedings of ATM, the 7th USA/Europe R&D Seminar on Air Trafic Management, pp. 169–177 (2007) 11. Brinton, C.R., Leiden, K., Hinkey, J.: Airspace sectorization by dynamic density. In: Proceedings of the 9th AIAA Aviation Technology, Integration and Operations (ATIO) Forum, American Institute of Aeronautics and Astronautics (2009) 12. Gianazza, D.: Forecasting workload and airspace configuration with neural networks and tree search methods. Artif. Intell. 174, 530–549 (2010) 13. Basu, A., Mitchell, J.S.B., Sabhnani, G.: Geometric algorithms for optimal airspace design and air traffic controller workload balancing. J. Exp. Algorithmics 14(3) (2009) 14. Kicinger, R., Yousefi, A.: Heuristic method for 3D airspace partitioning: genetic algorithm and agent-based approach. In: 9th AIAA Aviation Technology, Integration, and Operations Conference (ATIO) and Aircraft Noise and Emissions Reduction Symposium (ANERS) (2009) 15. Xue, M.: Airspace sector redesign based on voronoi diagrams. In: Proceedings of AIAA Guidance, Navigation, and Control Conference, Honolulu, HI (2008) 16. Bichot, C.E.: Elaboration d’une nouvelle metaheuristique pour le partitionnement de graphe: la methode de fusion-fission. Application au decoupage de l’espace aerien. Ph.D. thesis, INP Toulouse (2007) 17. Tang, J.: Large scale multi-Objective optimization for dynamic airspace sectorization. Ph.D. thesis, University of New South Wales (2012) 18. Gabli, M., Jaara, E.M., Memri, E.B.: Planning UMTS base station location using genetic algorithm with a dynamic trade-off parameter. Lecture Notes in Computer Science, vol. 7853, pp. 120–134 (2013) 19. Gabli, M., Jaara, E.M., Memri, E.B.: A genetic algorithm approach for an equitable treatment of objective functions in multi-objective optimization problems. IAENG Int. J. Comput. Sci. 41(2), 102–111 (2014) 20. Holland, J.: Outline for a logical theory of adaptive systems. J. Assoc. Comput. Mach. 9, 297–314 (1962) 21. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. First ed. Addison-Wesley (1989)

A Semantic Framework to Improve Model-to-Model Transformations

Mohamed Elkamel Hamdane, Karima Berramla, Allaoua Chaoui, and Abou El Hasan Benyamina

MISC Laboratory, AbdElhamid Mahri University of Constantine 2, Constantine, Algeria
Department of Mathematics and Computer Science, Teachers' Training School of Constantine, Constantine, Algeria
Lapeci Laboratory, University of Ahmed Ben Bella Oran 1, Oran, Algeria

Abstract. The Model Driven Engineering approach is an important contribution to Software Engineering. The strength of this approach is related to its model transformation process. However, ensuring that the output model is semantically equivalent to the input model is in itself a challenge. This paper introduces a semantic framework based on the theorem proving method in order to meet this challenge. The framework facilitates the proof of model-to-model transformations. We explain how to apply this framework using the Coq tool to prove the correctness of a standard case study.

Keywords: Model · Transformation · Proof · Correctness · Coq

1 Introduction

Model-Driven Engineering (MDE) aims to improve the productivity of software engineering by emphasizing model transformation as a central activity during software development [1]. Through its concepts, MDE provides a standard framework to monitor the software development process [4]. It focuses on the use of models and on transformations between models. The transformations are defined as refinement steps over models that decrease the level of abstraction. This refinement is carried out in two forms: Model-to-Model (M2M) transformation and Model-to-Code (M2C) transformation. However, for the M2M transformation an important question is often asked: "is the transformation correct?". For example, consider the classical scenario that transforms a UML class diagram into a table schema (the Class2Table rule, https://www.eclipse.org/atl/atlTransformations/); this transformation is considered correct when all the attributes in the class diagram are translated into columns in the corresponding table, so that there is no loss of information during the transformation process.


To solve this problem, formal methods can be used. In this paper, we investigate through a case study the use of the theorem proving method to ensure the correctness of M2M transformations. We focus on the ER2REL transformation as the illustrative example (presented in [2]) and we address an implementation using the Coq proof assistant (http://coq.inria.fr). First, a generic framework for proving an M2M transformation is proposed. Next, the ER2REL case study is detailed. The remainder of this paper is organized as follows. Section 2 presents the background and related works. Section 3 proposes a framework that allows proving the correctness of an M2M transformation. Section 4 describes in detail the ER2REL transformation case study. Finally, Sect. 5 concludes the paper with several perspectives.

2 Background and Related Work

This section presents the background on model-to-model transformation and theorem proving, and the related works in this field. MDE design is based on the concepts of models, metamodels and model transformations. A model is an abstract and simplified representation of a system; it allows describing, explaining or predicting that system [3]. A metamodel formalizes the abstract syntax of a modeling language in the form of UML class diagrams. Usually, the transformations are refinement steps over models that decrease the level of abstraction. Generally, there are two types of transformation: Model-to-Model (M2M) and Model-to-Code (M2C) transformation. For a precise definition of these concepts, the reader is referred to [3]. A formal verification technique provides a rigorous mathematical solution to ensure a coherent design. As mentioned in [5], the use of verification techniques is categorized into three approaches: testing-based approaches, theorem proving and model checking. In this paper, we focus on the theorem proving approach. In [6], theorem proving is defined as "some arrangement where the machine and a human user work together interactively to produce a formal proof". Coq is a proof assistant classified as one of the most popular theorem proving tools. It implements an intuitionistic logic [7] based on the Gallina language, which is built on the Calculus of Inductive Constructions (CIC). Coq also has a power of expression that can describe imperative programs directly in its specification language. It incorporates an automatic extraction procedure to OCaml programs and the ability to add already verified axioms to the Coq system. For more details, the reader can refer to [8]. As related work, the work in [10] presents a formalization of the ATLAS Transformation Language (ATL) and some of its operations as a Calculus of Inductive Constructions (CIC) [9] specification. This formalization, proved using the Coq tool, makes it possible to show that a model transformation generates a target model that meets certain invariants. The work proposed in [11]


is a continuation and improvement of Daniel's work [10], but the idea of the latter is based on the verification of a transformation example written in QVT-op (http://www.omg.org/spec/QVT/1.3/). The continuation is reflected in the use of a more complex example, and the improvement is illustrated by correcting some definitions at the metamodel level and the definition of the models in the Gallina language. Lano et al. [12] presented a study that summarizes the best-known approaches and tools used to prove the transformation process. They then proposed a generic framework that automatically generates verification formalisms (expressed in B or Z3) from the transformation process. In [13], the transformation process is formally defined as rules expressed in Labelled Transition Systems. The Coq tool is used, and the correctness of the transformation is interpreted as the preservation, for all possible input models, of a given property φ (written in a fragment of the μ-calculus). This work proposes the proof of transformation correctness at a semantic level by using the rule-by-rule demonstration style.

3 Overall Approach and Architecture

Figure 1 gives a generic framework that aims to verify the M2M transformation through a theorem proving technique. It summarizes the main steps to follow in order to formalize the M2M transformation. The framework is structured in two spaces: the MDE technical space and the Coq technical space. The first one represents a classical M2M transformation that defines the translation from a source model (SM) into a target model (TM) by using a set of transformation rules written in a specific language. The second space concerns the translation of the entire transformation process into Coq. In fact, step 1 involves translating the source and target metamodels into the Coq tool through the Gallina language. The same task is done at the level of the models and of the transformation rules (steps 2 and 3). Step 4 allows defining a set of properties that expresses the specific requirements of the system to be verified. These properties are combined with the previous steps in order to build a proof system in Coq.

3.1 Mapping Models, Metamodels and Transformation Rules in Coq

According to [10, 11], the three parts of the M2M transformation, namely the metamodels, the models and the transformation rules, must be specified in the Gallina language. For example, the representation of "Class" and "Attribute" in this specification is defined by the "Inductive" clause:

Inductive Class : Set := | Build_ClassName (attribute : type).


Fig. 1. Overview of the proposed approach

The definition of a "Relation" between classes is done with the "list" clause. The following code presents an example:

Inductive Petrinet : Set := | Build_Petrinet (name : string) (Places : list Place)
with Place : Set := | build_Place (name : string) (token : nat).

The "Instantiation" operation of a model from its metamodel is defined as:

Definition Event1 : Event := build_Event "add".

3.2 Applying Theorem Proving

According to [11], the proof scenario is based on the following idea: "it assumes that the transformation rule specifications are correct and it tries to demonstrate that the properties of the transformation are verified". Moreover, the tools offer support to achieve the proof. The Coq tool follows this type of scenario in order to build a correctness proof. This scenario is [9]:
1. The user enters a statement that he wants to prove, using the command Theorem or Lemma, at the same time giving a name for later reference,
2. The Coq system displays the formula as a formula to be proved, possibly giving a context of local facts that can be used for this proof (the context is displayed above a horizontal line written =====, the goal is displayed under the horizontal line),


3. The user enters a command to decompose the goal into simpler ones,
4. The Coq system displays a list of formulas that still need to be proved,
5. Back to step 3.
The commands used at step 3 are called tactics (for example: induction, simpl, discriminate, injection, intuition, etc.). Some of these tactics actually decrease the number of goals. When there are no more goals, the proof is complete; it then needs to be saved, which is performed when the user sends the command Qed. The effect of this command is to save a new theorem whose name was given at step 1. The reader can find in [1] the programming details and tips used to apply theorem proving in Coq.

4 Case Study: Proving Correctness of ER2REL Transformation

The ER2REL transformation is a typical case study of model-to-model transformation. It aims to ensure the translation of an entity-relationship data model into a relational schema model.

4.1 Metamodels

Firstly, we introduce the metamodels used in the ER2REL transformation. Figure 2 depicts simplified metamodels of ER and REL, which are respectively the source and target metamodels of the ER2REL transformation (inspired from [2]). In addition, we use the Ecore meta-modelling language inside the Eclipse Modeling Framework (EMF) [14] to express the two metamodels.

4.2 Transformation Process: ER2REL Rules and Running Example

Listing 1.1 presents the ER2REL transformation expressed in ATL. The idea behind this transformation is to produce an output instance of the REL metamodel from an input instance of the ER metamodel by using a set of transformation rules, presented in the following codes.

module ER2REL;
create OUT : REL from IN : ER;
rule S2S {
  from s : ER!ERSchema
  to t : REL!RELSchema ( name <- ... )
}
...

... => ERAttribute2RELAttribute (x) :: nil /* **** */
| x :: l => (ERAttribute2RELAttribute (x) :: nil) ++ (belongEnt2belongatt (l))
end.
Definition Entity2Relation (a : Entity) : Relation :=
  build_Relation (Entity_name (a)) (Entity_IdEntity (a))
                 (belongEnt2belongatt (Entity_belongEnt (a))).
...

Listing 1.5. Rule Entity2Relation into Gallina code

4.3.2 Applying Theorem Proving: in Coq, the use of semantic equivalence between the source and target models is a technique used to verify the correctness of such a transformation. This equivalence is translated in the demonstration process by the definition of axioms that specify the transition between the source and target models. Precisely, these axioms can be defined in the demonstration process as rules whose proof is expressed using the tactic "rewrite". We propose some properties to define this semantic equivalence between the input ER model and the output REL model.
– Proposition I: ∀a ∈ ERSchema, ∃p ∈ RELSchema, EqTransformation(a)(p) = (compareRelations(a)(p) ∧ compareAttributes(a)(p)). This proposition makes it possible to prove that, whatever the input ER model, there is an output REL model generated by the transformation that respects two properties (Propositions II and III). These properties reflect the semantic equivalence between the ER model and the REL model. We define these two propositions by:


– Proposition II: ∀a ∈ ERSchema, ∀p ∈ RELSchema, compareRelations(a)(p) ...

\[
\begin{cases}
\forall i \in [1,m],\ \forall (k,l) \in \left([1,Q]\setminus\{y_i\}\right)^2, & K_3\,(\xi_{i,k} - \xi_{i,l}) = 0\\
\forall i \in [1,m],\ \forall k \in [1,Q]\setminus\{y_i\}, & (2-p)\,\xi_{i,k} \ge 0\\
 & (1-K_1)\sum_{k=1}^{Q} h_k = 0
\end{cases}
\]

where λ ∈ ℝ*₊, (K₁, K₃) ∈ {0, 1}², K₂ ∈ ℝ*₊, M ∈ M_{Qm,Qm}(ℝ) is a matrix of rank (Q−1)m, and p ∈ {1, 2}; if p = 1, then M is a diagonal matrix. To obtain the four M-SVMs from the generic model, it is enough to apply for each model the adequate hyper-parameter values defined in Table 1 below [20], where I_{Qm} and M⁽²⁾ are matrices of M_{Qm,Qm}(ℝ) whose terms are respectively:

Table 1. Hyper parameters of the four M-SVM.

M-SVM | M | p | K1 | K2 | K3
WW model | I_{Qm} | 1 | 1 | 1 | 0
CS model | I_{Qm} | 1 | 1 | 1/(Q−1) | 1
LLW model | I_{Qm} | 1 | 0 | 1/(Q−1) | 0
M-SVM2 model | M⁽²⁾ | 2 | 0 | 1/(Q−1) | 0

\( m_{ik,jl} = \delta_{i,j}\,\delta_{k,l}\,(1-\delta_{y_i,k}) \) and \( m^{(2)}_{ik,jl} = (1-\delta_{y_i,k})\,(1-\delta_{y_j,l})\left(\delta_{k,l} + \frac{\sqrt{Q-1}}{Q}\,\delta_{i,j}\right). \)

For the implementation of the four M-SVMs, we used the M-SVM package (MSVMpack) [21]. Three types of kernel are available: Linear, Polynomial, and Gaussian RBF; in this application, we selected the Gaussian RBF kernel, which gives the best results.

5 Results and Discussion

In order to test our classifiers, we take into consideration all subjects except subject 4, because of the invalidation of some of its records [8]. Each subject completes 1, 2 or 3 sessions; in each session he performs 5 mental tasks and executes each task 5 times. Subjects are considered separately, because the mental process used to perform a task may vary from one subject to another. Also, the reasoning of the same person can change from one moment to another depending on their mood, so the sessions are considered separately for the same subject. Five-fold cross-validation procedures were carried out to obtain the results. We take three trials for training, one trial for cross-validation, and one trial for testing. Then, to find the best hyper-parameters for the proposed models, we perform model selection and optimize two parameters: the penalty parameter and the kernel parameter.


We examine 15 × 15 = 225 different combinations of (C, γ) for each cross-validation level, with γ ∈ {2⁴, 2³, 2², …, 2⁻¹⁰} and C ∈ {2¹², 2¹¹, 2¹⁰, …, 2⁻²}. Table 2 summarizes the results achieved for each subject and for each session with the four direct models as follows:
• The test accuracy for each task, which represents the average accuracy per mental task over the five-fold cross-validation.
• The test accuracy for each session, which represents the average accuracy over all mental tasks of the session.
In addition, we strengthened our results by associating a degree of confidence with the outputs of our classifiers. Thus, a task is considered well classified only if the probability of belonging to the class is greater than 0.75; otherwise the decision is assigned to the rejection class. To generate probability estimates from the outputs of a direct M-SVM, we simply apply the following softmax function:

\[
\forall k \in [1, C],\quad \tilde{h}_k = \frac{\exp(h_k)}{\sum_{l=1}^{C} \exp(h_l)} \qquad (2)
\]
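As a quick illustration (not part of the original implementation), the softmax mapping of Eq. (2), together with the 0.75 rejection rule described above, can be sketched in a few lines of Python/NumPy; the array h and the 5-class example values are assumptions made only for this example:

import numpy as np

def msvm_softmax(h):
    # Map the raw M-SVM outputs h (one value per class) to posterior estimates.
    e = np.exp(h - np.max(h))  # subtract the max for numerical stability
    return e / e.sum()

# Example: outputs for a 5-class problem; reject if the best posterior <= 0.75
probs = msvm_softmax(np.array([1.2, 0.3, -0.5, 0.8, 0.1]))
decision = int(np.argmax(probs)) if probs.max() > 0.75 else "reject"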

• The following statistical metric, introduced by [22], which predicts a margin on the correct generalization rate for new test sets with a confidence rate of α%:

\[
I(\alpha, N) = \frac{\left(T + \frac{Z_\alpha^2}{2N}\right) \pm Z_\alpha \sqrt{\dfrac{T(1-T)}{N} + \dfrac{Z_\alpha^2}{4N^2}}}{1 + \dfrac{Z_\alpha^2}{N}} \qquad (3)
\]
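For illustration only (this snippet is not from the paper), Eq. (3) can be computed as follows in Python; T is the observed correct classification rate, N the test-set size, and z defaults to 1.96 for a 95% confidence level (the example values are arbitrary):

import math

def generalization_interval(T, N, z=1.96):
    # Confidence interval of Eq. (3) around the accuracy T measured on N examples.
    center = T + z**2 / (2 * N)
    half = z * math.sqrt(T * (1 - T) / N + z**2 / (4 * N**2))
    denom = 1 + z**2 / N
    return (center - half) / denom, (center + half) / denom

low, high = generalization_interval(0.87, 230)  # e.g. 87% accuracy on 230 test trials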

where N is the size of the test set, T is the correct classification rate, and Z_α = 1.96 for α = 95%. From the results, it can be seen that all the M-SVMs generate nearly similar accuracies. However, the CS model outperforms the other models for all sessions and all subjects, with classification rates between 76.68% and 90.86%. For all subjects, one can deduce that adding the rejection rates to the rates of good classification accumulates an average classification rate of 81% ± 7.78 for the WW model, 81.97% ± 7.94 for the CS model, 79.89% ± 7.53 for the LLW model and 81.23% ± 7.74 for the M-SVM2 model (± the standard deviation). These observations confirm the necessity of using probabilistic models with a reject class to minimize errors (we prefer to abstain and reject the decision rather than make a bad prediction). Finally, from the confidence intervals, one can deduce that our classifiers can be adapted to larger databases, because even in the worst cases they achieve satisfying results.

Table 2. Discrimination results of the proposed classifier models (%).

Subjects and Models Sessions

BT

MT

LT

RT

CT

Reject Accuracy and confidence intervals

Accuracy + Reject

Subject 1 Session 2

83.47 82.60 78.26 79.13 93.47 95.21 87.82 88.69 66.08 67.39 63.91 64.78 65.65 66.52 62.17 61.73 83.91 85.65 78.69 80.86 82.17 83.47 85.65 87.39 74.34 76.52 76.08 79.13 86.08 88.69 81.30 83.04 90.86 95.21 88.69 90 88.26 90 83.47 86.08 90.43 93.47 83.91 86.08

85.65 88.69 87.39 88.69 96.52 97.39 98.69 97.39 73.47 77.39 70.00 77.82 76.08 75.21 77.82 78.26 65.65 64.78 72.60 71.73 61.30 63.04 56.95 58.69 63.04 64.78 60.43 63.04 75.21 76.52 79.56 81.30 82.60 83.91 86.52 87.82 82.17 83.47 88.26 92.60 85.65 86.08 87.82 88.26

96.95 97.82 97.82 97.39 94.34 93.47 95.21 99.13 88.26 88.69 86.08 87.39 73.91 74.78 71.73 69.13 81.30 83.91 79.13 80.43 87.39 86.08 83.91 83.47 88.69 88.26 81.30 83.91 89.13 90.43 88.26 90 97.82 96.52 95.21 94.34 89.13 87.39 85.65 85.21 94.34 95.65 92.6 95.21

87.39 86.95 85.65 85.21 82.60 83.91 79.56 81.30 95.17 96.52 90.00 94.34 64.78 64.34 62.17 60.43 74.34 75.65 73.47 74.78 76.08 76.95 76.52 78.26 66.95 66.08 70.43 69.56 77.82 76.52 75.21 76.08 85.65 86.52 83.91 84.78 88.69 92.17 83.91 83.04 82.17 81.30 79.56 81.30

82.60 84.78 81.30 82.60 83.91 84.34 80.43 81.73 85.65 87.39 86.52 92.17 59.13 60.43 66.08 68.26 71.73 73.47 71.73 74.34 66.52 66.08 64.34 63.91 64.34 64.78 61.30 63.91 68.26 69.13 66.08 65.21 89.13 88.26 86.95 87.39 86.52 90.43 92.60 94.78 85.65 83.91 83.91 86.52

5.82 6.52 6.17 5.91 5.56 6 5.13 6.08 6.86 7.56 7.82 6.52 19.13 19.04 18.52 19.21 12.60 14.34 13.13 12.43 17.30 17.65 17.04 17.73 16.26 17.47 17.13 16.69 10 10.08 10.34 8.86 8.26 8.00 7.73 7.56 8.78 9.39 8.43 8.52 8.52 8.95 9.04 8.34

93.03 94.68 92.25 92.51 95.72 96.86 93.47 95.72 87.98 91.03 87.12 89.82 87.04 87.29 86.51 86.77 87.98 91.02 88.25 88.85 91.99 92.77 90.51 92.07 87.73 89.55 87.03 88.6 89.3 90.33 88.42 87.98 97.47 98.08 95.98 96.42 95.73 98.08 95.2 96.86 96.16 97.03 94.6 95.81

Subject 1 Session 2

Subject 2 Session 1

Subject 3 Session 1

Subject 3 Session 2

Subject 5 Session 1

Subject 5 Session 2

Subject 5 Session 3

Subject 6 Session 1

Subject 6 Session 2

Subject 7 Session 1

WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW l CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2 WW CS LLW MSVM2

87.21[82.27–90.92] 88.16 [83.34–91.72] 86.08 [81.01–89.96] 86.6 [81.59–90.40] 90.16 [85.62–93.37] 90.86 [86.43–93.94] 88.34 [83.54–91.87] 89.64 [85.02–92.94] 81.12 [75.56–85.65] 83.47 [78.12–87.71] 79.30 [73.60–84.03] 83.30 [77.94–87.56] 67.91 [61.62–73.60] 68.25 [61.97–73.92] 67.99 [61.70–73.68] 67.56 [61.26–73.27] 75.38 [69.42–80.50] 76.68 [70.80–81.67] 75.12 [69.15–80.26] 76.42 [70.52–81.44] 74.69 [68.69–79.80] 75.12 [69.15–80.26] 73.47 [67.41–78.75] 74.34 [68.32–79.56] 71.47 [65.31–76.91] 72.08 [65.59–77.47] 69.90 [63.68–75.47] 71.91 [65.77–77.32] 79.3 [73.60–84.03] 80.25 [74.62–84.87] 78.08 [72.29–82.94] 79.12 [73.41–83.87] 89.21 [84.53–92.59] 90.08 [85.53–93.30] 88.25 [83.44–91.79] 88.86 [84.13–92.30] 86.95 [81.98–90.70] 88.69 [83.94–92.16] 86.77 [81.78–90.55] 88.34 [83.54–91.87] 87.64 [82.75–91.28] 88.08 [83.25–91.65] 85.56 [80.43–89.51] 87.47 [82.56–91.14]


6 Conclusion

In this work, we have proposed the use of the direct M-SVM models (WW, CS, LLW, and M-SVM2) for the discrimination of five mental states, the goal being to compare their prediction performances. Each model takes as inputs the energies of the DWT decompositions, produces as outputs the class posterior probabilities, and is tested on EEG records obtained from the Keirn and Aunon database. We deduced from this comparative study that the CS model outperforms the other models unanimously (for all subjects and all sessions). However, we would like to explore some issues in future work, such as the fusion of the direct M-SVMs, so as to take into account the advantages of each model simultaneously, and the comparison of M-SVMs with decomposition methods involving binary SVMs.

References 1. Vaid, R.S., Singh, P., Kaur, C.: EEG signal analysis for BCI interface: a review. In: IEEE Transaction on Advanced Computing and Communication Technologies, pp. 143–147 (2015) 2. Prashant, P., Joshi, A., Gandhi, V.: Brain computer interface: a review. In: 5th Nirma University International Conference on Engineering, pp. 1–6. IEEE (2015) 3. Gupta, A., Agrawal, R.K., Kaur, B.: Performance enhancement of mental task classification using EEG signal: a study of multivariate feature selection methods. Soft Comput. 19, 2799– 2812 (2015) 4. Hendel, M., Benyettou, A., Hendel, F.: Hybrid self organizing map and probabilistic quadratic loss multi-class support vector machine for mental tasks classification. Inform. Med. Unlocked 4, 1–9 (2016) 5. Gupta, A., Kirar, J.S.: A novel approach for extracting feature from EEG signal for mental task classification. In: IEEE Transaction on Computing and Network Communications, pp. 829–832 (2015) 6. Gupta, A., Kumar, D.: Fuzzy clustering-based feature extraction method for mental task classification. Brain Inform. 4, 135–145 (2016) 7. El Bahy, M.M., Hosny M., Mohamed, W.A., Ibrahim, M.: EEG signal classification using neural network and support vector machine in brain computer interface. In: Advances in Intelligent Systems and Computing, vol. 533, pp. 246–256. Springer (2017) 8. Liang, N., Saratchandran, P., Huang, G., Sundararajan, N.: Classification of mental tasks from EEG signals using extreme learning machine. Int. J. Neural Syst. 16(1), 29–38 (2006) 9. Weston, J., Watkins, C.: Multi-class support vector machines. Royal Holloway, University of London, Department of Computer Science, Technical report CSD-TR-98-04 (1998) 10. Crammer, K., Singer, Y.: On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res. 2, 265–292 (2001) 11. Lee, Y., Lin, Y., Wahba, G.: Multicategory support vector machines: theory and application to the classification of microarray data and satellite radiance data. J. Am. Stat. Assoc. 99 (465), 67–81 (2004) 12. Guermeur, Y., Monfrini, E.: A quadratic loss multi-class SVM for which a radius-margin bound applies. Informatica 22(1), 73–96 (2011) 13. Keirn, Z.: Alternative modes of communication between man and machines. Master’s dissertation, Department of Electrical Engineering, Purdue University, USA (1988)


14. http://www.cs.colostate.edu/eeg/main/data/1989_Keirn_and_Aunon 15. Keirn, Z., Aunon, J.: A new mode of communication between man and his surroundings. IEEE Trans. Biomed. Eng. 37(12), 1209–1214 (1990) 16. Palaniappan, R.: Utilizing gamma band to improve mental task based brain-computer interface designs. IEEE Trans. Neural Syst. Rehabil. Eng. 14(3), 299–303 (2006) 17. Diez, P.F., Mut, V., Laciar, E., Torres, A., Avila, E.: Application of the empirical mode decomposition to the extraction of features from EEG signals for mental task classification. In: Engineering in Medicine and Biology Society, Minneapolis, pp. 2579–2582 (2009) 18. Tolic, M., Jovic, F.: Classification of wavelet transformed eeg signals with neural network for imagined mental and motor tasks. 45(1), 130–138 (2013) 19. Hariharan, H., Vijean, V., Sindhu, R., Divakar, P., Saidatul, A., Yaacob, Z.: Classification of mental tasks using stockwell transform. Comput. Electr. Eng. 40, 1741–1749 (2014) 20. Guermeur, Y.: A generic model of multi-class support vector machine. Int. J. Intell. Inf. Database Syst. 6(6), 555–577 (2012) 21. Lauer, F., Guermeur, Y.: MSVMpack: a multi-class support vector machine package. J. Mach. Learn. Res. 12, 2269–2272 (2011) 22. Bennani, Y., Bossaert, F.: Predictive neural networks for traffic disturbance detection in the telephone network. In: Proceedings of IMACS-IEEE Computational Engineering in System Applications, France (1996)

Tracking Attacks Data Through Log Files Using MapReduce

Yassine Azizi, Mostafa Azizi, and Mohamed Elboukhari

Lab. MATSI, ESTO, University Mohammed 1st, Oujda, Morocco
[email protected], {azizi.mos,m.elboukhari}@ump.ac.ma

Abstract. In this paper, we propose a methodology of security analysis that aims to apply Big Data techniques, such as MapReduce, over several system log files in order to locate and extract data probably related to attacks. Through a process of analysis, these data lead to identifying attacks or detecting intrusions. We have illustrated this approach through a concrete case study on exploiting access log files of Apache web servers to detect SQLi and DDOS attacks. The obtained results are promising; we are able to extract malicious indicators and events that characterize the intrusions, which helps us to make an accurate diagnosis of the system security.

Keywords: Big Data · Security · Attacks · Log files · MapReduce · SQL injection · DDOS

1 Introduction

The world has experienced a data revolution in all digital domains due to the exponential use of connected tools and objects. According to statistics developed by IBM [1], we generate 2.5 trillion bytes of data each day; these data come from different sources, namely social networks, climate information, GPS signals, sensors and log files. The log files are a very important source of information; they retrace all the events that occur during the activity of the system. They are often of great volume and come from everywhere: operating systems, application servers, data servers, etc. In this paper, we present our approach to system security analysis, which aims to track data related to DDOS and SQL injection attacks and analyze them in order to extract knowledge that helps us to improve the security rules. The proposed method is mainly based on log file analysis. The log files are of vital interest in computer security because they present an overview of everything that has happened on the whole system, in order, for example, to explain an error or to understand how the system detects attacks and anomalies. This paper is organized as follows: Sect. 2 presents the main related works that have used log files for extracting useful information. Then, Sect. 3 illustrates the methodology that we use to deal with log files for extracting data on possible attacks. Before concluding, we show a case study on Apache web servers.


2 Related Works

In the literature, several research studies consider log files as a very useful data source in several areas. The authors in [2, 3] exploit log files in the field of e-commerce to predict the behavior of their customers and improve the income of their business. In [4], the work was devoted to an in-depth analysis of log file data from NASA's website to identify very important information about a web server, the behavior of users, the main errors and the potential visitors of the site, all this in order to help the system administrator and web designer to improve the system. In [5], the authors used the log files of routers for error diagnosis and troubleshooting in home networks, because the information contained in the log file helps to clarify the causes of network problems, such as misconfigurations or hardware failures. In [6], the researchers propose a diagnostic approach for a cloud computing architecture; this approach is based on exploiting the log files of the different systems of that architecture to find wrong uses and detect anomalies, which improves system security. In [7], the authors propose a multi-stage log analysis architecture, which uses logs generated by the application during attacks to effectively detect attacks and to help prevent future attacks.

3 Proposed Methodology

We are interested in the exploitation of techniques of Big Data in the security analysis of systems and networks. In this sense, we have proposed a methodology that consists of four stages (Fig. 1):

Fig. 1. Proposed approach


1. Data collection
2. Data processing
3. Data storage
4. Data analysis

4 Case Study


Nowadays, there are over 3.81 billion users connected to the Internet and more than a billion websites; 60% of these websites are hosted on Apache web servers. The web server provides different mechanisms for logging anything that may occur in the server, from the initial request to the URL mapping process to the connection, including any errors that may happen during processing. In our case study, we are working on access log files from Apache web servers. To apply the proposed methodology, we started by defining and determining the usable data in the "access log" file of the web server. Through a Java program, we retrieve the indicators of each event and save them in a database; then we use an ETL tool, Pentaho Data Integration, to transform the collected data into a standard XML format. These preprocessing and data formatting steps ensure the transition from a state of unstructured data to well-structured consolidated data, which facilitates subsequent analysis and exploitation. Then, we analyze the log files of web servers in order to trace some attacks like SQL injection (SQLi) and distributed denial of service (DDOS). The approach aims to analyze and correlate several events recorded in access log files over time and to extract useful security information. We store all generated log files in a common platform to make the analysis of these files more efficient. Then we use MapReduce to perform parallel and distributed processing. Our implementation of MapReduce runs on large access log files stored in HDFS. The inputs and outputs of our MapReduce job are in the form of

Fig. 2. MapReduce processing


pairs {(K, V)}: the entry of each Map is a set of words {w} from a partition of log file records. The Map function calculates the number of times a key w appears in the partition; the Reduce function calculates the total number of occurrences of a key indicator (Fig. 2).

SQL Injection
SQL injection is an attack that exploits a security vulnerability of an application interacting with a database; it happens when an SQL query not planned by the system is inserted [8], and it consists of injecting SQL code that will be interpreted by the database engine. This attack involves entering specific characters in a variable that will be used in an SQL query. These characters cause the original query to be diverted from its purpose in order to open doors to malicious users [9]. They could, for example, authenticate themselves without knowing the password, create a new administrator user whose password they know, destroy a table, corrupt the data, and so on. Three injection mechanisms can execute malicious SQL code on the databases of a web application: injection into user inputs, injection into cookies, and injection into server variables, which consists of injecting values into the HTTP header. The mechanism of this attack is to inject special characters, which divert the original request from its purpose (Table 1).

Table 1. Some indicators of SQL Injection

Indicators | Signification
(\')|(\%27) | The single quote and its URL encoded version
(\-\-)|(%20--%20) | The double-dash, comment on a line
(;)|(%20;%20) | Semicolon, request delimiter
(%20UNION%20), (%20SELECT%20), (%20OR%20), (%20INSERT%20) | Structured Query Language keywords

Here, to detect SQL injection attacks in the log files, we parse the access log file line by line and look for SQL keywords or suspicious characters, in order to identify deviations in the behavior of the monitored events and to single out the IP addresses that make SQL injection attempts. It is impossible to carry out such an attack without injecting dangerous characters into the input parameters, since this is the only way to change the structure or the syntax tree of an SQL query at run time. As a result, we obtain the IP addresses that launched the malicious requests, the number of attempts and the attack indicator. After running our MapReduce program, which contains the SQL injection tracking and detection instructions, we get the result of this analysis in a file named part-r-00000, and we can clearly identify the malicious users who attempted to attack the system in question, with the number of attempts and the detection indicator, in order to take the necessary countermeasures (Fig. 3).


Fig. 3. The result of the SQLI attack detection approach
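The MapReduce job itself is not listed in the paper; the following Python sketch only illustrates the idea, under the assumption that each map call receives raw Apache access-log lines and emits an (IP, 1) pair whenever the requested URL contains one of the indicators of Table 1 (the regular expression, field positions and function names are illustrative):

import re
from collections import defaultdict

# Illustrative indicators, following Table 1 (URL-encoded variants included)
SQLI_PATTERN = re.compile(
    r"(%27|'|--|%20--%20|;|%20UNION%20|%20SELECT%20|%20OR%20|%20INSERT%20)",
    re.IGNORECASE)

def map_phase(log_lines):
    # Map: emit (client_ip, 1) for every request whose URL matches an indicator.
    for line in log_lines:
        fields = line.split()          # Apache combined log: the IP is the first field
        if len(fields) > 6 and SQLI_PATTERN.search(fields[6]):
            yield fields[0], 1

def reduce_phase(pairs):
    # Reduce: total the number of suspicious requests per IP address.
    totals = defaultdict(int)
    for ip, count in pairs:
        totals[ip] += count
    return totals

# Usage: totals = reduce_phase(map_phase(open("access.log")))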

DDOS Attack
A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted server, service or network by saturating the target or its surrounding infrastructure with a flood of Internet traffic [10]. DDOS attacks owe their effectiveness to the use of multiple compromised computer systems as sources of attack traffic; concretely, hackers send a large number of requests to a device (host, server, web application, etc.) in order to saturate it and cause a total interruption of service. In this work, we are interested in the detection of DDOS attacks that aim to exhaust the processing capabilities of a target. For example, an attacker can try to reach the limit of the number of concurrent connections that a web server can process. In this case, the attacker constantly sends a large number of HTTP GET or POST requests to the targeted server. A single HTTP request is not expensive to execute on the client side, but it can be expensive for the target server to respond to, as it must often load multiple files and execute database queries to create a web page. Our approach is to scan the access log file to detect users or machines that attempt to send massive numbers of queries for particular resources in a very short time interval, in the hope of making the service unavailable. For this, we have developed a MapReduce program that reports the number of requests sent by users in a time interval of 5 s (Fig. 4).

Fig. 4. The result of the DDOS attack detection approach
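Again for illustration only (the corresponding MapReduce job is not listed in the paper), counting requests per client over 5-second windows could be sketched as follows in Python; the timestamp parsing assumes the standard Apache access-log time format, and the threshold value is an arbitrary example:

from collections import Counter
from datetime import datetime

WINDOW_SECONDS = 5  # size of the observation window used in the approach

def count_requests_per_window(log_lines):
    # Count, for every (client IP, 5-second window), the number of requests.
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        ip = fields[0]
        # An Apache timestamp looks like [25/Oct/2018:10:15:32 +0000]
        ts = datetime.strptime(fields[3].lstrip("["), "%d/%b/%Y:%H:%M:%S")
        window = int(ts.timestamp()) // WINDOW_SECONDS
        counts[(ip, window)] += 1
    return counts

def suspects(counts, threshold=100):
    # Flag clients whose request count in any single window reaches the threshold.
    return {ip for (ip, window), n in counts.items() if n >= threshold}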

The analysis of the log files allowed us to extract some indicators which characterize attacks like SQLI and DDOS, in order to anticipate these threats and to take a certain number of technical and organizational measures to protect system security. These results also present some limits: on the one hand, the difficulty of confirming whether it is a potential


attack or not, which can generate false alarms; on the other hand, the challenge of determining in advance all the dangerous characters and behaviors, which evolve rapidly.

5 Conclusion

In this paper, we present a methodology that aims to exploit log files in the domain of computer security, with the goal of improving anomaly detection and increasing the level of security. This methodology is made of four stages: data collection, data processing, data storage, and data analysis. As a case study, we have collected and saved the events of a web server regarding SQL injection and DDOS attacks, and then organized these data in a common structure. For data analysis and knowledge extraction, we used a parallel and distributed approach based on MapReduce. The obtained results are encouraging, but there are still some limits regarding accurate detection.

References 1. Miranda, M.: S. Big Brother au Big Data. In: Conférence de Big Data, Université Sophia Antipolis (2015) 2. Savitha, K., Vijaya, M.S.: Mining of web server logs in a distributed cluster using Big Data technologies. IJACSA 5(3), 137–142 (2014) 3. Salama, S.E., Marie, M.I., El-Fangary, L.M., Helmy, Y.K.: Web server logs preprocessing for web intrusion detection. Comput. Inf. Sci. 4(4), 123–133 (2011) 4. Saravanan, S., Uma Maheswari, B.: Analyzing large web log files in a Hadoop distributed cluster environment. Int. J. Comput. Technol. Appl. (IJCTA) 5(5), 1677–1681 (2014) 5. Müller, A., Miinz, G., Carle, G.: Collecting router information for error diagnosis and troubleshooting in home networks. In: IEEE 36th Conference on Local Computer Networks (LCN), pp. 764–769. IEEE, October 2011 6. Amar, M.M., Lemoudden, M., El Ouahidi, B.: Log file’s centralization to improve cloud security. In: International Conference on Cloud Computing Technologies and Applications, CloudTech 2016, pp. 178–183 (2016) 7. Moh, M., et al.: Detecting web attacks using multi-stage log analysis. In: IEEE 6th International Conference on Advanced Computing (IACC). IEEE (2016) 8. Halfond, W.G., Viegas, J., Orso, A.: A classification of SQL-injection attacks and countermeasures. In: Proceedings of the IEEE International Symposium on Secure Software Engineering, vol. 1, pp. 13–15. IEEE, March 2006 9. Alwan, Z.S., Younis, M.F.: Detection and prevention of SQL injection attack: a survey (2017) 10. Balakrishnan, H.P., Moses, J.C.: A survey on defense mechanism against DDOS attacks. Int. J. 4(3) (2014)

Toward a New Integrated Approach of Information Security Based on Governance, Risk and Compliance

Mounia Zaydi and Bouchaib Nassereddine

Faculty of Sciences and Techniques, Hassan 1st University, Settat, Morocco
[email protected], [email protected]

Abstract. Nowadays, information system security (ISS) is more than just a technical issue; it has become a business matter. To deal with it, disciplines such as ISS governance (ISS-Gov), ISS risk management (ISS-Risk) and ISS compliance (ISS-Compliance) have emerged; nevertheless, these domains have been addressed separately, which raises a problem of performance and efficiency. Hence the necessity of an integrated ISS approach. This paper proposes a new integrated approach of information security based on Governance, Risk management and Compliance (ISS-GRC).

Keywords: ISS process · Information system security · Risk management · Compliance · Governance

1 Introduction

The Governance, Risk, and Compliance (GRC) management process for Information Security is a necessity for any information system where important information is collected, processed, and used. To this end, many standards for ISS exist [15] (e.g., ITIL, the ISO 27K family, etc.). What is often missing is a structured and integrated approach to Governance, Risk and Compliance (GRC) of Information Security [16].
– Governance is the set of policies, laws, culture, and institutions that define how an organization should be managed (as opposed to the way it is actually managed);
– Risk Management is the set of coordinated activities that direct and control an organization by forecasting and managing events/risks that might have a negative impact on the business;
– Compliance is the act of adhering to regulations as well as corporate policies and procedures [15, 16].
The article is structured as follows. Section 2 presents the previous work on ISS-GRC proposed in the literature. Section 3 is devoted to the authors' proposition. Section 4 discusses ISS-GRC in its current state and how it would be put in place in a target organization. Finally, Sect. 5 presents the conclusion and future work.


2 Related Work

After an extensive literature review, the authors have identified five studies that deal with a GRC integrated approach. The following table (Table 1) reviews these studies.

Table 1. GRC integrated models

Author | Year | Description | Focus
Open Compliance and Ethics Group (OCEG) | 2009 | Presents the OCEG Capability Model GRC360, which consists of nine categories and 29 sub-elements, for each of which sub-practices are listed [1, 2] | Insight into GRC practices and activities
Tapscott | 2006 | Gives four core values for enterprises to achieve the 'trust' expectation, which is their main aim when they take an integrated approach to GRC [3] | Four core values approach
PricewaterhouseCoopers | 2004 | Develops a model consisting of four steps, as well as the organizational entities, activities and relationships involved within these steps [4] | An Operational Model for GRC
Vicente and da Silva | 2011 | Presents the concepts and the key functions of GRC by using the OCEG Capability Model (2009) [5] | Conceptual Model for Integrated Governance, Risk and Compliance
Mihkel Vunk and Nicolas Mayer | 2017 | Present a framework for assessing organizational IT governance, risk and compliance [6] | Adoption of IT-GRC in an enterprise, providing a structure to manage the IT and business together

None of the five models elaborates explicitly on ISS-GRC. Their applicability to GRC for information security can only be guessed at; the authors conclude that a scientific model for integrated ISS-GRC has yet to be created.

3 Establishing the New Integrated ISS-GRC Approach

3.1 Process Model Proposed for the ISS-Risk Brick
In this section, the authors opt for the 4D-ISS model proposed in their previous work [7], which is broken down into four major phases, namely: (D1) Define the risks and their different components; (D2) Direct them in terms of their priorities and criticality, and define the functional requirements to bring relevant treatment; (D3) Deploy the countermeasures; and (D4) Decide on the risk-management strategy.
3.2 Process Model Proposed for the ISS-Compliance Brick
To deal with the heterogeneity of ISS laws and the legislative differences between countries, the authors opt for a generic process for the ISS-Compliance brick; this


aims at adaptability to different contexts and bodies, regardless of the legislation in force. To do this, the authors use the process model of Rath and Sponholz [8], which consists of: (1) the identification of regulations, laws, contracts, IT operations obligations and internal policies, an analysis that can be realized by internal and external audits, evaluations and security checks; (2) the results expose the shortcomings of management; (3) at this level, these shortcomings must be eliminated by improving or creating new countermeasures; (4) all the actions taken in the first three phases should be documented, and relevant information should be reported to the relevant stakeholders.
3.3 Process Model Proposed for the ISS-Gov Brick
ISS-Gov is an integral part of IT governance [9]; as the project is aligned with continuous improvement, ISS-Gov should be aligned with ISO/IEC 38500:2015 [9]. In this sense, the authors propose an extension of the ISO 38500 standard that will be adapted and dedicated to the ISS-Gov process. The proposed model inherits the three basic phases of ISO 38500, namely "Direct" (guidelines from the point of view of business strategies and risk management), "Monitor" (governance activities with measurable indicators) and "Evaluate" (results). To include the ISS aspect, the authors take into consideration a fourth phase, "Oversee and report", to audit governance processes and bring information back to the relevant stakeholders involved, as proposed by Ohki et al. [8].
3.4 ISS-GRC Merging
ISS-Gov, ISS-Risk and ISS-Compliance are interdependent. Risk is about understanding the probability of uncertainty [10, 14]. Compliance is about focusing on the policies and regulations that govern the target organization. Governance is essential for stakeholders to put in place processes and practices throughout the compliance process. A big challenge is to understand how much ISS-GRC integration can take place: it can highlight opportunities and avoid waste. ISS-Risk prioritizes controls and compliance activities. Threats and vulnerabilities are fundamental elements of risk management [13]. To cope with the legacy systems that put these processes in place in silos, organizations must begin to integrate them in a single and holistic way. In this part, the authors explain the levels of fusion of each element until the construction of the final ISS-GRC model.
3.5 Merging the ISS-Risk Process with the ISS-Compliance Process
ISS-Risk analysis and its related processes are an integral part of compliance with different standards and laws. For this reason, ISS-Risk analysis is closely linked to ISS-Compliance, hence the interest in starting by merging these two components.


3.6 Merging the ISS-Risk and ISS-Compliance Processes with the ISS-Gov Process
IT-Gov in general, and ISS-Gov in the case of ISS-GRC, represents a higher control level than the ISS-Risk and ISS-Compliance processes, if the organization is considered as a cybernetic system [11, 12]. This defines the relationship between ISS-Gov, ISS-Risk and ISS-Compliance.
3.7 Establishing the ISS-GRC Model
Based on the selected components and the merging of their different processes, the authors obtain the following model (Fig. 1):

Fig. 1. ISS-GRC unified model construction

4 Discussion

The choice of the bricks is based on models derived from best practices in the areas of governance, risk management and compliance. The applicability of this model will be proved once the model is developed into a framework. The existing GRC models show a complete lack of inclusion of ISS processes, whereas the authors have developed this model for ISS-GRC and attempted to facilitate the convergence of IT-GRC with ISS-GRC by selecting 4D-ISS, IT compliance and an extended version of ISO 38500. Moreover, the role of ISS-Gov in existing models was vast and inconsistent, while this model underlines its connection with the other components. Finally, it should be emphasized that the ISS-GRC model is based on existing standards and best practices.

5 Conclusion and Future Work

The authors propose an integrated model for the ISS-GRC, based on existing best practices and in accordance with the ISO standards. The overall part of the model was derived from


the ISO/IEC 38500:2015 model for ISS-Gov. Future work will involve experimenting with the implementation of the ISS-GRC process model in an organization and comparing its effectiveness with ISS-Gov, ISS-Risk and ISS-Compliance in silos.

References 1. Mitchell, S.L.: GRC360: A framework to help organisations drive principled performance. Int. J. Discl. Governance 4(4), 279–296 (2007) 2. Racz, N., Seufert, A., Weippl, E.: A process model for integrated IT governance, risk, and compliance management. In: Proceedings of the Ninth Baltic Conference on Databases and Information Systems (DB&IS 2010), p. 155 (2010) 3. Tapscott, D.: Trust and Competitive Advantage: An Integrated Approach to Governance, Risk & Compliance (2006) 4. PricewaterhouseCoopers: PricewaterhouseCoopers Integrity-Driven Performance PricewaterhouseCoopers International Limited, Germany (2004) 5. Vicente, P., da Silva, M.M.: A conceptual model for integrated governance, risk and compliance. In: Proceedings of the 23rd International Conference on Advanced Information Systems Engineering, p. 199. Springer, Heidelberg (2011) 6. Vunk, M., Mayer, N., Matulevičius, R.: A framework for assessing organisational IT governance, risk and compliance. In: International Conference on Software Process Improvement and Capability Determination, pp. 337–350 (2017) 7. Zaydi, M., Nassereddine, B.: A new comprehensive information system security governance framework a proposition of an information system security risk management unified process (4DISS), pp. 1–16 (2018) 8. Ohki, E., Harada, Y., Kawaguchi, S., Shiozaki, T., Kagaua, T.: Information security governance framework. In: Proceedings of the First ACM workshop on Information Security Governance, pp. 1–6 (2009) 9. ISO/IEC 38500:2015: Information technology - governance of IT for the organization. International Organization for Standardization, Geneva (2015) 10. Bloch, L., Wolfhugel, C.: Sécurité informatique: Principes et méthodes à l’usage des DSI, RSSI et administrateurs. Editions Eyrolles, 15 May 2013 11. Lewis, E., Millar, G.: The viable governance model – a theoretical model for the governance of IT. In: Proceedings of the 42nd Hawaii International Conference on System Sciences (2009) 12. Racz, N., Weippl, E., Seufert, A.: A process model for integrated IT governance, risk, and compliance management (2010) 13. Humbert, J.P., Mayer, N.: La gestion des risques pour les systèmes d ’ information. 24, 1–12 (2006) 14. ISO 27005 LOGICAL C. Information technology–Security techniques–Information security management systems–Requirements (2013) 15. Asnar, Y., Massacci, F.: A method for security governance, risk, and compliance (GRC): a goalprocess approach. In: LNCS (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6858, pp. 152–184 (2011) 16. Rasmussen, M., Kark, K., Penn, J., McClean, C., Bernhardt, S.: Trends 2007: governance risk and compliance: organizations are motivated to formalize a federated GRC process (2007)

A Novel Steganography Algorithm Based on Alpha Blending Technique Using Discrete Wavelet Transform (ABT-DWT)

Ayidh Alharbi and Tahar M. Kechadi

University College Dublin, Belfield, Dublin 4, Ireland
[email protected], [email protected]
https://www.insight-centre.org/

Abstract. Steganography is the process of secret communication; it is the art of dissimulating information into digital media such as images. This discipline is still a thriving domain in the information hiding and security field. In this paper, we propose a novel image steganography algorithm based on the alpha blending technique through the discrete Haar wavelet transform (ABT-DWT). The hidden data is located in the most robust area under the frequency transform of the image. Intensive experiments are carried out in order to validate its efficiency, by examining it with different kinds of images and four increasing cases of hiding capacity. The results indicate that the proposed algorithm reaches a high level of satisfaction of all steganographic parameters compared with similar previous results in the same domain. The findings enhance image steganography techniques and their performance.

Keywords: Image · Steganography · PSNR · MSE · Wavelet transform · Alpha blending technique

1 Introduction

The rapid development of digital technology has led to the evolution of information security techniques; among these, cryptography, watermarking and steganography are the most important methods [1]. Steganography is the science of embedding secret information into a digital file in an imperceptible way, so that no one suspects the existence of the hidden information. In ancient Greek, the word steganography means "covered writing". In modern steganography, the confidential information is embedded into digital multimedia files and also at the network packet level. The digital multimedia files may be text, audio, images or video. Images are the most widely used because an image contains more redundant information and the human visual system cannot detect the variation in luminance of colour vectors at the higher frequency ends of the visual spectrum. Image steganography consists of hiding data into a cover image; this generates a stego image which is sent to the other party through a communication channel. The objective of steganography is that the opponent who may have access to


the stego image through the channel does not know that this stego image embeds confidential information, and at the receiving end, the recipient should be able to correctly extract the confidential information with or without the stego key. The important requirements of steganography are invisibility (or imperceptibility), hiding capacity, robustness and computational complexity. Invisibility: the strength of steganography lies in its ability to be unnoticed by the human eye. The payload capacity refers to the maximum amount of secret information that can be embedded into the cover image without generating perceptible artefacts. The steganographic algorithm is robust to attacks when the stego image remains intact if it undergoes transformations such as scaling, filtering, cropping and addition of noise. The computational complexity measures how expensive the embedding and extraction of hidden messages are. Image steganography includes several techniques for hiding confidential information within the cover image; the important approaches are spatial-domain and transform-domain based steganography techniques. In the spatial-domain approach, the confidential information is directly embedded into the pixels of the cover image, as in the Least Significant Bit (LSB) technique [2], where the LSB of the pixels is substituted by a bit of the secret information. In the LSB encoding technique, the stego image is sensitive and not robust to operations such as blurring, cropping, lossy compression and additive noise. There are also the interpolation-based techniques [3, 4] and the pixel value differencing (PVD) techniques [5, 6]. In [7], the authors proposed an algorithm based on the PVD technique, which was applied after dividing the cover image into blocks of two consecutive pixels. In [8], Hong et al. proposed a reversible data hiding method based on image interpolation, where data is embedded into interpolation errors using the histogram shifting technique. A semi-reversible data hiding method that utilises interpolation and the least significant bit substitution technique is proposed in [9]. In [10], a high-payload image steganographic scheme based on an extended interpolating method is proposed. These two techniques are characterised by a high payload: the interpolation error and the difference between consecutive pixels provide a large enough space to hide data with respect to the image complexity. In the transform domain, data is dissimulated in the coefficients of the frequency domain; the common transforms utilised for hiding secret information are the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The cover image is first transformed using one of these transformations; then the secret information is embedded into the appropriate coefficients. In [11], the authors used integer mapping to implement the DCT in order to minimise the error produced when rounding the coefficients. In [12], the proposed steganographic scheme hides data into medical images; it preserves the dependencies between the inter-block neighbouring DCT coefficients in order to minimise the produced fluctuation on which steganalysis attacks are based. The DCT transform is usually used in steganographic algorithms for JPEG images. In [13], the authors utilise the PVD on the discrete wavelet transform coefficients. In [14], the dissimulation starts with the (HH) block of the Integer Wavelet Transform


(IWT); this block contains the highest frequencies, to which the human visual system is the least sensitive. Data is hidden in the LSBs of the IWT coefficients according to their magnitude (absolute value): the greater the magnitude, the more bits of data are stored in the coefficient. In [15], a steganographic scheme based on the Haar DWT is proposed, where data is hidden in the LSB of the DWT coefficients; the hiding algorithm is generalised in [16] using K-LSB and the optimal pixel adjustment (OPA) procedure. The remainder of the paper is organised as follows: Sect. 2 presents the decomposition and reconstruction algorithms of the Haar discrete wavelet transform. In Sect. 3, the proposed algorithm is explained. In Sect. 4, we test the proposed work and compare it to existing schemes. Section 5 concludes the paper.

2 Haar Discrete Wavelet Transform

Since the proposed work is based on the Haar discrete wavelet transform, we present in this section the decomposition and reconstruction algorithms. This transform is a multi-resolution analysis (MRA): the spatial domain is passed through low-pass and high-pass filters to extract the low and high frequencies, respectively. Applying one level of the 2D wavelet transform decomposes the image C into four non-overlapping sub-bands, namely cA, cH, cV and cD, as shown in Fig. 1.

Fig. 1. Two level Haar Discrete Wavelet Transform of the image Barbara

The sub-band cA contains the low-pass coefficients and presents a smooth approximation of the image. The other three sub-bands hold respectively the horizontal cH, vertical cV and diagonal cD coefficients. The decomposition algorithm of the Haar Discrete Wavelet Transform (HDWT) is given by the following equations:

cA_{i,j} = (C_{2i−1,2j−1} + C_{2i,2j−1} + C_{2i−1,2j} + C_{2i,2j}) / 2
cH_{i,j} = (C_{2i−1,2j−1} + C_{2i,2j−1} − C_{2i−1,2j} − C_{2i,2j}) / 2
cV_{i,j} = (C_{2i−1,2j−1} − C_{2i,2j−1} + C_{2i−1,2j} − C_{2i,2j}) / 2
cD_{i,j} = (C_{2i−1,2j−1} − C_{2i,2j−1} − C_{2i−1,2j} + C_{2i,2j}) / 2

The reconstruction algorithm, or inverse Haar Discrete Wavelet Transform, is as follows:

C_{2i−1,2j−1} = (cA_{i,j} + cH_{i,j} + cV_{i,j} + cD_{i,j}) / 2
C_{2i,2j−1} = (cA_{i,j} + cH_{i,j} − cV_{i,j} − cD_{i,j}) / 2
C_{2i−1,2j} = (cA_{i,j} − cH_{i,j} + cV_{i,j} − cD_{i,j}) / 2
C_{2i,2j} = (cA_{i,j} − cH_{i,j} − cV_{i,j} + cD_{i,j}) / 2    (1)
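As an illustration only (this is our own sketch, not code from the paper), the following Python/NumPy function pair implements one level of the decomposition and reconstruction equations above, using 0-based indexing and assuming even image dimensions.

```python
import numpy as np

def haar_dwt2(C):
    """One decomposition level, following the equations above."""
    C = C.astype(np.float64)
    a = C[0::2, 0::2]   # C_{2i-1,2j-1} in the paper's 1-based notation
    b = C[1::2, 0::2]   # C_{2i,2j-1}
    c = C[0::2, 1::2]   # C_{2i-1,2j}
    d = C[1::2, 1::2]   # C_{2i,2j}
    return ((a + b + c + d) / 2,   # cA
            (a + b - c - d) / 2,   # cH
            (a - b + c - d) / 2,   # cV
            (a - b - c + d) / 2)   # cD

def haar_idwt2(cA, cH, cV, cD):
    """Inverse transform, following the reconstruction equations above."""
    C = np.zeros((2 * cA.shape[0], 2 * cA.shape[1]))
    C[0::2, 0::2] = (cA + cH + cV + cD) / 2
    C[1::2, 0::2] = (cA + cH - cV - cD) / 2
    C[0::2, 1::2] = (cA - cH + cV - cD) / 2
    C[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return C

# Round-trip check on a small even-sized block
img = np.arange(64, dtype=np.float64).reshape(8, 8)
assert np.allclose(haar_idwt2(*haar_dwt2(img)), img)
```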

3 Proposed Algorithm

Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Furthermore, those schemes are being designed to address the requirements of very different kinds of applications, e.g. internet, colour facsimile, printing, scanning, digital photography, remote sensing, mobile applications, medical imagery, digital libraries, military applications and e-commerce [17]. In our proposed image steganography algorithm, we exploit the alpha blending technique. The blending factor, or percentage of colours from the first source image used in the blended image, is called the "alpha". Alpha values lie in the range between 0 and 1 and generate blended values that approximate the original image pixels [18]. There are two general steps to generate the stego image using the alpha blending technique; let C and S be the cover and stego images respectively, and let m be the secret message.

(1) Secret text embedding, producing the stego image:

S = α ∗ C + (1 − α) ∗ m    (2)

(2) Secret text extraction from the stego image:

m = S′ − α ∗ S    (3)

where S′ is the selected sub-band of the stego image S. The proposed algorithm is based on an innovative scheme of data hiding into the wavelet subdivisions of the cover image. It strives to maximise the hiding capacity. The key contribution is applied to encrypted data (AES-128), where

the maximum size of the key is 16 digits. Furthermore, this technique is specifically based on the Haar discrete wavelet transform: it applies the Haar wavelet transformation on 2 levels, then exploits the second-level diagonal coefficients [cD] using the alpha blending technique to embed the hidden data, saving the result in a new coefficient which later becomes part of the inverse wavelet transform (IDWT) that produces the stego image. At the initial stage, we examined the technique with different blending values. The best result was recorded for a blending value of 0.9, whereas most of the previous related works were done with other alpha blending values. In the following, we present the steps of the embedding and extraction stages of our proposed algorithm.

3.1 Steps of the ABT-DWT Embedding Algorithm

Step 1: Get the secret text m to be embedded and specify its length.
Step 2: Get the encryption key K for enhanced security.
Step 3: Apply the cipher process of the AES-128 encryption algorithm to the combination of Step 1 and Step 2 (m, K).
Step 4: Maintain the cipher data length and its double-value representation.
Step 5: Apply the Haar DWT on two levels; the block coefficients of the first and second level are named respectively [cA1, cH1, cV1, cD1] and [cA11, cH11, cV11, cD11].
Step 6: Assume the alpha blending value α = 0.9.
Step 7: Assign the second diagonal coefficients cD11 to a new image coefficient OC with the help of Steps 9–14.
Step 8: Open a temporary coefficient (tmpcfd) as a copy of the original coefficient cD11 for swapping purposes.
Step 9: Let t be the secret text index representing the hidden text values prepared in cD11 with the help of its approximation cA11.
Step 10: Let r and q represent the rows and columns of the coefficients cD11 respectively; they bound the index t in a new loop whose parameters are kr and kc respectively, producing a new text index t1.
Step 11: Keep t1 as a copy of t and compute the new coefficient OC based on the following formula: OC_{kr,kc} = α ∗ OC_{kr,kc} + (1 − α) ∗ t1.
Step 12: Keep collecting the index t1 until the last hidden value.
Step 13: Let OC be the new proposed diagonal coefficient with the hidden text.
Step 14: Apply the Inverse Discrete Wavelet Transform (IDWT), including the new proposed coefficient OC instead of the old one, and produce the stego image.
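To make the flow of Steps 5–14 concrete, the sketch below shows, under our own assumptions, how the alpha blending could be applied to the second-level diagonal coefficients; it relies on the PyWavelets package for the two-level Haar decomposition and omits the AES-128 encryption step, so the names and traversal order are illustrative rather than the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

ALPHA = 0.9  # blending value reported above to give the best results

def abt_dwt_embed(cover, secret_values):
    """Hide a 1-D array of numeric values (assumed already AES-128 encrypted)
    in the second-level diagonal coefficients cD11 of the cover image."""
    cA2, (cH2, cV2, cD2), level1 = pywt.wavedec2(cover.astype(np.float64),
                                                 'haar', level=2)
    OC = cD2.copy()          # new coefficient that will carry the hidden data
    flat = OC.ravel()        # row-by-row traversal (our assumption)
    if len(secret_values) > flat.size:
        raise ValueError("payload larger than the cD11 sub-band")
    for k, t1 in enumerate(secret_values):            # Step 11
        flat[k] = ALPHA * flat[k] + (1 - ALPHA) * t1
    # Step 14: inverse DWT with the modified diagonal coefficients
    return pywt.waverec2([cA2, (cH2, cV2, OC), level1], 'haar')
```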

3.2 Steps of the ABT-DWT Extraction Algorithm

Step 1: Get and read the stego image.
Step 2: Apply two levels of the Haar discrete wavelet transform to the stego image.
Step 3: Let cD′11 be the second diagonal decomposed coefficient, which is derived from the stego image matrix.

Step 4: Using the same parameters as for embedding, let t represent the hidden text values, and let r and q represent the rows and columns of the coefficients OC′; they are prepared in initial loops for extraction into a new text index t′.
Step 5: With the help of a temporary coefficient T, apply the inverse alpha blending formula with α = 0.9 to gather the values of t′, based on the following formula: t′ = (OC′_{kr,kc} − α ∗ T_{kr,kc}) / (1 − α).

Step 6: Save all collected values of t′ in a matrix.
Step 7: Apply the decryption process to the combination of the new matrix t′ and K to get the secret values that represent the hidden text.
Step 8: Read the hidden text by converting the decrypted values back to characters.
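A matching sketch of the extraction stage (Steps 2–6) follows, again under our own assumptions: the temporary coefficient T is taken to be the copy of the original cD11 saved at embedding time (tmpcfd), and the rounding of the stego image to integer pixel values is ignored.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

ALPHA = 0.9

def abt_dwt_extract(stego, n_values, original_cD11):
    """Recover n_values hidden numbers from the stego image; `original_cD11`
    plays the role of the temporary coefficient T in Step 5."""
    _, (_, _, OCp), _ = pywt.wavedec2(stego.astype(np.float64), 'haar', level=2)
    flat, T = OCp.ravel(), original_cD11.ravel()
    t_prime = (flat[:n_values] - ALPHA * T[:n_values]) / (1 - ALPHA)
    return t_prime   # still encrypted; AES-128 decryption follows (Step 7)
```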

4 Experiments and Results

To verify the performance of the proposed method, experiments on a variety of image types were carried out. These images were chosen from two benchmark data sets and include the most common images such as Baboon, Lena, Pepper, etc. Each type of image is tested with 4 hiding-capacity cases (C1–C4) of up to 10000 alphabets, on 6 different images at three dimensions. We started by embedding 100 alphabets, then 1000, 5000, and 10000 alphabets, the last being the maximum capacity case in our experiments. A sample of the images used in our experiments is shown in Fig. 2: in (a) the dimensions of the images are 256 × 256, while the dimensions of the images in (b) and (c) are 512 × 512 and 1024 × 1024 respectively.

Fig. 2. Sample of benchmark images in our experiments

A sample of our intensive experiments and results on grayscale and colour stego images, from the imperceptibility perspective, is shown in Fig. 3. Each coloured line represents one hidden-capacity case. Sub-figures (a, b, c, d) correspond to grayscale images, while (e, f, g, h) correspond to colour images. Sub-figures (a, e) show the imperceptibility behaviour of the smallest images (256 × 256). Sub-figures (b, f) and (c, g) show the results and analysis of the same behaviour for bigger images (512 × 512) and (1024 × 1024) respectively. Sub-figures (d, h) show the approximate values of imperceptibility in all cases; they indicate a highly imperceptible result of nearly 83 decibels (dB). They are shown as follows:

Fig. 3. Imperceptibility behaviour of stego images for the four payload capacities (C1–C4)

According to the results in Fig. 3, the imperceptibility range for grayscale images is between 54 and 77 decibels (dB), which is lower than the colour image range of between 60 and 83 decibels (dB). In all sub-figures, the blue line is located at the top as it corresponds to the minimum hiding capacity, while the green line at the bottom corresponds to the maximum hidden capacity. Moreover, there is some similarity in most of the imperceptibility behaviour within each image size category. The largest range of imperceptibility values, around 15 decibels (dB), is observed in Sub-figures (b) and (f) for the 512-dimension images. In addition, the narrowest ranges of imperceptibility values, between 8 and 9 decibels, are observed for the largest images (1024 dimensions) in Sub-figures (c) and (g). Images with small dimensions are shown in Sub-figure (a) with an 11-decibel (dB) range and Sub-figure (e) with a 9-decibel (dB) range. Out of all experiments, there are 3 unexpected results, for images 3, 4, and 5 in cases 2 and 3 of the (256 × 256) colour images; this is due either to the nature of the images or to their outcome measures during or after the blending process. However, to assess the efficiency of our proposed algorithm, we present a fair comparison between it and previous similar techniques using the wavelet transform. As a result, the proposed algorithm shows a more imperceptible value than [14,19,20]. It shows a significant difference of more than 6 dB in two tested cases of hidden capacity, 12.8 Kb and 6.3 Kb respectively, shown in the following figure (Fig. 4).

Fig. 4. Comparing our findings with similar techniques

The following Table presents a sample of two common metrics in steganography, Peak Signal to Noise Ratio (PSNR) and Entropy; they are reported for the first and second images of each dimension to emphasise the relationship between these metrics, and to assess the robustness and imperceptibility of our method. A low entropy value, near 0, is a sign of high robustness, while a high PSNR

indicates high imperceptibility. In our proposed work, bigger images increase the PSNR value by more than 5 decibels (dB). In addition, the entropy values are less than 0.5. The results are shown in Table 1.

Table 1. PSNR and Entropy for case 2 (C2)

Image type          256 × 256          512 × 512          1024 × 1024
                    PSNR    Entropy    PSNR    Entropy    PSNR    Entropy
Grayscale  Image 1  63.50   0.395      69.26   0.142      74.90   0.052
           Image 2  63.07   0.419      63.21   0.151      74.51   0.059
Colour     Image 1  68.30   0.158      74.15   0.057      78.69   0.021
           Image 2  69.21   0.146      68.54   0.057      77.96   0.022
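As an illustration of how the two metrics in Table 1 can be obtained, the sketch below computes the PSNR between the cover and stego images and, as our own reading of the Entropy column, the Shannon entropy of their absolute difference; the paper does not state exactly which entropy definition was used.

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between cover and stego images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def difference_entropy(cover, stego):
    """Shannon entropy (bits) of the absolute cover/stego difference;
    a value near 0 means the embedding barely changed the image."""
    diff = np.abs(cover.astype(np.int64) - stego.astype(np.int64)).ravel()
    counts = np.bincount(diff)
    p = counts[counts > 0] / diff.size
    return float(-np.sum(p * np.log2(p)))
```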

5 Conclusion

To conclude, this contribution was achieved using MATLAB and the Image Processing and Wavelet toolboxes. It proposes a new image steganography technique that reaches a new level for all steganographic parameters. It is based on the discrete Haar wavelet transform: the hidden data exploit the advantage of the alpha blending technique and are embedded in the second diagonal coefficients after the decomposition and subdivision process. The technique shows high robustness, as the hidden data are located in the most robust area of the image, and it uses ciphered data in the embedding process to reinforce its security. When the proposed method is compared with recent methods in the same domain, this technique (ABT-DWT) shows better results. As future work, we will extend our study to find out its detectability under suitable settings of a machine learning classifier.

References

1. Sathisha, N., Venugopal, K.R., Babu, K.S., Raja, K.B., Patnaik, L.M.: Non embedding steganography using average technique in transform domain. In: IEEE 9th International Colloquium on Signal Processing and its Applications (CSPA) (2013). https://doi.org/10.1109/CSPA.2013.6530003
2. Chan, C.K., Cheng, L.M.: Hiding data in images by simple LSB substitution. Pattern Recognit. 37, 469–474 (2004)
3. Jung, K.H., Yoo, K.Y.: Data hiding method using interpolation. Comput. Stand. Interfaces 31, 465–479 (2009)
4. Gholampour, I., Khosravi, K.: Interpolation of steganographic schemes. Signal Process. 98, 23–36 (2014)
5. Yang, C.H., Weng, C.Y., Tso, H.K., Wang, S.J.: A data hiding scheme using the varieties of pixel-value differencing in multimedia images. J. Syst. Softw. 84, 669–678 (2011)

6. Hong, W., Chen, T.S., Luo, C.W.: Data embedding using pixel value differencing and diamond encoding with multiple-base notational system. J. Syst. Softw. 85, 1166–1175 (2012)
7. Malik, A., Sikka, G., Verma, H.K.: A modified pixel-value differencing image steganographic scheme with least significant bit substitution method. I.J. Image Graph. Sig. Process. 4, 68–74 (2015)
8. Hong, W., Chen, T.S.: Reversible data embedding for high quality images using interpolation and reference pixel distribution mechanism. J. Vis. Commun. Image R. 22, 131–140 (2011)
9. Jung, K.H., Young, K.: Steganographic method based on interpolation and LSB substitution of digital images. Multimedia Tools Appl. 74, 2143–2155 (2015)
10. Hu, J., Li, T.: Reversible steganography using extended image interpolation technique. Comput. Electr. Eng. 46, 447–455 (2015)
11. Lin, Y.K.: A data hiding scheme based upon DCT coefficient modification. Comput. Stand. Interfaces 36, 855–862 (2014)
12. Liao, X., Yin, J., Guo, S., Li, X., Sanagaiah, A.K.: Medical JPEG image steganography based on preserving inter-block dependencies. Comput. Electr. Eng. (2017). https://doi.org/10.1016/j.compeleceng.2017.08.020
13. Gulve, A.K., Joshi, M.S.: An image steganography method hiding secret data into coefficients of integer wavelet transform using pixel value differencing approach. Math. Probl. Eng. 11 (2014). https://doi.org/10.1155/2015/684824
14. Miri, A., Faez, K.: An image steganography method based on integer wavelet transform. Multimedia Tools Appl. 1–12 (2017). https://doi.org/10.1007/s11042-017-4935-z
15. Taouil, Y., Ameur, E.B., Belghiti, M.T.: New image steganography method based on Haar discrete wavelet transform. In: EMENA-TSSL Advances in Intelligence on Computer Sciences, vol. 520, pp. 287–297. Springer (2016)
16. Taouil, Y., Ameur, E.B., Benhfid, A., Harba, R., Jennane, R.: A data hiding scheme based on the Haar discrete wavelet transform and the K-LSB. Int. J. Imaging Robot. 17, 41–53 (2017)
17. Prabakaran, G., Bhavani, R.: A modified secure digital image steganography based on discrete wavelet transform. In: IEEE International Conference on Computing, Electronics and Electrical Technologies (ICCEET) (2012). https://doi.org/10.1109/ICCEET.2012.6203811
18. Dey, N., Roy, A.B., Dey, S.: A novel approach of color image hiding using RGB color planes and DWT. Int. J. Comput. Appl. 36, 19–24 (2011)
19. Al-Dmour, H., Al-Ani, A.: A steganography embedding method based on edge identification and XOR coding. Expert Syst. Appl. 46, 293–306 (2016)
20. Kieu, T.D., Chang, C.C.: A steganographic scheme by fully exploiting modification directions. Expert Syst. Appl. 38, 10648–10657 (2011)

A Comparison of American and Moroccan Governmental Security Approaches

Rabii Anass¹, Assoul Saliha², Ouazzani Touhami Khadija², and Roudiès Ounsa¹

¹ Univ. Mohammed V-Rabat, EMI, Siweb Team, E3S, Rabat, Morocco
² Univ. Mohammed V-Rabat, ENSMR, Siweb Team, E3S, Rabat, Morocco

Abstract. In a time where security is paramount, maturity models enable institutions to evaluate their security level and elucidate paths for improvement. Security maturity evaluation is more critical and challenging for governments, which must ensure the safety of critical infrastructure and of their citizens. In this paper, we analyze and compare two governmental security approaches, that of Morocco and that of the USA. We aim to outline the common features and best practices and showcase them in their respective contexts. We started by analyzing legacy maturity models to extract comparison criteria relative to their main functions, key components, and implementation mechanics. We added a context category to reflect each approach's legal status and the country's resources. We then evaluate the two approaches and showcase the differences based on means of assessment and guidance, and we analyze how governmental structure influences the approaches' mandatory status. We then discuss our findings, explaining why each approach is more appropriate considering its context.

Keywords: E-government · Maturity model · Information security · Cyber security · Morocco · USA · Community Cyber Security Maturity Model (CCSMM) · Directive Nationale en Sécurité des Systèmes d'Information (DNSSI) · ISO 27002

1 Introduction

Ever since the creation of the original capability maturity model in 1989 for software engineering and the massive success that ensued, there have been many variations and enhancements in the field of cyber security [1]. Many studies discuss their similarities and differences, aiming to determine whether these maturity models are viable and hoping to assert which one is most adequate [2, 3]. Security maturity evaluation is indeed more critical and challenging for all governments willing to ensure the safety of critical infrastructure and of their citizens. Furthermore, countries have dealt with these security issues differently depending on their resources and culture, and have proposed national security maturity evaluation approaches based on in-depth studies. In this paper, we compare two security approaches belonging to two different countries: the American Community Cyber Security Maturity Model (CCSMM), and the Moroccan

“Directive Nationale en Sécurité des Systèmes d’Information” (DNSSI). We also discuss their visions and gaps. Our objectives are twofold: to understand the gaps between the countries and how to bridge them, and to outline the common features that may be considered best practices. The choice of countries is not arbitrary, as we have elected to compare countries representative of their economic counterparts: Morocco as an emerging country with an increasing rate of technology usage in various industries, and the USA, a world power and leader in development. First, we analyze legacy maturity models through a literature review we are carrying out [4]: SSE-CMM, C2M2, ISEM and many others. We then extracted a set of criteria relative to their main functions, key components, and implementation mechanisms. We also added a national context category. Then we proceed with the discussion and the comparison of the two approaches. The remainder of this paper is structured as follows. The Moroccan and American security approaches are described in Sect. 2. Section 3 presents our set of criteria and scorecard and the evaluation of the aforementioned approaches. The discussion follows in Sect. 4. Finally, a conclusion and some perspectives are shared.

2 Survey of Moroccan and American Governmental Security Approaches

In this section, we present the aforementioned evaluation approaches. We showcase the context which led to their creation and adoption, their intent and, finally, an overview of their content. The U.S.A. has opted for the Community Cyber Security Maturity Model (CCSMM), while Morocco has the DNSSI.

2.1 The Community Cyber Security Maturity Model

The CCSMM is the result of multiple awareness exercises conducted in multiple states by the Center for Infrastructure Assurance and Security (CIAS) at the University of Texas at San Antonio (UTSA) [5]. The goal was to simulate DoS attacks on critical infrastructure and assess readiness. They concluded that a program was needed for communities to assess their security and to offer assistance to guide its implementation. As a result of these exercises, the CIAS created the CCSMM in 2006, aiming at its peak for nationwide spread. The CCSMM was meant to provide 3 main features:
• A measurement tool of the current level of security in communities,
• An improvement plan based on gap analysis,
• A common national repository for all communities to serve as a reference.
The model treats issues in the areas most vulnerable to cyber-attacks and most in need of refinement. These four areas, called Dimensions, are: Awareness, Information Sharing, Policies and Planning. The awareness dimension handles the communities' viewpoint of impending threats, their impacts and their readiness. Information sharing addresses the criticality of information flow. Policies are guidelines that structures have to analyze and implement over time, detailing all operations to ensure safety and incident management. The planning dimension addresses recovery plans,

emergencies, and continuity, to identify the readiness of the community for any event with major impact. The model's measurement tool is built on five maturity levels, ranging from Initial to Vanguard, for each of the aforementioned dimensions [6]. Every community starts at the Initial level and has to implement a set of characteristics defined by the model to reach the next one. To improve, communities have to implement these characteristics across four general categories by determining:
• Metrics to watch for in assessments and how to conduct them,
• Technologies that should be implemented,
• Training to achieve the necessary skillset for stakeholders,
• Documented processes to follow.

The metrics are context dependent; they are usually gathered from standards and complemented if needed. Technology implementation depends on the storage medium and what incidents it might face. With improvements, technology is later used for incident detection and eventually becomes the reference for possible enhancements. Implemented processes should describe the precise operationalization and assignment of duties, as well as incident handling and continuity plans, usually based on publications or standards such as the recommended NIST “Framework for Improving Critical Infrastructure Cyber Security”. Assessments, on the other hand, are periodic checks of a technical or administrative nature. Finally, training ensures that everyone has the skills and knowledge needed to address issues and execute processes. Targeted training is favored and should depend on the technology and practices used.

2.2 The Moroccan National Guideline for Information System Security

The DNSSI is one of many actions taken by the Moroccan government's General Committee for Information System Security in December 2013 to enhance the security and maturity level of all government branches, public organizations and Vital Organisms [7]. Its implementation is mandatory for these organizations. This guideline is based on a study conducted on a sample group of ministries and Vital Organisms in July 2013, and on the Moroccan standard version of ISO/CEI 27002, whose structure it shares. Each section contains weighted measures that must be implemented, with priority levels ranging from the lowest, C, to A. Classes represent the magnitude of damage to an information system in case of an attempt on the perenniality of the organization's missions and assets, ranging from catastrophic to limited. Each DNSSI section contains rules to be implemented, depending on the class, to achieve the objectives that we address next. Organization is at the heart of these processes. Top management should fully back up and supervise the implementation of the measures, responsible figures must be designated for each process, and any third party involved must be contractually bound to comply with the defined rules. The DNSSI requires that all assets and information be inventoried and receive the adequate level of protection. The staff interacting with assets and executing processes is as crucial as the information itself, making recruitment and termination critical. Security aspects should be embedded in the planning, configuration, usage and maintenance of every element within the perimeter. Moreover, many sections address technical security aspects such

as measures for data treatment and access. Also, every organization should be prepared by having incident management and continuity plans. Continuity plans should be tested regularly, and all stakeholders should undergo training to be prepared. Finally, the last DNSSI section addresses compliance with legal requirements, intellectual property and the protection of personal data. All requirements must be audited and reported at least every two years and improved based on the results. The planning, implementation and tracking of the guideline's requirements have to be included in each organization's action plan. Moreover, for the improvement of this guideline, a Computer Emergency Response Team (MaCert) updates the security measures that have to be implemented as new vulnerabilities are discovered, as was the case in May 2017 [8]. Multiple seminars are also organized to spread awareness. The guideline emphasizes that the measures are the minimal requirements to be implemented, and every organization should improve its level of security as long as this does not hinder the requirements.

3 American and Moroccan Maturity Model Evaluation

3.1 Evaluation Criteria

In order to extract our criteria, we studied the in-depth characteristics of several popular standards and all the process areas they encompass, such as ISO/IEC 27000. Afterwards, we analyzed security maturity models to understand their security evaluation and guidance assistance. The models were either context specific or extensive, like the SE-CMM [9]. Through this study, we extracted the following categories, to which we added a fourth dedicated to the national context:
• National context,
• Main functions,
• Key components,
• Implementation.

The context category enables us to understand the circumstances of their inception and the governments' intended use. The second category addresses the main functions of the security approaches and compares the methods used for performance assessment, model-based improvement and gap identification. The third category, entitled "Key Components", is composed of the tools carrying out the main functions: maturity evaluation, security domains, the necessary attributes of each domain at each level, and the means and tools necessary for the improvement and gap analysis functions. All the criteria, in their respective categories, are gathered in the first 2 columns of Table 1.

3.2 Results of DNSSI and CCSMM Evaluation

Our evaluation is based on a literature review of open governmental publications, complemented by studies available through common scientific libraries (Scopus, IEEE, etc.) [16, 17]. In addition, we conducted some interviews with government officials [15].

Table 1. Set of criteria.

Dimension: Context

Geography
  CCSMM (USA): A federal republic housing 323.4 million citizens; it is considered a world power with its 18.62 trillion USD gross domestic product [10].
  DNSSI (Morocco): An emerging constitutional monarchy of 35.28 million citizens, considered an African leader, reaching 101.4 billion USD of GDP [10].

Digitalization rate
  CCSMM (USA): Ranked 12th worldwide in E-Gov, with an internet penetration rate of 88.5% [11, 12].
  DNSSI (Morocco): Ranked 85th, with an increasing internet penetration rate (57.6%) [11, 12].

Date
  CCSMM (USA): The first and current version was published in 2006.
  DNSSI (Morocco): The first version was published in 2013 and was reinforced by guides.

Legal
  CCSMM (USA): The CCSMM targets all sensitive infrastructure recognized by the government (PPD-8) [13], yet it is still not mandatory.
  DNSSI (Morocco): The DNSSI's application is mandatory for every organization deemed an OIV [14].

Dimension: Main functions

Performance assessment
  CCSMM (USA): The CCSMM is based on SSE-CMM and CMM [5] and has a qualitative means of assessment. The number of PA doesn't usually …
  DNSSI (Morocco): The DNSSI is based on ISO 27002 [7]. It similarly uses a quantitative assessment; the implementation can last for the number of process areas.

Model based improvement
  CCSMM (USA): The model possesses "Implementation Mechanisms" (metrics, technology, policies, training, and assessments) to use at each level of each dimension. They serve as recommendations for implementation to reach the next level. They have proven to be effective, since the feedback from the communities was positive.
  DNSSI (Morocco): The improvement rate is calculated from the compliance percentages. For improvement, the DNSSI suggests the implementation of the missing measures. MaCert also publishes guides for some of the harder requirements to implement. They are published regularly and are up to date with newer threats.

Gap identifying
  CCSMM (USA): The model proposes testing of all kinds: technical, daily checkups, etc.
  DNSSI (Morocco): Gap analysis is made through the results of the assessment.

Dimension: Key Components

Maturity evaluation
  CCSMM (USA): The model has 5 levels, each representing a set of characteristics the organization now possesses. They reflect how the organization handles security issues in every security domain.
  DNSSI (Morocco): The inexistence of maturity levels isn't problematic; the compliance is rated by percentages with high precision, which makes it a very good measurement scale.

Security domains
  CCSMM (USA): The model addresses 4 domains, called dimensions: awareness, information sharing, policy, planning. Each has its own maturity level and the overall level is the smallest. The repartition is equal, especially since the maturity level is equal to the lowest maturity level among the dimensions.
  DNSSI (Morocco): This model covers almost every security facet, ranging from technical to organizational issues. All sections are equal, as the model is based on ISO 27002, which treats each measure equally; but each rule is weighted to prioritize more vulnerable facets.

Security attributes
  CCSMM (USA): The requirements are clear; the improvement goes from simple awareness to established practices, to constantly evolving and improving practices.
  DNSSI (Morocco): Each security objective in each section has its rules. Every rule is a set of requirements to implement.

Improvement roadmap
  CCSMM (USA): A roadmap is built through "Implementation Mechanisms" and recommendations.
  DNSSI (Morocco): Though absent, its existence is deemed necessary and should be adequately made by the organization.

Diagnostic methods
  CCSMM (USA): Diagnostics use metrics for all security domains. They serve as indicators during gap analysis.
  DNSSI (Morocco): Diagnostics are done merely by comparison with the requirements of each rule.

Dimension: Implementation

Evaluation tool
  CCSMM (USA): No evaluation tool is provided; every organization has to use its own.
  DNSSI (Morocco): No evaluation tool is provided, but audit guides are available on their website.

Documentation
  CCSMM (USA): The documentation is hard to get, since it consists mostly of published articles.
  DNSSI (Morocco): The documentation is easy to find on the DGSSI website.


4 Discussion

4.1 National Context

Different resources result in different approaches, mainly because of the massive gap in GDP and E-Gov rank. The CCSMM has a bigger impact range, considering Morocco's low internet penetration rate. The creation of the CCSMM was a reaction to many sabotage and DoS attacks, while Morocco's national interest in cyber security was triggered by the increase in European offshore software companies. Even though the CCSMM was made by an independent research team, it received recognition from the White House through the Presidential Policy Directive PPD-8 [13] without gaining a mandatory status. The DNSSI has always been mandatory for vitally important organizations, which must implement it and report their audit results at least every two years. The mandatory status is a means of risk reduction for the Moroccan central government and makes it easier to oversee the requirements' implementation, whereas the federal government in the USA does not have that power over the states.

4.2 Main Functions

The DNSSI’ maturity score is calculated from compliance to all the requirements, whereas the CCSMM’s KPA receive different levels and the overall score is equal to the lowest of said levels. Their respective structures reflect that of their bases and so we find similar concepts such as KPA in SSE CMM. Regarding precision, the DNSSI presents more depth by addressing more security domains as well having weighted rules depending on priorities. The CCSMM however qualitatively describes these characteristics. It defines “Implementation Mechanisms” guiding improvement by using metrics to gauge the current situation, then determine which technologies, policies and training the organization needs. Alternatively, the DNSSI guides the implementation of the missing applicable measures. For DNSSI, Gap identifying is done through rigorous Gap Analysis and audits, while the American model proposes to implement testing in periodic manner ranging from random questions to the staff to technical reviews. Both approaches are commendable, the tradeoff in flexibility of the implemented measures is again due to the Moroccan government favoring risk reduction instead. The American approach works better if diversity is acceptable since it could be more appropriate to the organization’s structure and needs. 4.3

4.3 Key Components

As for improvement, the CCSMM offers enhancement patterns through metrics, technology, processes, training and assessments. The roadmap specifies that the organization must use metrics to evaluate the current situation. Then, the community should determine the technology and the processes to implement. The organization then sets out to train its staff if needed and carries out assessments periodically, to check for implementation issues and to seek enhancements whenever possible. All these elements are lacking in the DNSSI, since it does not have a specific improvement pattern.

4.4 Implementation

Implementation seems to be the biggest issue for the DNSSI requirements; the low maturity rates across the board reflect that this hardship is not dependent on resources but on the quality of the provided guides and counseling, and also on the overall skill level in organizations. On the other hand, the CCSMM creators report receiving good feedback from communities. Lastly, the accessibility of the American approach to the target organizations is fairly difficult, since the documentation is a set of published articles, the maturity model being the culmination of the work of the CIAS, a team at the University of Texas at San Antonio. On the other hand, the official DNSSI document and the accompanying guides are all available on the DGSSI's official website.

5 Conclusion

This work presented a comparison between two governmental security approaches for maturity evaluation, the American CCSMM and the Moroccan DNSSI. We chose two radically different countries and evaluated their approaches based on what constitutes a comprehensive maturity model. We observed that even though Morocco is comparatively lagging in terms of resources, it is offering a sufficient approach. The DGSSI chose to have a detailed approach that organizations can easily understand and follow, reducing the risk of misimplementation and offering guides for the most burdensome aspects. Morocco's central government supports the mandatory status to guarantee that all vital infrastructures are decently protected. The CCSMM, conversely, leaves more room for organizations to implement their own processes, which may be better suited to their situation, since a federal government cannot enforce a mandatory status. On the other hand, the DNSSI constitutes, for the government, a set of minimal requirements that all organizations should satisfy as a start. Results have shown that most organizations have abysmal results, with the top rate around 70% [15], which shows that there is an execution and guidance issue. A second version of this approach is already in the works and set to be published in 2018 to resolve that.

References

1. Humphrey, W.S.: CMM. IEEE (1989)
2. Le, N.T.: Can maturity models support cyber security. In: IPCCC. IEEE (2016)
3. Karokola, G., Yngstrom, L.: Discussing E-government maturity models for developing world-security view (2009)
4. Rabii, A., Assoul, S., Ouazzani, K., Roudiès, O.: Cyber security maturity models: a systematic literature review (2018, to appear)
5. CIAS: The community cyber security maturity model. In: 6th Annual Security Conference (2007)
6. Clark, R.M., Hakim, S.: Cyber-physical security: protecting critical infrastructure at the state and local level, pp. 161–183. Springer (2017)
7. Directive Nationale de la Sécurité des Systèmes d'Information. https://www.dgssi.gov.ma/publications/documents.html

8. Guides: DGSSI. https://www.dgssi.gov.ma/publications/guides.html
9. SSE-CMM: System Security Engineering Capability Maturity Model® SSE-CMM®, Model Description, Version 3.0, Carnegie Mellon University
10. Trading Economics. https://tradingeconomics.com/countries
11. United Nations E-government survey 2016. http://workspace.unpan.org/sites/Internet/Documents/UNPAN97453.pdf
12. Internet users. http://www.internetlivestats.com/internet-users/
13. PPD-8: National Preparedness. https://www.dhs.gov/presidential-policy-directive-8-national-preparedness
14. Décret n° 2-15-712. https://www.dgssi.gov.ma/reglementation/textes-legislatifs-et-reglementaires/decrets.html
15. Rabii, M.: Director of MaCert, May 2018. Personal interview
16. Von Solms, S.H.: A maturity model for part of the African Union convention on cyber security. In: Science and Information Conference (2015)
17. Karabacak, B., Ozkan, S., Baykal, N.: A vulnerability-driven cyber security maturity model for measuring national critical infrastructure protection preparedness. IJCIP 15, 47–59 (2016)

Polyvalent Fingerprint Biometric System for Authentication

Mohamed El Beqqal, Mostafa Azizi, and Jean Louis Lanet

University Mohamed First Oujda, B.P. 524, 60000 Oujda, Morocco

Abstract. In the vast majority of authentication systems, biometric techniques are used to verify the identity of the person authorized to enter a controlled zone. The biometric fingerprint remains among the most used solutions to meet this need. Several implementations of biometric-based systems exist in the research field, due to the variety of fingerprint biometric readers and SDKs available on the market. As a result, an implementation may have to be largely modified or rebuilt simply because the biometric reader is changed. In this work, we propose a system model that is less coupled to the type of reader and the SDK used. This approach reduces the cost of development when adding a new fingerprint reader or using a heterogeneous SDK.

Keywords: Authentication · Interoperability · Fingerprint

1 Introduction

The use of biometrics in authentication systems remains a safe and reliable way of verifying identity compared to other methods. Fingerprint solutions represent the largest part of the market for biometric processes; it is clearly the preferred solution for companies working in this field. The strength of this process is that the use of the fingerprint is generally easier for the community to accept, and it is one of the most effective and least expensive techniques [1]. In spite of the large advantages offered by this biometric technique, many challenges remain objects of interest and fields of study for researchers and companies, such as the privacy aspect, the optimization of image enhancement algorithms and the matching techniques. The technical solutions proposed by companies, including the hardware and software allowing biometric recognition, are largely diversified, which results in the existence of several biometric implementations that are strongly coupled to the architecture proposed by the supplier. Our proposal addresses this interoperability problem. The paper is structured as follows. In Sect. 2, we present basic information concerning the biometric fingerprint, from classification to the most used characteristic points. After this, we present the state of the art of some academic biometric fingerprint implementations and discuss the issues existing in these systems. In Sect. 4, we present the proposed architecture of our implementation; more precisely, we explain the role of each component layer and the communication between them. Finally, we end the paper with a conclusion and future works.

2 Basics on Biometric Fingerprint

A fingerprint is a drawing formed by the lines of the skin. It is found on different parts of the body; when we talk about fingerprints, we are referring to the lines of the skin of the fingers. These are analyzed by a fingerprint reader to establish a numeric template. Several classifications of fingerprints exist in the research field. As mentioned in [2], among the most used ones we find the Henry classification, which provides five classes of fingerprints, as shown in Fig. 1. Other sub-categories, derived from the three main categories (arch, loop and whorl), were listed by the authors in [3].

Fig. 1. Fingerprint classes based on Henry’s [1] classification problem

In addition to the singular points (core and delta points), which allow the determination of the type of fingerprint, the characteristic points, better known under the name of minutiae, constitute the basis of the matching process between two fingerprints. Both types of technical information are extracted using image processing algorithms. A minutia is a point located where the continuity of the ridge lines changes. Among the most used characteristic points in matching algorithms are the ridge bifurcations and ridge endings, as indicated in [4].

3 Related Works

Kalunga and Tembo [5] have proposed a biometric fingerprint system for verification and vetting management. In this project, many features were modeled and implemented, such as criminal vetting, fingerprint enrolment, criminal investigation and identity verification. All these functionalities are implemented using the specific Software Development Kit (SDK) of the U 4500 fingerprint reader, with Visual Studio 2010 used to implement the backend verification functions. If we decide to use another reader whose SDK provides only a Java interface, we will need to design new code for this application. In [6, 7], the authors present a biometric fingerprint system for access control in a university context. The fingerprint verification process is done after a successful enrollment of the student in the database. During the authentication step, the collected fingerprint is matched with all the fingerprint templates stored in the database, which can be a time-consuming operation in the case of a large number of university database records. The author in [6] answers this need by combining RFID technology with the biometric fingerprint to quickly identify the relevant template based on the appropriate RFID tag.

In [8], the classroom attendance system designed by the authors aims to bring portability by using an Arduino as a local processing unit which interacts with a mobile fingerprint reader. The students' data is stored on a memory card as encrypted templates. Furthermore, the authors used the ZFM20 fingerprint scanner for finger identification to reduce the processing load on the Arduino main processor. Besides, the ZFM20 scanner is also used for storing templates in the ImageBuffer area available in the module's RAM space. However, no encryption or protection of the collected fingerprint is assured, since users can read and write in the buffer dedicated to storage using instructions. The storage of fingerprint templates in the ZFM20 ImageBuffer and on the SD card in different formats can cause a serious problem of redundancy and synchronization between the two modules.

4 Proposed System

In this section, we present the prominent ideas behind our system implementation (Fig. 2).

Fig. 2. Architecture of the proposed system

As shown in the figure above, the system is composed of 3 parts:
• Fingerprint Application and the Global Finger Application System,
• Database interaction,
• Communication with fingerprint readers.

4.1 FA and GFAS

The Fingerprint Application (FA) stands for the front-end and back-end parts developed by the user to manage full authentication and enrollment based on the biometric data collected from the readers. The Global Finger Application System (GFAS) represents the main features used in a biometric fingerprint system (FP capture, FP pre-processing, minutiae extraction, FP enrollment, FP identification and FP verification). These features were collected from many existing implementations of fingerprint verification systems.

4.2 Communication with Fingerprint Readers

In this section, we present the communication between the three layers. As shown in Fig. 3, when the Fingerprint Application triggers an action which is already specified in the Global Finger Application System, this action is expressed in the form of a high-level message that can be treated only by the FA methods and is independent of the type of SDK used to execute it. For this purpose, the FA calls an intermediate layer, the FPEM, using web services. This layer consists of an application which takes as input parameters such as the action to be executed, the reader to use and other optional attributes. The data concerning each reader is specified in an XML file. The latter contains the routing information needed to communicate with the SDK layer. For example, the configuration file contains the port number, the IP address and the web service to call, depending on the action coming from the FA layer. Once the information is gathered, the FPEM main application dispatches the calls to the SDK layer. As indicated in Fig. 3, for each reader we have the main SDK components, which consist of technical objects allowing the physical communication with the reader. Some companies offer DLL (Dynamic Link Library) libraries based on Windows execution, others offer JAR (Java Archive) files requiring a JVM (Java Virtual Machine) environment, and other systems can be supported by the SDK. To answer this interoperability problem, a Specific Fingerprint Application (SFAi) is developed for each reader used in our system. The main goal of the SFAi is to capture the web service request from the FPEM application and convert the incoming message into a specific message supported by the SDK, which will in turn be used by the methods offered by the SDK library.
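As a rough illustration of the routing just described, the Python sketch below reads a per-reader XML configuration and forwards a GFAS action to the corresponding SFA web service; all names, fields and addresses here are hypothetical, since the paper does not specify the actual technologies used for the web services.

```python
import xml.etree.ElementTree as ET
import urllib.request

# Hypothetical reader registry; fields mirror the XML configuration described
# above (host, port and web service per action for each reader).
CONFIG = """
<readers>
  <reader id="readerA">
    <host>192.168.1.10</host><port>8081</port>
    <service action="capture">/sfa/capture</service>
    <service action="verify">/sfa/verify</service>
  </reader>
</readers>
"""

def dispatch(reader_id, action, payload=b""):
    """Route a high-level GFAS action to the SFA of the requested reader."""
    root = ET.fromstring(CONFIG)
    reader = root.find(f"./reader[@id='{reader_id}']")
    if reader is None:
        raise ValueError(f"unknown reader: {reader_id}")
    service = reader.find(f"./service[@action='{action}']")
    if service is None:
        raise ValueError(f"action '{action}' not supported by {reader_id}")
    url = f"http://{reader.findtext('host')}:{reader.findtext('port')}{service.text}"
    # The SFA behind this URL translates the request into SDK-specific calls.
    request = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(request) as response:
        return response.read()
```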

Fig. 3. Communication between BA and FP readers

5 Conclusion

In a situation where institutions aim to use different biometric fingerprint readers for identity verification, each scanner normally requires its own application design and development. In order to answer this need and unify the functional authentication process of the system, we have proposed a new architecture based on an intermediate middleware layer which abstracts the main biometric authentication solution from the technical hardware specifications. Furthermore, the proposed system is designed to be adaptable when adding biometric readers to the system. In addition, we have optimized the database model design and the interaction during the biometric matching process. As future work, we aim to continue the implementation of our system architecture and validate our solution with real test cases.

References

1. Kataria, A.N., Adhyaru, D.M., Sharma, A.K., Zaveri, T.H.: A survey of automated biometric authentication techniques. In: 2013 Nirma University International Conference on Engineering (NUiCONE). IEEE (2013)
2. Galar, M., Derrac, J., Peralta, D., Triguero, I., Paternain, D., Lopez-Molina, C., García, S., Benítez, J.M., Pagola, M., Barrenechea, E., Bustince, H., Herrera, F.: A survey of fingerprint classification Part I: taxonomies on feature extraction methods and learning models. Knowl. Based Syst. 81, 76–97 (2015)
3. Gao, Q., Pinto, D.: Some challenges in forensic fingerprint classification and interpretation. In: 2016 IEEE Long Island Systems, Applications and Technology Conference (LISAT). IEEE (2016)
4. Iwasokun, G.B., Akinyokun, O.C., Dehinbo, O.J.: Minutiae inter-distance measure for fingerprint matching. In: International Conference on Advanced Computational Technologies and Creative Media (ICACTCM 2014), 14–15 August 2014, Pattaya, Thailand (2014)
5. Kalunga, J., Tembo, S.: Development of fingerprint biometrics verification and vetting management system. Am. J. Bioinform. Res. 6(3), 99–112 (2016)

6. El Beqqal, M., Kasmi, M.A., Azizi, M.: Access control system in campus combining RFID and biometric based smart card technologies. Advances in Intelligent Systems and Computing, pp. 559–569. Springer, Cham (2016)
7. Mittal, Y., Varshney, A., Aggarwal, P., Matani, K., Mittal, V.K.: Fingerprint biometric based access control and classroom attendance management system. In: 2015 Annual IEEE India Conference (INDICON). IEEE (2015)
8. Zainal, N.I., Sidek, K.A., Gunawan, T.S., Manser, H., Kartiwi, M.: Design and development of portable classroom attendance system based on Arduino and fingerprint biometric. In: The 5th International Conference on Information and Communication Technology for the Muslim World (ICT4M). IEEE (2014)

Bitcoin Difficulty, A Security Feature

Abdenaby Lamiri, Kamal Gueraoui, and Gamal Zeggwagh

Modeling and Simulation in Mechanics and Energetics Team (MSME) of the Research Center on Energy, Mohamed 5th University, Rabat, Morocco

Abstract. Bitcoin has been growing since its inception in 2009 to gain financial mainstream acceptance despite the constant fluctuations in its value. It is currently ranked as the most successful crypto-currency and decentralized payment system. This success is due, to some extent, to its security, which depends mainly on cutting-edge cryptographic innovations, such as the hashing functions, the elliptic curve digital signature algorithm (ECDSA), and the difficulty, which regulates the mining process and allows the system to keep up with the increasing hash rate. This paper provides an overview of the Bitcoin difficulty and how it contributes to the security of this crypto-currency.

Keywords: Bitcoin · Blockchain · Crypto-Currency · Difficulty · Security

1 Introduction

Since 2009, the Bitcoin value has been soaring at a rapid rate; it reached a peak on December 17th, 2017 [1], when it attained, for the first time in history, more than $20,000 US dollars. Since then, its value has decreased tremendously, and it is currently fluctuating around $7,000 US dollars for one bitcoin. Bitcoin is a decentralized system that does not rely on any third party to process the transactions. Transactions are collected and validated by all the participating nodes connected to the peer-to-peer network. The validated transactions are stored in blocks, and these blocks are added to the Blockchain. The Blockchain is a distributed ledger that contains all the valid transactions that ever happened in the system. Bitcoin is considered a secure system since it relies on the implementation of some advanced cryptographic features. For instance, the Bitcoin key and address generation process is secure because of the randomness used in producing the private keys, the elliptic curve discrete logarithm problem, which cannot be solved given the current computational power, the one-way property of the hashing functions SHA256 and RACE MD (RIPEMD160), and because no backdoors have yet been discovered in the elliptic curve. The elliptic curve used in Bitcoin is defined by a standard known as SECP256K1 [2]. This security is improved by the Base58Check formatting, which ensures the integrity of the data, mainly for the Bitcoin keys and addresses [3]. Also, the use of the ECDSA ensures that only the holders of the private keys can redeem the related funds.

Other security features are added to Bitcoin to ensure the most prominent security objectives, such as confidentiality, integrity and availability. These features are, for instance, the back-linkage of blocks, which helps ensure the integrity of the Blockchain; the Merkle tree, which ensures block integrity; and the difficulty, which ensures the system integrity since it forces miners to work hard for about 10 min on average to find a proof-of-work for a new block. This paper aims to provide some insights about the Bitcoin difficulty by illustrating how it is related to the target and the Bits value, and how it contributes to the Bitcoin security. It provides some scripts, written in Python 3, to calculate the difficulty and the target and to verify the proof-of-work. It also shows the correlation between the difficulty and the hashing rate used in Bitcoin.

2 Related Works

Since its inception in 2009, Bitcoin security has become an active research area that has interested many researchers around the world. Juan Garay, Aggelos Kiayias, and Nikos Leonardos studied the Bitcoin difficulty and suggested a Bitcoin protocol with chains of variable difficulty as a way to deter any malicious adversary controlling a fraction of miners holding around 50% of the mining power [4]. Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, and Robbert van Renesse mentioned that the difficulty provides some resilience to mining power variation by allowing different instances of a blockchain to tune their proof-of-work difficulty at different rates in order to maintain a stable rate of blocks [5].

3 Bitcoin Difficulty

Difficulty can be defined as a measure of how difficult it is to find a hash (proof-of-work) below a given target [6]. This parameter is set dynamically by the Bitcoin network every 2016 blocks, or roughly every two weeks. The difficulty is tied to two other parameters, the target and the Bits, which we explain in the following paragraphs. The proof-of-work serves as a proof that the miner has committed a great amount of hashing power to find a block header hash that satisfies the required condition. The proof-of-work is hard to find but easy to verify. It involves finding a value for the nonce that results in a block header hash, computed with the SHA-256 algorithm, that is less than or equal to the difficulty target (target). So how is this target calculated? Every block contains a field called "Bits", also known as the target Bits, which is a four-byte number represented in a hexadecimal floating-point format. The Bits value serves to calculate the difficulty target, which is used as a condition in the mining algorithm. The Bits field value of the first block in the Blockchain is 1d00ffff [7]. By convention, the first two digits (1d) represent the total number of digits the target is made of and are used as the exponent of the floating-point notation, while the remaining digits (00ffff) represent the coefficient. Now, how is the target derived from the Bits value? To calculate the target from the Bits value, we rely on the following formula:

TARGET = COEFFICIENT × 2^(8 × (EXPONENT − 3))    (1)

Where:
– COEFFICIENT is the three bytes on the right part of the 4-byte format of the Bits,
– EXPONENT is the first byte on the left part of the 4-byte format of the Bits.
Using the hexadecimal representation and applying this formula to block #0, with a Bits value of 0x1d00ffff, the target would be:

TARGET = 0x00FFFF × 2^(0x8 × (0x1D − 0x3))    (2)

Therefore, the result in hexadecimal format is:

TARGET (in hex) = 0xffff0000000000000000000000000000000000000000000000000000

We compare the header hash of Block #0 (the proof-of-work of Block #0) with the calculated target, using Python 3. The following script shows that the block header hash is less than or equal to the calculated target, which means that the proof-of-work (POW) is valid.
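The original script is not reproduced in this text; the following is a minimal sketch in the same spirit, where the Bits value 0x1d00ffff is the one quoted above for block #0 and the hash is the widely published genesis block header hash (our assumption, not a value quoted in the paper).

```python
def bits_to_target(bits):
    """Formula (1): coefficient * 2^(8 * (exponent - 3))."""
    exponent = bits >> 24            # first byte of the Bits field
    coefficient = bits & 0x00FFFFFF  # remaining three bytes
    return coefficient * 2 ** (8 * (exponent - 3))

genesis_bits = 0x1D00FFFF
# Widely published header hash of block #0
genesis_hash = "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"

target = bits_to_target(genesis_bits)
print("target:", hex(target))
print("valid proof-of-work:", int(genesis_hash, 16) <= target)
```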

The target condition sets the frequency at which a new proof-of-work is found. It also determines the difficulty for a collection of blocks. Since the computational power is increasing at a rapid pace and the Bitcoin network must keep the block generation time at 10 min on average, the target has to adjust accordingly. The retargeting happens dynamically and independently on every full node every 2016 blocks, which occurs roughly every two weeks. The retargeting formula used by Bitcoin full nodes is [8]:

NEW TARGET = CURRENT TARGET × (time in minutes to mine the last 2016 blocks) / 20160 minutes    (3)

The difficulty is tightly linked to the target and expresses how difficult it is to find a new block hash that satisfies the target condition. Its main purpose is to regulate the mining process, so that a new block is mined every 10 minutes on average. It is calculated using the following formula [6]:

DIFFICULTY = TARGET_MAX / TARGET_CURRENT    (4)

Where:
– TARGET_MAX is the target of the genesis block (block #0).
– TARGET_CURRENT is the target of the current block.

The following script calculates the difficulty of a block from its target and the target of the genesis block. We used block #495223, mined on Nov 20, 2017 10:53:40 AM, to verify this script.
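A minimal Python 3 sketch of such a calculation (not the paper's original listing) is given below; the helper name bits_to_target is illustrative, and the actual Bits value of block #495223 has to be read from its header via a block explorer:

# Minimal sketch: difficulty of a block, computed from its compact Bits field.

def bits_to_target(bits):
    # Expand the compact 4-byte Bits encoding into the full target
    exponent = bits >> 24
    coefficient = bits & 0x00ffffff
    return coefficient * 2 ** (8 * (exponent - 3))

MAX_TARGET = bits_to_target(0x1d00ffff)   # target of the genesis block

def difficulty(current_bits):
    return MAX_TARGET / bits_to_target(current_bits)

# Example usage: difficulty(bits_of_block_495223), where bits_of_block_495223
# is the Bits value read from the header of block #495223.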

When running the script, we obtained the results depicted in Fig. 1. The calculated difficulty matches the difficulty displayed in the block #495223 information (see Fig. 2). The difficulty is tightly linked to the hashing rate. When the hashing rate increases, the proof-of-work is found more quickly, so the difficulty increases as well in order to keep the proof-of-work discovery time around 10 minutes on average. Conversely, when the proof-of-work discovery time gets slower, the difficulty decreases. Table 1 illustrates the strong correlation between the difficulty and the hashing rate. It also shows that the difficulty and the hash rate have quintupled in one year. This is due mainly to competition between miners. Without the difficulty, any miner possessing big hashing power could take over the Blockchain and change it at will; the difficulty therefore contributes strongly to the security of Bitcoin.

Fig. 1. Difficulty calculation of Block #495223.

Fig. 2. Block #495223 Information [9].

Table 1. Difficulty and hash rate change between 2016 and 2017 [10]

Date               Difficulty          Hash rate (GH/s)
Dec 6th, 2017      1,590,896,927,258   11,388,083,790
Dec 2nd, 2016      286,765,766,821     2,052,749,317
Ratio of change    5.547722606133257   5.547722605818799

4 Conclusion

All the aforementioned concepts suggest that Bitcoin is a secure-by-design cryptocurrency. Its security relies on cutting-edge cryptographic technologies such as digital signatures and hash functions. The difficulty plays a major role in Bitcoin security since it regulates the mining process, so that a new block is added to the Blockchain every 10 minutes on average. Also, its dynamic adjustment helps keep up with the increasing hashing rate and avoids Blockchain hijacking by miners with huge computational power. Notwithstanding its benefits, there is a big issue that the Bitcoin community should address: the huge amount of electricity consumed by miners running their hashing machines to overcome the difficulty.


Finally, to preserve Bitcoin's security, the community should strengthen the proof-of-work concept while searching for alternative computing approaches, so that the hashing process becomes more energy efficient.

References
1. Coin market capitalization. https://coinmarketcap.com/currencies/bitcoin/#charts. Accessed 26 Dec 2017
2. Certicom Research: Standards for Efficient Cryptography. SEC 2: Recommended Elliptic Curve Domain Parameters. http://www.secg.org/sec2-v2.pdf. Accessed 26 Dec 2017
3. Daulay, R.S.A., et al.: IOP Conference Series: Materials Science and Engineering, vol. 260, p. 012002 (2017)
4. Garay, J., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol with chains of variable difficulty. In: Katz, J., Shacham, H. (eds.) Advances in Cryptology – CRYPTO 2017. Lecture Notes in Computer Science, vol. 10401. Springer, Cham (2017)
5. Eyal, I., Gencer, A.E., Sirer, E.G., van Renesse, R.: Bitcoin-NG: a scalable blockchain protocol. https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-eyal.pdf. Accessed 10 Aug 2018
6. Bitcoin Difficulty. https://en.bitcoin.it/wiki/Difficulty. Accessed 26 Dec 2017
7. Block Explorer. https://blockexplorer.com/block/00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048. Accessed 27 Dec 2017
8. Antonopoulos, A.M.: Mastering Bitcoin: Programming the Open Blockchain, 2nd edn. (2017)
9. Block Explorer. https://blockexplorer.com/block/0000000000000000004e3c9d483093f88760b3c4c7083308785f6c880f81ab31. Accessed 10 Aug 2018
10. Bitcoin difficulty. https://bitcoinwisdom.com/bitcoin/difficulty. Accessed 27 Dec 2017

Result Oriented Time Correlation Between Security and Risk Assessments, and Individual Environment Compliance Framework Dimo Dimov(&) and Yuliyan Tsonev Department of Information Technologies, Nikola Vaptsarov Naval Academy, Varna, Bulgaria [email protected], [email protected]

Abstract. Security professionals and attackers share a common approach in their daily work. Hackers follow the same rule set as data protectors when seeking potential vulnerabilities to exploit. Security assessments are usually deployed on a yearly or half-yearly basis. A prepared attacker needs about 2 days to take down an environment once it is breached, or can remain unnoticed for more than 6 months. The discrepancy between those two timelines of activities performed on both sides can have significant implications for any organization. The aim of this article is to suggest an approach for creating an automated daily security assessment.

Keywords: Security assessment · Automated reporting · PowerShell · Information security · Event forwarding

1 Introduction

24–48 h are needed from the initial environment breach to privilege escalation up to domain admin access. Still, not every attacker aims at total data destruction or a service outage. The priority might be stealing or altering data, espionage, extortion, etc. In any case, an attacker who has taken over control of an organization will try to stay inside the company as long as possible. Maintaining sustainable access, however, inevitably leaves footprints. It takes most organizations about 197 days to detect a breach on their network. Many companies have been breached and still have no idea, and as hackers get more sophisticated it will only take businesses even longer to realize that they have been compromised [2]. There are multiple easy-to-apply settings that will instantly and significantly reduce the attack surface of a Microsoft-based domain environment. The majority of companies that do not have those simple, easy-to-apply settings in place are definitely more vulnerable and more likely to be targeted by a potential attacker. An attacker would not waste time and resources on penetrating an environment with even 'simple' security when there are thousands of companies with no protection at all. Those quick-win settings will not be the main focus of this paper, yet they will be mentioned within the topics. This article will focus on the approach for creating an automated compliance check (scheduled for daily execution and reporting) that is relevant to the organization. Such a report will outline abnormal behavior for the

privileged accounts and especially for the key holders of the kingdom's gate – the domain administrators. Once applied and reviewed daily, this automated compliance check-report would have the means to eradicate or reduce the risk of having an undetected intruder working his way through the breached environment.

2 Proof of Concept: Security Assessments – Same Checklists Used by Security Experts and Hackers

Security professionals need to comprehend two major realms in order to properly oppose attackers: how systems work, and how attackers work. A security assessment is a check on some delicate spots of a system. The same spots are checked, and attempts are made to breach them, by an attacker. By knowing your system and knowing the spots that are potential entry points for an intruder, the security professional can try to either harden those systems or set up reliable and trustworthy monitoring. Many companies rely on the yearly security assessment(s). If such assessments exist, accompanied by the documents kindly provided by the security governance role, this could be considered a job done by the book. Penetration tests are an expensive exercise, again performed yearly. Some companies have so-called health checks – checks on some business-critical systems and processes, ensuring proper performance and overall availability. In terms of security, checking systems (manually or automatically) for potential vulnerabilities or running exploits once a year, quarterly or even weekly is not enough, especially when vulnerability journals, antivirus definitions and security vendors are updating their products by the hour. Performing penetration tests is vital and highly recommended for all organizations that preach security. However, knowing that 48 h is the standard time for a prepared attacker to take down an environment, or that he can stay there for 200 days unattended, the yearly security assessment or pen-test is not enough. If the security team would like to have a strong hold against an attacker (from the outside or even an insider), they should start the defense by analyzing and understanding the functions of the systems in scope. When a potential weak spot in the system is detected, it should be included in the daily compliance check. Example: members of the "Account Operators" built-in Active Directory group have close to full control over every account and group (maybe even more objects) in the AD database by default. If the organization's governance did their job by the book, any additions/corrections to the membership of the "Account Operators" group should be initiated and approved via a change management system. If the Infosec team is more rigorous, the change operation itself should be tracked by an event management system. Addition to the example: a rogue member is added to this privileged group. If there is a scheduled monthly health check activity, if checking the members of the "Account Operators" group is part of the checklist, and if the specialist is pedantic enough to perform the checks punctually, the discrepancy will be detected and reported. Detecting such malicious activity thus depends on multiple conditions being met. The timeframe from compromise to detection could be weeks or even months (again, if detected at all). Compromising a privileged group is an inevitable part of privilege escalation; it is just one step of an overall attack. A daily automated check and report on the status of this secured, yet enticing to attackers, group would significantly lower the timeframe from breach to detection. The following three topics are the fundamental building blocks that are layered one by one.

3 Governance. One Size Fits All – Wrong!

Although automated report creation, execution, mitigation, etc. are technical details, the initial starting point should be strong governance. The Information Security Team, security officers, architects, managers and consultants should always be synchronized. Their mission and actions must be aligned and driven by industry best practices and proven expertise. Copy-pasting the ISO 27001 standard onto a blank page does not mean that the organization has established a strong security framework. Approving gigantic budgets for new and sophisticated appliances placed on the network perimeter does not mean that the organization is protected from malicious activity from outside. In order to initiate the framework creation, the Infosec team should work on a few documents. Those documents must be adapted to the legal and regulatory requirements of the company itself instead of having just a generic meaning. Organizations operate in different ways, with different partners and suppliers, different business objectives and a variety of business models, and no single compliance framework is likely to work – or even be suitable as a general approach [3]. From the perspective of the current article, the following documents (policies, processes, etc.) are mandatory for the proper execution and results mitigation of the automated assessment and compliance report:
• Information Security Policy – a document approved by the management, enforced by the Infosec team and communicated to all employees of the organization;
• Information Security Risk Management process – outlining the risk assessment deployment and the performance of the respective mitigations;
• Security Incident Management process – as of ISO/IEC 27035:2016. The standard lays out a process with 5 key stages: prepare to deal with incidents, e.g. prepare an incident management policy and establish a competent team to deal with incidents; identify and report information security incidents; assess incidents and make decisions about how they are to be addressed; respond to incidents, i.e. contain them, investigate them and resolve them; learn the lessons [4];
• Process for detection and reporting of unauthorized activities.

4 Credentials Hygiene

An endpoint workstation is a much easier starting point when attempting to carry out an attack. All the firewalls and IPS/IDS appliances guarding the front door deter attackers and force them to choose another path. An end user can simply be misled by a scam email hiding a link to a malicious website, or a trusted website can be impersonated so that a user is deceived and malicious content finds its way through. If structurally applied, the topics below will guarantee sustainable credentials hygiene. These techniques will tremendously reduce the chance of a potential attacker remaining stealthy in the environment while attempting to harvest credentials, reuse or impersonate the credentials, perform lateral movements across the network or escalate privileges.

4.1 Limit Credentials Exposure in Terms of Leaving Footprint on Systems Allowing Credentials to Be Reused

It is highly possible that an organization is utilizing tools and methods for remote administration that leave a credentials footprint which could be reused if the destination host is compromised. Such methods are the service logon, interactive and remote interactive logons. Performing administration tasks on remote endpoints by one of those methods using privileged domain accounts brings potential risk for those accounts, as they become vulnerable to credentials theft and reuse attacks. There are tools available on the market, as well as remote authentication methods, that bypass the storage of credentials on the target machine. This way no Kerberos ticket, hash or other essential credential detail can be captured, automatically lowering the risk of pass-the-hash or similar attacks. Remote administration methods that are preferable in terms of credentials hygiene are: Microsoft Management Console, NET USE commands, PowerShell WinRM (without CredSSP), PSEXEC (without explicit credentials).

4.2 Administrative Accounts – Password Expiration Is a Must

Domain or local administrative accounts that are not forced to change their password regularly ("password never expires" is '$true') are considered a great risk for the environment. Once compromised, such an account could be used by the attacker and the legitimate user at the same time, as long as the password remains the same. As the time since the last password change extends, the chance that this password has been compromised grows rapidly. It could be breached by brute force, by guessing as the attacker gains more knowledge about the user, or if the user shares the password. Passwords for administrative accounts should be forced to expire after a reasonable period. Certain password attributes should also be enforced, like password length, complexity and password history. The rules enforcing those attributes should be deployed via a Group Policy Object. Furthermore, administrative accounts should receive higher complexity requirements. This can be achieved by fine-grained password policies complementing the regular password policy applied at domain level.

4.3 Limit Privileged Service Accounts to Have Admin Privileges on Various Systems

The risk related to service account utilization is defined by a few conditions: what the scope of access for the service account is; where credentials are stored; how credentials are managed; what other access the service account has. Service accounts are often granted administrative privileges on member servers, endpoints and group objects across multiple tiers of the domain. The methodology by which this access is provided is crucial for the security of the domain. Credentials exposure can be prevented by ensuring that service account privilege assignment is done via a tiered model or via authentication policies defining network limitations. Following such an approach will constrain the account to a single defined scope in case it is compromised and will avoid an environment-wide compromise.

4.4 Clear Privileged Groups from Computer Object Accounts

Administrative groups could contain computer object accounts. Under normal circumstances this is not needed, as the computer object can be impersonated to access resources. Similarly to user accounts, computer accounts are subject to authentication and auditing across the network. Computer accounts also have credentials used for authentication in the domain, but authentication and credentials renewal are transparent to the users. Usually computer object activities are not monitored, yet a computer object can be impersonated to gain access to domain resources that are not intended to be accessed by computers. Such memberships should be monitored, and group members that no longer have a reason to be there should be removed from the group.

4.5 Drain Out the Security Principals from Enterprise Admins, Schema Admins, Domain Admins Groups

The best recommendation for this topic is to keep those groups as empty as possible. Group membership should be managed ad hoc: accounts should be added just before an administration task at domain/forest level is needed, and the membership should be removed afterwards. Members of the Domain/Enterprise Admins and Schema Admins groups have access to the domain controllers and all the domain partitions. A potential compromise of an account that is a member of those groups would have a significant impact on the organization. It is possible to keep one account for emergency situations like an authoritative restore of the AD database. The password of this account should be very complex and written on paper, kept in a locker, with access permitted only to key personnel. Actions performed by this user must be audited and monitored strictly. Any misuse of this account should be detected and investigated as soon as possible.

4.6 Local Admin Password Randomization

This last topic of the Credentials Hygiene chapter is no less important, yet it is surely the most neglected. Accounts having administrative access on an endpoint can be used to gain full access to the machine. Having full control, the attacker can easily harvest all the other credentials stored on the computer. Once credentials are captured, the attacker will try lateral movement across the network, probing other machines with the gained credentials. Many organizations have a deployment process which results in the same password being set for the local administrator on all enrolled computers. Such a design allows anyone to initiate connections to all other computers by compromising just a single machine. Maintaining (local admin) access to any computer would allow an attacker to harvest the credentials stored locally, inevitably leading to the capture of a domain privileged account. The best mitigation approach to such credentials theft and reuse techniques is to randomize the local administrator password for each endpoint. LAPS (Local Administrator Password Solution) automatically manages local administrator account passwords on domain-joined computers so that passwords are unique on each managed computer, randomly generated, and centrally stored in the Active Directory infrastructure. LAPS stores the password for each computer's local administrator account in Active Directory, in a confidential attribute of the computer's corresponding Active Directory object. The computer is allowed to update its own password data in Active Directory, and domain administrators can grant read access to authorized users or groups, such as workstation helpdesk administrators. The solution is built on the Active Directory infrastructure and does not require other supporting technologies [6].

5 Daily Report – Parsing Through the Noise with PowerShell

The intended final deliverable is a single report file containing nothing more than findings to be mitigated. Useless 'information' would load the report with verbose data that could mislead and distract the reader from the important findings. Before the file is generated, some work in the background needs to be done. An event collection structure needs to be deployed. In order to be able to detect and identify any abnormal activities (privileged accounts starting services on member servers where they shouldn't, service accounts scheduling tasks, local administrators installing 3rd party software at 4 AM, backup operators running PowerShell scripts, etc.), advanced auditing should be enabled. Security administrators should focus on the potentially problematic events rather than adding all events to the collection. Collecting all possible event logs will generate a huge amount of data that will be nearly impossible to use, filter and store. Again, events explicitly relevant to the environment should be enabled, along with the ones propagated by security vendors as baselines. Compilation of the report data will be performed with PowerShell. PowerShell is considered a native tool for the administration of Microsoft-based environments. It has full access to COM and WMI, allowing administration locally and remotely. Script creation will be segmented into two major modules. The first segment will directly query configuration-specific settings in the Active Directory environment, the group membership of domain privileged groups and group assignment on domain controllers (and can ultimately be extended to member servers, even endpoints). The script is designed to consist of widespread cmdlets; pipelining is avoided. The target is to compile code that uses the simplest methods for checks, avoiding any possible disruption, miss or error caused by code over-sophistication. The second segment will parse through the collected events searching for any abnormal behavior in terms of 'who performs what, when, where, how many times'. This can be achieved if the credentials hygiene section above has been adopted by the organization and is strictly followed. This will ensure that, in normal circumstances, no domain admin is logging on to member servers, no service account has the 'password never expires' attribute, etc.


When assembled, the script fragments below are to be considered a starting point for assessing security in Active Directory and are to be adapted and extended to cover and comply with the specific environment scope and configuration peculiarities.

5.1 Check and Report (Part 1) - Domain Privileged Groups – The Most Assailable

Members of the domain privileged groups have such significant control over the environment that, in the event an account of this type is impersonated or compromised, the damage would be devastating. The first checks when performing the security assessment ensure that the members of those privileged groups are only the correct accounts.
• Command1: Get group membership of a specific privileged group:
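A plausible sketch of this command, assuming the RSAT ActiveDirectory module is available, would be:

Import-Module ActiveDirectory
# List every member of the "Enterprise Admins" group; the output includes the SID
Get-ADGroupMember -Identity "Enterprise Admins"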

This command will list all the group members of "Enterprise Admins" with a lot of details. The most important detail is the SID – the Security Identifier. A security identifier (SID) is a data structure in binary format that contains a variable number of values. SIDs are unique within the domain. When we refer to a particular account by the attributes 'DisplayName', 'SAMAccountName' or 'UserPrincipalName', AD is actually referring to the SID. ACLs (Access Control Lists) contain the SIDs of the accounts with their respective authorizations on a certain resource. In this case we could have a user in AD called 'John Doe' and his SID would be 'S-1-5-21-414326903719764436-1984117282-1158'. The following actions could occur if an attacker manages to get membership in the "Enterprise Admins" group and proceeds to cover his tracks and remain stealthy: John Doe could be deleted, a newly added malicious member could be renamed to John Doe, and thus the members of the "Enterprise Admins" group would appear to remain the same, but one of the accounts would be malicious. Therefore, a SID check is recommended: even though the 'DisplayName' remains the same, there is no technical possibility for the SID to be changed. When performing checks of the privileged group membership, accounts should be checked against their Security Identifier attribute instead of 'DisplayName' or 'SAMAccountName'.
• Command2: Get group members that are not explicitly listed by their SID:
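A sketch of such a check could be the following; the approved SIDs are placeholders, not values from the paper:

# Report members of "Enterprise Admins" whose SID is not on the approved list
$approvedSids = @(
    "S-1-5-21-0000000000-0000000000-0000000000-500",    # placeholder
    "S-1-5-21-0000000000-0000000000-0000000000-1158"    # placeholder
)
$members = Get-ADGroupMember -Identity "Enterprise Admins"
foreach ($member in $members) {
    if ($approvedSids -notcontains $member.SID.Value) {
        # Any account reported here is not explicitly approved by SID
        Write-Output $member
    }
}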

The script fragment above will display any accounts that are members of the "Enterprise Admins" group and are not explicitly designated by their SID attribute. The same check applies, and is mandatory, for the rest of the privileged groups.

5.2 Check and Report (Part 2) - Quick and Easy, Nice to Have Checks

The following checks should report no data if the above collection of hardening methods and techniques is implemented. In case an account pops up in the report, an investigation should be initiated.

5.2.1 Account with Never Expiring Password

• Command3: Check for accounts with ‘password never expires’ activated:
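A possible sketch, relying on the Search-ADAccount cmdlet of the ActiveDirectory module:

# Report user accounts whose password is set to never expire
Search-ADAccount -PasswordNeverExpires -UsersOnly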

5.2.2 Check for Privileged Accounts Delegation Activation

There are a number of configuration options recommended for securing highly privileged accounts. One of them, enabling 'Account is sensitive and cannot be delegated', ensures that an account's credentials cannot be forwarded to other computers or services on the network by a trusted application [7].
• Command4: Get any group member of the 'Domain Admins' group that has the "AccountNotDelegated" attribute inactive:
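A sketch of this check, assuming the ActiveDirectory module, could be:

# Members of "Domain Admins" for which "Account is sensitive and cannot be
# delegated" is NOT enabled (AccountNotDelegated is $false)
$members = Get-ADGroupMember -Identity "Domain Admins"
foreach ($member in $members) {
    if ($member.objectClass -eq "user") {
        $user = Get-ADUser -Identity $member.SamAccountName -Properties AccountNotDelegated
        if (-not $user.AccountNotDelegated) {
            Write-Output $user
        }
    }
}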

The same check is to be performed on any other privileged groups whose members are forbidden to delegate control.

5.2.3 Check for Users in the Domain That Have the 'PasswordNotRequired' Flag Enabled

This flag indicates whether an account needs a password to log on. For example, if an account has the 'PasswordNotRequired' flag enabled, this account can perform a logon operation without typing any password – just leave the password field empty and hit "Enter". This does not mean that the particular account has no password set, nor that the user logs on without a password. It means that, if the flag is enabled, a malicious user could take advantage of this account and perform logon operations wherever the account has sufficient permissions.


• Command5: Get AD user objects with enabled ‘PasswordNotRequired’ flag
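One way to sketch this check with the ActiveDirectory module (using the property name as exposed by Get-ADUser):

# User objects that can log on with an empty password field
Get-ADUser -Filter 'PasswordNotRequired -eq $true' -Properties PasswordNotRequired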

5.2.4 Check for Computer Objects in the Domain That Have the 'PasswordNotRequired' Flag Enabled

Similarly to user objects, computer objects also have a certain level of permissions.
• Command6: Get AD computer objects with the 'PasswordNotRequired' flag enabled
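The analogous sketch for computer objects:

# Computer objects with the PasswordNotRequired flag set
Get-ADComputer -Filter 'PasswordNotRequired -eq $true' -Properties PasswordNotRequired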

5.3 Event Collection and Filtering

Event collection allows administrators to get events from remote computers and store them in a local event log on the collector computer. The destination log path for the events is a property of the subscription. All data in the forwarded event is saved in the collector computer's event log (none of the information is lost). Additional information related to the event forwarding is also added to the event [5]. Event subscriptions are either collector initiated or source initiated; for the purpose of the reporting we rely on the source-initiated type of subscription. By using a Group Policy Object (GPO) we make sure that every endpoint located under the Organizational Unit (OU) where the GPO is linked has the same configuration, keeping the infrastructure neat in terms of applied settings (it is not a rare occasion for an admin or operator to forget or neglect to set up a specific config). Furthermore, the same settings will apply to any newly created/moved computer object in the OU where the GPO is linked, reducing the administrative effort and minimizing the chance of a wrong or imperfect config. At a high level, the prerequisites needed for an event forwarding and event collection architecture are:
• a designated Event Collector Server – this member server will hold the subscription and will collect the defined events;
• a GPO that applies the proper settings (on the target machines) to allow forwarding the events to the event collector, and the proper settings that enable the auditing of the defined events.


When events are collected in the "Forwarded Events" log, the proper command to start reading the events is: Get-WinEvent -LogName "ForwardedEvents". Since we target reading the events from the last 24 h, we create a $time variable and add a condition on the "TimeCreated" attribute of the event, so the following fragment will filter only the events created during the last 24 h.
• Command 7: Get event logs for the last 24 h:
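A sketch following that description:

# Read the forwarded events collected during the last 24 hours
$time = (Get-Date).AddHours(-24)
$events = Get-WinEvent -FilterHashtable @{ LogName = "ForwardedEvents"; StartTime = $time }
$events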

From this point forward, the collected information can be tweaked and filtered by various criteria so as to deliver as much relevant information as possible for the report. Further results can be filtered by "EntryType" (Error, Warning, Information), "Source", "InstanceID", a certain "EventID", or even by a word or phrase within the "Message" attribute. Events that must be checked are any "audit failure" events in combination with privileged accounts or domain admin accounts. Such modifications are legitimately expected only a couple of times a year, when some heavy maintenance or a system upgrade is performed. If none of those actions happened in the last 24 h and there is an event related to those privileged groups, the red flag is up.

6 Conclusions

Cyber-attacks are 'the number one problem with mankind' and more dangerous than nuclear war, warns Warren Buffett [1]. If business organizations claim they are protected and compliant with the security standards, then assessments should not wait a year to check whether there is a skeleton in the closet. Security and risk assessments are a necessity of the utmost priority. Most of the checks they perform, the environment settings and the system thresholds could be adapted, automated and scheduled for daily scans. Reducing the administrative effort to a minimum by automating the audit of those delicate groups, values, attributes, keys and flags would allow the security team to verify, in an agile way, that the environment is free of malicious activity on a daily basis.

References
1. http://www.dailymail.co.uk/sciencetech/article-4480778/Buffett-expresses-sympathy-Berkshire-political-spending-proposal-fails.html
2. https://thebestvpn.com/cyber-security-statistics-2018/
3. EU General Data Protection Regulation (GDPR): An Implementation and Compliance Guide, 2nd edn. IT Governance (2017)
4. http://www.iso27001security.com/html/27035.html
5. https://msdn.microsoft.com/en-us/library/bb427443(v=vs.85).aspx
6. https://technet.microsoft.com/en-us/library/security/3062591.aspx
7. https://blogs.technet.microsoft.com/poshchap/2015/05/01/security-focus-analysing-account-is-sensitive-and-cannot-be-delegated-for-privileged-accounts/

Classification of Ransomware Based on Artificial Neural Networks Noura Ouerdi1(&), Tarik Hajji2, Aurelien Palisse3, Jean-Louis Lanet3, and Abdelmalek Azizi1 1

Faculty of Sciences, Mohammed First University, Oujda, Morocco [email protected], [email protected] 2 Faculty of Engineering, Private University of Fez, Fez, Morocco [email protected] 3 Inria, Campus of Beaulieu, Rennes, France {aurelien.palisse,jean-louis.lanet}@inria.fr

Abstract. Currently, different forms of ransomware are increasingly threatening users. Modern ransomware encrypts important user data, and it is only possible to recover it once a ransom has been paid [14]. In this paper, we classify ransomware into 10 classes which are labeled using the avclass tool. This classification is based on artificial neural networks with the multilayer perceptron function. To do this, it was necessary to build the learning base from ransomware files. We then implemented programs in Java to extract the key strings from ransomware files intended for the learning stage and for the testing one. Once the learning and testing databases had been prepared, we started the classification with the Weka tool. The objective of this contribution is to investigate whether neural networks are an effective means for classifying this kind of ransomware or whether it will be necessary to consider another classification method.

Keywords: Classification · Artificial neural networks · Ransomware · Learning · Test

1 Introduction

Currently, ransomware attacks have increased in a remarkable way. They are not limited to individuals. This malware, among the most sophisticated, targets all internet users: individuals, corporate networks and also government agencies. Corporate ransomware attacks can affect shareholders, employees and customers and cause permanent damage due to the loss of confidential data, as well as bad publicity and financial losses. Several research works have been done to detect malware [15] and in particular ransomware [14]. Our current contribution goes beyond the detection stage towards ransomware classification. We therefore assume that the ransomware has already been detected by the system; our goal is to classify it using artificial neural networks. This paper is organized as follows: the second section gives a general overview of the concepts we worked on. The third section details how ransomware information has been collected and describes the MoM platform and the runtime detection mechanism. Our classification methodology is detailed in Sects. 4, 5 and 6, which explain the building steps of the learning base, the test base and the neural network model. An interpretation of the obtained results is given in the seventh section. We end this paper with a conclusion.

2 Background

2.1 Ransomware

A ransomware is a malicious piece of software that takes data hostage. The ransomware encrypts and blocks the files on the disk and asks for a ransom in exchange for a key to decrypt them. Having first appeared in Russia, ransomware spread throughout the world, mainly in the United States, Australia and Germany. Often, the ransomware infiltrates in the form of a computer worm, through a file downloaded or received by email, and encrypts the data and files of the victim. All files are encrypted quickly. A message appears to inform you that you must pay to recover your data. Among the known ransomware families are CryptoLocker, CryptoWall, Reveton, etc. The purpose is to extort a sum of money, to be paid most often in virtual currency to avoid any trace.

2.2 Artificial Neural Network

A neural network is inspired by the functioning of biological neurons and takes shape in a computer in the form of an algorithm. The neural network can modify itself according to the results of its actions, which allows learning and solving problems without an explicit algorithm, thus without conventional programming. The multilayer perceptron [13] is a type of artificial neural network organized in different layers. In the multilayer perceptron with backpropagation, the neurons of a layer are connected to all the neurons of the adjacent layers. These links carry a coefficient altering the effect of the information on the destination neuron. Thus, the weight of each of these links is the key element of the operation of the network: implementing a multilayer perceptron to solve a problem therefore requires determining the best weights applicable to each of the inter-neuronal connections. Here, this determination is carried out through a backpropagation algorithm.

3 Collection of Ransomware Information and Related Works

The drawback of malware prevention using anti-virus lies in its inability to handle unknown attacks. We use our analysis platform Malware'O'Matic (MoM) [7], described in the next subsection, to collect the information used in this study. This framework analyzes programs at run time; if it decides that a program is suspicious, it suspends the suspected process and takes a memory snapshot. The memory snapshot is stored in memory in such a way that it cannot be encrypted by the ransomware. All these data are transferred to the controller computer.

3.1 MoM Architecture

MoM is an automated analysis platform that does not use a virtual machine, while keeping all the main features of a regular analysis framework. Such a fully bare-metal platform is built on top of two open-source software packages, Clonezilla [1] and Viper [2], which makes it reproducible. The platform comprises a master server and several slaves, each one running the analysis loop in parallel. The whole system is on a dedicated network under the supervision of the master server and directly connected to the Internet, to emulate a typical home network. A firewall allows safe remote access to the platform. The virus database and the results of the analyses are stored on a Network Attached Storage (NAS). Since a running ransomware may try to encrypt the NAS, the latter is backed up manually every day. The platform automatically grabs new samples of ransomware from public repositories. Most of them are variants of already analyzed malicious files. Then, these samples are evaluated, and for those that are alive (they have all the conditions to be executed) we evaluate our prevention, run time and post mortem analyses. A bare-metal platform has been preferred to solutions based on virtualization because of the numerous techniques used by malware to fingerprint well-known sandboxes (e.g., Cuckoo Sandbox [3]). The analysis loop consists of a few simple steps: setup of the monitoring environment, malware execution, results gathering and storage, cleanup. In the first step, the slave downloads a script from the master, which acts as instructions about how to conduct the next analysis. Once the procedure is completed, the slave sets its next environment and reboots for cleanup. The cleanup process simply consists in flashing a clean disk image onto the slave's drive. A Windows 7 SP1 32-bit disk image is used as the operating system to be infected. The user is logged in as administrator with the User Account Control (UAC) disabled. Each run is fifteen minutes long. After that period, if the ransomware did not deploy its payload, it is tagged as inactive. Otherwise, several data are collected for signature extraction, runtime detection evaluation and post mortem classification. Thus, MoM is able to analyze up to 360 malware samples per day with only 1 server and 5 slaves. The final goal of this platform is to run uninterruptedly and thus automate the analysis of samples. To avoid evasion during the analysis, we build a corpus of files that looks like a regular user environment, thanks to the Digital Corpora corpus [4]. Each file is renamed according to a dictionary to avoid any detection by the malware while the slave loads its disk image.

3.2 The Runtime Detection Mechanism

The mechanism limits the monitoring to a minimum to reduce the impact on detection, with a low rate of false positives. We use a statistical test to decide whether one of the programs executed on a slave is malware or not. We implemented the chi-square goodness-of-fit test instead of the Shannon entropy (which is sensitive to compressed chunks of data [5]). We also achieve system completeness and fine granularity by monitoring the whole file system for all user-land threads. This test is implemented in a Windows mini-filter driver, which inspects all the operations that target the disks, regardless of whether the requested operation is an I/O request packet (IRP) or a fast I/O. In this context, we are able to monitor write and read operations and so on. Once the compromise indicator (e.g., chi-square) is obtained on the corresponding data, we update an internal structure related to this thread's behavior in non-paged memory. In this structure, we store the complete memory snapshot of the suspected thread. At the end of the experiment, the structure is stored on the disk in a directory named with the MD5 of the suspected program for further analysis. This is the way the data used in this paper were obtained.

4 Build Our Learning Database

Some principles are to be respected when building the learning data. First, it must have a sufficient number of instances for the classification model to be able to correctly classify the various ransomware samples. Typically, it seems very difficult to learn a model if the learning data contains less than a few hundred instances. Moreover, we should not have a learning data set with a label representing a very small proportion of the data. It is impossible to define general rules concerning the minimum number of instances necessary to learn a model, or the degree of acceptable imbalance, because these values depend on the detection problem considered and the instances constituting the learning set. In our case of ransomware classification, we used the MoM platform described in Sect. 3. Thanks to this platform, we generated 211 files representing ransomware samples distributed over the 10 classes shown in Fig. 1. The data partition was done using the K-means algorithm [6], whose simplicity, efficiency and empirical success are the main reasons for its popularity. As for the database, we used three online malware repositories [8–10]. The data have been labeled by the avclass tool [11], hence the labels of the 10 classes presented in Fig. 1. In the learning database, we have a total of 211 files distributed over the 10 classes.

Fig. 1. The ten classes of ransomwares

4.1 First Step: Search Common Strings Between the Files of Each Class

At the beginning of this work, we analyzed the different ransomware samples of each class. We developed a Java program that extracts the common strings between the ransomware samples of each class. These common strings represent system calls or DLLs. We ignored the non-significant strings. Therefore, we obtained 10 files, one per class.

4.2 Second Step: Search Distinct Strings Contained in the 10 Files Found

In this second step, we had to select the distinct strings contained in the 10 files obtained in the first step. Our objective was to build a vector V such that the different ransomware samples represent instances of this vector. The vector V is the same for the different classes. The values of its instances are Boolean, representing the existence or not of the key strings. To do this, we had to develop another Java program that selects the distinct strings of the 10 files. We found 603 distinct strings, so the length of the vector V is 603. Fig. 2 shows the first 12 strings.

Fig. 2. The first 12 strings in the vector V

4.3 Third Step: Extract the Instances of the Vector V

Once the vector V is built, we instantiate all ransomware samples of all classes: if a string exists in the ransomware, we put 1, otherwise 0. We obtained 211 instances of the vector V. Each instance is composed of the Boolean values which characterize the associated ransomware. An example of a ransomware file is presented below:


If we analyze the first 12 elements (in red), we conclude that the strings WritePrivateProfileStringW, SetClipboardData, StrStrIW, IsWindowVisible, StrChrA, GetSystemInfo, GetCommandLineA, DuplicateToken and InitiateShutdownW do not exist in the ransomware file, while the strings InternetReadFile, WriteConsoleW and advapi32.dll do exist in the ransomware file.
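The paper's extraction programs are written in Java; the short Python sketch below, with placeholder string names and file path, only illustrates how one Boolean instance of the vector V can be derived from a ransomware dump:

# Illustrative sketch: build one Boolean instance of the vector V from the
# strings found in a ransomware dump. The key strings and the path are
# placeholders, not the paper's actual data (the real vector has 603 strings).
vector_v = ["InternetReadFile", "WriteConsoleW", "advapi32.dll",
            "SetClipboardData", "GetSystemInfo"]

def build_instance(dump_path, key_strings):
    with open(dump_path, "rb") as f:
        content = f.read()
    # 1 if the key string occurs in the dump, 0 otherwise
    return [1 if s.encode() in content else 0 for s in key_strings]

# Example usage:
# instance = build_instance("samples/ransomware_0001.bin", vector_v)
# print(instance)   # e.g. [1, 1, 1, 0, 0]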

5 Build the Test Database

In order to build the test base, we followed the same steps as during the learning base construction, but on different ransomware files. We used 288 ransomware samples for the test. So, we obtained 288 instances which are split into the 10 classes as follows:
– 36 instances for the bitman class
– 76 instances for the cerber class
– 1 instance for the deshacop class
– 1 instance for the fsysna class
– 1 instance for the gamarue class
– 1 instance for the gpcode class
– 108 instances for the telsacrypt class
– 61 instances for the xorist class
– 3 instances for the yakes class
– 0 instances for the zerber class

6 Build the Neural Network Model

We would like to point out that the performance of the detection model on its learning data is not a good evaluation of the model. Indeed, the goal of a detection model is not to correctly classify the learning data, but to be able to generalize, that is, to correctly classify data unused during its learning phase. However, analyzing the performance of the model on the learning data can reveal an under-fitting problem. So, we chose a test set which is totally different from the learning set. We used the Weka tool [12] on a high-performance machine with 32 GB of RAM and 4 processors. We chose the classifier with the multilayer perceptron function. Then we selected the supplied test set and started building the model. As a result, we had:

7 Interpretation of Results and Discussion

According to the supplied results, we notice that the recognition rate obtained is approximately 54%. If we look at the confusion matrix, we have ten letters from a to j; Table 1 explains their meaning. According to the confusion matrix, we can say that, for example, for the first class, bitman, we have 27 correctly classified instances among 36. The recognition


Table 1. Meaning of the letters in the confusion matrix.

Letter   Class
a        bitman
b        deshacop
c        fsysna
d        gamarue
e        gpcode
f        telsacrypt
g        xorist
h        yakes
i        zerber
j        cerber

rate for this class is about 75%. However, if we look at the last class, cerber, we notice that we have only 31 correctly classified instances, and the 45 other instances are classified in the zerber class (a recognition rate of about 41%). On the other hand, when we took a subset of the learning base for testing, the recognition rate did not exceed 72%. We conclude that classifying ransomware using artificial neural networks is not an efficient method; it is therefore necessary to use convolutional neural networks or another classification technique such as genetic algorithms or search trees.

8 Conclusion

In this paper, we investigated the different classes of ransomware. We exploited the MoM platform and online databases in order to get real examples of ransomware that encrypt the disk. We chose to classify this type of malware using artificial neural networks with the multilayer perceptron function. This classification required a preliminary step of preparing the learning and test data, adapted to the inputs and outputs of the network. The obtained results show that classification by neural networks did not lead to a good outcome. This may be due to one of two reasons: either the choice of artificial neural networks for ransomware classification is not really a good one, or the strings contained in the ransomware files are not relevant enough. Indeed, our classification is totally based on the extraction of the common strings between the ransomware samples of each class. We should then work on a k-means clustering algorithm in order to find the relevant clusters. This will be treated in future work. We also aim to compare the classification technique proposed in this paper with other classification techniques offered by Weka or other classification tools, in order to obtain a relevant classification of malware.


References
1. Clonezilla: The Free and Open Source Software for Disk Imaging and Cloning. http://clonezilla.org/
2. Viper: Binary Management and Analysis Framework. http://viper.li/
3. Cuckoo Foundation: Cuckoo Sandbox: Automated Malware Analysis. cuckoosandbox.org
4. Digital Corpora: Producing the Digital Body. http://digitalcorpora.org/
5. Mbol, F., Robert, J.M., Sadighian, A.: An efficient approach to detect torrent-locker ransomware in computer systems. In: International Conference on Cryptology and Network Security, pp. 532–541. Springer (2016)
6. Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recogn. Lett. 31, 651–666 (2010)
7. Palisse, A., Durand, A., Le Bouder, H., Le Guernic, C., Lanet, J.-L.: Data Aware Defense (DaD): towards a generic and practical ransomware countermeasure. In: NordSec 2017 – Nordic Conference on Secure IT Systems, Tartu, Estonia, 8–10 November 2017
8. Malware Online Repository (2018). https://malwr.com
9. Malware Online Repository (2018). http://malwaredb.malekal.com
10. Malware Online Repository (2018). https://virusshare.com
11. Sebastián, M., Rivera, R., Kotzias, P., Caballero, J.: AVclass: a tool for massive malware labeling. In: International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 230–253. Springer (2016)
12. Swasti, S., Monika, J.: A study on WEKA tool for data preprocessing, classification and clustering. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 2(6) (2013). ISSN 2278-3075
13. Parizeau, M.: Réseaux de Neurones (Le perceptron multicouche et son algorithme de retropropagation des erreurs). Laval University, Laval, 272 p. (2004)
14. Cabaj, K., Mazurczyk, W.: Using software-defined networking for ransomware mitigation: the case of CryptoWall. IEEE Netw. 30(6), 14–20 (2016)
15. Saxe, J., Berlin, K.: Deep neural network based malware detection using two dimensional binary program features. In: The 10th International Conference on Malicious and Unwanted Software (MALWARE). IEEE Xplore (2016)

On the Efficiency of Scalar Multiplication on the Elliptic Curves Siham Ezzouak1(B) and Abdelmalek Azizi2 1

Laboratory LAGA, Faculty of Sciences, Dhar El Mahraz Sidi Mohamed Ben Abdellah University, Fes, Morocco [email protected] 2 Laboratory ACSA, Faculty of Science, University Mohammed First Oujda Morocco, Oujda, Morocco [email protected]

Abstract. Scalar multiplication, or point multiplication, is the main computational operation in the best-known cryptosystems based on elliptic curves. Therefore, relevant methods have been studied for a long time. This paper gives a detailed study of the efficiency issues in scalar multiplication on elliptic curves. First, we describe significant speedups in point multiplication. Second, we show that more optimizations can be achieved when better combinations of multiplication methods for elliptic curves are performed.

Keywords: Elliptic curves · Jacobian coordinate · Chudnovsky coordinate · Binary method · NAF method · Scalar multiplication

1 Introduction

Cryptosystems based on elliptic curves were discovered independently by Miller [1] and Koblitz [2]. The main advantage of elliptic curve cryptosystems (ECC) is the small key size: instead of a 1024-bit key with RSA, a 160-bit key is used with ECC while providing the same security level. Besides that, the security of the best recognized cryptosystems is based on the difficulty of solving the discrete logarithm problem, for which no subexponential-time algorithm is known. Fast computation and low memory and CPU usage are thus guaranteed. For this reason, a lot of attention has been paid to improving computation on elliptic curves, namely scalar multiplication, which is the dominant cost operation in elliptic curve cryptography. It is the analogue of the exponentiation x^n in a finite field, with multiplication replaced by addition and squaring by doubling. Speeding up EC scalar multiplication can be achieved by several methods:
– Curve point coordinate choice [10,12].
– Representation of the scalar k [13].


– Finite field operations [11].

The rest of the paper is organized as follows. In Sect. 2, a detailed study of the efficiency issues in scalar multiplication on elliptic curves is given, and we propose combinations of multiplication methods with a good choice of parameters for elliptic curves. We analyze the performance in Sect. 4. Finally, we conclude the paper in Sect. 5 with a discussion of implementation and future work.

2 Scalar Multiplication in the Elliptic Curves

In this section, we study two ways to speed up scalar multiplication on elliptic curves: the first one is linked to the choice of curve point coordinates, and the second one is based on the representation of the scalar k. Throughout this paper, P = (X1, Y1, Z1) and Q = (X2, Y2, Z2) denote points, and M, S and I denote the cost of a multiplication, a squaring and an inversion in the finite field, respectively. The costs of addition, subtraction and multiplication by a small constant in the finite field are neglected, because they are much faster than multiplication and inversion in Fp.

2.1 Coordinate Choice

Affine Coordinate. In Table 1, we detail the equations for point addition and doubling, together with their operation costs, in the affine coordinate system. If affine coordinates are used for point doubling, the running time expressed in terms of field operations is one inversion, two multiplications and two squarings; for point addition, the running time is one inversion, two multiplications and one squaring.

Table 1. Equations and operation cost for point addition and doubling (affine coordinates)

Addition                      Cost          Doubling                      Cost
λ = (y2 − y1)/(x2 − x1)       M + I         λ = (3x1² + a)/(2y1)          S + M + I
x3 = λ² − x1 − x2             S             x3 = λ² − 2x1                 S
y3 = λ(x1 − x3) − y1          M             y3 = λ(x1 − x3) − y1          M
Total                         S + 2M + I    Total                         2S + 2M + I

Jacobian Coordinate. For Jacobian coordinates, we set x = X/Z² and y = Y/Z³ [10]. Table 2 describes the equations for point addition and doubling and details their costs:

Table 2. Equations and operation cost for point addition and doubling (Jacobian coordinates)

Addition                          Cost        Doubling                      Cost
A = X1·Z2²                        M + S       A = 4X1·Y1²                   S + M
B = X2·Z1²                        M + S       B = 3X1² + a(Z1²)²            3S + M
C = Y1·Z2³                        2M          C = −2A + B²                  S
D = Y2·Z1³                        2M          X3 = C                        0
E = B − A                         0           Y3 = −8Y1⁴ + B(A − C)         S + M
F = D − C                        0           Z3 = 2Y1·Z1                   M
X3 = −E³ − 2A·E² + F²             2S + M      Total                         6S + 4M
Y3 = −C·E³ + F(A·E² − X3)         3M
Z3 = Z1·Z2·E                      2M
Total                             4S + 12M

Chudnovsky Jacobian Coordinates. For Chudnovsky coordinates, let P = (X1, Y1, Z1, Z1², Z1³), Q = (X2, Y2, Z2, Z2², Z2³) and P + Q = R = (X3, Y3, Z3, Z3², Z3³). Table 3 lists the addition and doubling formulas and their costs:

Table 3. Equations and operation cost for point addition and doubling (Chudnovsky Jacobian coordinates)

Addition                          Cost        Doubling                      Cost
A = X1·Z2²                        M           A = 4X1·Y1²                   S + M
B = X2·Z1²                        M           B = 3X1² + a(Z1²)²            2S + M
C = Y1·Z2³                        M           C = −2A + B²                  S
D = Y2·Z1³                        M           X3 = C                        0
E = B − A                         0           Y3 = −8Y1⁴ + B(A − C)         S + M
F = D − C                        0           Z3 = 2Y1·Z1                   M
X3 = −E³ − 2A·E² + F²             2S + M      Z3²                           S
Y3 = −C·E³ + F(A·E² − X3)         3M          Z3³                           M
Z3 = Z1·Z2·E                      2M          Total                         6S + 5M
Z3²                               S
Z3³                               M
Total                             3S + 11M

If Chudnovsky coordinates are used for point doubling, the running time expressed in terms of field operations is five multiplications and six squarings; for point addition, it is eleven multiplications and three squarings.


Modified Jacobian Coordinate. For Modified Jacobian coordinates, the representation of a point P is the quadruple (X1 , Y1 , Z1 , a4 Z14 ) [10]. We set x = ZX2 et y = ZY3 , the Tables 4 listed the cost of addition and doubling point and curves doubling and addition formulas: Table 4. Equations and operation cost for point addition Cost for addition

Table 4. Equations and operation cost for point addition and doubling (modified Jacobian coordinates)

| Addition                  | Cost      | Doubling                 | Cost     |
| A = Z1²                   | S         | A = 4X1·Y1²              | S + M    |
| B = Z2²                   | S         | B = 3X1² + (aZ1⁴)        | S        |
| C = X2·A                  | M         | C = −2A + B²             | S        |
| D = X1·B                  | M         | X3 = C                   | 0        |
| E = Y1·Z2·B               | 2M        | Y3 = −8Y1⁴ + B(A − C)    | S + M    |
| F = Y2·Z1·A               | 2M        | Z3 = 2Y1·Z1              | M        |
| G = C − D                 | 0         | aZ3⁴ = 2(8Y1⁴)(aZ1⁴)     | M        |
| H = F − E                 | 0         |                          |          |
| I = G²                    | S         |                          |          |
| K = I·G                   | M         |                          |          |
| X3 = −I(G + 2D) + H²      | S + M     |                          |          |
| Y3 = −E·K + H(D·I − X3)   | 3M        |                          |          |
| Z3 = Z1·Z2·G              | 2M        |                          |          |
| aZ3⁴ = a·Z3⁴              | 2S        |                          |          |
| Total                     | 6S + 13M  | Total                    | 4S + 4M  |

Comparison. To clarify the comparison, we assume that S = 0.8M; the cost of an inversion depends on the finite field and lies between 9M and 30M when p is larger than 100 bits. The field operation counts for point addition and doubling in the various coordinate systems are listed in Table 5 below:

Table 5. Cost of addition and doubling in various coordinate systems

| Coordinates          | Addition               | Doubling               |
| Affine               | S + 2M + I = 2.8M + I  | 2S + 2M + I = 3.6M + I |
| Jacobian             | 4S + 12M = 15.2M       | 6S + 4M = 8.8M         |
| Chudnovsky Jacobian  | 3S + 11M = 13.4M       | 6S + 5M = 9.2M         |
| Modified Jacobian    | 6S + 13M = 17.8M       | 4S + 4M = 7.2M         |
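The numeric values in Table 5 follow directly from the operation counts under the stated S = 0.8M assumption; a few lines of Python (illustrative only) reproduce them:

```python
# Re-derive the numeric entries of Table 5 under the S = 0.8M assumption (M = 1).
M, S = 1.0, 0.8
systems = {
    "Jacobian":            (4 * S + 12 * M, 6 * S + 4 * M),
    "Chudnovsky Jacobian": (3 * S + 11 * M, 6 * S + 5 * M),
    "Modified Jacobian":   (6 * S + 13 * M, 4 * S + 4 * M),
}
for name, (add_cost, dbl_cost) in systems.items():
    print(f"{name}: addition = {add_cost:.1f}M, doubling = {dbl_cost:.1f}M")
# Jacobian: 15.2M / 8.8M,  Chudnovsky Jacobian: 13.4M / 9.2M,  Modified Jacobian: 17.8M / 7.2M
```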

2.2 Bases Representation of Scalar

To compute kP, there are several methods, similar to those used to compute x^k in a finite field; the aim of these methods is to reduce the number of point additions by reducing the


Hamming weight of the recoded key k. Hence, several base representations of the scalar k are used, such as binary methods, m-ary methods, sliding window methods and signed-digit methods (NAF, NAFw). Subtraction on an elliptic curve has the same cost as addition, so signed-digit methods are among the most convenient for elliptic curves. For the rest of the paper, ADD is the cost of adding two points and DBL the cost of doubling a point.

Binary Method. The well-known binary method quickly calculates a scalar multiplication with a large integer k. Let k be a positive integer. The binary method needs on average log2(k)/2 additions and log2(k) doublings; that is, we perform ℓ DBL + (ℓ/2) ADD, where ℓ = log2(k) is the size of the integer in binary.

NAF Method. Instead of representing the key k in binary, we use the NAF representation, known as the canonical representation with the fewest non-zero digits. The number of point additions is linked to the number of non-zero digits: if the latter decreases, the number of operations is reduced and the running time improves. Furthermore, in the NAF representation one must compute −P, which does not require any operation. If we use this representation for computing kP, the expected running time is (ℓ/3) ADD + ℓ DBL instead of (ℓ/2) ADD + ℓ DBL for the binary representation, so this representation decreases the number of additions by approximately ℓ/6.

NAFw Method. The window NAF is an improved version of the NAF which processes w digits of k at a time instead of one, which further reduces the Hamming weight. On the one hand the running time can be decreased; on the other hand more memory is used to store the precomputed points ki·P. If extra memory is available, this representation is advisable. The average density of non-zero digits in NAFw is 1/(w + 1), so the number of operations is approximately (ℓ/(w + 1)) ADD + ℓ DBL + (2^(w−2) − 1) ADD.
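To make the recodings concrete, here is a minimal Python sketch (an illustration, not code from the paper) that computes the NAF and width-w NAF digit strings of a scalar and compares their numbers of nonzero digits with the plain binary expansion:

```python
def naf(k):
    """Non-adjacent form of k, least significant digit first; digits are in {-1, 0, 1}."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)          # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def wnaf(k, w):
    """Width-w NAF of k; nonzero digits are odd and lie in (-2^(w-1), 2^(w-1))."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= 1 << (w - 1):
                d -= 1 << w
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

k = 0b110111010111101                 # an arbitrary 15-bit example scalar
binary = [int(b) for b in bin(k)[2:]][::-1]
for name, digits in [("binary", binary), ("NAF", naf(k)), ("NAF_4", wnaf(k, 4))]:
    assert sum(d << i for i, d in enumerate(digits)) == k   # each recoding represents k exactly
    print(name, "nonzero digits:", sum(1 for d in digits if d))
```

For a random ℓ-bit scalar, roughly ℓ/2 of the binary digits, ℓ/3 of the NAF digits and ℓ/(w + 1) of the NAFw digits are nonzero, which is exactly the saving in point additions described above (at the price of 2^(w−2) − 1 precomputed points for NAFw).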

2.3 Comparison

The combination of multiplication methods, specifically the choice of coordinates together with the base representation of the scalar k, decreases the number of point additions and doublings. The table below details the number of additions and doublings for each base representation with each choice of coordinate system; it shows that the NAF4 method with Chudnovsky Jacobian coordinates is the fastest when the size ℓ is large.


| Method | Choice of coordinates | Number of additions          | Number of doublings | Total cost                        |
| Binary | Affine                | ℓ(2.8M + I)/2                | ℓ(3.6M + I)         | ℓ(5M + 3I/2)                      |
|        | Jacobian              | 7.6ℓM                        | 8.8ℓM               | 16.4ℓM                            |
|        | Chudnovsky Jacobian   | 6.7ℓM                        | 7.2ℓM               | 13.9ℓM                            |
|        | Modified Jacobian     | 8.9ℓM                        | 7.2ℓM               | 16.1ℓM                            |
| NAF    | Affine                | ℓ(2.8M + I)/3                | ℓ(3.6M + I)         | ℓ(4.5M + 4I/3)                    |
|        | Jacobian              | 5.06ℓM                       | 8.8ℓM               | 13.86ℓM                           |
|        | Chudnovsky Jacobian   | 4.47ℓM                       | 7.2ℓM               | 11.67ℓM                           |
|        | Modified Jacobian     | 5.94ℓM                       | 7.2ℓM               | 13.14ℓM                           |
| NAF4   | Affine                | ℓ(2.8M + I)/5 + 3(2.8M + I)  | ℓ(3.6M + I)         | (8.4 + 4.16ℓ)M + ((15 + 6ℓ)/5)I   |
|        | Jacobian              | 45.6M + 3.04ℓM               | 8.8ℓM               | 45.6M + 11.84ℓM                   |
|        | Chudnovsky Jacobian   | 40.2M + 2.68ℓM               | 7.2ℓM               | 40.2M + 9.88ℓM                    |
|        | Modified Jacobian     | 71.2M + 3.56ℓM               | 7.2ℓM               | 71.2M + 10.76ℓM                   |

Conclusion

In this paper, we compared three methods for computing scalar multiplication with various coordinate choices, and we concluded that the NAF4 method with Chudnovsky Jacobian coordinates is the fastest when the size ℓ is large. For future work, research will be carried out to combine the Montgomery method with a specific coordinate system in order to find the fastest method and to give the running times of these methods.

References

1. Miller, V.: Uses of elliptic curves in cryptography. In: Advances in Cryptology, CRYPTO 1985, pp. 417–426 (1986)
2. Koblitz, N.: Elliptic curve cryptosystems. Math. Comput. 48, 203–209 (1987)
3. Cohen, H., Frey, G., Avanzi, R.: Handbook of Elliptic and Hyperelliptic Curve Cryptography. CRC Press, Boca Raton (2005)
4. Blake, I., Seroussi, G., Smart, N.: Advances in Elliptic Curve Cryptography. London Mathematical Society Lecture Note Series, Cambridge University Press (2005)
5. Silverman, J.H.: The Arithmetic of Elliptic Curves. Graduate Texts in Mathematics, vol. 106. Springer (1986)
6. Brier, E., Joye, M.: Fast point multiplication on elliptic curves through isogenies. In: Fossorier, M., Hoholdt, T., Poli, A. (eds.) Applied Algebra, Algebraic Algorithms and Error-Correcting Codes. LNCS, vol. 2643, pp. 43–50. Springer (2003)
7. Ciet, M., Joye, M., Lauter, K., Montgomery, P.: Trading inversion for multiplication in elliptic curve cryptography. J. Des. Codes Cryptogr. 39(2), 189–206 (2006)
8. Lim, C.H., Hwang, H.S.: Fast implementation of elliptic curve arithmetic in GF(pn). In: Public Key Cryptography. LNCS, vol. 1751, pp. 455–461 (2000)


9. Hankerson, D., Menezes, A., Vanstone, S.: Guide to Elliptic Curve Cryptography, p. 92. Springer, Heidelberg (2004)
10. Cohen, H., Miyaji, A., Ono, T.: Efficient elliptic curve exponentiation using mixed coordinates. In: Proceedings of Advances in Cryptology - ASIACRYPT 1998, International Conference on the Theory and Applications of Cryptology and Information Security, Beijing, China, vol. 1514, pp. 51–65, 18–22 October 1998
11. Silverman, J.H.: Fast multiplication in finite fields GF(2n). In: Cryptographic Hardware and Embedded Systems, CHES. Lecture Notes in Computer Science, vol. 1717. Springer, Heidelberg (1999)
12. Fay, B.: Double-and-add with relative Jacobian coordinates. Cryptology ePrint Archive (2014)
13. Longa, P., Miri, A.: New multibase non-adjacent form scalar multiplication and its application to elliptic curve cryptosystems (extended version). IACR Cryptology ePrint Archive 2008, 52 (2008)

Patients Learning Process Supporting Change in Identities and Life Styles - A Heart Failure Self-care Scenario

Linda Askenäs and Jan Aidemark

Linnaeus University, 351 95 Växjö, Sweden
{linda.askenas,jan.aidemark}@lnu.se

Abstract. This paper deals with the planning of eHealth systems in the area of chronic care from a patient-centered perspective. The specific area is heart failure (HF) and systems that support patients’ possibilities to be active learners during the care processes and facilitate learning as a process of life style changes. Becoming better at self-care implies changes in lifestyle and the creation of a new identity. A better understanding of this process is intended to create a base for developing appropriate information systems or information technology (IS/IT) support for learning processes. The objective of this paper is the development of a better understanding of the challenges of self-care within chronic illnesses with special focus on HF. As a result, we present a set of issues that could guide the choice and design of ICT-based support systems. Keywords: Learning management  Heart failure  eHealth  Patient learning Self-care  Life-style changes  Chronic illness  Patient-centered care Learning processes

1 Introduction EHealth is an area of IS/IT systems for the health sector that forms part of the solution for the current challenges that this sector faces. Changes like new classes of sickness, more chronic illness, greater demands from patients on individualization and a general drive for a more effective use of resources paint a difficult and conflicting picture. Care for chronic illness involves special characteristics as the time line is open-ended, the condition is changing, often following an unpredictable illness trajectory, and its aspects are challenging both from patient and societal perspectives. This paper deals with learning and change processes as part of developing better self-care in the area of heart failure (HF). Heart failure (HF) is a chronic condition in which the patient’s own actions are central to how to deal with the condition. Studies of coronary heart disease indicate that intensive changes in lifestyle [1] are highly important for the wellbeing of these patients. Self-care and lifestyle changes can be a complement to medical treatment [2, 3]. Changes in exercise, stress management, smoking and diet, amount to serious lifestyle changes and are as such not easy to achieve. In this paper, we look closer on how such lifestyle changes can be achieved from a learning perspective. A better understanding of these processes of patient © Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 400–411, 2019. https://doi.org/10.1007/978-3-030-03577-8_45


learning and personal knowledge development is hoped to create a basis for developing appropriate information technology (IS/IT) support for the learning processes. In the paper, we also try to understand lifestyle changes as a learning process integrated into the self-care process and the way this could create a base for IT support for self-care and an active process in changing one’s lifestyle. The aim of this paper is to develop a frame of understanding for the planning of information systems that support the learning needed by HF patients during the care process. The approach for achieving this is to investigate reports on care for HF patients. As a result, we wish to present a set of issues and discussion points that could guide the choice and design of ICT-based support systems.

2 Self-care as a Process of Patient Learning

Theories of learning in real life and for practical purposes, or experiential learning, are close to the situation of heart failure patients. Learning is the patient's process of creating personal knowledge of how to deal with the situation of having heart failure problems. Experiential learning has been presented [4] as building on a basic "Lewinian experiential learning" model. This model consists of a learning cycle of four stages:

• Concrete experience.
• Observation and reflection.
• Formation of abstract concepts and generalizations.
• Testing implications of concepts in new situations.

In experiential learning, the learning process takes place on the individual level as learning by doing. First hand experiences and learning from the outcomes of one’s own actions are the keys to learning. This cycle progresses over time, creating feedback loops that direct the learning according to the desired goals. When new situations arise, learning restarts with new observations, generalization, and testing in other situations. Feedback [5] is in itself an important concept, including the concepts of single and double loop learning. The single loop directs learning towards a given goal, while the double loop is focused on the nature of the goal and how to change it if needed. In the single loop, information about performance is fed back to change current behavior in order to achieve better goal attainment. In the double loop, information from outcomes is used for questioning the current goals and the understanding underlying them. The learning process aims at creating new personal knowledge [6]. The process of acting and responding to situations implies building tacit, action-oriented knowledge, not only by being able to repeat facts about how to act but also actually doing it. Learning and self-care are important topics in the care of heart failure patients. Riegel et al. [7, 8] present a 3-step model of self-care, including self-care maintenance, monitoring and management. Maintenance refers to self-care activities that aim at preserving and improving the health situation of the patient by, for example, exercising and taking the medications prescribed. Monitoring is the process of listening to signals from the body in order to understand the current situation and recognize changes, which in turn might warrant changes in behavior. Management is the


evaluation of the output of monitoring and should aim at taking new decisions on how to change the current behavior, resulting in a new schema of self-care maintenance. Riegel et al. [7, 8] describe decision-making as naturalistic and being of an ad hoc nature, with little systematic analysis of alternatives or expected outcomes. The attitude or abilities of the patient play an important role, because patients must decide, in the end, to commit themselves to the self-care, which in turn can be done in a more or less active, knowledgeable and reflective way. Four types of patient approach are identified in the research: • Insufficient and reflective, with the patient deciding not to engage in self-care. • Sufficient, reasoned and reflective, with the patient engaging actively in the right level of self-care. • Insufficient and mindless, with the patient neither engaging in self-care nor reflecting at all on why or how. • Sufficient and un-reflected, with the patient performing sufficient self-care, but without any thought or reflections. These different types of patient present the dilemma for self-care in that, for various personal reasons, not everybody takes an interest in self-care, while some may even willfully abstain from it. Different aspects on effective patient learning are discussed by several authors. In [9], factors that influence the impact of education are investigated, with age and gender pointed out as especially important. This study shows that older males with fewer comorbid illnesses were more successful in practicing self-care. In another study, formal education and symptom severity predicted a better knowledge of HF [10]. There are many aspects of a HF patient’s situation that might prevent learning, as reported in a number of systematic studies. For example, Strömberg [11] reports five types of barriers for patient learning: (1) Functional limitations such as visual, hearing and mobility limitations, (2) Cognitive limitation, memory problems, (3) Misconceptions and lack of basic knowledge, (4) Low motivation and interest, and (5) Low self-esteem. Cognitive impairment includes a number of problems with basic mental processes following HF [12]. Problems can occur in several domains including attention, visualspatial memory, visual-spatial intelligence, verbal attainment, memory and executive functions. There are differences between HF patients in whether or not they have any cognitive problems whatsoever or have some of these problems but no other ones. Some might even come through without any problem. There are also low-level problems, such as subtle cognitive impairment, that are hard to discover but still affect the abilities of the patient. Patient learning processes have been explored from a general chronic illness perspective inspired by, for example, the chronic care model [13] as a way of organizing information systems in support of patient centered care [14, 15]. Six main areas were outlined in this model as well as one more overarching area that covers the interactions and conversations between patient and caregiver. Behind this model, a traditional decision-making model is used as an approach to understanding a patient’s process towards a healthier life.


(1) Patient understanding. Patients suffering from HF need to have knowledge and skills acquired through experience and/or education in order to gain an understanding of their situation, including the cause of HF and its symptoms, the rationale for treatment and lifestyle changes. (2) Monitoring and fact gathering. Here traditional measures are used to acquire the facts about the condition of the patient. (3) Planning and formulation of alternative actions. Based on the facts, possible paths of action can be laid out. (4) Decision-making. This is a process of comparing alternatives in order to choose the most satisfactory one and committing resources to the chosen path. (5) Action. Performing what has been decided on, for example taking extra medications, e.g. diuretics in HF, following an exercise plan, or some additional treatment that is to be performed, such as an annual flu shot. (6) Evaluation and keeping records. Evaluations are made for understanding the outcome of different activities and of the care process as a whole. Here we consider the record keeping and information handling needed for the support and administration of the process, and how this can be used for learning. (7) Patient interaction with healthcare professionals (doctor/nurse etc.) is traditionally done in a face-to-face experience format or by telephone, with possibly a written note as support technology. The healthcare providers support patients in coming to terms with their situation, tests and examinations are performed, plans and decisions are made, and actions and records are kept. This is not a stage as such, rather the key interaction, on which all the other activities of the learning process are based.

3 Heart Failure Patients’ Life Style Changes and Identities To develop a better understanding of the complex needs of patients with HF, qualitative studies have been performed to explore the experiences of such patients through the disease trajectory. This also affects the possibilities and conditions under which the learning process occurs. It has been shown that these patients gradually find more and more difficulty in fulfilling their usual roles, and they may even be forced into a more physically dependent position in day-to-day living. Among the difficulties that HF patients face, the nature of the lifestyle changes, many of which are hard to change in any case, are important to understand. Welstand et al. [16] investigate how patients handle their heart failure condition and their ability to take responsibility for a new life situation. In an overview of 18 papers, a number of common characteristics emerge, the key one being taking on a new identity. This includes the abilities to understand the diagnosis and the manifestations of heart failure, perceptions of day-to-day life, coping behaviors, the role of others and the concept of self. Taking on a new identity is very important for the effectiveness of self-care. A number of studies have been made on different aspects on life style change. Problems like feeling worthless, social isolation and loneliness are commonly associated with chronic illnesses, especially among patients with heart failure [17, 18].


In a qualitative study of 25 patients, Thornhill et al. [19] explore people’s experiences of the diagnostic process and of living with heart failure. The study shows that the patients feel restricted in their way of life, especially in experiencing their lack of strength to perform physical activities. Social relations and support were therefore important to handle in the new situation. Stull et al. [20] investigate the process that people undergo when they become heart failure patients as a process of identity creation. In their study, 21 patients were interviewed about important situations or events when adjusting to their heart failure condition. Stull et al. suggest that the identity creation includes five phases: crisis event, diagnosis, patient and family response, acceptance and adjustment and, finally, getting on with one’s life. This is a slow gradual process of creating new behaviors, with new meanings being attached to events and a new role in life being developed. Changes in the exercise capacity due to heart failure bring changes to gender identity, self-esteem and the quality of life of men [21]. Jones et al. [22] pinpoint how heart failure causes fatigue and how this influences the patients’ perception of what they can do, of their self-identity, their body, experience of time, the environment, as well as their relationships with others and their need of assistance. Heart failure might lead to changes in appetite, and in turn to changes in diet and in general eating habits. Changes in the role of food in a patient’s life are also a factor affecting the self-identity of heart failure patients [23]. Park et al. [24] point to the importance for the well-being for patients with heart failure of factors of religion/spirituality, including concepts like forgiveness, daily spiritual experiences and belief in afterlife. The impact varies, for example, in that physical health is not affected, while forgiveness seems to have a positive effect on states of depression. Problems with sexual health among people with heart failure, both male and female, are reported [25], with implications for social life and self-perception. A number of effects of heart failure conditions, such as loss of appetite, fatigue and exercise capacity are thus reported, all of them very personal and identity challenging. In turn, these problems might lead to other ones like social disruption, loneliness and isolation. The advice given to counter these is not easy to adhere to, for example starting exercising while being extremely tired due to heart failure is difficult. There are also problems of compliance with medication and other lifestyle recommendations [26]. Van der Wal et al. [27] found that depressive symptoms and low knowledge of HF among patients are barriers for compliance to HF advice. To increase compliance, various forms of intervention could be tried. Research on the effects of support for heart failure patients using IT solutions (for example, telemonitoring) shows that these can be effective for improving patients’ life situation [28].

4 Life Style Changes as a Patient Learning Process In this section we discuss various aspects of the learning process for life style changes. Here we combine the 3-step model for self-care [7, 8] with the 7-point learning model, both discussed above. We map the 7 steps on to the self-care activities of maintenance,


monitoring and management. For each of the three steps we discuss problems and possibilities of life-style changes from a learning perspective.

4.1 Self-care Maintenance: Planning, Decision and Action

Self-care maintenance is defined as those behaviors used by patients with a chronic illness to maintain physical and emotional stability [7, 8]. From a learning perspective, we break this down into three actions, planning, decision-making and taking action. Planning is the formation of possible future action, to be selected in the decisionmaking process and realized in an action phase. Plans for how to conduct self-care activities seem fairly standardized. Planning and decision-making appear to be rationalized into handing out standardized sheets of paper describing what to do for the patient when the self-care phase starts. A great deal of the personalized part comes out of the personal meeting between patient and caregiver. New IT-based systems could be used to promote an individualization of the planning processes, making this process more “patient centered”. In patient centered care, planning should be a shared activity between the care dyad/triad (caregivers, relatives and patient). It is important to note that this phase should not be too deeply integrated with the decision-making; it should be open to exploring possibilities and new perspectives. During the planning, it is good to really explore possibilities without too many restraints, instead of trying to solve problems directly, but opening up the situation and applying a fresh broad view of the whole situation. From a theoretical perspective, a decision is an irreversible commitment of resources; basically involving their consumption. For a HF patient, a choice is a commitment to act according to a chosen plan. However, as the subsequent action comes later, making a decision at a meeting does not equal action. The decision point for a HF patient could occur, for example, during conversations with a health-care specialist when the patient decides what to do. Still, when do things actual happen? When is the decision made: when things are purchased, like new scales, a gym card or healthier food, or when the shoes are used or food consumed? When the choice has been made, support is needed for bringing the intention of the decision into action. Such support could, for example, include the scheduling of activities, booking times and experts, buying the necessary items and training to use equipment or food preparation - in essence anything needed to help the patient to actually perform according to the decision. The adherence to or compliance with decisions and plans is a really hard part of this process. From a learning perspective, this is the process where the personal knowledge is actually achieved, which is something that comes from doing things. Systems for reminders and follow-up can be easily imagined and devised. 4.2

Self-care Monitoring: Measurements and Keeping Records

Self-care monitoring refers to the process of observing oneself for changes in signs and symptoms [7, 8]. ESC Guidelines [29] prescribe that HF patients shall record weight, drug and fluid intake, monitor signs and symptoms (typical: breathlessness, fatigue, ankle swelling, orthopnea and reduced exercise tolerance), drug effects and side effects


of physical activity and food intake (malnutrition/weight reduction) as well as recognizing depression symptoms. Daily facts could be recorded automatically or manually by the user. All of these are important for monitoring and providing information for further planning, decision-making and action. These should also be used to reinforce the basic understanding of the general HF diagnosis and what it means for the patient. This connects to the basic feedback loop [5] which involves comparing outcomes with the desired state and adjusting actions accordingly. The facts and data gathered during measuring activities must also be stored and made ready for future analysis and presentation. This could mean local databases or even a centralized storage where these facts could be shared with the health-care system. Analyses of the material are also part of this aspect, alerting the patient and using graphics and warning signals. The measurements that have been recorded by the health professionals should also be visible to the patient. Presenting figures is easily done by using IT, but the design and intended ways of using it must consider the basic conditions that a HF patient lives under, so that they actually function as support and do not drag people down or scare them away. Having information and facts presented as patients’ own experience should help the learning process and the understanding of cause and effects. At the early stages, this needs to be done with help from and in interaction with different health professions. There is a wide range of health-care systems, which a learning process can use or can be useful for. Here we look at the broader picture of systems and the organizational aspects of the learning process. There are many questions about how to make the connection between self-care and general health records. For example, how can patients feed information into these larger systems and how can the hospital system at large be used for the benefit of the patient? It might, for example, be possible to make direct connections to quality clinical registers. This data could then be used by patients to compare their personal data with those of others. Patients and patient groups could also make their own suggestions about which data could be beneficial for them to monitor as a group, even though it may not have a direct medical or clinical value for health professionals. Records can be utilized as an active part of the care, the learning and the life-style change processes. By using information from more general records of the behavior and outcomes of self-care, the patients may receive a broader perspective on their situation, and their motivation for change may improve. The monitoring of symptoms and improvement records need to be accompanied by descriptions of the situation and of what actually happened for patients to be able to learn from the situation. The writing in itself can be a help to start reflecting. This would be a process of trying to make sense of the different patterns in the data or situations. It will also be a way to help patients and relatives to remember and retell the health-related story in physical meetings with the caregiver. This story will be a health diary/blog that may be shared with relatives, family or friends. One interesting project could be to make an inventory of what possible use could be made of general health care systems and of the facts and statistics from past history that these systems contain.
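As a simple illustration of the kind of rule-based feedback such a monitoring system could implement, consider the short Python sketch below. It is illustrative only: the 2-kg-in-3-days threshold is a commonly cited rule of thumb used here as an assumption, not a recommendation taken from this paper, and the readings are invented.

```python
from datetime import date

def weight_alert(readings, threshold_kg=2.0, window_days=3):
    """Flag a rapid weight gain, a typical warning sign of fluid retention in HF.

    readings: list of (date, weight_kg) pairs sorted by date.
    Returns True if weight rose by more than threshold_kg within window_days.
    """
    for i, (d1, w1) in enumerate(readings):
        for d2, w2 in readings[i + 1:]:
            if (d2 - d1).days <= window_days and (w2 - w1) > threshold_kg:
                return True
    return False

readings = [(date(2018, 10, 1), 82.0),
            (date(2018, 10, 2), 82.4),
            (date(2018, 10, 3), 84.3)]        # hypothetical self-monitoring data
print(weight_alert(readings))                 # True -> prompt the patient to contact the care team
```

In the terms used above, such a rule is the single feedback loop: outcomes are compared with a desired state and fed back to the patient, while the choice of thresholds and goals belongs to the double loop negotiated with the caregiver.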

4.3 Self-care Management: Understanding and Patient-Care Professional Meetings

Self-care management is defined as the response to signs and symptoms when they occur [7, 8]. Understanding the connection between symptoms and self-care actions is necessary for the self-care processes to function. The understanding phase seems most central for HF patients, especially from a motivation and attention point of view. In order to be able to learn about something, you must have a basic frame of understanding. This is a phase involving understanding what goals the patients have with their self-care and their long-term vision. In this sense, a second order level of the feedback loop [5] must be developed, where not only goal attainment is looked at, but which also includes looking at and changing long term goals. To grasp the big picture and the long-term outlook for the future is important for preparing and sustaining new life-changing programs of action. There are well documented problems, as all too many patients have a low and or even nonexistent awareness of their sickness. Understanding is a problem, as studies have shown that patients might not even be aware that they have been diagnosed with HF. In one study, 20% of the patients were unaware of having an HF diagnosis [30]. At the same time, it does not seem to be due to an obvious lack of facts, but rather concerns problems of motivating patients to take the time and effort to commit themselves wholeheartedly. Personal contacts from health care personnel by telephone calls seem to be a success factor [31]. Constant reminders and a one-to-one care attitude from care persons are important. It should be possible to make use of personalized information systems that keep reminding people of the basics of HF and the importance of patient activities. It is also possible to include background facts, for example specific drug doses, effects and side effects, diet restriction (personal fluid intake with individual context factors such as weather or physical activity, due to malnutrition or weight reduction) or individual exercise recommendations. Overall, a great many problems of HF patients’ low compliance seem to stem from this phase of the learning process. Hence, it is hard to obtain any real commitment for the following steps if the basic understanding falters. From a learning perspective, people need a basic mental frame (for example [32]) which directs attention towards cognitive and social levels and enables the patient to even start taking in information on a subject. The framework consists of the individual perspective, which organizes how that individual perceives reality. A change of lifestyle would start with changes in these basic frames describing who that person is and what the individual situation is like. The frame consists of a context of facts, values and emotions that people possess when trying to make sense of incoming new facts. This is a key factor in the understanding phase, which could be spelled out as creating a frame of mind for learning and action as a HF patient. The life-long and often irreversible nature of HF seems to discourage people from thinking about their condition. To forget about it and live “normally” is of course desired, as the constant reminder of bad future outlooks in itself drags the perceived life quality down. Self-identification and self-perception are important issues in this phase. To become a person with a weak heart who always needs to consider this in actions and life choices is not desirable but necessary for many people. 
Framing the self-understanding into a situation of control is important; otherwise the path is open to resignation, learned helplessness and apathy. Viewing the full picture and being in


charge of one’s situation is a central theme in the continuous learning suggested in this model. In patient-centered care, it is important for every caregiver to recognize the patient’s situation and needs. The caregiver should develop an understanding of the values, thoughts and circumstances of the patient. Information about the patient’s current situation and how to cope with it is equally necessary for the patient to understand. The community aspect becomes a clear and important part of this equation. Today’s focus on social media and on creating connection between people is very important for creating one’s identity and becoming part of a group with a wider social context. It has been reported that HF patients might tend to feel ostracized and stigmatized by their diagnosis. This social level of learning and frame development is, however, strengthened while shared and discussed with involved and concerned member of social groups [33]. The face-to-face meeting plays a very central role in the learning process. Many of the activities mentioned above are discussed and framed in such meetings. The face-toface situation endows what is discussed with a higher feeling of reality and importance to become self-evident truths [33]. The chances of doing so would be greater if the patient is able to remember and relate what has happened since the previous meeting.

5 Discussion Patient learning is a central element in HF self-care and has a clear impact on the development of new identities as part of necessary life-style changes. In this study, we have compared an information-based, rational and decision-oriented model with a selfcare model focused on patient actions. Although many of the concepts are similar, there are also clear contrasts between these two. The “experiential learning” [4] model could be one example of what this might entail. This would provide a full learning cycle that ends with a patient that both has personal/tacit knowledge of being a person with HF and explicit knowledge, knowing what HF is and how to treat it (tacit/explicit knowledge, [6]). However, this is not easy to achieve with the limitations of the patients’ intention, for example the willful uninterested patient that abstains from selfcare [7, 8]. Stull et al. [20] point to the need of getting out of the crisis mode that HF patients often find themselves in and going back to normal. Ignoring or trying to ignore the consequences of HF might be one strategy for going back to normal but offers a bad start for learning to be sufficiently capable of self-care. This cycle points to the difficulties that come with tacit and personal knowledge. The creation of a new lifestyle and personal identity implies creating new sets of personal and tacit knowledge, involving acting in a self-evident and automatic style in everyday life with regard to HF-related activities. This is a process of creating personal knowledge, which means actually being able to perform the right actions without extra instructions or help. The integration of a learning process into the self-care cycle of a maintaining, monitoring and managing process should improve the patient’s ability to achieve lifestyle changes. We can integrate these different processes into a model, a self-care layers model (Fig. 1). In this model, we connect three layers of self-care. At the core of the patients’ selfcare actions we see the interactions with the care personnel, typically in the patient-care professional meeting. Here knowledge, trust and motivation for self-care are built. It is


Fig. 1. Layers of self-care management.

around this core that the patients’ self-care cycle of management, maintenance and monitoring revolves. These are actions taken on an everyday basis in order to improve everyday life. To these two layers, we add a support system layer that could aid selfcare actions by further information, communication, knowledge and interaction with care personnel. Acknowledgement. This work is supported through the Swedish National Science Council/Swedish research council for health, working life and welfare (VR-FORTE) (20144100).

References 1. Ornish, D.D., Brown, S.E., Billings, J.H., Scherwitz, L.W., Armstrong, W.T., Ports, T.A., McLanahan, S.M., Kirkeeide, R.L., Gould, K.L., Brand, R.J.: MEDICAL SCIENCE: can lifestyle changes reverse coronary heart disease? The Lifestyle Heart Trial. Lancet 336, 129– 133 (1990) 2. Soler-Soler, J., Permanyer-Miralda, G.: How do changes in lifestyle complement medical treatment in heart failure? Br. Heart J. 72(3), S87 (1994) 3. Lainscak, M., Blue, L., Clark, A.L., Dahlström, U., Dickstein, K., Ekman, I., McDonagh, T., McMurray, J.J., Ryder, M., Stewart, S., Strömberg, A., Jaarsma, T.: Self-Care management of heart failure practical recommendations from the patient care committee of the heart failure association of the European Society of Cardiology. Eur. J. Heart Fail. 13(2), 115–126 (2011) 4. Kolb, D.A.: Experiential Learning. Prentice-Hall, Inc., Englewood Cliffs (1984) 5. Argyris, C., Schön, D.: Organizational Learning II – Theory, Method, and Practice. AdisonWesley, Reading (1996) 6. Polanyi, M.: The Tacit Dimension. Routledge, London (1966)


7. Riegel, B., Jaarsma, T., Strömberg, A.: A middle-range theory of self-care of chronic illness. Adv. Nurs. Sci. 35(3), 194–204 (2012) 8. Remme, W., Boccanelli, A., Cline, C., Cohen-Solal, A., Dietz, R., Hobbs, R., Keukelaar, K., Sendon, J.L., Macarie, C., McMurray, J., Rauch, B., Ruzyllo, W., Zannad, F.: SHAPE Study, Cardiovascular Drugs and Therapy, vol. 18, no. 2, pp. 153–159 (2004) 9. Chriss, P.M., Sheposh, J., Carlson, B., Riegel, B.: Predictors of successful heart failure selfcare maintenance in the first three months after hospitalization. Heart Lung 33(6), 345–353 (2004) 10. Barbareschi, G., Sanderman, R., Leegte, I.L., Van Veldhuisen, D.J., Jaarsma, T.: Educational level and the quality of life of heart failure patients: a longitudinal study. J. Card. Fail. 17(1), 47–53 (2011) 11. Strömberg, A.: The crucial role of patient education in heart failure. J. Card. Fail. 7(3), 363– 369 (2005) 12. Gaviria, M., Pliskin, N., Kney, A.: Cognitive impairment patients with advance heart failure and its implications on decision-making capacity. Congest Heart Fail. 17(4), 175–179 (2011) 13. Wagner, E.H., Austin, B.T., Von Korff, M.: Organizing care for patients with chronic illness. Milbank Q. 74(4), 511–544 (2016) 14. Aidemark, J., Askenäs, L.: Knowledge management and patient centered approach in health care. In: Proceedings of IADIS International Conference on E-Health (2009) 15. Askenäs, L., Aidemark, J.: Towards a conceptual process-oriented framework for patientcentered e-health – an exploratory study of current projects in Sweden. In: Proceedings of the International Conference on Society and Information Technologies (2010) 16. Welstand, J., Carson, A., Rutherford, P.: Living with heart failure: an integrative review. Int. J. Nurs. Stud. 46(10), 1374–1385 (2009) 17. Mårtensson, J., Karlsson, J., Fridlund, B.: Male patients with congestive heart failure and their conception of the life situation. J. Adv. Nurs. 25(3), 579–86 (1997) 18. Mårtensson, J., Karlsson, J.-E., Fridlund, B.: Female patients with congestive heart failure: how they conceive the life situation. J. Adv. Nurs. 28(6), 1216–1224 (1998) 19. Thornhill, K.K., Lyons, A.C., Nouwen, A.A., Lip, G.H.: Experiences of living with congestive heart failure: a qualitative study. Br. J. Health. Psychol. 13(1), 155–175 (2008) 20. Stull, D., Starling, R., Haas, G., Young, J.: Becoming a patient with heart failure. Heart Lung 28(4), 284–292 (1999) 21. Makowska, A., Rydlewska, A., Krakowiak, B., Kuczyńska, A., Sorokowski, P., Danel, D., Pawłowski, B., Banasiak, W., Ponikowski, P., Jankowska, E.A.: Psychological gender of men with systolic heart failure: a neglected strategy to cope with the disease? Am. J. Men’s Health 8(3), 249–257 (2013) 22. Jones, J., McDermott, C.M., Nowels, C.T., Matlock, D.D., Bekelman, D.B.: The experience of fatigue as a distressing symptom of heart failure. Heart Lung 41(5), 484–491 (2012) 23. Jacobsson, A., Pihl, E., Martensson, J., Fridlund, B.: Emotions, the meaning of food and heart failure: a grounded theory study. J. Adv. Nurs. 46(5), 514–522 (2004) 24. Park, C., Lim, H., Newlon, M., Suresh, D., Bliss, D.: Dimensions of religiousness and spirituality as predictors of well-being in advanced chronic heart failure patients. J. Relig. Health 53(2), 579–590 (2013) 25. Hoekstra, T., Lesman-Leegte, I., Luttik, M., Sanderman, R., van Veldhuisen, D., Jaarsma, T.: Sexual problems in elderly male and female patients with heart failure. Heart 98(22), 1647–1652 (2012) 26. 
van der Wal, M., Jaarsma, T., van Veldhuisen, D.: Non-compliance in patients with heart failure; how can we manage it? Eur. J. Heart Fail. 7(1), 5–17 (2005)


27. van der Wal, M., Jaarsma, T., Moser, D., van Gilst, W., van Veldhuisen, D.: Unraveling the mechanisms for heart failure patients’ beliefs about compliance. Heart Lung 36(4), 253–261 (2007) 28. Clark, R., Inglis, S., Mcalister, F., Ball, J., Lewinter, C., Cullington, D., Stewart, S., Cleland, J.: Remote (non-invasive) monitoring in heart failure: effect on length of stay, quality of life, knowledge, adherence and satisfaction in 8,323 heart failure patients: a systematic review. Eur. Heart J. 31, 944–945 (2010) 29. McMurray, J.J., Adamopoulos, S., Anker, S.D., Auricchio, A., Böhm, M., Dickstein, K., Falk, V., Filippatos, G., Fonseca, C., Zeiher, A.: ESC guidelines for the diagnosis and treatment of acute and chronic heart failure: the task force for the diagnosis and treatment of acute and chronic heart failure 2012 of the European Society of Cardiology. Eur. J. Heart Fail. 14(8), 803–69 (2012) 30. Ekman, I., Ehnfors, M., Norberg, A.: The meaning of living with severe chronic heart failure as narrated by elderly people. Scand. J. Caring Sci. 14(2), 130–136 (2000) 31. Inglis, S.C., Clark, R.A., McAlister, F.A., Stewart, S., Cleland, J.G.F.: Which components of heart failure programmes are effective? A systematic review and meta-analysis of the outcomes of structured telephone support or telemonitoring as the primary component of chronic heart failure management in 8323 patients: abridged Cochrane Review. Eur. J. Hear. Fail. 13(9), 1028–1040 (2011) 32. Goffman, E.: Frame Analysis: An Essay on the Organization of Experience. Northeastern University Press, Boston (1986) 33. Berger, P.L., Luckmann, T.: The Social Construction of Reality. Doubleday, Garden City (1966)

Privacy Preserving Requirements for Sharing Health Data in Cloud

Insaf Boumezbeur and Karim Zarour

LIRE Laboratory, Department of Software Technology and Information Systems, University Constantine 2 - Abdelhamid Mehri, Constantine, Algeria
{insaf.boumezbeur,karim.zarour}@univ-constantine2.dz

Abstract. Cloud Computing (CC) has become an integral part of the operation of health. This technology plays an important role in healthcare domain, particularly in the sharing of health data. However, moving and sharing sensitive health data to the cloud still implies severe privacy risks. Various research results have proposed different solutions in order to preserve the privacy of health data in cloud environment. This paper focuses on privacy preserving requirements solutions related to sharing health data in cloud. These solutions have been compared together according to the privacy-preserving requirements, strengths and weaknesses. Keywords: Sharing health data

 Privacy  Access control  Encryption

1 Introduction

The development of new technologies has greatly influenced traditional healthcare practices. Consequently, traditional healthcare systems have advanced from conventional clinical settings, with paper-based medical records, to electronic versions of patient health information such as Electronic Medical Records (EMR), Personal Health Records (PHR), and Electronic Health Records (EHR), as defined in [1]. This transition overcomes the limits on collaboration and coordination between patients and healthcare professionals. The CC paradigm has been included in modernization efforts in healthcare because of its benefits. CC does not just provide adequate data storage capacity and facilitate the storing of health data; it also improves the transfer, availability and retrieval of health records [2], the sharing of large data volumes, and the exchange of EMRs between hospitals and healthcare organizations [3]. Moreover, it makes remote patient monitoring by medical providers possible [3, 4]. On the other hand, moving and sharing sensitive health data or health records to the cloud still implies severe privacy risks, particularly issues of privacy requirements. Health data exposure constitutes a severe breach of the privacy of the individual, which is why such data must be handled carefully. While privacy problems are among the most important factors for the adoption of cloud computing in the healthcare domain, privacy preservation is frequently one of the main concerns in e-health cloud systems, particularly in sharing health data. According to the literature, several solutions discuss the corresponding data privacy preserving issues in sharing health data and present novel


mechanisms to mitigate some of the problems (see Sect. 4). In addition, many government organizations have come up with sets of guidelines to enforce legal requirements on healthcare data as a way of achieving the intended level of privacy. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) has emphasized the privacy of health data. The health-data privacy rules identified by HIPAA safeguard protected health information (PHI) and ensure the confidentiality, integrity, and availability of electronic health information [5]. Detailed information regarding the HIPAA privacy guidelines can be obtained by applying these rules [6]. The remainder of this paper is organized as follows. Section 2 discusses the privacy requirements in the e-health cloud identified by various works. Section 3 defines the privacy requirements that we have adopted in the remainder of this paper. Section 4 presents privacy preserving solutions for sharing health data in the cloud. Section 5 shows the results of the comparison of the presented solutions in terms of privacy requirements. Section 6 provides the conclusion.

2 Privacy Requirements for e-Health in Cloud

Privacy protection of patients' data in the e-health cloud is an important question and involves various requirements. Several papers have reviewed the requirements considered for privacy in cloud-based e-health [7–9]. In [7], the authors identified confidentiality, integrity, audit and availability, extended by authentication, non-repudiation, as well as ownership of information. The authors of [10] have referred to and used these requirements. Most papers that preserve health data privacy in a cloud have identified the following requirements: integrity, confidentiality, audit, and non-repudiation. Table 1 summarizes the general privacy requirements for e-health in the cloud identified by previous works or studies over the past eight years.

Table 1. Privacy requirements for e-health in cloud. (A matrix mapping the works [7–13] to the requirements each identifies: confidentiality, integrity, audit, non-repudiation, authenticity, authentication, accountability, anonymity, unlinkability, ownership of information, access revocation, collusion resistance, access control, reliability, authorization, patient's understanding, patient's control, and consent exception.)

3 Essential Privacy Requirements for e-Health in Cloud

In this section, we select and define the requirements considered for privacy in sharing health data in the cloud. Based on the research on privacy preserving requirements presented in the previous section (Sect. 2), confidentiality, integrity, audit, non-repudiation, authenticity, anonymity, and unlinkability are the most widely adopted requirements. Each of these requirements is defined below in the context of e-health systems.

Confidentiality is defined by the International Organization for Standardization (ISO) in ISO-17799 as "ensuring the information accessibility only to those authorized to have access". The confidentiality requirement for health data in the cloud environment requires that unauthorized parties do not access the data [8]. Confidentiality can be achieved by encryption techniques in EHR systems.

Integrity means ensuring the accuracy and consistency of healthcare data [8]. It refers to the fact that data has not been tampered with by unauthorized use [7].

Authenticity, in general, refers to the truthfulness of origins, attributions, commitments, and intentions [7]. In healthcare systems, the information provided by the


healthcare providers and the identities of the entities using such information must be verified [9]. Only entities possessing valid authentication codes and keys should be granted access to the health information [5].

Audit means recording the user activities of the healthcare system in chronological order, such as maintaining a log of every access to and modification of data. Auditing capability enables prior states of the information to be faithfully reconstructed [7]. The function of the audit is to ensure that all the healthcare data is secure and that all data access activities in the e-health cloud are being monitored [9].

Unlinkability means that unauthorized entities should not be able to infer relationships between (a) identifying information of the patients, such as name, address or social security number, and (b) health data, for example diagnoses and medical history. More specifically, the information obtained from different flows of the health data should not be sufficient to establish linkability by unauthorized entities [9].

Anonymity refers to the state in which a particular subject cannot be identified. For instance, the identities of patients can be made anonymous when they store their health data in the cloud, so that the cloud servers cannot learn the identity [9]. It is also important to distinguish between anonymity and pseudonymity, where pseudonymity is one of the methods used to maintain anonymity.

Non-repudiation addresses repudiation threats, which are concerned with users who deny having performed an activity with the data. For instance, in the healthcare scenario, neither the patients nor the doctors can deny having misappropriated the health data [9]. To establish authenticity and non-repudiation, electronic commerce uses technologies such as digital signatures and encryption.

Figure 1 illustrates the different privacy requirements identified by the previous works. The black bars represent the privacy requirements that we have defined above and adopted in the remainder of this document.

Fig. 1. Essential privacy requirements for healthcare cloud.

According to the chart, it can be seen that confidentiality, integrity, audit and nonrepudiation have the highest frequencies, followed by the anonymity, unlinkability,


authentication and authenticity with an average number, and finally, the other requirements with the lowest frequencies.

4 Privacy Preserving Solutions for Sharing Health Data in Cloud

Privacy in sharing health data in the CC environment needs particular attention. Many works have proposed solutions focused on the issue of privacy-preserving health data sharing in a cloud environment. These privacy preserving solutions are presented below in terms of cryptographic and access control strategies.

4.1 Cryptographic Approaches

Among the encryption techniques used to preserve privacy when sharing health data in the cloud are Attribute-Based Encryption (ABE), Symmetric Key Encryption (SKE), homomorphic encryption, digital signatures and Proxy Re-encryption (PRE).

Attribute-Based Encryption (ABE). Hybrid Secure and Scalable Electronic Health Record Sharing (HSS-EHRS) [14] is a hybrid solution proposed for sharing EHRs in a hybrid cloud environment. The goal is to preserve the security and privacy of medical records. The authors combined two cryptographic methods (KP-ABE and MA-ABE) to provide flexible, secure, fine-grained access to EHR files. In [15], the authors proposed a protocol that allows cloud users to secure their shared data. This protocol is based on Attribute-Based Encryption (ABE) and allows secure and efficient revocation of access to a file for a certain user without having to decrypt and re-encrypt the original data with a new key or policy. To address the user revocation problem encountered in CP-ABE, [16] proposed secure and flexible personal data sharing using a new ABBE structure named EP-ABBE, which reduces the user's decryption computation, protects the user's privacy, allows efficient user revocation, and manages access control. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is used in [17, 18]: in [18], the authors presented a design that provides secure access to EHRs and offers effective solutions to some of the issues related to standard encryption mechanisms, while [17] proposed a secure approach to share EHRs among users with minimum content disclosure using a combination of CP-ABE and anonymization. [19] proposed a secure healthcare system that combines effective encryption algorithms such as the Advanced Encryption Standard (AES) and Multi-Authority Attribute-Based Encryption (MA-ABE) to resist attacks from unauthorized users, and identifies threats and vulnerabilities in the healthcare field; this proposal serves to enhance the security and access control of existing solutions. Finally, a solution was proposed by the Data Capture and Auto Identification Reference (DACAR) project [20] to overcome many challenges such as service integration, large-scale deployment, and the security, integrity and confidentiality of sensitive medical data.


Symmetric Key Encryption (SKE). Security and privacy issues in the access and management of EHRs were considered in [10], where the authors proposed an EHR sharing and integration system for healthcare clouds that uses SKE to cover both normal and emergency scenarios. In [3], the authors provided an unlinkability mechanism between patients and electronic medical records using SKE. This mechanism permits electronic medical records to be exchanged in a cloud environment while at the same time protecting the patient's privacy. EDST [21] is an efficient and secure data sharing system for dynamic groups in an untrusted cloud; it allows users to be revoked effectively and supports the joining of new users.

Homomorphic Encryption. In [22], the authors proposed a data sharing system based on homomorphic encryption, which can be used to store sensitive health information outsourced to the cloud with a high level of security. A new sharing system was proposed in [23] to protect the privacy of data and of the evaluation process carried out on the public cloud. As part of this work, security against unauthorized access was ensured by removing unique attributes from the outsourced data.

Proxy Re-encryption (PRE). The authors in [24] developed a health monitoring system that allows patients to share their health data with health professionals in a secure and confidential manner. This system ensures a secure data sharing protocol, implemented through a proxy re-encryption methodology based on El-Gamal, which allows private and secure data sharing in the cloud.
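As a minimal illustration of the symmetric-key idea used by several of the approaches above, the sketch below uses the Fernet recipe from the Python cryptography package. The record content and the key handling are assumptions; a real EHR system would add key distribution, revocation and access control on top of this.

```python
from cryptography.fernet import Fernet

# The data owner (e.g. the hospital or the patient) generates and keeps the secret key;
# only parties who are given this key can read the record stored in the cloud.
key = Fernet.generate_key()
f = Fernet(key)

record = b'{"patient_id": "anon-4711", "diagnosis": "I50.9", "note": "..."}'  # hypothetical EHR fragment
ciphertext = f.encrypt(record)      # this token is what would actually be uploaded to the cloud store

# An authorized recipient holding the same symmetric key recovers the plaintext.
assert f.decrypt(ciphertext) == record
```

The simplicity is also the weakness noted later in the discussion: every authorized party ends up sharing the same key, which complicates revocation and auditing compared with attribute-based schemes.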

4.2 Digital Signature

A security reference model designed for sharing Electronic Health Records (EHRs) while preserving the privacy of healthcare applications in an untrusted cloud was proposed in [7]. The model consists of three core components and uses an anonymous signature scheme to ensure the authenticity and integrity of the EHRs. The work of [25] proposed a Cloud-Based Service for Secure Electronic Medical Record Exchange (CBSMRE). It is a unified and secure service that enables developers, healthcare providers and organizations to retrieve and manage medical and personal health records (PHRs) among various subscribers.

4.3 Access Control

A cloud-based EHR model divided into access control and the application of encryption and digital signatures, using the ABAC method, was proposed in [13]. The proposed model guarantees patient privacy and is more flexible than existing RBAC systems. The authors in [26] proposed a secure mobile health application based on a hybrid cloud, integrating cryptographic techniques with role-based access control (RBAC) to protect patients' healthcare data. This system improves medical services by providing security and privacy.
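As a rough illustration of the attribute-based access control idea (a simplified sketch, not the model of [13] or [26]), an ABAC decision evaluates attributes of the subject, the resource and the environment against a policy instead of a fixed role:

```python
# Hypothetical ABAC policy check: access is granted only when the attributes of the
# subject, the resource and the environment jointly satisfy the policy.
def abac_permit(subject, resource, environment):
    return (
        subject.get("role") in {"physician", "nurse"}                 # subject attribute
        and resource.get("type") == "EHR"
        and resource.get("department") == subject.get("department")   # relation between attributes
        and environment.get("channel") == "tls"                       # environment attribute
    )

request = {
    "subject": {"role": "physician", "department": "cardiology"},
    "resource": {"type": "EHR", "department": "cardiology", "patient_id": "P-001"},
    "environment": {"channel": "tls"},
}
print(abac_permit(request["subject"], request["resource"], request["environment"]))  # True
```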


5 Discussion

A comparison of the studied approaches in terms of the privacy requirements, along with their strengths and weaknesses, is provided in Table 2. As can be observed, all the presented approaches fulfill the confidentiality requirement, and the majority also achieve integrity and authenticity. However, the other requirements, such as audit, non-repudiation, anonymity and unlinkability, are met by only a few approaches. It can also be observed that unlinkability is achieved through anonymity in most of the approaches that fulfill these two requirements. There also appears to be a relationship between the authenticity and authentication requirements in some studies; according to [7], authenticity is maintained through authentication. The cryptographic approaches mostly use Symmetric Key Encryption, Attribute-Based Encryption and its variants, the El-Gamal cryptosystem and other cryptographic schemes such as homomorphic encryption and proxy re-encryption. Many existing approaches use ABE techniques, such as Key-Policy ABE (KP-ABE) in [14, 15], CP-ABE in [17, 18] and MA-ABE in [14, 19], because of the advantages offered by ABE: selective sharing of data in the encrypted domain, resistance to collusion and, above all, fine-grained access control. As can be seen, the ABE approaches fulfill most of the privacy preserving requirements, and most approaches using ABE techniques achieve collusion resistance, which confirms that collusion resistance is a crucial security feature of ABE. Moreover, the MA-ABE technique enhances system scalability, and the approaches that apply CP-ABE ensure fine-grained access control, although efficient revocation remains the main disadvantage of this technique. ABE approaches are also considered costly in terms of decryption because of the bilinear computation steps [9]. Another observation is that authenticity is achieved by all approaches that use a digital signature as cryptographic technique. The SKE technique is used by some approaches because of its efficiency compared with PKE, which is an expensive technique. However, SKE introduces complexity in EHR systems, because all healthcare providers tend to use the same key for encryption and decryption. On the other hand, confidentiality and integrity can also be fulfilled by access control. Policy-based access control techniques improve the flexibility of access control; the most popular ones are Role-Based Access Control and Attribute-Based Access Control. However, the static characteristics of RBAC increase the number of roles and policies, resulting in a scalability issue. The ABAC model is more flexible, consequently enabling more fine-grained access control, and thus resolves this issue. In addition, ABAC supports dynamic environments with frequent modifications to user permissions, and provides a platform for easier cross-domain authorization compared to RBAC [12].


Table 2. Comparison of privacy preserving solutions in terms of the privacy requirements: Confidentiality (Co), Integrity (In), Authenticity (Au), Audit (At), Non-repudiation (Nr), Anonymity (An) and Unlinkability (Un), where '✓' and '✘' denote that a requirement is or is not satisfied.

Work   Technique(s)
[18]   CP-ABE (Ciphertext-Policy)
[14]   KP-ABE, MA-ABE
[17]   CP-ABE, K-anonymization
[16]   EP-ABBE
[20]   Digital signature
[15]   KP-ABE
[27]   ABE, SSE
[19]   Digital signature, MA-ABE, AES
[22]   Homomorphic encryption, RSA algorithm
[23]   SKE, Homomorphic encryption
[25]   Digital signature
[7]    Pseudonymity, digital signature
[13]   ABAC, XML security, Digital signature
[10]   SKE
[3]    SKE
[21]   SKE
[24]   Proxy re-encryption, El-Gamal cryptosystem
[26]   Symmetric and asymmetric cryptography, crypto-coprocessors, IBE
[2]    Secret sharing algorithm

Among the strengths reported for these approaches are fine-grained access control, collusion resistance, scalability, efficient user revocation, reduced encryption and decryption computation, flexible access control, availability, data portability, patient ownership of data, unlinkability between electronic health records and keyword search with confidentiality of keywords. Among the weaknesses reported are solutions that guarantee only confidentiality, expensive solutions or higher computational cost, problems with revocation of access, lack of audit and non-repudiation, inflexible access control, limited support for complex access policies and insecure emergency access.

6 Conclusion

The privacy of health data sharing in a cloud environment is an important issue that needs particular attention. In this paper, we have presented a review of solutions, based on several studies, that focus on the privacy of sharing health data in a cloud environment. We have categorized the privacy-preserving approaches according to the techniques used and presented a detailed comparison of these approaches based on how well they fulfill the privacy-preserving requirements. Various research works have proposed approaches to enhance the privacy of health data sharing in cloud computing. Despite all these efforts, many issues remain open and need more attention in order to enforce the privacy requirements while keeping the proposed solutions effective. As a first observation, ABE-based mechanisms are effective in enforcing privacy requirements, even though they are considered costly in terms of decryption because of the bilinear computation steps. Encryption approaches based on a symmetric key are more efficient than public key encryption approaches. ABAC mechanisms are usually more flexible, enabling more fine-grained access control, and the ABAC model resolves the role management issues of RBAC models. As another observation, it is important to preserve the privacy requirements of the other parties in the e-health sector, not only those of patients. Therefore, in order to ensure efficient health data sharing in a cloud environment, privacy preservation must be applied to the whole sharing process.

References 1. US Department of Health and Human Services: The National Alliance for Health Information Technology report to the Office of the National Coordinator for Health Information Technology on defining key health information technology terms (2008) 2. Ermakova, T., Fabian B.: Secret sharing for health data in multi-provider clouds. In: Business Informatics (CBI), pp. 93–100. IEEE (2013) 3. Li, Z., Chang, E., Huang, K., Lai, F.: A secure electronic medical record sharing mechanism in the cloud computing platform. In: Consumer Electronics (ISCE), pp. 98–103. IEEE (2011) 4. Sharon, R.S., Manoj, R.J.: E-health care data sharing into the cloud based on deduplication and file hierarchical encryption. In: Information Communication and Embedded Systems (ICICES), pp. 1–6. IEEE (2017) 5. Abbas, A., Khan, S.U.: E-Health cloud: privacy concerns and mitigation strategies. In: Medical Data Privacy Handbook, pp. 389–421. Springer, Cham (2015) 6. US Department of Health and Human Services: 45 Code of Federal Regulations, pp. 5566–5702 (2013) 7. Zhang, R., Liu, L.: Security models and requirements for healthcare application clouds. In: Cloud Computing (CLOUD), pp. 268—278. IEEE (2010) 8. AbuKhousa, E., Mohamed, N., Al-Jaroodi, J.: E-Health cloud: opportunities and challenges. Future Internet 4(3), 621–645 (2012) 9. Abbas, A., Khan, S.U.: A review on the state-of-the-art privacy-preserving approaches in the E-Health clouds. IEEE J. Biomed. Health Inform. 18(4), 1431–1441 (2014) 10. Chen, Y., Lu, J., Jan, J.: A secure EHR system based on hybrid clouds. J. Med. Syst. 36(5), 3375–3384 (2012) 11. Kulkarni, K.P., Dixit, A.M.: Privacy preserving health record system in cloud computing using attribute based encryption. Int. J. Comput. Appl. 122(18), 6–11 (2015) 12. Pussewalage, H.S.G., Oleshchuk, V.A.: Privacy preserving mechanisms for enforcing security and privacy requirements in E-Health solutions. Int. J. Inf. Manage. 36(6), 1161–1173 (2016) 13. Seol, K., Kim, Y., Lee, E., Seo, Y., Baik, D.: Privacy-preserving attribute-based access control model for XML-based electronic health record system. IEEE Access. 6, 9114–9128 (2018)


14. Manoj, R., et al.: Hybrid secure and scalable electronic health record sharing in hybrid cloud. In: Mobile Cloud Computing, Services, and Engineering (MobileCloud), pp. 185–190. IEEE (2017) 15. Michalas, A.: Sharing in the rain: secure and efficient data sharing for the cloud. In: Internet Technology and Secured Transactions (ICITST), pp. 182–187. IEEE (2016) 16. Fu, J., Huang, Q., Yang, Y.: Secure personal data sharing in cloud computing using attribute-based broadcast encryption. J. China Univ. Posts Telecommun. 21(6), 45–77 (2014) 17. Mohandas, A.: Privacy preserving content disclosure for enabling sharing of electronic health records in cloud computing. In: Proceedings of the 7th ACM India Computing Conference. ACM (2014) 18. Alshehri, S., Stanislaw, P.R., Raj, K.R.: Secure access for healthcare data in the cloud using ciphertext-policy attribute-based encryption. In: Data Engineering Workshops (ICDEW), pp. 143–146. IEEE (2012) 19. Shrestha, N.M., Alsadoon, A., Prasad, P.W.C., Hourany, L., Elchouemi, A.: Enhanced ehealth framework for security and privacy in healthcare system. In: Digital Information Processing and Communications (ICDIPC), pp. 75–79. IEEE (2016) 20. Fan, L., Buchanan, W., Thummler, C., Lo, O., Khedim, A., Uthmani, O., Lawson, A., Bell, D.: DACAR platform for eHealth services cloud. In: Cloud Computing (CLOUD), pp. 219–226. IEEE (2011) 21. Reddy, R.D., Kiran, P.R.: An efficient data sharing technique in the cloud: an EDST. Int. J. Recent. Innov. Trends Comput. Commun. 2(6), 1718–1720 (2014) 22. Musale, P., Tanuja, S.: Health care data security in cloud. Int. J. Innov. Eng. Technol. (IJIET) 8(3), 254–259 (2017) 23. Alamri, F.S., Lee, K.D.: Secure sharing of health data over cloud. In: Information Technology: Towards New Smart World (NSITNSW), pp. 1–5. IEEE (2015) 24. Thilakanathan, D., Chen, S., Nepal, S., Calvo, R., Alem, L.: A platform for secure monitoring and sharing of generic health data in the Cloud. Futur. Gener. Comput. Syst. 35, 102–113 (2014) 25. Radwan, A.S., Abdel-Hamid, A., Hanafy, Y.: Cloud-based service for secure electronic medical record exchange. In: Computer Theory and Applications (ICCTA), pp. 94–103. IEEE (2012) 26. Nagaty, K.A.: Mobile health care on a secured hybrid cloud. J. Sel. Areas Health Inform. 4(2), 1–9 (2014) 27. Tong, Y., Sun, J., Chow, S.S.M., Li, P.: Cloud-assisted mobile-access of health data with privacy and auditability. IEEE J. Biomed. Health Inform. 18(2), 419–429 (2014)

Using IoT and Social Networks for Enhanced Healthy Practices in Buildings Gonçalo Marques(&) and Rui Pitarma Polytechnic Institute of Guarda – Unit for Inland Development, Av. Dr. Francisco Sá Carneiro, no 50, 6300–559 Guarda, Portugal [email protected], [email protected]

Abstract. Health promotion is probably the most ethical, effective, efficient and sustainable approach to achieving good health. People spend more than 90% of their lives inside buildings. Therefore, it is imperative to monitor indoor parameters in order to identify health problems and plan interventions in buildings for enhanced occupational health. The introduction of embedded systems and the Internet of Things into the world of social networks allows new approaches to the way we access the information produced by these systems. Although we live in a social world, by default we interact with embedded systems and computer systems through web portals. This paper presents iAQs, a solution for real-time monitoring of indoor environmental parameters based on an Internet of Things (IoT) architecture. The solution is composed of a hardware prototype for ambient data collection and a web service that publishes and reads information from comments on a specific Facebook page through a Facebook app (Facebook application). iAQs is based on open-source technologies and provides a real-time environmental monitoring system with several advantages, such as modularity, scalability and social network compatibility. Keywords: IoT (Internet of Things)  Smart cities  Occupational health  Healthy buildings  Enhanced living environments  Social networks  Facebook

1 Introduction

Founded in 2004, Facebook is currently the most significant social networking service. In September 2012, Facebook announced over one billion active users across all platforms, and according to Statista, in Q3 2015 the number of Facebook users was around 1545 million [1]. The Internet of Things (IoT) is based on the pervasive presence of a variety of things or objects that are connected to the Internet using a unique address. IoT is an emerging technology that provides new computational resources for creating revolutionary apps. The increase of devices with communicating and actuating capabilities is bringing closer the vision of an IoT in which the sensing and actuation functions seamlessly blend into the background and new capabilities are made possible through access to rich new information sources [2].

© Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 424–432, 2019. https://doi.org/10.1007/978-3-030-03577-8_47


Indoor living environments are characterized by several polluting sources. Thus, indoor quality is recognised as an essential factor to be controlled in order to ensure and improve the health and comfort of the occupants. Considering that nowadays the majority of people spend more than 90% of their time in artificial environments [3], it is extremely important to keep in mind the health problems and diseases associated with poor indoor quality. According to the Environmental Protection Agency of the United States [4], human exposure to indoor air pollutants in buildings may be 2 to 5 times, and occasionally more than 100 times, higher than outdoor pollution. Indoor environments accumulate contaminants derived not only from building materials and equipment but also from the daily activities of the occupants [5]. It is extremely important to improve the quality of indoor environments; however, there is still a lack of interest in the study of new scientific methods to improve occupational health. The quality of indoor environments also covers visual and thermal comfort. Health promotion is probably the most ethical, effective, efficient and sustainable approach to achieving good health. The most widely accepted and utilized definition of health promotion is: 'The process of enabling people to increase control over the determinants of health and thereby improve their health' [6]. The implementation of programme actions for health promotion may involve multiple strategies, such as education and new technologies. In this way, the use of new systems helps us to understand the problem and raise awareness of the IAQ issue, particularly by highlighting its possible impact on health. Considering the importance of social networks in today's world, this work presents iAQs (iAQ Social), a real-time monitoring solution for indoor parameters using the Facebook social network. Its sensors are connected to an Arduino that uses an Ethernet shield, which is responsible for sending requests and responses over the Internet. A similar solution is proposed in [7]. iAQs is based on open-source technologies and has several advantages compared to existing systems, such as its modularity, scalability, low cost and easy installation. iAQs presents a solution to connect embedded systems to the Facebook social network: the user can use a specific Facebook page to get data from the system. This approach has been designed to illustrate a way to access information in a social and user-friendly manner. The system uses a web service and a set of web methods that make communication between the Facebook API (Application Programming Interface) and the Arduino API possible. The use and integration of social networks are becoming increasingly important in many ways; social networks provide APIs, which offer flexible tools to enhance and customise the use of the networking platform [8]. An alternative approach, developed in [9], utilizes the existing infrastructure of the social network service (SNS) Sina Weibo in China and its open platform resources in order to integrate a wireless sensor network (WSN) into the web, offering the social status of the WSN to provide a home environment monitoring system.
Qualcomm Incorporated has also published a patent that generally relates to IoT device social networking, and in particular to an IoT device publish-subscribe messaging model and automatic IoT device social network expansion, where IoT devices from different networks may publish status data that relates to certain topics, and the published status updates may be managed in a distributed manner at each IoT network [10].


Social networking concepts have been applied to several communication network settings, spanning from delay-tolerant to peer-to-peer networks. More recently, one can observe a flourishing of proposals aimed at giving social-like capabilities to the objects of the IoT [11]. On the one hand, it is important to mention that we live in a social world where Facebook gains more users every day. On the other hand, the Arduino platform is used in a wide range of projects that need to store and read results quickly and efficiently. The combination of the IoT and social networks makes the interaction between people and the environment easier, which leads to the social IoT concept [12]. The paper is structured as follows: besides the introduction (Sect. 1), Sect. 2 presents the methods and materials used in the implementation of iAQs; Sect. 3 concerns the system operation and experimental results; and the conclusion is presented in Sect. 4.

2 Materials and Methods

The solution architecture is based on the Facebook API: the authors created a Facebook app to provide secure access to Facebook data. For data exchange and interaction, the authors created a web service that uses the Facebook .NET SDK to read and write data to a specific Facebook page. Figure 1 represents the system architecture.

Fig. 1. System Architecture

To communicate with Facebook, the authors created a Facebook App by setting the application name and namespace and selecting the category of the app. It is also necessary to retain two important pieces of application information, the App ID and the App Secret. These properties are required to configure the Web Service that uses this application.


To manage the Facebook App, a web service was created in an ASP.NET website using Visual Web Developer. The Facebook .NET SDK was installed using NuGet (see Fig. 2).

Fig. 2. Installing the Facebook .NET SDK via NuGet

Web methods are designed to create posts on the Facebook social network; these methods take a string of text to publish, create a post on the page on behalf of the user specified by the token, and return a success message. To start using the system, the user is first redirected to a web page where the application must be accepted. After obtaining the token, the application can publish on the page under the user's name [13], and the Facebook app can publish even when the user is offline [14].
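The web methods themselves are not listed in the paper; as a rough illustration of the same operation, the hypothetical Python sketch below (using the requests library, with placeholder page ID and access token) publishes a message to a Facebook page through the Graph API, which is what the ASP.NET web methods do via the Facebook .NET SDK.

```python
import requests

GRAPH_URL = "https://graph.facebook.com/v2.5"   # assumed API version
PAGE_ID = "YOUR_PAGE_ID"                        # placeholder
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"    # placeholder, obtained after the user accepts the app

def post_to_page(message: str) -> dict:
    """Create a post on the Facebook page and return the Graph API response."""
    response = requests.post(
        f"{GRAPH_URL}/{PAGE_ID}/feed",
        data={"message": message, "access_token": PAGE_ACCESS_TOKEN},
    )
    response.raise_for_status()
    return response.json()   # e.g. {"id": "<page_id>_<post_id>"}

if __name__ == "__main__":
    print(post_to_page("iAQ reading: temperature 22 C, light level 540"))
```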

3 Discussion and Results

The iAQs prototype is based on the Arduino UNO, which incorporates an Ethernet Shield to communicate with the web service and remain permanently connected to the Internet. The Ethernet controller has an internal buffer of 16 KB and a connection speed of 10/100 Mb, and the connection to the Arduino is done through the SPI port (see Fig. 3) [15]. For testing purposes, the system uses a temperature sensor and a luminosity sensor. The DHT11 is a low-cost digital temperature and humidity sensor [16], and the LDR sensor is a photosensitive resistor module, sensitive to environmental light intensity, that is generally used to detect ambient brightness and light levels [17]. The DHT11 sensor is connected to analogue port 1 of the Arduino, and the LDR sensor is connected to analogue port 2.


Fig. 3. Solution WorkFlow

iAQs is always listening to the Facebook page: when the user posts a request for the light and temperature levels, the solution performs the environmental sensing and posts the output on the Facebook page. This approach is a suitable solution for data interaction, providing the information to the user in a social way. The iAQs prototype used by the authors for testing purposes is shown in Fig. 4.

Fig. 4. iAQs prototype
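The listening loop itself is not detailed in the paper; one possible shape of it (a hypothetical Python sketch that polls the page feed through the Graph API and answers requests with the latest reading; the sensor call is a placeholder) is:

```python
import time
import requests

GRAPH_URL = "https://graph.facebook.com/v2.5"
PAGE_ID = "YOUR_PAGE_ID"                      # placeholder
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder

def read_sensors():
    # Placeholder for the call to the Arduino-side web service; dummy values here.
    return {"temperature_c": 22.4, "light_level": 540}

answered = set()
while True:
    # Fetch the most recent posts on the page.
    feed = requests.get(
        f"{GRAPH_URL}/{PAGE_ID}/feed",
        params={"access_token": PAGE_ACCESS_TOKEN, "limit": 10},
    ).json()
    for post in feed.get("data", []):
        text = post.get("message", "").lower()
        if post["id"] not in answered and ("temperature" in text or "light" in text):
            reading = read_sensors()
            reply = (f"iAQ reading: {reading['temperature_c']} C, "
                     f"light level {reading['light_level']}")
            # Answer the request as a comment on the post.
            requests.post(f"{GRAPH_URL}/{post['id']}/comments",
                          data={"message": reply, "access_token": PAGE_ACCESS_TOKEN})
            answered.add(post["id"])
    time.sleep(30)  # poll every 30 seconds
```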


Thus, iAQs can be used as a robust solution to provide real-time data monitoring for enhanced living environments and to support decision making on possible interventions to offer healthier and more productive buildings. The system makes it possible to provide the real-time indoor environmental parameters in a more social and user-friendly way, using a Facebook page where the user can question the system, as shown in Fig. 5.

Fig. 5. Temperature question and answer

As future work, the main goal is to make technical improvements, including the development of alerts and notifications that post data on the Facebook page when the ambient quality shows serious deficiencies. The further development of the IoT requires significant issues related to things' service discovery and composition to be addressed. A novel paradigm of a "social network of intelligent objects", namely the Social Internet of Things (SIoT), based on the notion of social relationships among objects, is introduced in [18]. On the one hand, compared to other systems that perform real-time environment data collection, such as [19–28], this solution stands out by its data access approach: the interaction between the user and the system is performed through a Facebook application, thereby embedding the system in the largest social network of today. On the other hand, using the Facebook SDK to create the user interface offers serious advantages in terms of the level of service and the costs of storage and data availability.

4 Conclusion

This study presented iAQs, an effective indoor quality monitoring system with Facebook social network integration, which aims to avoid the risks of exposure to poor indoor conditions. The system was connected to the social network, and the solution allows access to the indoor environmental parameters through Facebook. Facebook has several advantages: a page with information can be used to interact with the user without incurring storage and access costs, as these are provided for free by Facebook. The implementation of programme actions for health promotion may involve multiple strategies, such as education and new technologies. The Facebook social network offers a really interesting opportunity to share information across the web. In this way, the connection between IAQ monitoring systems and social networks provides an important channel of data exchange for enhanced living environments and occupational health. The results obtained are auspicious, representing a significant approach for indoor real-time monitoring systems based on the IoT. Compared to existing systems, this solution is of increased interest due to the use of low-cost and open-source technologies and social network integration. The system is extremely useful in monitoring the quality conditions of indoor environments and aims to ensure permanent awareness of the behaviour of the environmental parameters. Thus, the system can be used to help the building manager provide more comfortable and productive environments. In addition to this validation study, physical system and related software improvements have been planned to adapt the system to specific cases. Despite all the advantages of the IoT architecture, many open issues still exist, such as scalability, quality of service problems, and security and privacy issues. This paper presented iAQs, a real-time monitoring solution based on an IoT architecture with social network compatibility. On the one hand, there is a wide range of uses for this proof of concept in several areas; examples are real-time data sharing with caregivers in the field of assisted living, or monitoring and sharing the presence of fire or gas in homes. On the other hand, it shows how it is possible to use existing social network capabilities (Facebook) for cost-effective real-time data sharing. The authors believe that, in the future, systems like this will be an important part of indoor spaces. The data collected can also be beneficial in supporting clinical analysis by health professionals. Only through real-time monitoring is it possible to correctly perceive the conditions that influence the health of occupants and conduct interventions to increase occupational health.

References 1. Number of monthly active Facebook users worldwide as of 3rd quarter 2015 (in millions), Statista, November 2015. http://www.statista.com/statistics/264810/number-of-monthlyactive-Facebook-users-worldwide/ 2. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of things (IoT): a vision, architectural elements, and future directions. Future Gener. Comput. Syst. 29(7), 1645–1660 (2013) 3. Spengler, J.D., Sexton, K.: Indoor air pollution: a public health perspective. Science 221(4605), 9–17 (1983). ISO 690 4. USEPA - United States Environmental Protection Agency: Questions About Your Community: Indoor Air. http://www.epa.gov/region1/communities/indoorair.html


5. Mukhopadhyay, K., Ramasamy, R., Mukhopadhyay, B., Ghosh, S., Sambandam, S., Balakrishnan, K.: Use of ventilation-index in the development of exposure model for indoor air pollution—a review. Open J. Air Pollut. 3(02), 33 (2014) 6. World Health Organization: Targets for Health for All 2000. WHO, Regional Office for Europe, Copenhagen (1986) 7. Patra, P., Tripathi, D.: Monitoring the room and controlling its temperature and light intensity from a remote location over a network. Int. J. Adv. Comput. Tech. Appl. (IJACTA) 2, 096–101 (2015). ISSN 2321-4546 8. Muller, F., Thiesing, F.: Social networking APIs for companies — an example of using the Facebook API for companies. In: 2011 International Conference on Salamanca Computational Aspects of Social Networks (CASoN). IEEE (2011). ISBN 978-1-4577-1132-9 9. Zhang, J., Chen, J., Gao, Q., Huang, Z., Guo, L., Yang, Y.: A social network service-based environment monitoring system in home, pp 719–728. Springer International Publishing, Cham (2015). ISBN 978-3-319-11104-9 10. Shuman, M., Goel, A., Sharma, S., Gupta, B., Aggarwal, A.: Qualcomm Incorporated, Automatic IoT device social network expansion, US 20140244768 A1 (2014) 11. Atzori, L., Iera, A., Morabito, G.: From “smart objects” to “social objects”: the next evolutionary step of the internet of things. IEEE Commun. Mag. 52(1), 97–105 (2014). ISSN 0163-6804 12. Zhang, Y., Wen, J., Mo, F.: The application of internet of things in social network. In: 2014 IEEE 38th International Computer Software and Applications Conference Workshops (COMPSACW), INSPEC Accession Number: 14600451, pp. 223–228 (2014) 13. Biddington, B.: Facebook Graph API - getting access tokens (2010). http://benbiddington. wordpress.com/2010/04/23/Facebook-graph-api-getting-access-tokens/. Accessed 12 Dec 2015 14. Firake, B.: How to post to the profile of other people using following code (2012). http:// stackoverflow.com/questions/13824428/how-to-post-to-the-profile-of-other-people-usingfollowing-code. Accessed 12 Dec 2015 15. Arduino: Arduino – ArduinoEthernetShield. http://arduino.cc/en/Main/ArduinoEthernet Shield. Accessed 5 Dec 2015 16. DHT11 Datasheet. https://www.adafruit.com/product/386. Accessed 5 Dec 2015 17. Photoresistor Datasheet. http://www.digibay.in/250-photo-resistor-ldr-light-sensor-module. Accessed 5 Dec 2015 18. Atzori, L., Iera, A., Morabito, G.: SIoT: giving a social structure to the internet of things. IEEE Commun. Lett. 15(11), 1193–1195 (2011). ISSN 1089-7798 19. Pitarma, R., Marques, G., Ferreira, B.R.: Monitoring indoor air quality for enhanced occupational health. J. Med. Syst. 41(2), 23 (2017) 20. Marques, G., Pitarma, R.: Health informatics for indoor air quality monitoring. In: 2016 11th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6 (2016) 21. Pitarma, R., Marques, G., Caetano, F.: Monitoring indoor air quality to improve occupational health. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Mendonça Teixeira, M. (eds.) New Advances in Information Systems and Technologies, vol. 445, pp. 13–21. Springer International Publishing, Cham (2016) 22. Marques, G., Pitarma, R.: Smartphone application for enhanced indoor health environments. J. Inf. Syst. Eng. Manag. 4(1), 9 (2016) 23. Marques, G., Pitarma, R.: Monitoring health factors in indoor living environments using internet of things. In: Rocha, Á., Correia, A.M., Adeli, H., Reis, L.P., Costanzo, S. (eds.) Recent Advances in Information Systems and Technologies, vol. 570, pp. 785–794. 
Springer International Publishing, Cham (2017)


24. Marques, G., Pitarma, R.: Monitoring and control of the indoor environment. In: 2017 12th Iberian Conference on Information Systems and Technologies (CISTI) 2017, pp. 1–6 (2017) 25. Feria, F., Salcedo Parra, O.J., Reyes Daza, B.S.: Design of an architecture for medical applications in IoT. In: Luo, Y. (ed.) Cooperative Design, Visualization, and Engineering, vol. 9929, pp. 263–270. Springer International Publishing, Cham (2016) 26. Ray, P.P.: Internet of things for smart agriculture: technologies, practices and future direction. J. Ambient Intell. Smart Environ. 9(4), 395–420 (2017) 27. Matz, J.R., Wylie, S., Kriesky, J.: Participatory air monitoring in the midst of uncertainty: residents’ experiences with the speck sensor. Engag. Sci. Technol. Soc. 3, 464 (2017) 28. Demuth, D., Nuest, D., Bröring, A., Pebesma, E.: The airquality sensebox. In: EGU General Assembly Conference Abstracts, vol. 15 (2013)

IOT System for Self-diagnosis of Heart Diseases Using Mathematical Evaluation of Cardiac Dynamics Based on Probability Theory Juan Piedrahita-Gonzalez(&), Juan Cubillos-Calvachi, Carlos Gutiérrez-Ardila, Carlos Montenegro-Marin, and Paulo Gaona-García Engineering Faculty, Universidad Distrital Francisco José de Caldas, Bogotá D.C., Colombia {jspiedrahitag,jdcubillosc, chgutierreza}@correo.udistrital.edu.co, {cemontenegrom,pagaonag}@udistrital.edu.co

Abstract. Cardiovascular diseases are one of the most frequent causes of death worldwide, largely owing to late detection of these illnesses; however, the majority of people do not carry out proper cardiovascular health check-ups due to a lack of time or resources. For that reason, we propose an IoT service-oriented system which, through the patient's mobile device and a heart rate monitor, and using an analysis methodology based on probability theory, allows the patient to take preventive check-ups on his own at any time and lets him know when it is necessary to consult medical services, all supported by a proven and reliable diagnosis. Finally, we show a system implementation test, the results and the future work derived from this research.

Keywords: Data analysis  Cardiovascular health  Heart self-diagnosis  Internet of things  Probability theory

1 Introduction

Cardiovascular Diseases (CVD) have been one of the main causes of death worldwide: according to the American Heart Association, in 2013 CVD accounted for 17.3 million of the 54 million total deaths in the world, or 31.5% of all global deaths [5]. With the aim of reducing this percentage, medicine has to be oriented towards prevention and real-time monitoring; however, attending medical checks can demand too much time and money, and for that reason patients tend to skip those checks. Therefore, we have developed a tool that allows patients to know the approximate state of their cardiovascular health. Our goal is to provide the patient with tools to carry out cardiovascular analysis on his own, through devices that he owns at home or that can be easily accessed. The tool consists of a mobile application for Android smartphones which collects data from a generic Bluetooth Low Energy (BLE) heart rate measurement device and sends that data to a server, where it is stored and analyzed to give the patient's diagnosis as a result.

© Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 433–441, 2019. https://doi.org/10.1007/978-3-030-03577-8_48


2 Related Work

There are several research projects and work proposals that aim to determine mechanisms that allow patients to be monitored [17] by capturing their vital signs. Currently, there are two main ways to perform individual heart rate control: (i) using apps installed on smartphones, or (ii) using heart rate sensors included in devices such as watches, wristbands or sports bands [9]. Some of the vital signs monitored with this approach are blood glucose level [14], blood pressure [2], heart rate [4] and the electrical activity of the heart (ECG) [8]. Early research and works such as [7] were developed to monitor and find critical changes in the vital sign values of a patient; if an abnormal behavior is detected, the device sends an alert. Other medical notification systems are shown in [3, 10, 15], where selected persons can receive an SMS notification when a patient presents any arrhythmia signs. Those systems use ECG sensors and different methods and algorithms, such as R-wave detection, the Discrete Wavelet Transform (DWT) and Support Vector Machines (SVM), in order to perform data analysis and detect arrhythmias. Most of the works with ECG sensors present some difficulties, such as ambiguity in locating the electrodes of the sensors, unfixed lead-wires and the thickness of the wireless cardiograph. It is important to note that these works are not only oriented to patients with heart rate diseases; some developments, such as [16], are also oriented to sports people who need to control their heart behavior during training. These kinds of works show the potential of mobile devices in telemedicine, being one of the fastest ways to send notifications with relevant data such as IDs, locations and other details. Nevertheless, with current technology and the advances in medical science and IoT devices, it is possible to aim for techniques that give anticipated alerts when the system finds signs of a disease, before critical changes occur, yielding a higher probability of giving proper treatment for the patient's disease. Works such as [11] show that the applications themselves cannot make preventive diagnoses for the patient but need the intervention of a specialist or doctor; for that reason, our work is oriented not only to monitoring but to diagnosis as well. An approach to this is shown in [1], which describes an IoT-based patient monitoring system and diagnostic prediction tool for stroke-affected elderly people; this system collects data from blood pressure, pulse rate and blood glucose level sensors, and the prediction model is developed using classification algorithms in machine learning. The weakness is that, even though the system is wearable, users must activate it by pushing a start button, so continuous automated monitoring is not provided.

3 System Description

The system has two parts: the first part is the architecture, which describes all the components and the way they communicate with each other; the second part is the data analysis. Each part is explained in the following sections.

3.1 Architecture

The system has five components; these components interact with one another in different ways and allow the system to collect and show information to the user. As seen in Fig. 1, the heart rate measurements are collected by the sensor that the user wears, then the information is read by an Android device and sent to a server, where the data is processed and stored in a database. The communication between the Android device and the server is done through the Internet, which gives the user the advantage of wearing the device in any place while still collecting and storing data.

Fig. 1. Components of the system's architecture and their information flow.

Each component has specific functions and characteristics that support communication with the others. Those specifications are:

Heart Rate Sensor. The heart rate sensor works with BLE; it exposes services that are consumed by other devices that support this technology. The exposed services are called GATT (Generic Attribute Profile) Services, which establish in detail how to exchange all data over a BLE connection. Each service has a unique ID that distinguishes it from the others; those service IDs are described in the Bluetooth GATT Services Specification [6].

Android Device with Heart Rate Diagnosis App. An app for Android devices was developed to create a bridge between the heart rate sensor and the Representational State Transfer (REST) API. The Android device must be compatible with BLE because the app uses the phone's Bluetooth to connect to the service exposed by the sensor. When the connection is established, the heart rate sensor begins to send the measurements, which are read by the app and shown to the user (see Fig. 2); they are then sent to the server, where they are stored with the corresponding date.


Fig. 2. App heart rate measurement screen. In the left figure the connection is not established; once Bluetooth is on and the device is selected, the app shows the current measurement and the number of measurements read, as seen in the figure on the right.
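As an illustration of what the app reads over BLE (a sketch based on the public GATT Heart Rate Measurement characteristic format, not code from the authors' app), the first byte of the characteristic is a flags field whose lowest bit indicates whether the heart rate value is an 8-bit or a 16-bit integer:

```python
def parse_heart_rate_measurement(value: bytes) -> int:
    """Parse the standard GATT Heart Rate Measurement characteristic (UUID 0x2A37)."""
    flags = value[0]
    if flags & 0x01:
        # Bit 0 set: heart rate is a uint16, little-endian.
        return int.from_bytes(value[1:3], byteorder="little")
    # Bit 0 clear: heart rate is a uint8.
    return value[1]

# Example notification payload: flags = 0x00, heart rate = 72 bpm.
print(parse_heart_rate_measurement(bytes([0x00, 72])))           # 72
# Example with a 16-bit value: flags = 0x01, heart rate = 260 bpm.
print(parse_heart_rate_measurement(bytes([0x01, 0x04, 0x01])))   # 260
```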

REST API. It performs all the CRUD (Create, Read, Update and Delete) operations on the database; besides, it contains all the diagnosis logic in order to make the data analysis results available. The endpoints defined in the REST API are:
– login: Logs the user in with his ID and password.
– getMeasurements: Obtains all the patients' measurements; if the request has an id parameter, it returns only the measurements associated with the given ID.
– postMeasurement: Stores in the database the measurements collected from one user.
– getPatients: Obtains all the patients' data; if the request contains an id parameter, the result only contains the data of the patient associated with the given ID.
– postPatient: Registers a patient and stores his basic information.
– getDiagnosis: Reads the diagnoses associated with the patient who performs the query.
– postDiagnosis: Stores the diagnosis results when the user selects the option in the app.
– makeDiagnosis: Performs the data analysis with the measurements collected in the last 21 h and returns the diagnosis results.
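The paper does not specify the server stack; as a rough sketch of the endpoint shape (hypothetical Python/Flask, with the diagnosis function reduced to a stub), the measurement and diagnosis routes could look like this:

```python
from datetime import datetime, timedelta
from flask import Flask, jsonify, request

app = Flask(__name__)
MEASUREMENTS = []  # stand-in for the measurements table in the database

def diagnose(bpm_values):
    # Stub for the probability-based analysis described in Sect. 3.2.
    return "inconclusive" if not bpm_values else "apply criteria of Sect. 3.2"

@app.route("/postMeasurement", methods=["POST"])
def post_measurement():
    body = request.get_json()
    MEASUREMENTS.append({"patient_id": body["patient_id"],
                         "bpm": body["bpm"],
                         "date": datetime.utcnow()})
    return jsonify({"stored": True})

@app.route("/makeDiagnosis", methods=["GET"])
def make_diagnosis():
    # Analyze only the measurements collected in the last 21 hours.
    since = datetime.utcnow() - timedelta(hours=21)
    recent = [m["bpm"] for m in MEASUREMENTS if m["date"] >= since]
    return jsonify({"result": diagnose(recent)})
```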


DB Server. The database stores all the users' basic information, the measurements collected and the diagnoses that the user performs. The information can only be accessed through the REST API, and the user can see it through the app or the web client.

3.2 Data Analysis

One of the most important parts of the system is the data analysis, which allows the user to obtain a preventive diagnosis of his cardiovascular health. In the REST API, the cardiac measurements already stored in the database are analyzed with a method based on probability theory. The methodology proposed in [12], and tested in [13], is used to analyze the heart rate measurements of a patient collected over a minimum of 21 h, and to establish whether the patient is in a heart disease state, healthy, or progressing towards heart disease. The methodology determined that, as in dynamic systems, characteristic spaces can be established to identify characteristic behaviors in healthy and sick patients, so that they can be applied to particular cases to reach conclusive results regarding cardiac health. Heart rate ranges of 5 beats per minute are defined and the probability of the patient being in each of these ranges is calculated; in the same way, the probability that the patient presents a number n of beats in one hour is calculated, using ranges of 250 beats. According to the results, the following criteria are used to determine a patient's cardiac health status:

1. If a patient's heartbeat data appears in more than 17 heart rate ranges, his condition is normal, but if it appears in 14 or fewer heart rate ranges his condition is characteristic of heart disease; if it is in between, the criterion is not conclusive.
2. The parameters a and b are applied simultaneously:
   a. A difference of 15 or more between the ranges of the two heart rate values with the greatest probability is characteristic of disease.
   b. A maximum probability of the number of beats less than or equal to 0.217, or greater than or equal to 0.304, is characteristic of disease.
   According to parameters a and b, it is established that:
   – If only parameter a is present, then there is disease.
   – If parameters a and b are present, then there is disease.
   – If only parameter b is present, then there is progression towards heart disease.
   – If only parameter b is present and the number of beats in one hour is less than 3000 or greater than 6250, then there is disease.
3. The sum of the two most frequent probabilities in sick Holters is characteristic of disease when it presents values higher than 0.319 in the following cases:
   – When parameters a and b are also characteristic of disease.
   – When parameter b is characteristic of disease.
   – When parameter b is characteristic of disease and a number of beats less than 3,000 or greater than 6,250 is presented in one hour.

In that order, the possible responses of the REST API are:
– For the first analysis:
  • Characteristic of normality.
  • Characteristic of heart disease.
  • Inconclusive.
– For the second analysis:
  • Characteristic of heart disease by the (first or second) criterion.
  • Characteristic of disease in evolution by the third test.
  • Healthy.
– For the third analysis:
  • Characteristic of disease by the (first, second or third) test.
  • Healthy.
  • Inconclusive.
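A minimal sketch of the first criterion (hypothetical Python, not the authors' implementation, assuming a list of beats-per-minute samples collected over the 21-h window):

```python
def first_criterion(bpm_samples, range_width=5):
    """Count how many 5-bpm heart rate ranges the samples fall into
    and apply criterion 1 of the methodology in [12]."""
    occupied_ranges = {bpm // range_width for bpm in bpm_samples}
    n = len(occupied_ranges)
    if n > 17:
        return "characteristic of normality"
    if n <= 14:
        return "characteristic of heart disease"
    return "inconclusive"

# Example: samples spread over 19 distinct 5-bpm ranges -> normality.
samples = [60 + 5 * i for i in range(19) for _ in range(10)]
print(first_criterion(samples))  # characteristic of normality
```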

4 Case of Study

The case of study for this investigation consists of the implementation of the system with a healthy patient. The implementation begins when the user downloads and installs the app; then the user must put on the heart rate sensor and launch the app, register and log in, and finally select the connected sensor in the app, after which the app begins to collect the data. The patient has to wear the sensor for 21 h (as recommended in [12]) while performing his normal daily activities. In this case the heart rate was monitored during this time; when the test finished, there were 75603 measurement records stored in the database, each one with the exact date and the value of the measurement. When the user has enough data to perform the analysis, he goes to the diagnosis screen, where he can execute the diagnosis generator process (see Fig. 3) and see the results of that process. In this case, as expected at the beginning of the test, the diagnosis concluded that the user was completely healthy.


Fig. 3. Diagnosis screen: in the left figure the user has not yet performed the diagnosis; in the right one the user has performed the diagnosis, which allows him to see the results and record them if he wants.

5 Conclusions

The design of this system, which includes a reliable analysis component that works over data collected using mobile devices and IoT technologies, as well as wearable devices that communicate with the system through reliable protocols such as BLE, emerges as an alternative to traditional diagnoses made with professional medical equipment. Using our approach, the patient has the possibility of having a check-up of his cardiovascular health whenever he wants, and of carrying it out as frequently as he needs, without even having to leave his home. The only equipment needed to obtain a diagnosis is an Android mobile device and a Bluetooth heart rate monitor, and the result is the patient's current cardiovascular health state and recommendations if needed, based on a variety of criteria that use data analytics and probability theory. Our future work will be oriented to making authorized tests of the system with cardiovascular patients in a hospital.


Acknowledgments. The authors gratefully acknowledge the support for this project to the GIIRA Research Group at the Universidad Distrital Francisco José de Caldas, Engineering Faculty.

References 1. Ani, R., Krishna, S., Anju, N., Aslam, M.S., Deepa, O.S.: IoT based patient monitoring and diagnostic prediction tool using ensemble classifier. In: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1588–1593 (2017) 2. Antonovici, D., Chiuchisan, I., Geman, O., Tomegea, A.: Acquisition and management of biomedical data using internet of things concepts. In: 2014 International Symposium on Fundamentals of Electrical Engineering (ISFEE), pp. 1–4 (2014) 3. Azariadi, D., Tsoutsouras, V., Xydis, S., Soudris, D.: ECG signal analysis and arrhythmia detection on IoT wearable medical devices. In: 2016 5th International Conference on Modern Circuits and Systems Technologies (MOCAST), pp. 1–4 (2016) 4. Bathilde, J.B., Then, Y.L., Chameera, R., Tay, F.S., Zaidel, D.N.A.: Continuous heart rate monitoring system as an IoT edge device. In: 2018 IEEE Sensors Applications Symposium (SAS), pp. 1–6 (2018) 5. Benjamin, E.J., Blaha, M.J., Chiuve, S.E., Cushman, M., Das, S.R., Deo, R., de Ferranti, S. D., Floyd, J., Fornage, M., Gillespie, C., Isasi, C.R., Jimenez, M.C., Jordan, L.C., Judd, S.E., Lackland, D., Lichtman, J.H., Lisabeth, L., Liu, S., Longenecker, C.T., Mackey, R.H., Matsushita, K., Mozaffarian, D., Mussolino, M.E., Nasir, K., Neumar, R.W., Palaniappan, L., Pandey, D.K., Thiagarajan, R.R., Reeves, M.J., Ritchey, M., Rodriguez, C.J., Roth, G.A., Rosamond, W.D., Sasson, C., Towfighi, A., Tsao, C.W., Turner, M.B., Virani, S.S., Voeks, J.H., Willey, J.Z., Wilkins, J.T., Wu, J.H., Alger, H.M., Wong, S.S., Muntner, P.: Heart disease and stroke statistics—2017 update: a report from the American heart association. Circulation 135(10), 146–603 (2017) 6. Bluetooth: GATT services. https://www.bluetooth.com/specifications/gatt/services 7. Gao, T., Greenspan, D., Welsh, M., Juang, R.R., Alm, A.: Vital signs monitoring and patient tracking over a wireless network. In: 2005 27th Annual Conference on IEEE Engineering in Medicine and Biology, pp. 102–105 (2005) 8. Gia, T.N., Jiang, M., Sarker, V.K., Rahmani, A.M., Westerlund, T., Liljeberg, P., Tenhunen, H.: Low-cost fog-assisted health-care IoT system with energy-efficient sensor nodes. In: 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 1765–1770 (2017) 9. Mohammed, J., Lung, C., Ocneanu, A., Thakral, A., Jones, C., Adler, A.: Internet of things: remote patient monitoring using web services and cloud computing. In: 2014 IEEE International Conference on Internet of Things (iThings), and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom), pp. 256–263 (2014) 10. Muppalla, V., Suraj, N.S.S.K., Suman, D., Mukherjee, B.: Implementation of an arrhythmia detection scheme with cellular based alert framework. In: 2017 IEEE Calcutta Conference (CALCON), pp. 438–442 (2017) 11. Narváez, R.B., Villacs, D.M., Chalen, T.M., Velsquez, W.: Heart rhythm monitoring system and IoT device for people with heart problems. In: 2017 International Symposium on Networks, Computers and Communications (ISNCC), pp. 1–5 (2017)


12. Rodríguez, J., Correa, C., Ortiz, L., Prieto, S., Bernal, P., Ayala, J.: Evaluación matemática de la dinámica cardiaca con la teoría de la probabilidad. Rev. Med. 20(4), 183–189 (2009) 13. Rodríguez, J., Correa, C., Prieto, S., Bernal, P., Forero, G., Vitery, S., Alvarez, L., Puerta, G.: Confirmación del método de ayuda diagnóstica de la dinamica cardiaca de aplicación clínica desarrollado con base en la teoría de la probabilidad. Rev. Med. 19(2), 167–178 (2011) 14. Triventi, M., Mattei, E., Censi, F., Calcagnini, G., Strano, S., Bartolini, P.: A SMS-based platform for cardiovascular tele-monitoring. In: IFMBE Proceedings, pp. 295–298. Springer, Heidelberg (2009) 15. Watanabe, H., Kawarasaki, M., Sato, A., Yoshida, K.: Development of wearable heart disease monitoring and alerting system associated with smartphone. In: 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services (Healthcom), pp. 292–297 (2012) 16. Zulkifli, N.S.A., Harun, F.K.C., Azahar, N.S.: XBee wireless sensor networks for heart rate monitoring in sport training. In: 2012 International Conference on Biomedical Engineering (ICoBE), pp. 441–444 (2012) 17. Mendoza-López, P., Gaona-García, D., Montenegro-Marín, C., Vargas-Alvarado, F.: Communication infrastructure for monitoring heart rate of patients on the cloud using IoT devices. Int. Inf. Insti. 21(1). 131–138 (2018)

Modeling the OWASP Most Critical WEB Attacks Yassine Ayachi(&), El Hassane Ettifouri, Jamal Berrich, and Bouchentouf Toumi LSE2I Laboratory, National School of Applied Sciences, Mohammed The First University, Oujda, Morocco [email protected], [email protected], [email protected], [email protected]

Abstract. The tremendous growth of web-based applications has increased information security vulnerabilities over the Internet. The threat landscape of application security is constantly evolving (see CVE [1] and published reports [2]). The key factors in this evolution are the progress made by attackers, the emergence of new technologies with new weaknesses, as well as more integrated defenses and the deployment of increasingly complex systems. Our contribution's goal is to build a common model of the most famous and dangerous WEB attacks, which will allow us to better understand those attacks and hence adopt the security strategy best adapted to a given business and technical environment. This modeling can also be useful for the problem of intrusion detection system evaluation. We have relied on the OWASP TOP 10 classification of the most recent critical WEB attacks [3], and at the end of this paper we deduce a global model of all these attacks. Keywords: WEB application vulnerabilities  WEB attack  Attacks modeling  OWASP TOP 10 classification

1 Introduction

There is no doubt that web application security is a current and important subject. For all concerned, the stakes are high: for businesses that derive increasing revenue from Internet commerce, for users who trust web applications with sensitive information, and for criminals who can make big money by stealing payment details or compromising bank accounts. Hence, it is not a trivial task to obtain reliable information about the state of web application security today. To help companies and decision makers adopt an efficient security strategy, we study the most critical attacks reported by an independent organization, the Open Web Application Security Project (OWASP) [4], which regularly publishes a report called the OWASP TOP 10. The main goal of our study is to determine a model from the exhaustive possible scenarios of those attacks; we were inspired by a similar work [5] realized for malware attacks.

© Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 442–450, 2019. https://doi.org/10.1007/978-3-030-03577-8_49


2 Related Works

This project falls within the general context of Intrusion Detection System (IDS) evaluation based on a probabilistic approach via Markov chain theory [6], which focuses on WEB application attacks. It is also related to answering the question of which security strategy most reduces the technical and business impact of a given attack.

3 Vulnerabilities and Risks Presented by the OWASP

3.1 OWASP Presentation

The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted. The Top 10 project is referenced by many standards, books, tools, and organizations, including MITRE [7], PCI DSS [8], DISA [9], FTC [10], and many more. The release of the OWASP Top 10 used here marks the project's tenth anniversary of raising awareness of the importance of application security risks. The OWASP Top 10 was first released in 2003, with minor updates in 2004 and 2007. The 2010 version was revamped to prioritize by risk, not just prevalence, and the 2013 edition follows the same approach. For the rest of this paper we represent each studied attack by Ai, with (i) ranging from 1 to 10 and representing the attacks in the following classification (Table 1):

Table 1. OWASP TOP 10 - 2013 version.
A1: Injection
A2: Broken authentication and session management
A3: Cross-site scripting (XSS)
A4: Insecure direct object references
A5: Security misconfiguration
A6: Sensitive data exposure
A7: Missing function level access control
A8: Cross-site request forgery (CSRF)
A9: Using known vulnerable components
A10: Unvalidated redirects and forwards

3.2

Evolution of an Attack

Attackers can potentially use different paths through an application to adversely affect the business or the company. The overall development of an attack can be schematized as follows:


Sometimes these paths are trivial to find and exploit, and sometimes they are extremely difficult. Similarly, the harm that is caused may be of no consequence, or it may cause severe damage to the business. To determine the risk, we can evaluate the likelihood associated with each threat agent, attack vector, and security weakness and combine it with an estimate of the technical and business impact on the organization. Together, these factors determine the overall risk.
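As an illustration of this likelihood-times-impact reasoning, the following Python sketch combines ordinal ratings into an overall risk level; the factor names, the 0-9 scale and the severity thresholds are illustrative assumptions for the example, not values prescribed by the paper.

```python
# Minimal sketch: combine likelihood and impact factors into an overall risk level.
# Factor names, the 0-9 ordinal scale and the thresholds are illustrative assumptions.

def average(factors):
    return sum(factors.values()) / len(factors)

def overall_risk(likelihood_factors, impact_factors):
    likelihood = average(likelihood_factors)   # how likely the attack is
    impact = average(impact_factors)           # how much damage it would cause
    severity = likelihood * impact             # simple likelihood x impact product
    if severity < 9:
        return severity, "LOW"
    elif severity < 36:
        return severity, "MEDIUM"
    return severity, "HIGH"

# Hypothetical ratings (0 = negligible ... 9 = critical) for one attack scenario.
likelihood = {"threat_agent_skill": 6, "ease_of_exploit": 7, "weakness_prevalence": 5}
impact = {"technical_impact": 7, "business_impact": 8}

score, level = overall_risk(likelihood, impact)
print(f"risk score = {score:.1f}, level = {level}")
```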

4 Attack Analysis 4.1

Malware Attack Approach

This section presents the study in [5] which inspired us. It analyzed about 40 famous and dangerous malware samples, and the interesting observation is that, in spite of the malware diversity, the attack steps can be categorized into only eight patterns. Each attack step is given a unique symbol as follows:
R. Reconnaissance: search for necessary information about potential victims and their characteristics before targeting them. This enables attackers to select appropriate attack tools, exploits, etc.
GA. Gain Access: gain some access to some victim resources. The level of the required access differs according to the chosen attack.
VB. Victim Browsing: after having acquired sufficient access, the attacker tries to explore the victim internals (e.g., browse folders and files, search through user accounts, identify hardware components, identify installed programs, search for trusted hosts, etc.).
EP. Execute Program: can be associated with a command line crontab, lynx, nc, etc.
IMC. Implant Malicious Code: can be associated with a malicious scp, execution of a Metasploit module, etc.
CDI. Compromise Data Integrity: can be associated with the commands cp, rm, mv, configuration file edition, etc.
DoS. Denial of Service: typically accomplished by flooding a targeted service or machine with superfluous requests to temporarily or indefinitely disrupt it.
HT. Hide Traces: more experienced attackers usually carry out this step to erase traces of what they did on the victim and to make forensics more difficult.
We invite the reader to see the model deduced from this study in [5]. In the next section, we will conduct a classification similar to the previous one, which focuses on WEB attacks. We will study the TOP 10 attacks one by one and conclude with a global model that encompasses all cases. 4.2

WEB Attack

We recall that the attacks studied here are those of the OWASP TOP 10. The overall development of an attack is schematized in Fig. 1.


Fig. 1. Global evolution of a WEB attack.

By analogy to [5], the steps followed to exploit the vulnerabilities in the TOP 10 can be classified into primitive steps, 10 in our case, slightly adapted to the WEB context. We have identified each one by a symbol, as shown below:
R. Reconnaissance: the same as in [5].
GA. Gain Access: the same as in [5].
VB. Victim Browsing: the same as in [5].
EP. Execute Program: the same as in [5].
CDI. Compromise Data Integrity: the same as in [5].
CDC. Compromise Data Confidentiality: access, steal and leak sensitive and secret business or personal data.
DoS. Denial of Service: the same as in [5].
IT. Identity Theft: obtain sensitive information such as usernames, passwords, and credit card details.
MR. Malicious URL Redirection: can be associated with a redirect to a malicious website managed by the attacker.
HT. Hide Traces: the same as in [5].
A1 – Injection. Injection flaws, such as SQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization. The injection attack process is represented as follows (Fig. 2):

Fig. 2. Injection attack model.
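To make the modeling concrete, the sketch below shows one way the ten primitive symbols could be encoded and an attack represented as an ordered sequence of steps; the example sequence for the injection case is purely hypothetical and only illustrates the encoding, it is not the sequence defined by the paper's figures.

```python
# Minimal sketch: encode the primitive WEB attack steps and represent an attack
# as an ordered sequence of those steps. The example sequence is hypothetical.
from enum import Enum

class Step(Enum):
    R = "Reconnaissance"
    GA = "Gain Access"
    VB = "Victim Browsing"
    EP = "Execute Program"
    CDI = "Compromise Data Integrity"
    CDC = "Compromise Data Confidentiality"
    DOS = "Denial of Service"
    IT = "Identity Theft"
    MR = "Malicious URL Redirection"
    HT = "Hide Traces"

# Hypothetical illustration of an injection-style scenario (not the paper's model).
a1_injection_example = [Step.R, Step.GA, Step.CDC, Step.CDI, Step.HT]

for step in a1_injection_example:
    print(step.name, "->", step.value)
```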


A2 – Broken Authentication and Session Management. Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities. This attack process is represented as follows (Fig. 3):

Fig. 3. Broken authentication and session management attack model.

A3 – Cross-Site Scripting (XSS). XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites. This attack process is represented as follows (Fig. 4):

Fig. 4. XSS attack model.

A4 – Insecure Direct Object References. A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data. This attack process is represented as follows (Fig. 5): A5 – Security Misconfiguration. Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.


Fig. 5. Insecure direct object references attack model.

This attack process is represented as follows (Fig. 6):

Fig. 6. Security misconfiguration attack model.

A6 – Sensitive Data Exposure. Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser. This attack process is represented as follows (Fig. 7):

Fig. 7. Sensitive data exposure attack model.

A7 – Missing Function Level Access Control. Most web applications verify function level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization. This attack process is represented as follows (Fig. 8): A8 – Cross-Site Request Forgery (CSRF). A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web


Fig. 8. Missing function level access control attack model.

application. This allows the attacker to force the victim’s browser to generate requests the vulnerable application thinks are legitimate requests from the victim. This attack process is represented as follows (Fig. 9):

Fig. 9. CSRF attack model.

A9 – Using Known Vulnerable Components. Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defenses and enable a range of possible attacks and impacts. This attack process is represented as follows (Fig. 10):

Fig. 10. Using known vulnerable components attack model.

A10 – Unvalidated Redirects and Forwards. Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages. This attack process is represented as follows (Fig. 11):

Fig. 11. Unvalidated redirects and forwards attack model.


5 Global Modelling of Web Attacks A global attack model can be deduced from the previous analysis, which we illustrate in Fig. 12 below:

Fig. 12. Global WEB attack model.

In this context, it should be noted that the threat agent (internal or external) favors certain attacks over others and causes damage depending on the application's business area and its technical environment.

6 Conclusion and Future Works As this work focuses on WEB attacks, we can combine our efforts with those in the cross-platform development domain to study, for a given application, the impact according to its platform (WEB, Mobile or Desktop) [11]. These results also make it possible to set up an intelligent and probabilistic security approach for intrusion detection systems, based on Markov chain theory, which may predict an attack according to the visitor navigation path [6, 12].
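A minimal sketch of the kind of Markov-chain building block such an IDS could rely on: estimating step-to-step transition probabilities from observed attack (or navigation) sequences. The sample sequences are hypothetical and only illustrate the estimation, they are not data from the cited works.

```python
# Minimal sketch: estimate Markov transition probabilities between attack steps
# from observed sequences. The sequences below are hypothetical examples.
from collections import defaultdict

sequences = [
    ["R", "GA", "CDC", "HT"],
    ["R", "GA", "VB", "CDI", "HT"],
    ["R", "MR", "IT"],
]

counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        counts[current][nxt] += 1

transition = {
    state: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
    for state, nxts in counts.items()
}

print(transition["R"])   # transition probabilities out of the R step
```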


References
1. CVE: Common Vulnerabilities and Exposures (CVE), Cve.mitre.org (2017). http://cve.mitre.org/. Accessed 10 June 2017
2. Vulnerability distribution of CVE security vulnerabilities by types, Cvedetails.com (2017). https://www.cvedetails.com/vulnerabilities-by-types.php. Accessed 10 June 2017
3. Top 10 2013-Top 10-OWASP, Owasp.org (2017). https://www.owasp.org/index.php/Top_10_2013-Top_10. Accessed 10 June 2017
4. OWASP, Owasp.org (2017). https://www.owasp.org/index.php/Main_Page. Accessed 10 June 2017
5. Abou El Kalam, A., Gad El Rab, M., Deswarte, Y.: A model-driven approach for experimental evaluation of intrusion detection systems. Secur. Commun. Netw. 7(11), 1955–1973 (2013)
6. Ayachi, Y., Rahmoune, N., Ettifouri, E., Berrich, J., Bouchentouf, T.: Setting up a self-learning IDS based on Markov chains theory. In: 2016 5th International Conference on Multimedia Computing and Systems (ICMCS) (2016)
7. The MITRE Corporation, Mitre.org (2017). https://www.mitre.org/. Accessed 10 June 2017
8. Official PCI Security Standards Council Site - Verify PCI Compliance, Download Data Security and Credit Card Security Standards, Pcisecuritystandards.org (2017). https://www.pcisecuritystandards.org/. Accessed 10 June 2017
9. Defense Information Systems Agency, Disa.mil (2017). http://www.disa.mil/. Accessed 10 June 2017
10. Federal Trade Commission: Federal Trade Commission (2017). https://www.ftc.gov. Accessed 10 June 2017
11. Ettifouri, E.H., Rhouati, A., Dahhane, W., Bouchentouf, T.: ZeroCouplage framework: a framework for multi-supports applications (web, mobile and desktop). In: El Oualkadi, A., Choubani, F., El Moussati, A. (eds.) Proceedings of the Mediterranean Conference on Information & Communication Technologies 2015. LNEE, vol. 381. Springer, Cham (2016)
12. Ayachi, Y., Rahmoune, N., Ettifouri, E., Berrich, J., Bouchentouf, T.: Detecting website vulnerabilities based on Markov chains theory. In: 2016 5th International Conference on Multimedia Computing and Systems (ICMCS) (2016)

Measurement of Co-deployment of IT Quality Standard: Application to ISO9001, CMMI and ITIL Hind Dahar(&) and Ounsa Roudies Univ Mohammed V-Rabat, Siweb Team, EMI, Av Ibn Sina BP 765, Rabat-Agdal, Morocco [email protected], [email protected]

Abstract. Standards are not only used to assess quality; they also direct the implementation of best practices. In the absence of a single repository covering the entire development cycle and IT operations, a company has to comply with several quality standards to optimize its business. This diversity generates significant implementation costs in terms of time, resource mobilization, and budget. The idea is to study the feasibility of the co-deployment of several quality standards and to minimize the effort of this deployment. This article proposes a set of metrics and a global ratio in order to measure the possibility of deploying several standards simultaneously, and to highlight the modalities of their co-deployment in terms of coherence, completeness and redundancy. A demonstration example is presented by applying this methodology to three well-known standards, namely ISO9001, CMMI and ITIL. Keywords: Quality references · Quality standards · ISO 9001 · CMMI · ITIL · Alignment · Coherence · Metrics

1 Introduction The growth of requirements in terms of "quality" in all domains of activity has imposed norms and standards for good quality practices as essential tools in managing and controlling information systems. There are several references that cover the main activities of the ISD (Information Systems Department), such as ISO 9001, COBIT, CMMI, etc. According to Cigref [1], managing an ISD can no longer be conceived today without resorting to one or more standards of techniques or management. Each standard has its own strengths, but none alone satisfies the different requirements of an information system independently [2]. For example, CMMI improves the maturity levels of business processes while ITIL ensures post-production quality. The problem lies in the support of different standards within the same organization. In such a case, the co-deployment of several simultaneous references emerges as a study subject. For example, Bahsani [3] proposed an approach designed to present a unified maturity model of several quality standards. On the other hand, Elhasnaoui [4] presents an autonomous communication system to back up a solution


that aligns governance requirements and IT risk. Most deployments of quality standards aim at a thorough understanding of practices and optimal risk management. However, to our knowledge no approach has been interested in a co-deployment oriented towards quality assessment. Assessing conformity with multiple norms and quality standards simultaneously is a complex and costly process in terms of time, budget, and human resource mobilization, which might also result in problems of inconsistent requirements. The idea is to study the feasibility of jointly deploying several standards simultaneously and to minimize the effort of their conjoint usage in the IT field. In this article, we present a new approach to measure the co-deployment of several standards, based on a set of metrics. Firstly, we evaluate each standard according to a set of characteristics. Secondly, we evaluate co-deployment by pairs of standards. The third step is to measure coherence, completeness and redundancy for the whole set of standards. In other words, the purpose is to answer the following questions: Question 1: Is it possible to use n standards simultaneously? Question 2: What are the advantages of using them simultaneously? Question 3: What are the disadvantages of using them simultaneously? To validate this approach, we applied it to three quality standards that are well known and widely adopted in the computing domain, namely ISO 9001, CMMI and ITIL. The relevance of this choice is attributed to the fact that the three standards together cover the entire life cycle of software quality. The remainder of this article is organized as follows. Section 2 presents the method of measuring the co-deployment of quality standards. Section 3 describes the results of the measurement applied to the three selected standards, namely ISO 9001, CMMI and ITIL. The conclusion highlights the strengths of this approach and leads to new perspectives.

2 Multistandard Co-deployment Measurement Method The objective behind measuring the simultaneous implementation of several standards is to evaluate its feasibility. To start this approach, we divide our study into three axes: coherence, completeness and redundancy. The first sheds light on the feasibility of standards co-deployment. The second treats the benefits of co-deployment, which are appraised by studying the completeness of the standards in order to identify the zones in which one reference frame fills the weaknesses of the other. The last one concerns the disadvantages of co-deployment, which requires the study of redundancy and confusion to frame the common parts of standards that require duplicated effort. The approach specifies two steps to follow, namely the determination of the characteristics of the standards and the measurement of the metrics of the co-deployment of the standards. These steps are detailed in the following sections.


2.1


Qualification of Standards Characteristics

The fourteen standards characteristics defined are based on an in-depth analysis and are used to appraise the coherence, completeness and redundancy of the co-deployed standards. Table 1 contains the symbol, the name and the description of each characteristic, grouped according to the study axis.

Table 1. Definitions of standards characteristics.
Criteria of coherence:
C1 Strategic intention: main purpose of using a standard
C2 Alignment: part of the company concerned by its use
C3 Usage objective: goal of a standard
C4 Type of standard: norm or standard
C5 Structure: organization of standard components
C6 Life cycle: stages of projection of a standard
Criteria of completeness:
CL1 Approach orientation: distribution of work axes that change according to the chosen approach
CL2 User: user for whom this model is designed
CL3 Specialization: part of the company targeted by a standard
CL4 Field of application: areas in which a standard is used
CL5 Duration of application: time of the standard conformity assessment in the company
CL6 Assessment objective: company or its individuals
Criteria of redundancy:
R1 Players: main purpose of using a standard
R2 Tools: aligning a standard with the objectives of the company part affected by its use

2.2

Measure of Co-deployment Metrics

In order to concretize the evaluation of coherence, completeness and redundancy, we defined co-deployment metrics according to each criterion. Let Si, Sj, Sk be three co-deployed standards. Coherence Metric. The coherence metric is the weight of the coherence criterion Ck between the standard Si and the standard Sj: PCk(Si, Sj). Table 2 presents the symbols, the values and the meaning of the notion of coherence weight of two standards.

Table 2. Weight of the coherence of two standards.

Symbol - Value - Meaning
O (Yes) - 1 - The criterion shows that the standards Si and Sj are coherent
P (Partial) - 0.5 - The criterion shows implicitly that the standards Si and Sj are coherent
N (No) - 0 - The criterion does not show coherence

The coherence average between the standard Si and the standard Sj is (1):

$MC(S_i, S_j) = \dfrac{\sum_{n=1}^{6} PC_n(S_i, S_j)}{6}$   (1)

The coherence average between n standards is (2):

$MCG(S_1, \ldots, S_n) = \dfrac{MC(S_1, S_2) + MC(S_2, S_3) + \cdots + MC(S_{n-1}, S_n)}{n}$   (2)

Completeness Metric. The completeness metric is the weight of the completeness criterion Lk between the standard Si and the standard Sj: PLk(Si, Sj). Table 3 presents the symbols, the values and the meaning of the notion of completeness weight of two standards.

Table 3. Weight of the completeness of two standards.
Symbol - Value - Meaning
O (Yes) - 1 - The criterion shows that the standards Si and Sj complement each other
P (Partial) - 0.5 - The criterion shows implicitly that the standards Si and Sj complement each other
N (No) - 0 - The criterion does not show completeness

The completeness average between the standard Si and the standard Sj is (3):

$ML(S_i, S_j) = \dfrac{\sum_{n=1}^{6} PL_n(S_i, S_j)}{6}$   (3)

The completeness average between n standards is (4):

$MLG(S_1, \ldots, S_n) = \dfrac{ML(S_1, S_2) + ML(S_2, S_3) + \cdots + ML(S_{n-1}, S_n)}{n}$   (4)

Redundancy Metric. The redundancy metric is the weight of the redundancy criterion Rk between the standard Si and the standard Sj: PRk(Si, Sj). Table 4 presents the symbols, the values and the meaning of the notion of redundancy weight of two standards.


Table 4. Weight of the redundancy of two standards.
Symbol - Value - Meaning
O (Yes) - 1 - The criterion shows that the standards Si and Sj are redundant
P (Partial) - 0.5 - The criterion shows implicitly that the standards Si and Sj are redundant
N (No) - 0 - The criterion does not show redundancy

The redundancy average between the standard Si and the standard Sj is (5):

$MR(S_i, S_j) = \dfrac{\sum_{n=1}^{2} PR_n(S_i, S_j)}{2}$   (5)

The redundancy average between n standards is (6):

$MRG(S_1, \ldots, S_n) = \dfrac{MR(S_1, S_2) + MR(S_2, S_3) + \cdots + MR(S_{n-1}, S_n)}{n}$   (6)
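A minimal Python sketch of equations (1)-(6), assuming the O/P/N qualifications of each criterion are already available as lists of symbols; the symbol-to-weight mapping follows Tables 2-4, while the helper names are our own.

```python
# Minimal sketch of the co-deployment metrics (equations (1)-(6)).
# Each pairwise qualification is a list of symbols "O", "P" or "N",
# one per criterion (6 for coherence/completeness, 2 for redundancy).

WEIGHT = {"O": 1.0, "P": 0.5, "N": 0.0}

def pair_average(symbols):
    """Average weight of one standard pair over its criteria (eqs. 1, 3, 5)."""
    return sum(WEIGHT[s] for s in symbols) / len(symbols)

def global_average(pair_symbol_lists):
    """Average over the successive standard pairs (eqs. 2, 4, 6)."""
    pair_avgs = [pair_average(symbols) for symbols in pair_symbol_lists]
    return sum(pair_avgs) / len(pair_avgs)
```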

3 Application to ISO 9001, CMMI and ITIL Standards This section contains the results of applying the co-deployment measurement method on the three famous standards which are: ISO9001, CMMI and ITIL. 3.1

Determination of the Characteristics of ISO 9001, CMMI and ITIL Standards

Table 5 contains the results of the determination of ISO 9001, CMMI and ITIL according to the characteristics of coherence, completeness and redundancy.

Table 5. The results of the determination of ISO 9001, CMMI and ITIL characteristics.
Criteria of coherence:
C1 - ISO9001: guarantee of the quality of the product; CMMI: successful improvement of business processes; ITIL: successful improvement of business processes
C2 - ISO9001: strategy of the company; CMMI: strategy of the company; ITIL: strategy of the company
C3 - ISO9001: customer satisfaction; CMMI: customer satisfaction; ITIL: customer satisfaction
C4 - ISO9001: norm; CMMI: best practice; ITIL: best practice
C5 - ISO9001: 10 items; CMMI: 24 process areas; ITIL: 23 processes and 4 functions
C6 - ISO9001: product life cycle; CMMI: project life cycle; ITIL: service life cycle
Criteria of completeness:
CL1 - ISO9001: customers; CMMI: process; ITIL: services
CL2 - ISO9001: supplier; CMMI: IT suppliers; ITIL: IT suppliers
CL3 - ISO9001: basic quality management; CMMI: business process improvement; ITIL: successful maintenance
CL4 - ISO9001: any domain; CMMI: computing; ITIL: computing
CL5 - ISO9001: permanent; CMMI: permanent; ITIL: permanent
CL6 - ISO9001: company; CMMI: company; ITIL: company
Criteria of redundancy:
R1 - ISO9001: all the employees; CMMI: developers; ITIL: customer, user, internal provider, external provider
R2 - ISO9001: -; CMMI: -; ITIL: user interface

3.2

Measurement of ISO 9001, CMMI and ITIL Co-deployment Metric

This step aims to evaluate the characteristics determined in the previous section through the measurement of the coherence, completeness and redundancy metrics, and the calculation of the co-deployment ratio of the three standards ISO 9001, CMMI and ITIL. Table 6 shows the measures of the coherence metrics of ISO 9001, CMMI, and ITIL.

Table 6. Measure of the coherence metrics for ISO 9001, CMMI, and ITIL.
Criterion - ISO9001;CMMI - CMMI;ITIL - ITIL;ISO9001
C1 - O - O - O
C2 - O - O - O
C3 - O - O - O
C4 - P - O - P
C5 - N - P - N
C6 - O - O - O
MC - 0.75 - 0.91 - 0.75
MCG - 0.80

Table 7 shows the measures of the completeness metrics of ISO 9001, CMMI, and ITIL.


Table 7. Measure of the completeness metrics for ISO 9001, CMMI, and ITIL.
Criterion - ISO9001;CMMI - CMMI;ITIL - ITIL;ISO9001
CP1 - O - O - O
CP2 - P - P - P
CP3 - O - O - O
CP4 - O - O - P
CP5 - O - O - O
CP6 - O - O - O
MP - 0.91 - 0.91 - 0.83
MPG - 0.88

Table 8 shows the measures of the redundancy metrics of ISO 9001, CMMI, and ITIL.

Table 8. Measures of the redundancy metrics.
Criterion - ISO9001;CMMI - CMMI;ITIL - ITIL;ISO9001
R1 - P - P - P
R2 - N - N - N
MR - 0.25 - 0.25 - 0.25
MRG - 0.25
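Feeding the same computation sketched in Sect. 2.2 with the qualifications of Tables 6-8 gives the reported rates; note that the paper's 0.80 and 0.88 come from rounding each pairwise average to two decimals before averaging, while the direct computation gives 0.81 and 0.89.

```python
# Same computation as the sketch in Sect. 2.2, applied to the data of Tables 6-8.
WEIGHT = {"O": 1.0, "P": 0.5, "N": 0.0}

def global_average(pair_symbol_lists):
    pair_avgs = [sum(WEIGHT[s] for s in p) / len(p) for p in pair_symbol_lists]
    return sum(pair_avgs) / len(pair_avgs)

coherence = [                        # C1..C6 for each standard pair (Table 6)
    ["O", "O", "O", "P", "N", "O"],  # ISO9001 ; CMMI
    ["O", "O", "O", "O", "P", "O"],  # CMMI ; ITIL
    ["O", "O", "O", "P", "N", "O"],  # ITIL ; ISO9001
]
completeness = [                     # CP1..CP6 (Table 7)
    ["O", "P", "O", "O", "O", "O"],
    ["O", "P", "O", "O", "O", "O"],
    ["O", "P", "O", "P", "O", "O"],
]
redundancy = [["P", "N"], ["P", "N"], ["P", "N"]]   # R1, R2 (Table 8)

print(round(global_average(coherence), 2))     # 0.81 (paper reports 0.80)
print(round(global_average(completeness), 2))  # 0.89 (paper reports 0.88)
print(round(global_average(redundancy), 2))    # 0.25
```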

3.3

Discussion

In the context of the simultaneous deployment of the three standards ISO9001, CMMI and ITIL, we rely on the standards characteristics and the measurement of their metrics determined in the previous section to answer the main questions of the contribution. Question 1: Is it possible to use ISO9001, CMMI and ITIL simultaneously? Answer 1. The coherence criteria showed that ISO 9001, CMMI and ITIL are 80% coherent. The intentions of the three standards are coherent because the growing success of the enterprise is based on the guarantee of the quality of the supplied product, which is concretized by the success of the improvement of its processes. Alignment with the enterprise's strategy for ISO 9001, CMMI and ITIL is at the same level of abstraction. ISO 9001, CMMI and ITIL aim for customer satisfaction by providing a quality product, process or service, respectively, to build mutual trust. The study of the criteria determined in the previous section showed that CMMI and ITIL are best-practice guides that project onto well-defined life cycles and whose coherence with ISO 9001 exists in the case of their use in an enterprise that adopts IT projects [5].


In terms of content, the recommendations of ISO 9001, CMMI and ITIL are not contradictory. In terms of form, it is up to the enterprise to find its way in a process organization adapted to it while respecting the standards. Question 2: What are the advantages of using ISO 9001, CMMI and ITIL simultaneously? Answer 2. The qualification of the completeness criteria of ISO 9001, CMMI and ITIL showed that they complement each other by 88%. The completeness of the three standards is reflected in the diversity of the orientations of their approaches, the users, the specializations of each standard and the nature of the evaluation. In other words, the three standards generally have complementary orientations: ISO9001 guarantees the quality of the management process in general, CMMI improves the maturity levels of the enterprise's software engineering process and ITIL ensures post-production quality. ISO 9001 offers the general basics of quality guidelines to follow in order to have a quality product; CMMI completes this overall coverage with a more detailed coverage that encompasses the operations of the entire development process; and ITIL deals with the last phase, which is the maintenance phase. Question 3: What are the disadvantages of using ISO9001, CMMI and ITIL simultaneously? Answer 3. ISO 9001, CMMI, and ITIL present a 25% redundancy. ISO 9001 and CMMI are intended for all employees of the enterprise, while ITIL defines several roles to perform the responsibilities and activities related to the management of services, which can result in duplicated effort by the resources in relation to their mobilization. The quality manager should take charge of this articulation. The absence of a well-defined common tool for the complementary management of the standard processes generates a certain difficulty in implementing them simultaneously, because they do not propose an integrated approach and work tools. They provide objectives and best practices without explaining the means to implement them [6]. The corporate standard could play this integrating role.

4 Conclusion The simultaneous deployment of standards makes it possible to cover all sectors of activity in a company, given the absence of a common standard projected onto the entire organization. The goal is to be able to answer three main questions: Question 1: Is it possible to use n standards simultaneously? Question 2: What are the advantages of using them simultaneously? Question 3: What are the disadvantages of using them simultaneously? In this article, we proposed a new approach to measure the co-deployment of quality standards. We validated this approach by applying it to three well-known standards, namely ISO 9001, CMMI and ITIL. We obtained a coherence rate of 80%, a completeness rate of 88% and a redundancy rate of 25%. We conclude that ISO 9001, CMMI and ITIL can be combined to expand the areas covered by quality standards.


Indeed, the three standards are coherent and complementary, which makes their concurrent use an asset for the company without risk of incoherence. The redundancy rate is low but not negligible, which shows that the disadvantages remain limited compared to the contributions in this case. The results of the application to the three standards correspond to our experience in the field and seem significant. We intend to consider an expert panel validation. In general, it seems important to us that companies start thinking about adopting several standards, and it is useful to have a measurement method at this stage. It is certain that an integrated and tooled approach must be implemented, and for this purpose it is necessary to develop a theoretical alignment framework.

References
1. CIGREF, McKinsey & Company: Dynamics of Value Creation by Information Systems. A Shared Responsibility within Large Business Departments (2008)
2. Carlier, A.: Quality Management for the Mastery of IS, 1st edn. Hermes Science Edition, Paris (2006)
3. Bahsani, S., Semma, A., Sellam, N.: Towards a new approach for combining the IT frameworks. IJCSI Int. J. Comput. Sci. Issues 12(1) (2015)
4. Elhasnaoui, S., Chakir, A.: Communication system architecture based on sharing information within an SMA. Int. J. Eng. Innov. Technol. (IJEIT) 5(3) (2015)
5. Oriol, M., Marco, J., Franch, X.: Quality models for web services: a systematic mapping. Inform. Softw. Technol. (2014)
6. Padayachee, Y., Duma, M.: The missing link: an innovation governance framework for large organisations. In: Proceedings of the Annual South Africa Business Research Conference, 11–12 January 2016

Rating Microfinance Products Consumers Using Artificial Neural Networks Tarik Hajji(&) and Ouazzani Mohammed Jamil(&) Laboratoire Systèmes et Environnements Durables (SED), Faculté des Sciences de l’Ingénieur (FSI), Université Privée de Fès (UPF), Fez, Morocco {hajji,ouazzani}@upf.ac.ma

Abstract. Assessing the loan repayment capacity of a client is the main obsession of financial institutions. In fact, loan portfolio management is optimal when incapable clients are identified and dismissed from the outset. This reduces delinquency and write-offs, and thus increases the profitability of financial institutions. This paper presents an approach that uses artificial neural networks for the rating of clients of companies offering microfinance services. We started our work with a survey of several microfinance companies to understand closely the problems encountered when using customer-rating tools, and then we used data analysis tools to explain the results of the survey. After that, we collected a mass of data containing real customer profiles provided by partner companies. Then, we filtered and studied this data to create a learning database for the artificial neural network-based scoring system. Finally, we designed an expandable, flexible, versatile and configurable scoring system. Keywords: Artificial neural networks · Business intelligence · Data analysis · Microfinance

1 Introduction Microfinance acts as a key factor in the development of countries. It allows people to find reasonable solutions for financing their small projects. However, this sector suffers from a number of problems related to the behavior of consumers who are unable to repay their loans. The main reason for this is a poor estimation of the appropriate product with respect to the client's ability to pay. According to previous studies [1], microfinance services provide a set of financial products to people excluded from the traditional or formal financial system. They generally concern people in Third World countries. Scoring is defined as an operation used to rate the importance of a client from a collection of data related to his behavior; it is based on three main axes [5–8]: the quality of the data used in terms of correctness, reliability and relevance, the variables that represent the importance given to each of these data, and the methodology for developing the scoring model. We can distinguish several types of scoring: first name score, common score, RFM score, appetence score, attrition score and risk score.


1.1


KPIs Description

Whatever the strategies developed by MFIs, it is important to include the following measures to evaluate their performance [2, 10–12], (Estapé-Dubreuil and Torreguitart-Mirada 2010), (Borden 2009), [16], (Imai et al. 2010), [5]:
1. The number of clients: this indicator measures the number of active clients, in preference to the number of affiliated "members" or the number of loans granted during a certain period. Indeed, the first reflects the actual service delivery, while the last two are biased: one member can be inactive for a certain period.
2. Average Outstanding Balance (AOB): the goal of any MFI is the fight against poverty through the allocation of credit to the most vulnerable people. MFIs need to measure their social impact. The average outstanding balance of credits is the most used indicator. It takes into account the loan amounts that customers have not yet repaid (or the savings that customers have not yet withdrawn as part of a savings account). It is expressed as a percentage of GDP per capita (poverty is found when the AOB is less than 20% of GDP per capita, or US $150).
3. Portfolio at Risk (PAR): the loan portfolio is the largest asset of a financial institution. In order to manage the risk associated with loan volatility and the lack of collateral in the area of microfinance, and to minimize delinquency, it must be subject to regular analysis by executives (Maotela 2015). The PAR is the international standard measure of the quality of a loan portfolio in the banking sector. It provides information on the quality of the portfolio and the degree of risk involved in the credit. It is expressed as the ratio of the outstanding balance due x days late to the total outstanding balance (all credits), where "x" is the number of days of late payment.
4. Loans at Risk (LAR): for MFIs that cannot calculate the PAR, mainly because of their small size, the LAR is the appropriate proxy indicator. Indeed, since the repayment process is generally the same in a large MFI as in a smaller MFI, the LAR does not differ much from the PAR. It is expressed as the ratio between the number of outstanding credits x days late and the total number of unpaid credits, where "x" is the number of days of late payment.
5. Current Recovery Rate (CRR): this is a very important performance indicator for MFIs, as a high credit recovery rate is a strong signal of customer satisfaction and credit enhancement with them. Indeed, the strongest incentive for unsecured credit is not so much the pressure on the customer, but his desire to be able to keep access to a financial product that he has found to be a generator of well-being for his family. The CRR is expressed as the quotient between the amount of payments recovered and the amount of payments due.


6. Annual Loan-loss Rate (ALR): this indicator is used in complement to the CRR. Indeed, when the latter is used over short periods, the result may be distorted (when an MFI displays a CRR of 95%, one could believe that its losses are 5% per year, whereas for a CRR of 95% per week on a 3-month credit, the loss is much greater). By converting the CRR to the ALR, the interpretation of the result is done over a longer period, avoiding errors of appreciation due to short durations.
7. Return on Assets (ROA): any MFI, in order to ensure its sustainability and continue fulfilling its mission, must ensure its financial sustainability. The question for management is whether the institution will have enough financial resources to continue to provide services tomorrow as it does today. One of the basic KPIs measuring profitability is the ROA. It measures the ability of the MFI to convert its assets into profit, and is a very practical indicator for comparison with other MFIs (a low ROA compared to that of the sector indicates a lack of efficiency in the operation of the MFI). It is expressed as the ratio between the net result (excluding taxes) and the total assets.
8. Return on Equity (ROE): return on equity is another basic indicator of profitability. It measures the ability of the MFI to generate enough profit to offset the risk of being in the business. It is expressed as the ratio between net income (excluding taxes) and equity.
9. Operating Expense Ratio (OER): to measure the effectiveness of an institution, the most commonly used indicator is the operating expense ratio. It compares operational expenses with the gross revenue generated by the MFI. A lower OER is preferable because it indicates a higher efficiency of the MFI.
10. Cost per Client (CPC): another indicator of efficiency is the cost per client. Unlike the OER, which is based on loan amounts, it is expressed as the ratio between operating expenses and the average number of active customers. It avoids the biased interpretation that could be made using the OER: when efficiency is measured against credit amounts, smaller credits cost the MFI more than large ones, even though the MFIs that produce them are managed with the same efficiency. A short computational sketch of several of these ratios follows.
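The following Python sketch computes several of the ratios defined above from aggregate portfolio figures; the function names and the sample numbers are illustrative assumptions, not data from the study.

```python
# Minimal sketch of a few MFI performance ratios defined above.
# Function names and sample values are illustrative assumptions.

def portfolio_at_risk(outstanding_late_x_days, total_outstanding):
    return outstanding_late_x_days / total_outstanding           # PAR

def current_recovery_rate(payments_recovered, payments_due):
    return payments_recovered / payments_due                     # CRR

def return_on_assets(net_result_excl_taxes, total_assets):
    return net_result_excl_taxes / total_assets                  # ROA

def operating_expense_ratio(operating_expenses, gross_revenue):
    return operating_expenses / gross_revenue                    # OER

def cost_per_client(operating_expenses, avg_active_clients):
    return operating_expenses / avg_active_clients               # CPC

print(f"PAR = {portfolio_at_risk(120_000, 1_500_000):.1%}")
print(f"CRR = {current_recovery_rate(930_000, 980_000):.1%}")
print(f"ROA = {return_on_assets(45_000, 1_800_000):.1%}")
print(f"OER = {operating_expense_ratio(210_000, 520_000):.1%}")
print(f"CPC = {cost_per_client(210_000, 3_200):.2f}")
```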

2 Methodology Our scoring solution consists of the following tools:
1. Application Scoring, in which the probability of error is evaluated from the social and financial data collected during the credit application by both individuals and SMEs.
2. Since traditional financial information for low-income people is difficult to collect, we use alternative data based on the use of their mobile phone, the payment of their rent as well as the quality of their housing, and on the financial transactions made by their mobile phone.
3. The qualification process of our credit scoring system is schematized as follows (Fig. 1):


Fig. 1. Credit scoring system process

We provide an example of the analytic architecture, which could be extended to a formal data warehouse. It could be developed in two phases and gradually include several data sources depending on the needs and requirements of the models (Fig. 2).

Fig. 2. The scoring model analytic architecture

2.1

Alternative Data

Traditional financial data for credit risk estimation is often not available in some emerging markets. For example, MFI Insight Analytics has been focusing on "Alternative Data" techniques, which have already been proven for years in some countries,


such as Colombia, which has been using them for 40 years: "Alternative Data" consists of using, when profiling a borrower and when financial data are lacking, alternative information such as electricity and gas bills, telecommunications bills, proof of payment of rent, electronic payments (sending or withdrawing funds, transfers, etc.) by mobile phone, and proof of payment of children's school fees. Behavioral Scoring is based on the borrower's payment history to associate a score with existing customers. This score makes it possible to identify the first signs of a possible payment default and thus to monitor the portfolio. In addition, it allows credit agents to offer new products adapted to existing customers (up-selling). Our scoring solution uses innovative techniques that are based on a two-layer architecture:
1. Data layer: this level consists of preparing the most relevant data for the profiling of the credit applicant. These data are already preconfigured in our standard canonical model, allowing the user to choose, through an administrator interface, those that are necessary. Data preparation will require validation, availability verification and data relevance checks. Indeed, the classification rate of a scoring model depends on the quality of the data in the learning database of this model. The more semantically correct and available the data, the higher the rate.
2. Discrimination Models Layer (Scoring): the modeling layer, which includes the scoring model based on artificial intelligence.
In order to ensure the good quality of the data, we proceed in three main stages:
1. Categorization: list, among the data, the relevant attributes in 3 types: internal data, external data, and alternative data.
2. Segmentation: subdivide the entire population into segments, using either our "M-Clustering" algorithm or the well-known "K-means" partitioning algorithm.
3. Association: agree with the credit experts of the MFI on the number of segments and on the level of supervision of the score to be assigned to each segment (Very Low, Low, Insufficient, Acceptable, Good, Very Good, and Excellent).
2.2

Learning Database

We began by collecting customer profile data from our partner companies, and then performed filtering, standardization and loan processing operations to create the learning database. Given the limited amount of financial information held about customers, we have designed an agile solution for credit risk management. As a result, it uses alternative data to optimize credit scoring for customers who do not have traditional financial data. The non-financial data used in our tool for individuals are:
1. Demographics: age, sex, marital status, education.
2. Social: salary, owner or tenant.
3. Bills: gas bills, electricity bills.
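A minimal sketch of how such an applicant profile could be represented before being fed to the scoring model; the class and field names are our own illustrative choices based on the variables listed above.

```python
# Minimal sketch: an applicant profile built from the non-financial variables
# listed above. Class and field names are illustrative assumptions.
from dataclasses import dataclass

EDUCATION_LEVELS = {"none": 0, "primary": 1, "secondary": 2, "higher": 3}

@dataclass
class ApplicantProfile:
    age: int
    sex: str
    marital_status: str
    education: str
    salary: float
    is_owner: bool                      # owner vs tenant
    gas_bills_paid_on_time: bool
    electricity_bills_paid_on_time: bool

    def to_features(self):
        """Flatten the profile into a numeric vector for the scoring model."""
        return [
            float(self.age),
            1.0 if self.sex == "F" else 0.0,
            1.0 if self.marital_status == "married" else 0.0,
            float(EDUCATION_LEVELS.get(self.education, 0)),
            self.salary,
            1.0 if self.is_owner else 0.0,
            1.0 if self.gas_bills_paid_on_time else 0.0,
            1.0 if self.electricity_bills_paid_on_time else 0.0,
        ]

profile = ApplicantProfile(34, "F", "married", "secondary", 450.0, False, True, True)
print(profile.to_features())
```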


2.3


Data Gathering

We obtained historical and anonymous data from Burundi's Wises (Women's Initiative for Self Empowerment) to calibrate the scoring model. The data we used represent a backup of more than 7 years, 4 gigabytes in size, in the form of flat files generated by SQL Server tools. The following figure shows an excerpt from these data. 2.4

Sampling of Data

The purpose of the sampling is to collect the necessary data to practice inferential statistics. We can distinguish two types of sampling methods: non-random and random methods: 1. The most common non-random method is quota sampling: from the specificities of a population, the sample is created by respecting the desired levels of the sample by randomly choosing individuals with the same characteristics. 2. Random methods use probabilities in sample formation. Only these methods make it possible to estimate the level of confidence of the results that the sampled population will provide. 2.5

Variables’ Coefficient of Correlation

The simple correlation coefficient is an index of the intensity of a link that can exist between two variables. The correlation coefficient can take a value between –1 and +1. If it is 0, it means that there is no link between these 2 variables. It is very commonly used in the context of the analysis of quantitative variables. Using the SPSS software we have determined the most important variables that must be used in learning. 2.6

Learning Database

We designed a learning database under the SQL Server DBMS using stored procedures on the data repository we collected. Our database is composed of the variables that have a high correlation rate; it mainly consists of the following tables: Alternative Data, Borrower_info_indiv, Borrower_info_sme, Business_info_indiv, Business_info_sme, Credit_Reference_Bureau_info, Financial_info, Mobile Info, Product_info. 2.7

Scoring Model Design

We have used a multilayer perceptron (Fig. 3) as a scoring model for clients with supervised learning. The web application that we designed with the C# language is able to automatically generate the neural structure of the model thanks to the configuration space of the application (back office), and we used gradient backpropagation as the learning algorithm for the model.


Fig. 3. Multilayer perceptron used

The back office of the application allows defining the variables on which we have information about the consumer, which determines the number of neurons in the input layer. The back office also allows setting the transfer function, the activation function, the quadratic error and the number of iterations for the learning algorithm.
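The paper's tool is a C# web application; as a language-neutral illustration, the sketch below builds a comparable multilayer perceptron trained by gradient backpropagation with scikit-learn. The layer sizes, features and labels are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: a multilayer perceptron scorer trained by backpropagation.
# Layer sizes, features and labels are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))                    # 7 input variables -> 7 input neurons
y = (X[:, 0] + 0.5 * X[:, 4] > 0).astype(int)    # synthetic repaid / defaulted label

scaler = StandardScaler().fit(X)
model = MLPClassifier(hidden_layer_sizes=(10, 5),   # two hidden layers
                      activation="logistic",        # sigmoid activation
                      solver="sgd", learning_rate_init=0.05,
                      max_iter=2000, tol=1e-4, random_state=0)
model.fit(scaler.transform(X), y)

new_client = rng.normal(size=(1, 7))
score = model.predict_proba(scaler.transform(new_client))[0, 1]
print(f"repayment score = {score:.2f}")
```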

3 Results The following table contains the description of the test environment of the scoring module (Table 1).

Table 1. Test environment description (database server and application server for the visualisation software): local hardware, IT infrastructure and networks.
- Basic version (without redundancy, not recommended): 1 server with 8 cores and 14 GB of memory; Windows Server 2008 or higher; SQL Server 2012; ASP.Net latest version.
- Professional version (recommended): 3 servers with 8 cores and 14 GB of memory; load balancing between the two WFE servers.
- Premium version (high availability, minimum downtime): 4 servers with 8 cores and 14 GB of memory; 2 WFE and 2 database servers; log transmission between database servers for real-time data replication; Windows Server 2008 or above for the servers.
- Network: there is no special requirement for our solution, except a bandwidth of 100 Mb/s at the LAN level and an existing secure DSL connection for the credit agents (POS).


3.1


Correlations of the Variables

The data in Table 2 show the correlations of the variables used in the scoring process.

Table 2. Correlations of the variables (scoring weights per attribute): credit history (100% repayment on time for more than 2 loan cycles; 100% repayment on time for one cycle; defaulted one instalment only; defaulted more than one instalment; no loan records), age of the business (>5 years; >2 years; 2 or shorter years; any other), savings level (>=10%; 5–9.9%; 0–4.9%; no savings account) and debt service coverage ratio (>=3.0; 2.0–2.99; 1.5–1.99; and lower brackets), with the associated weights ranging from 0 to 3.

$P(x_i \mid y_j) = \dfrac{P(x_i \cap y_j)}{P(x_i)}, \quad P(x_i) > 0$   (1)

The real number P(xi | yj) reads "probability of yj, knowing xi", according to the common use of data sets between xi and yj. The conditional probability imposes the creation of the conditional probability matrix CPM[|T|][|T|], based on the contingency table CT[|T|+1][|T|+1] and the joint and marginal probability table JMP[|T|+1][|T|+1] of each task pair in the workflow. First, we create the contingency table:

$CT[i][j] = DataSize(I_i \cap I_j); \quad 1 \le i \le |T|; \; 1 \le j \le |T|$   (2)

$CT[i][|T|+1] = \sum_{j=1}^{|T|} CT[i][j]; \quad 1 \le i \le |T|$   (3)

$CT[|T|+1][j] = \sum_{i=1}^{|T|} CT[i][j]; \quad 1 \le j \le |T|$   (4)

$CT[|T|+1][|T|+1] = \sum_{i=1}^{|T|} CT[i][|T|+1]$   (5)


$I_i \in F$ and $I_j \in F$ are respectively the input file sets of tasks $x_i \in T$ and $y_j \in T$. Each value in the contingency table is the common size of the input files for each task pair in the workflow. The joint and marginal probability table can be created from the contingency table. The joint probability of Ti and Tj is:

$JMP[i][j] = \dfrac{CT[i][j]}{CT[|T|+1][|T|+1]}; \quad 1 \le i \le |T|; \; 1 \le j \le |T|$   (6)

The marginal probability of Ti is:

$JMP[i][|T|+1] = JMP[|T|+1][i] = \dfrac{CT[i][|T|+1]}{CT[|T|+1][|T|+1]}; \quad 1 \le i \le |T|$   (7)

The conditional probability for each task pair can be calculated as follows:

$CPM[i][j] = \dfrac{JMP[i][j]}{JMP[i][|T|+1]}; \quad 1 \le i \le |T|; \; 1 \le j \le |T|$   (8)
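A minimal Python sketch of equations (2)-(8): building the contingency table from the tasks' input-file sizes and deriving the joint, marginal and conditional probabilities. The task and file data are illustrative assumptions.

```python
# Minimal sketch of equations (2)-(8): contingency table, joint/marginal
# probabilities and the conditional probability matrix CPM.
# Task input files and sizes are illustrative assumptions.

file_size = {"f1": 40, "f2": 10, "f3": 25, "f4": 5}
inputs = {                       # I_i: input file set of each task
    "t1": {"f1", "f2"},
    "t2": {"f1", "f3"},
    "t3": {"f3", "f4"},
}
tasks = sorted(inputs)
n = len(tasks)

def common_size(a, b):
    return sum(file_size[f] for f in inputs[a] & inputs[b])

# Contingency table with the extra marginal row/column (eqs. 2-5).
CT = [[0.0] * (n + 1) for _ in range(n + 1)]
for i, ti in enumerate(tasks):
    for j, tj in enumerate(tasks):
        CT[i][j] = common_size(ti, tj)
    CT[i][n] = sum(CT[i][:n])
for j in range(n):
    CT[n][j] = sum(CT[i][j] for i in range(n))
CT[n][n] = sum(CT[i][n] for i in range(n))

# Joint and marginal probabilities (eqs. 6-7), then CPM (eq. 8).
total = CT[n][n]
JMP = [[CT[i][j] / total for j in range(n + 1)] for i in range(n + 1)]
CPM = [[JMP[i][j] / JMP[i][n] for j in range(n)] for i in range(n)]

for i, ti in enumerate(tasks):
    print(ti, [round(v, 2) for v in CPM[i]])
```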

Figure 1b shows the results of applying our clustering algorithm on the CPM. After the creation of the CPM matrix, we fragment the workflow into clusters of tasks. Each cluster will be assigned to a virtual machine, so the number of virtual machines created depends on the number of clusters. The goal in this work is to maximize the data transfer within a cluster and minimize it between clusters. The question that then arises is: what is the minimum number of clusters needed to correctly execute the workflow while meeting the user's budget? So, in our work, the workflow execution must be efficient, meet the workflow budget and have a minimal cost. In order to answer this question, we have implemented several algorithms and techniques for determining the optimal number of clusters in the CPM data set, and we offer the best workflow clustering scheme from the different results. In [6], the authors identify 30 clustering algorithms that determine the optimal number of clusters in a data set. We tried to apply these 30 algorithms to our CPM matrix and we found that only 13 algorithms are compatible with the CPM data matrix. For the remaining 17 algorithms, the clustering result tends to infinity. We used the following clustering algorithms: Krzanowski and Lai 1988, Caliński and Harabasz 1974, Hartigan 1975, McClain and Rao 1975, Baker and Hubert 1975, Rohlf 1974 and Milligan 1981, Dunn 1974, Halkidi et al. 2000, Halkidi and Vazirgiannis 2001, Hubert and Levin 1976, Rousseeuw 1987, Ball and Hall 1965, Milligan 1980, 1981.

Tasks Distance Measures

The data set is represented by the CP M matrix. Each element in the CP M matrix represents the distance between two tasks in the workflow. So, the clustering is done compared to the distance between each pair of tasks of the matrix CP M . In order to measure the distance, we used the following metrics:


Euclidean distance: the distance between two tasks x and y in $\mathbb{R}^n$ space, given by

$d = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}; \quad x_i, y_i \in CPM[1,|T|][1,|T|]$   (9)

It is the length of the diagonal segment connecting x to y. Manhattan distance: the absolute distance between two tasks x and y in $\mathbb{R}^n$ space, given by

$d = \sum_{i=1}^{n} |x_i - y_i|; \quad x_i, y_i \in CPM[1,|T|][1,|T|]$   (10)

As opposed to the diagonal distance used by the Euclidean metric, the Manhattan distance between two tasks in a grid is based on a strictly horizontal and/or vertical path: it is the sum of the horizontal and vertical components. As mentioned above, we used 13 partitioning algorithms to determine the required number of clusters. In addition to the CPM matrix and the distance metric, these algorithms require as input parameters the minimum and maximum cluster interval. We have set the cluster interval between 2 and 20 clusters. This choice of interval is based on the Amazon EC2 resource allocation policy [1]: Amazon EC2 allows the reservation of only 20 virtual machines, which is why we cannot create more than 20 clusters. So, we have fixed the user budget to 20 virtual machines. 4.3

Hierarchical Clustering Method

The average obtained will be used to perform a hierarchical clustering of the CPM matrix. Hierarchical clustering is one of the domains of automatic data analysis and data classification. In our work we have used agglomerative clustering of a set T of n tasks. Our goal is to distribute these tasks into a certain number of clusters, where each cluster represents a virtual machine. Agglomerative hierarchical clustering assumes that there is a measure of dissimilarity between tasks; in our case, we use the CPM matrix for the dissimilarity calculation. The dissimilarity between tasks x and y will be noted dissimcpm(x, y). Initially, in our approach, each task forms a cluster. We then iteratively reduce the number of clusters to the average calculated previously by the 13 partitioning algorithms. At each iteration, two clusters are merged, which reduces the total number of clusters. The two clusters chosen to be merged are those that are the most "similar", in other words, those whose dissimilarity is minimal (or maximal). For agglomerative clustering, in order to decide which clusters should be merged, a measure of dissimilarity between sets of clusters is required. This is achieved by measuring a distance between pairs of clusters.


The dissimilarity of two clusters $C_i = \{x\}$, $C_j = \{y\}$; $1 \le i, j \le n$, each containing one task, is defined by the dissimilarity between its tasks: $dissim(C_i, C_j) = dissim(x, y)$; $1 \le i, j \le n$. When clusters have several tasks, there are multiple criteria for calculating dissimilarity. We used the following criteria:

Single link: the minimum distance between tasks of Ci and Cj:

$dissim(C_i, C_j) = \min_{x \in C_i, y \in C_j} \left( dissim_{cpm}(x, y) \right); \quad 1 \le i, j \le |T|$   (11)

Complete link: the maximum distance between tasks of Ci and Cj:

$dissim(C_i, C_j) = \max_{x \in C_i, y \in C_j} \left( dissim_{cpm}(x, y) \right); \quad 1 \le i, j \le |T|$   (12)

Algorithm 1 models the agglomerative hierarchical clustering of the CP M matrix. It receives as input parameters the CP M matrix and the cluster number that we want to create. The cluster number is the average of all cluster numbers obtained from the 13 clustering algorithms.

Algorithm 1. Clusters creation
Require: T, CPM, avgClustersNb        (tasks set, CPM matrix and clusters number)
Ensure: ClustersList                  (return clusters list)
1: ClustersList ← ∅
2: for each xi ∈ T; i = 1, |T| do
3:   Cluster ← {xi}
4:   ClustersList ← ClustersList ∪ Cluster
5: end for
6: while |ClustersList| > avgClustersNb do
7:   for i = 1, |ClustersList| do
8:     for j = 1, |ClustersList| do
9:       DissimMat[i][j] ← dissimcpm(ClustersList.get(i), ClustersList.get(j))
10:     end for
11:   end for
12:   {Cluster′, Cluster″} ← get2SimilarClusters(DissimMat)
13:   NewCluster ← Cluster′ ∪ Cluster″
14:   ClustersList ← ClustersList − {Cluster′, Cluster″}
15:   ClustersList ← ClustersList ∪ NewCluster
16: end while
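A minimal Python sketch of Algorithm 1, using the single-link criterion of Eq. (11) and assuming the CPM entries are turned into a dissimilarity as 1 - CPM (that conversion is our own assumption); it illustrates the agglomeration loop, not the authors' implementation.

```python
# Minimal sketch of Algorithm 1: agglomerative clustering of workflow tasks,
# single-link criterion over a dissimilarity derived from CPM (1 - CPM here,
# which is our own assumption).

def cluster_dissim(ca, cb, dissim):
    """Single link (eq. 11): minimum pairwise dissimilarity between two clusters."""
    return min(dissim[x][y] for x in ca for y in cb)

def agglomerate(tasks, cpm, avg_clusters_nb):
    dissim = {a: {b: 1.0 - cpm[a][b] for b in tasks} for a in tasks}
    clusters = [frozenset([t]) for t in tasks]          # one cluster per task
    while len(clusters) > avg_clusters_nb:
        # pick the two most similar clusters and merge them
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda p: cluster_dissim(clusters[p[0]], clusters[p[1]], dissim),
        )
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters

# Hypothetical 4-task CPM and a target of 2 clusters (i.e. 2 virtual machines).
cpm = {
    "t1": {"t1": 1.0, "t2": 0.8, "t3": 0.1, "t4": 0.0},
    "t2": {"t1": 0.7, "t2": 1.0, "t3": 0.2, "t4": 0.1},
    "t3": {"t1": 0.1, "t2": 0.2, "t3": 1.0, "t4": 0.9},
    "t4": {"t1": 0.0, "t2": 0.1, "t3": 0.8, "t4": 1.0},
}
print(agglomerate(["t1", "t2", "t3", "t4"], cpm, 2))
```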

5

Performance Evaluation and Results

In order to validate the proposed approach, we have implemented our system in a discrete event simulator, the "Cloud Workflow Simulator" (CWS) [8]. The CWS is an extension of the "CloudSim" [5] simulator; it has a general IaaS Cloud architecture and supports all of the assumptions stated in the problem description in Sect. 3. We simulated workflow scheduling with various parameters.


We evaluated our algorithm using synthetic workflows. The selected workflows include LIGO (Laser Interferometer Gravitational Wave Observatory), a data-intensive application involving a network of gravitational-wave detectors, with observatories in Livingston, LA and Hanford, WA, and Montage, an I/O-bound workflow used by astronomers to generate mosaics of the sky. In our work, we simulated workflows whose size does not exceed 200 tasks, because, according to our simulations, the execution of workflows with more than 300 tasks would exceed our budget, which is fixed at 20 virtual machines. The experiments model cloud environments with an infinite NFS-like file system storage. We have compared our approach with the following algorithms: Static Provisioning Static Scheduling (SPSS) [8]. SPSS is a static algorithm that creates the provisioning and scheduling plans before running the workflow. The algorithm analyzes whether the workflow can be completed within the cost and deadline. The workflow is scheduled if it meets the cost and deadline constraints. The workflows are scheduled on the VM which minimizes the cost; if such a VM is not available, a new VM instance is created. In this algorithm, file transfers take zero time. Storage-Aware Static Provisioning Static Scheduling (SA-SPSS) [4] is a modified version of the original SPSS algorithm designed to operate in environments where file transfers take non-zero time; it handles file transfers between tasks. So, we have simulated the following algorithms: Static Provisioning Static Scheduling (SPSS), Storage-Aware Static Provisioning Static Scheduling (SA-SPSS), Data-Aware Euclidean Complete Clustering (DA-ECC), Data-Aware Euclidean Single Clustering (DA-ESC), Data-Aware Manhattan Complete Clustering (DA-MCC), Data-Aware Manhattan Single Clustering (DA-MSC). To analyze the results of the experimentation of our approach, we measured the cost, which is the number of VMs created. This metric is used to evaluate the proposed approach compared to the SPSS and SA-SPSS approaches. To do this, we simulated the execution of the synthetic workflows Montage and LIGO. We varied the size of the simulated workflows between 50 and 200. Our objective is to study the impact of the workflow type on the cost. From Fig. 2a, we note that, regardless of the size of the workflow, our policies give good results by reducing the number of VMs. Especially for large workflows of 200 tasks, we note that the DA-MCC policy uses only 8 virtual machines. This result depends on the CPM matrix data structure and shows that there is no single best distance measure. From Fig. 2b, we note that, regardless of the size of the workflow, our policies give good results. We note that our policies allocate between 18 and 19 virtual machines, in particular the DA-ESC policy, which allocates 19 machines for the execution of workflows of 200 tasks. This policy uses the "single" agglomeration method coupled with the Euclidean metric to measure distances and distributes the tasks between virtual machines so that the distance between the VMs is as minimal as possible. This will naturally involve grouping


Fig. 2. Impact of the workflow type on the cost: (a) Montage workflow; (b) LIGO workflow.

This naturally groups highly dependent tasks into the same virtual machine (cluster) and therefore reduces file transfers between machines to a minimum. By comparing Fig. 2a and b, we note that the Montage workflows allocate fewer resources than the LIGO workflows, which confirms that Montage is a processing-oriented workflow; the choice of scheduling algorithm therefore depends on the workflow type. The two graphs also show that the SPSS policy gives good results in some cases, but these results do not reflect reality because this policy ignores data transfer time. Hence the importance of using a scheduling algorithm that is specific to the type of workflow [9,11].
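To make the four data-aware policies concrete, the sketch below shows how an agglomerative clustering of workflow tasks could be parameterized with the Euclidean or Manhattan distance and the complete or single linkage method, as the names DA-ECC, DA-ESC, DA-MCC and DA-MSC suggest. It is only an illustration under assumed inputs: the construction of the CPM matrix is not detailed in this section, so a random task-feature matrix stands in for it, and the 20-VM budget is used as the maximum number of clusters.

```python
# Illustrative sketch of the four clustering policies (DA-ECC, DA-ESC, DA-MCC, DA-MSC):
# agglomerative clustering of workflow tasks with Euclidean or Manhattan ("cityblock")
# distances and "complete" or "single" linkage. `task_features` is a hypothetical
# per-task feature matrix standing in for the CPM matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
task_features = rng.random((200, 4))   # 200 tasks, 4 hypothetical features
budget_vms = 20                        # budget fixed to 20 virtual machines

policies = {
    "DA-ECC": ("euclidean", "complete"),
    "DA-ESC": ("euclidean", "single"),
    "DA-MCC": ("cityblock", "complete"),
    "DA-MSC": ("cityblock", "single"),
}

for name, (metric, method) in policies.items():
    distances = pdist(task_features, metric=metric)                 # pairwise task distances
    tree = linkage(distances, method=method)                        # agglomerative dendrogram
    clusters = fcluster(tree, t=budget_vms, criterion="maxclust")   # at most 20 clusters (VMs)
    print(name, "uses", len(np.unique(clusters)), "VMs")
```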

6 Conclusion and Future Works

In this paper, a new workflow clustering method for the cloud was proposed in order to reduce virtual machine costs. With our strategy, the amount of global data movement can be reduced, which decreases the inter-VM communication rate. The proposed clustering algorithm can improve resource utilization efficiency and decrease virtual resource consumption, and the experimental results confirm this reduction. In Sect. 4.1, we found that 17 clustering algorithms do not match the CPM matrix data. As a perspective, we will try to understand why these algorithms tend to infinity and, if possible, find a way to standardize or normalize the data of the CPM matrices.


References

1. Amazon AWS service limits. http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html. Accessed 29 Oct 2017
2. Alkhanak, E.N., Lee, S.P., Ur Rehman Khan, S.: Cost-aware challenges for workflow scheduling approaches in cloud computing environments: taxonomy and opportunities. Future Gener. Comput. Syst. 50, 3–21 (2015)
3. Alkhanak, E.N., Lee, S.P., Rezaei, R., Parizi, R.M.: Cost optimization approaches for scientific workflow scheduling in cloud and grid computing: a review, classifications, and open issues. J. Syst. Softw. 113, 1–26 (2016)
4. Bryk, P., Malawski, M., Juve, G., Deelman, E.: Storage-aware algorithms for scheduling of workflow ensembles in clouds. J. Grid Comput. 14(2), 359–378 (2016)
5. Calheiros, R.N., Ranjan, R., Beloglazov, A., Rose, C.A.F.D., Buyya, R.: CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exper. 41(1), 23–50 (2011)
6. Charrad, M., Ghazzali, N., Boiteau, V., Niknafs, A.: NbClust: an R package for determining the relevant number of clusters in a data set. J. Stat. Softw. 61(6), 1–36 (2014)
7. Djebbar, E.I., Belalem, G., Benadda, M.: Task scheduling strategy based on data replication in scientific cloud workflows. Multiagent Grid Syst. 12(1), 55–67 (2016)
8. Malawski, M., Juve, G., Deelman, E., Nabrzyski, J.: Algorithms for cost and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds. Future Gener. Comput. Syst. 48, 1–18 (2015). Special Section: Business and Industry Specific Cloud
9. Masdari, M., ValiKardan, S., Shahi, Z., Azar, S.I.: Towards workflow scheduling in cloud computing: a comprehensive analysis. J. Netw. Comput. Appl. 66(Supplement C), 64–82 (2016)
10. Mon, E.E., Thein, M.M., Aung, M.T.: Clustering based on task dependency for data-intensive workflow scheduling optimization. In: 9th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS2016), pp. 20–25. IEEE Computer Society (2016)
11. Sahni, J., Vidyarthi, D.P.: Workflow-and-platform aware task clustering for scientific workflow execution in cloud environment. Future Gener. Comput. Syst. 64(Supplement C), 61–74 (2016)
12. Wang, X., Yeo, C.S., Buyya, R., Su, J.: Optimizing the makespan and reliability for workflow applications with reputation and a look-ahead genetic algorithm. Future Gener. Comput. Syst. 27(8), 1124–1134 (2011)
13. Wu, C., Buyya, R.: Chapter 12 - cloud storage basics. In: Cloud Data Centers and Cost Modeling, pp. 425–495. Morgan Kaufmann (2015)
14. Zhao, L., Ren, Y., Sakurai, K.: Reliable workflow scheduling with less resource redundancy. Parallel Comput. 39(10), 567–585 (2013)

Open Government Data: Problem Assessment of Machine Processability

Hanae Elmekki, Dalila Chiadmi, and Hind Lamharhar

Université Mohammed-V, Ecole Mohammadia d'Ingénieurs (EMI), Rabat, Morocco
[email protected]

Abstract. Publishing OpenData with high quality and high value is of interest to many scientific research communities. A large part of these published data concerns the government domain, under the notion of Open Government Data (OGD). This concept aims to expose appropriate information to government users (citizens, enterprises and public administrations) in various domains (health, economy, etc.). For this reason, a set of datasets has been published through OGD portals. However, these datasets cannot be used efficiently by OGD portal users, whether citizens or machines. Therefore, in this paper, we aim to study and identify the main problems that prevent OGD datasets from being used efficiently by citizens or processed efficiently by machines. To this end, we use the Moroccan OGD portal to extract and illustrate these problems. This paper also proposes an approach for OGD machine processability that can help improve the quality of published OGD datasets and enable the extraction of useful information from them, thus helping citizens in their daily life and companies in their decision-making processes.

Keywords: OpenData · OGD portal · Open Government Data (OGD) · Processable data

1 Introduction

OpenData represents freely accessible data that can be used and distributed for any useful purpose without restrictions or additional fees. Among these exposed data, a wide part has been developed in the context of the government field, under the concept of Open Government Data (OGD). OGD is "a set of policies - that promotes transparency, accountability and value creation by making government data available to all" [1]. OGD are generally published through government portals, which are the most valuable sources of a large volume of data. The main objective of these data is to facilitate and enhance citizens' lives by increasing transparency, participation and collaboration [2]. OGD can also contribute to enhancing citizens' lives and transforming countries and cities [3]. For these reasons, access to OGD does not require any authentication or authorization; the data are freely accessible without any limitations or controls. To this end, many applications have been developed to implement the OGD concept and help citizens easily access the information they need.


This implementation has led to the development of several OGD portals where data are available and open, such as "The home of the U.S. Government's open data portal" of the USA [3], "Find open data" of the UK [4], and "Données ouvertes de l'administration marocaine" or "Open Data Morocco" [5]. However, these published OGD have different issues that prevent their efficient usage (users cannot easily find the appropriate information). In this paper, we focus on two types of data issues: data structure issues and data content issues [6]. Below is the set of problems and weaknesses that impact the quality of OGD:

– Structure issues:
  • Lack of metadata;
  • Heterogeneity of dataset formats;
  • Non-compliance with standards.
– Data (content) issues:
  • Data incoherence;
  • Ambiguity of senses;
  • Data incompleteness.

We describe these issues in the next section, using the Moroccan portal as an example. To take the most advantage of OGD, it becomes necessary to improve the quality of published OGD datasets so that they can be easily used by humans as well as automatically processed by machines. Developing intelligent mechanisms for data processability could improve the quality of published data and thus help citizens and companies obtain valuable information for decision making in their daily life. In this paper, we address the following research question: "How can an OGD portal user (citizen or machine) extract useful information from different datasets that have a heterogeneous structure and different formats?" To answer this question, we identify a set of problems that can have a negative impact on the use and processability of OGD datasets by both citizens and machines. Based on this study, we propose our approach for preparing OGD, which addresses some OGD problems. The approach aims specifically to transform OGD datasets into processable data that can be used by applications (e.g., mobile applications) and systems (e.g., data warehouses). The rest of this article is organized as follows: in Sect. 2, we present the problems of published OGD portals with an illustration from the Moroccan OGD portal; in Sect. 3, we present our OGD machine processability approach; in Sect. 4, we expose a set of related works; in the last section, we conclude and present our future works.

2 Problem Assessment

Several countries, including Morocco, have created public web portals to allow citizens to access OGD related to their daily life, such as social or economic studies. The USA [3] and the UK [4] were the first countries to implement OGD portals. These portals publish data as a large number of separate datasets, most of which are files in different formats:


CSV, EXCEL, PDF and WORD, among others. The problem with these files is their large size and their diverse structure. The incessant evolution of OGD and their huge volume lead to a large number of datasets. These datasets can be viewed or downloaded by ordinary users (i.e., citizens) as well as by, for example, scientists who consider OGD a fruitful space to apply data-processing algorithms (e.g., data mining). Nevertheless, processing these datasets automatically by machine is not an easy task because of various kinds of problems, such as heterogeneity of formats, incoherence of data, etc. These problems impact OGD quality and prevent the efficient usage and integration of data to obtain relevant and useful information by both human users and machines. Therefore, in this paper we try to answer our underlying research question: "How can an OGD portal user (citizen or machine) extract useful information from different datasets that have a heterogeneous structure and different formats?"

Next, we illustrate our research question through a description of the Moroccan OGD portal.

2.1 Moroccan OGD Portal Issues

The Moroccan OGD portal [5], as illustrated in Fig. 1, contains data about nine topics: Cartography; Education; Job; Finance; IT & Telecom; Research & Development; Health; Tourism; Society. Each theme groups a set of datasets, and each dataset has an associated description of its content and the name of the institution that published it, for example the dataset "Assurance maladie obligatoire – CNOPS population" published by the "Caisse Nationale des Organismes de Prévoyance Sociale".

2.2 Description

In our study, we extracted the number of Moroccan OGD datasets and then classified them according to their formats (EXCEL, DOC, CSV, and so on), as shown in Fig. 2. From this graph, we conclude that XLS and DOC are the most frequent formats used in the Moroccan OGD portal. Additionally, comparing the number of datasets with other countries' OGD datasets, the UK has 44,674 datasets and the USA has 281,005 datasets, whereas for Morocco we find only 135 datasets. This huge difference can be explained by the fact that the data published in the Moroccan OGD portal are often outdated, which ranks Morocco among the last countries in the world concerning OpenData publishing. According to [7], Morocco is ranked 79th in the world. In addition to the slowness of OGD publication, the data currently published in the Moroccan portal also have structural and content problems, which are explained in detail in the next section of this paper.

2.2.1 Problems

We classify the Moroccan OGD portal problems into two types of issues: the first concerns the content of the datasets and the second concerns their structure. We detail each kind in the next section.


Fig. 1. Moroccan OGD portal.

2.2.1.1 Content Issues

Data Incoherence: we have identified a set of related problems, as listed below:

– Confusion between uppercase and lowercase characters: the random use of uppercase and lowercase letters is one of the common problems in OGD datasets. This confusion prevents the integration of OGD into machine processes for automatic treatment.
– Use of multiple languages, such as Arabic and French: some datasets even contain data in both French and Arabic. This diversity of dataset languages hinders the automatic processability of data by machines and even their usage by citizens, especially if the citizen does not understand French or Arabic.

Lack of Metadata: for example, in the case of a tabular dataset, the data are represented in columns. The names of the columns are not sufficient to interpret and understand the meaning of the contained data, which leads to the ambiguity-of-senses issue. Hence, we need metadata to describe these columns, for example type, size, and descriptions of abbreviations. In the case of other formats, we have to use specific tools to extract the metadata from irregular dataset files such as TXT and PDF.


Fig. 2. Percentage of usage of formats of datasets in Moroccan OGD portal

Data Incompleteness: to obtain the benefits expected from the creation of OGD portals, the coherence and completeness of published datasets must be guaranteed by the OGD publishers.

2.2.1.2 Structure Issues

The majority of the datasets published in the Moroccan portal are in EXCEL format. For this reason, we illustrate structure issues through the tabular data format; the complexity of the structure of other formats is beyond the scope of this paper because they can produce other specific structure issues.

Lack of Uniform and Standard Structure: for example, in the case of tabular data, headers are not standardized and may not occupy a single line. This makes automatic processing of the dataset difficult, since it requires a standard and uniform structure.

Dataset Format Heterogeneity: this concerns complex file formats. The Moroccan OGD portal has about twenty datasets in PDF, WORD and even PPT format. It is difficult to integrate such datasets into automatic programs and processes because they are heterogeneous and contain several types of data: images, tables or texts.
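As an illustration of how the content issues listed above could be flagged automatically, the following sketch applies simple heuristics (mixed letter case, mixed Arabic/Latin script, missing values) to a tabular dataset. The file name and the heuristics themselves are assumptions made for this example; they are not part of the portal.

```python
# Minimal heuristics for flagging the content issues described above in a tabular
# dataset. The file name and column handling are illustrative assumptions, not the
# actual Moroccan portal datasets.
import re
import pandas as pd

ARABIC = re.compile(r"[\u0600-\u06FF]")
LATIN = re.compile(r"[A-Za-z]")

def flag_issues(df: pd.DataFrame) -> dict:
    issues = {}
    for col in df.columns:
        values = df[col].dropna().astype(str)
        mixed_case = values.str.islower().any() and values.str.isupper().any()
        mixed_lang = values.str.contains(ARABIC).any() and values.str.contains(LATIN).any()
        missing = df[col].isna().mean()
        issues[col] = {
            "mixed_case": bool(mixed_case),             # upper/lowercase confusion
            "mixed_language": bool(mixed_lang),         # Arabic and Latin script in one column
            "missing_ratio": round(float(missing), 2),  # data incompleteness
        }
    return issues

df = pd.read_excel("cnops_population.xlsx")  # hypothetical dataset file
print(flag_issues(df))
```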


In the next section, we propose an approach to overcome OGD dataset problems. In this paper, we focus only on presenting the main components required to prepare an OGD portal for machine processability.

3 Approach to Prepare Processable OGD

Our proposed approach for "preparing processable OGD" transforms the format of existing datasets (e.g., EXCEL, CSV) into a readable and processable format. In fact, the approach aims at improving the quality of published OGD datasets in order to create valuable data, which can then be used to produce useful information for the different types of OGD users. Our approach is based on three levels of OGD discovery (Sect. 3.1) and is composed of four components (Sect. 3.2).

Fig. 3. OGD discovery levels: source discovery, structure discovery and content discovery.

3.1 OGD Discovery Levels

Our approach for preparing OGD datasets is based on three main levels of discovery: source discovery, structure discovery and content discovery, as illustrated in Fig. 3. We illustrate each level with the example shown in Fig. 4, where we suppose we have three portals on which we aim to apply our three-level OGD discovery process:

• Portal1 provides datasets about the reimbursement rate of each medicine by the health insurance entity.
• Portal2 contains datasets about the availability of medicines in Moroccan regions and also datasets describing the composition of drugs.
• Portal3 contains general information in the health field.

Source discovery: this level addresses the problem of "dataset format heterogeneity", since datasets may be available in different formats: CSV, EXCEL, PDF and so on. This level creates a gateway to access the OGD datasets available in OGD portals; for this purpose, we develop a mechanism that discovers existing datasets in OGD portals and then accesses them. Applying the source discovery process to our example retrieves the appropriate data from these OGD portals.

Structure discovery: this level overcomes the problems of "lack of metadata models" and "lack of uniform and standard structure" of published datasets. Our approach extracts the structure of OGD datasets and determines their descriptive and structural metadata.


Fig. 4. Application of the three-level dataset discovery process.

The determination of descriptive metadata consists of extracting data that describe the global characteristics of the dataset, for example its title, its date of publication, and its publisher. The extraction of structural metadata is intended to obtain the elements that make up the structure of the dataset, for example the type and size of the columns in the case of CSV or Excel datasets [8]. Applying the structure discovery process to our example provides the appropriate metadata for the previously collected data.

Content discovery: this level specifically overcomes "content issues" such as "incompleteness of data", "incoherence of data" and "ambiguity of senses". In our approach, we develop an ontology for OGD datasets and use semantic technology [9, 10] to determine the semantic relationships between data in separate datasets in order to create linked open data and thus prepare machine-processable data. Applying the content discovery process to our example yields processable data.
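The following sketch illustrates the structure-discovery idea on a tabular dataset: it collects descriptive metadata (title, publisher, publication date) and structural metadata (column names, types and sizes). The field names and the sample file are assumptions chosen for illustration; the paper does not prescribe a concrete schema.

```python
# Sketch of the structure-discovery step: extracting descriptive and structural
# metadata from a tabular dataset. Field names and the sample file are hypothetical.
import pandas as pd

def extract_metadata(path: str, title: str, publisher: str, published: str) -> dict:
    df = pd.read_csv(path)
    descriptive = {"title": title, "publisher": publisher, "publication_date": published}
    structural = {
        "n_rows": len(df),
        "n_columns": df.shape[1],
        "columns": [
            {"name": c,
             "dtype": str(df[c].dtype),
             "max_length": int(df[c].astype(str).str.len().max())}
            for c in df.columns
        ],
    }
    return {"descriptive": descriptive, "structural": structural}

meta = extract_metadata("drug_reimbursement.csv",          # hypothetical Portal1 dataset
                        title="Reimbursement rate of medicines",
                        publisher="Health insurance entity",
                        published="2018-01-01")
print(meta["structural"]["columns"][0])
```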

3.2 Preparing OGD Datasets for Machine Processability

Our approach proposes four major components to implement the previous levels (source discovery, structure discovery and content discovery), as shown in Fig. 5: the Access Dataset Engine (ADE), the Structure Analyser Engine (SAE), the Data Transformer Engine (DTE) and the Linking Data Engine (LDE).


Fig. 5. Components for preparing OGD machine Processability.

• Access Dataset Engine (ADE): this component implements the "source discovery" level and enables accessing and uploading open data datasets. It is composed of several mechanisms: a "source discovery mechanism" for discovering pertinent sources of OGD datasets that can fulfil a user request; a "source upload and adaptation mechanism" to access OGD portals and upload the datasets, which can have different formats (CSV, Excel, PDF); and finally a "topic identification mechanism" to associate each dataset with one or several themes.
• Structure Analyser Engine (SAE): this component implements the "structure discovery" level. Its role is to extract and analyse the structure of the uploaded files or datasets and to determine the metadata associated with each uploaded file. It extracts both general metadata describing the dataset, such as its title or type, and metadata related to the structure, which depends on the format of each dataset, for example column types in the case of CSV or EXCEL files. The next two components implement the "content discovery" level.
• Data Transformer Engine (DTE): this component is responsible for converting OGD datasets from their raw format (text, tabular, ...) to RDF/XML [11] in order to create linked data (a sketch of this conversion is given after this list). Converting OGD into a reusable and processable format is one of the value creation techniques used during the data harmonization process [12].
• Linking Data Engine (LDE): after the transformation step, this component integrates and semantically links the resulting data. For this step, we apply semantic technologies, specifically ontologies.
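A minimal sketch of the conversion performed by the DTE is shown below: each row of a tabular dataset becomes an RDF resource whose column values are attached as triples, which can then be serialized as RDF/XML. The namespace, predicate naming rule and input file are illustrative assumptions, not the engine's actual implementation.

```python
# Sketch of the Data Transformer Engine (DTE) idea: converting tabular rows into RDF
# triples so they can later be linked semantically. Namespace and file are hypothetical.
import pandas as pd
from rdflib import Graph, Literal, Namespace, URIRef

OGD = Namespace("http://example.org/ogd/")   # hypothetical vocabulary

def csv_to_rdf(path: str) -> Graph:
    df = pd.read_csv(path)
    g = Graph()
    g.bind("ogd", OGD)
    for i, row in df.iterrows():
        subject = URIRef(OGD[f"record/{i}"])
        for column, value in row.items():
            if pd.notna(value):
                predicate = URIRef(OGD[column.strip().lower().replace(" ", "_")])
                g.add((subject, predicate, Literal(value)))
    return g

graph = csv_to_rdf("drug_reimbursement.csv")    # hypothetical dataset
print(graph.serialize(format="xml")[:300])      # RDF/XML output
```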

4 Related Works

With the emergence of OGD, several approaches and prototypes have been developed to facilitate the use and integration of data. [13] proposes an interesting approach, STOPaD, to clean up tags as a type of metadata that defines OGD datasets. The approach also aims to create links between OGD datasets issued from different portals using their tags. Since several OGD portals are based on the open-source solution CKAN [14], the proposed solution is tested on data accessible through the CKAN API. The approach relies on tags to link datasets; however, tags, as metadata, must be available for every dataset and must be of high quality to make interlinking datasets possible.


Compared with this approach, our work proposes the SAE engine to analyze the structure of a dataset and define its metadata, and designs an engine to convert raw datasets to RDF/XML in order to create semantic links between OGD datasets. The authors of [15] describe a canonical formalization of tabular data and present an approach to map tabular data incrementally and semantically to RDF in order to create linked data. The approach is tested on CSV datasets downloaded from the Pan-European data portal, which provides a large number of OpenData datasets. The paper limits the determination of the canonical form of tabular data to unidimensional CSV datasets and does not propose an algorithm to detect the header of files automatically. Compared with this approach, our work designs a generalized solution to treat different file formats (CSV, EXCEL, PDF) that may have a complex structure. The authors of [16] propose a prototype called ChartViz to visualize CSV OGD datasets graphically using their URLs. The graphical representation helps to assess the quality of OGD by providing a visual measurement of the degree of homogeneity of data types and the completeness of data, but it is insufficient to obtain consolidated information built from data issued from separate datasets in different portals. Compared with this approach, our work proposes three levels of dataset discovery (source, structure and content), thus helping to access several OGD datasets available in heterogeneous portals.

5 Conclusion and Future Works

In this paper, we have identified a set of problems relating to the efficient usage of OGD portals by studying the case of the Moroccan OGD portal. Many problems have been identified and discussed, such as the data structure and format heterogeneity of published datasets. Moreover, we have proposed an approach to enhance OGD processability. This approach is composed of four main components: (1) the Access Dataset Engine (ADE), a gateway to reach and adapt OGD datasets from public portals; (2) the Structure Analyzer Engine (SAE), the principal element for extracting the metadata composition of datasets; (3) the Data Transformer Engine (DTE), which transforms datasets into a structured and organized format using RDF/XML; and (4) the Linking Data Engine (LDE), which enriches data with semantics and creates links between transformed datasets. Our future work focuses on developing and implementing our OGD processability approach: we will develop a framework that implements these four components. Additionally, we aim to develop a methodology that can help improve the Moroccan OGD portal.


References

1. OECD: Open Government Data. http://www.oecd.org/gov/digital-government/open-government-data.htm
2. Nafis, F., Yousfi, S., Chiadmi, D.: How big open data can improve public services. In: Proceedings of the Mediterranean Conference on Information and Communication Technologies, pp. 607–612 (2015)
3. The home of the U.S. Government's open data. https://www.data.gov/applications. Accessed 15 May 2018
4. Find open data. https://data.gov.uk/
5. Les Données ouvertes de l'administration marocaine. http://data.gov.ma/fr. Accessed 15 May 2018
6. Attard, J., Orlandi, F., Scerri, S., Auer, S.: A systematic review of open government data initiatives. Gov. Inf. Q. 32(4), 399–418 (2015)
7. Open Data Barometer. https://opendatabarometer.org
8. Mitlöhner, J., Neumaier, S., Umbrich, J., Polleres, A.: Characteristics of open data CSV files. In: 2016 2nd International Conference on Open and Big Data (2016)
9. Patel-Schneider, P.F., Hayes, P., Horrocks, I.: OWL web ontology language semantics and abstract syntax. In: W3C Recommendation, 10 February 2004 (2004). http://www.w3.org/TR/owl-semantics
10. Lamharhar, H., Chiadmi, D., Benhlima, L.: Ontology-based knowledge representation for eGovernment domain. In: Proceedings of the 17th International Conference on Information Integration and Web-based Applications and Services, iiWAS2015, 11–13 December 2015, Brussels, Belgium. ACM International Conference Proceedings Series, ISBN 978-1-4503-3491-4/15/12 (2015). http://dx.doi.org/10.1145/2837185.2837203
11. Gandon, F., Schreiber, G.: RDF 1.1 XML Syntax. W3C Recommendation, 25 February 2014 (2014). https://www.w3.org/TR/2014/REC-rdf-syntax-grammar-20140225/
12. Attard, J., Orlandi, F., Auer, S.: Value creation on open government data. In: 2016 49th Hawaii International Conference on System Sciences (2016)
13. Tygel, A.F., Auer, S., Debattista, J., Orlandi, F., Campos, M.L.M.: Towards cleaning-up open data portals: a metadata reconciliation approach. In: 2016 IEEE Tenth International Conference on Semantic Computing (ICSC) (2016)
14. CKAN. https://ckan.org/
15. Ermilov, I., Auer, S., Stadler, C.: User-driven semantic mapping of tabular data. In: Proceedings of the 9th International Conference on Semantic Systems, I-SEMANTICS 2013, Graz, Austria, 4–6 September 2013 (2013)
16. Pirozzi, D., Scarano, V.: Support citizens in visualising open data. In: 2016 20th International Conference Information Visualisation (2016)

Toward an Evaluation Model for Open Government Data Portals

Kawtar Younsi Dahbi, Hind Lamharhar, and Dalila Chiadmi

Ecole Mohammedia d'Ingénieurs, Mohammed V University, Rabat, Morocco
[email protected], [email protected], [email protected]

Abstract. In order to promote transparency, accountability, innovation and public participation, governments worldwide have started adopting Open Government Data (OGD) initiatives by making their governmental data available to the public through national OGD portals, where users can search, interact with and analyze published data. Promoting the usage of these portals is necessary, since the value of OGD is perceived only when data is used. Therefore, evaluating their compliance with users' requirements is a major challenge for achieving the expected benefits. For this aim, we propose in this work an evaluation model for OGD portals based on five main dimensions identified as having a high influence on their usage. The proposed evaluation model was applied to evaluate four national OGD portals.

Keywords: OGD · OGD portal · Evaluation model · OGD portal evaluation

1 Introduction

Open Government Data (OGD) can be defined as data and information produced or collected by government organisations that can be freely accessed, used and shared by anyone for any purpose [1, 2]. In recent years, governments have started adopting OGD initiatives by publishing their data related to several domains such as economy, geography, society, health, education, employment and transport [1, 3]. Releasing this data can generate several benefits, such as stimulating innovation, promoting economic growth, providing new innovative social services, enhancing transparency, democratic accountability, citizens' collaboration, participation and self-empowerment [1, 3, 4]. Achieving these expected benefits depends on the effective and efficient usage of OGD portals, as these portals are the interface between government data and users [5]. Therefore, unlocking the potential of OGD portals implies a good understanding of users' needs, requirements and expectations. Identifying the factors influencing the usage of OGD portals and evaluating OGD portals according to these factors will help portal maintainers identify strengths and weaknesses and define the portal's continuous improvement plan to achieve effective usage. We therefore propose in this paper an evaluation model to evaluate OGD portals' fulfillment of users' requirements. For this aim, we first investigated factors that have a high impact on the usage of OGD portals.


Based on the five identified dimensions (Richness of Information, Discoverability, Reusability, Interactivity and Data Quality), we propose an evaluation model for portals. The rest of this paper is structured as follows: in Sect. 2, we present related works; in Sect. 3, we present our methodology and our proposed evaluation model; in Sect. 4, we present the use case; we finally draw our conclusions.

2 Related Works

In this section, we explore and discuss a set of works that focus on the evaluation of OGD portals. This literature review intends to reflect how existing research proposes to evaluate OGD portals and which criteria are identified for evaluation. In [6], the authors presented an assessment of six countries, evaluating their compliance with OGD principles, the achievement of OGD benefits and the level of activity performed on the portals. In [7], the authors perform an evaluation of 13 Brazilian OGD portals according to several criteria, including data access, quantity, size, format, and data and metadata quality. In [5], the authors present a benchmarking framework for the quality evaluation of open data portals at the national level; the framework proposes metrics related to different aspects such as the quantity of datasets and applications, the portal's technical aspects, the portal's features and the available metadata. [8] presented a set of metrics to assess data quality issues in OGD portals; the metrics were applied to 12 portals and linked to several data quality dimensions. In [9, 10], the authors focus on the evaluation of metadata quality in the context of OGD catalogues. Each of the previous works focuses its evaluation on specific criteria and does not present a full evaluation of portals from different perspectives (portal features, content, quantity, quality, metadata, ...). We also noticed that few works present indicators or metrics to measure the different evaluation criteria. In our work, we try to fill this gap by proposing an evaluation model with measurable indicators that evaluates portals from different perspectives, which we call evaluation dimensions, identified as influencing the usage of OGD portals the most.

3 The Proposed Evaluation Model

In this section, we present a proposition for an evaluation model for OGD portals based on two main components, 'Evaluation Dimensions' and 'Evaluation Indicators', as illustrated in Fig. 1. To define this evaluation model, our research methodology was conducted in two well-defined steps.

Step 1 aims to identify the evaluation dimensions, which represent a consolidation of factors influencing the usage of OGD portals and having a high impact on the fulfillment of users' requirements. To achieve this objective, we conducted a literature review and explored existing OGD portals.


Fig. 1. The proposed evaluation model for OGD portal

The identified dimensions are: Richness of Information, which deals with the compliance of the portal with users' needs in terms of content; Discoverability, which deals with the means and mechanisms that enhance data access on the portal; Reusability, which deals with the technical openness of the published data on the portal and the possibility of its reuse; Interactivity, which deals with the openness of the portal to users' feedback, collaboration and interaction with published data; and Data Quality, which deals with the quality of the data published on the portal.

Step 2 aims to identify the evaluation indicators by proposing appropriate indicators that can measure the performance on each dimension.

3.1 Richness of Information Dimension (RI)

The Richness of Information dimension presents users' requirements in terms of data content and quantity. In fact, to allow a high usage of OGD portals, governments should publish a wide range of data covering several domains and compliant with citizens' needs in terms of information. Creating value for users requires the identification of high-value and high-impact data for the public [1]. Evaluating the RI dimension therefore requires the identification of indicators to assess content and quantity and the degree of openness of the portal to users' requests in terms of data. For this aim, we proposed three indicators for this dimension: the Data Quantity Indicator (DQI), the Required Data Indicator (RDI) and the Public Participation Indicator (PPI).

Data Quantity Indicator (DQI)
This indicator evaluates the quantity of data on the portal. It is calculated based on the number of available datasets (ND) on the portal according to Table 1, where M represents the median of the number of datasets of the top ten national OGD portals having the highest number of datasets; we chose the median for this indicator because it is not deviated by extremely large or small values.

Table 1. DQI calculation

ND value:  0–M/4   M/4–M/2   M/2–3M/4   >M
DQI:       0.25    0.5       0.75       1


Required Data Indicator (RDI)
This indicator examines the existence of the data categories identified as being the most valuable for users. The definition of the required categories was based on the analysis of valuable data categories and datasets identified by international indexes [11, 12] and previous works [1, 13]. There are seven categories: Finance and Economy, Health, Education, Environment, Mapping and Cartography, Transport and Logistics, and Society and Demographics. If at least one dataset related to a category exists, the category is considered to exist in the portal. RDI is calculated as follows:

RDI = N/7,   (1)

where N is the number of categories that exist in the portal.

Public Participation Indicator (PPI)
This indicator evaluates the degree of openness to users' requests; it examines the existence of features allowing users to request new datasets. It takes the value 1 or 0 according to the existence of a data request feature.

3.2 Discoverability

The Discoverability dimension deals with the means and mechanisms that enhance data access and navigation on the portal. In fact, to help users take advantage of OGD portals, data must be discoverable on the portal; by discoverable, we mean that users should be able to search and access relevant datasets in a simple and efficient way. This cannot be achieved if metadata is not provided [14]. Metadata enables a better comprehension of data significance and data structure and helps users access data relevant to their needs [15]. To structure and standardize metadata, several standards already exist, such as the DCAT vocabulary, an RDF vocabulary designed to facilitate interoperability between data catalogs [16]. The DCAT vocabulary proposes a set of descriptive metadata necessary to allow data discovery, such as title, description, tags and publisher. Discoverability is also related to the existence of search and navigation mechanisms and tools that allow data accessibility [14]. Evaluating this dimension requires assessing the completeness of descriptive metadata and the existence of data access features; we propose two indicators: the Metadata Completeness Indicator (MCI) and the Data Access Indicator (DAI).

Metadata Completeness Indicator (MCI)
This indicator evaluates the completeness of descriptive metadata at the portal level. For each dataset Dn, a score MCIn is given based on the completeness (non-null values) of the descriptive metadata fields Title, Description, Tags and Publisher. The MCI is the average of the MCIn of the datasets existing on the portal:

MCI = \frac{\sum_{n=1}^{N} MCI_n}{N} \quad (2)
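A minimal sketch of the MCI computation in Eq. (2) is given below, assuming that MCIn is the fraction of the four descriptive fields (title, description, tags, publisher) that are non-empty; the example metadata records are fabricated for illustration.

```python
# Sketch of MCI (Eq. 2): per-dataset completeness of four descriptive fields,
# averaged over all datasets of the portal. Example records are fabricated.
FIELDS = ("title", "description", "tags", "publisher")

def mci_dataset(metadata: dict) -> float:
    filled = sum(1 for f in FIELDS if metadata.get(f) not in (None, "", []))
    return filled / len(FIELDS)

def mci_portal(datasets: list) -> float:
    return sum(mci_dataset(d) for d in datasets) / len(datasets)

datasets = [
    {"title": "Budget 2018", "description": "Annual budget", "tags": ["finance"], "publisher": "Ministry"},
    {"title": "Schools", "description": "", "tags": [], "publisher": "Ministry of Education"},
]
print(mci_portal(datasets))  # (1.0 + 0.5) / 2 = 0.75
```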


Data Access Indicator (DAI)
This indicator assesses the existence of features that enhance data discovery on the portal; it especially examines the existence of three features: search, sort and filter. It takes its values in the range [0, 1] according to the existence of these features.

3.3 Reusability

The Reusability dimension deals with the factors affecting the reuse of published data, as the value of OGD materializes only upon its use [17]. OGD is considered reusable if the data is published under an open license, which grants permission to access, re-use and redistribute data without restrictions [2], and in an electronic format that allows its reusability (machine-readable and non-proprietary) [18]. Publishing data in a non-readable format implies a substantial amount of time to prepare the data before it becomes usable [14]. Reusability also deals with the existence of features that provide an easy way to reuse data, such as applications and APIs (Application Programming Interfaces). Evaluating the Reusability dimension therefore requires evaluating the legal and technical openness of datasets and resources and the existence of features that allow data reuse on the portal. For this aim, we identified three indicators: the License Openness Indicator (LOI), the Format Openness Indicator (FOI) and the Data Reuse Indicator (DRI).

License Openness Indicator (LOI)
This indicator evaluates the openness of the dataset's license according to the Open Definition [2], which presents a list of licenses considered open. For each dataset Dn, a score LOIn is given based on the license of the dataset as follows:

LOI_n = 0 if the license is not open, and LOI_n = 1 otherwise.   (3)

The LOI is the average of the LOIn of the datasets existing on the portal:

LOI = \frac{\sum_{n=1}^{N} LOI_n}{N} \quad (4)

Format Openness Indicator (FOI)
This indicator evaluates the openness of the resources' formats. For each resource Rn, a score FOIn is given based on the format of the resource:

– If the format is not machine-readable: FOIn = 0 (PDF, for example);
– If the format is machine-readable but non-open: FOIn = 0.5 (XLS, for example);
– If the format is machine-readable and open: FOIn = 1 (JSON or CSV, for example).

A list of machine-readable formats identified in [9] was selected for this evaluation. The FOI is calculated for the portal as the average of the FOIn of the published resources:

FOI = \frac{\sum_{n=1}^{N} FOI_n}{N} \quad (5)
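The following sketch illustrates the FOI scoring and averaging of Eq. (5), assuming illustrative sets of open and proprietary machine-readable formats; the actual list of machine-readable formats is the one identified in [9].

```python
# Sketch of FOI (Eq. 5): three-level score per resource format, averaged over the
# portal's resources. The format sets below are illustrative assumptions.
OPEN_MACHINE_READABLE = {"csv", "json", "xml", "rdf"}
CLOSED_MACHINE_READABLE = {"xls", "xlsx"}

def foi_resource(fmt: str) -> float:
    fmt = fmt.lower().lstrip(".")
    if fmt in OPEN_MACHINE_READABLE:
        return 1.0      # machine-readable and open
    if fmt in CLOSED_MACHINE_READABLE:
        return 0.5      # machine-readable but proprietary
    return 0.0          # not machine-readable (e.g. PDF)

def foi_portal(resource_formats: list) -> float:
    return sum(foi_resource(f) for f in resource_formats) / len(resource_formats)

print(foi_portal(["csv", "xls", "pdf", "json"]))  # (1 + 0.5 + 0 + 1) / 4 = 0.625
```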


Data Reuse Indicator (DRI)
This indicator evaluates the existence of specific features that have a high impact on data reusability; in particular, it examines the existence of features that allow data reuse, such as APIs and applications.

3.4 Interactivity

The Interactivity dimension deals with the openness of the portal to user interaction. In fact, to unlock their potential, OGD portals must provide users with features to interact, collaborate and engage with data, such as the possibility to rate, comment on and evaluate the quality of the portal and the available datasets, and to form social and collaborative groups for knowledge exchange [19]. Moreover, to enhance user interactivity, the portal must provide visualization tools to improve data exploration and analysis. Visualization features come in various forms, including tables, graphs, maps, etc., to summarize and give meaning to a large amount of data even if the citizen lacks the required data-processing skills [20, 21]. Evaluating the Interactivity dimension therefore requires examining the existence of interactive, feedback and visualization features on the portal; for this aim, two indicators are proposed: the Collaboration and Feedback Indicator (CFI) and the Visualization Indicator (VI).

Collaboration and Feedback Indicator (CFI)
This indicator examines the existence of features related to collaboration, feedback and evaluation. It assesses the existence of three features: commenting on datasets, rating datasets, and giving feedback about the portal.

Visualization Indicator (VI)
This indicator evaluates the existence of visualization tools and features, such as maps, graphs or applications, to visualize and interact with the data on the portal.

3.5 Data Quality

The Data Quality dimension deals with the quality of the data on the portal. Publishing high-quality data is essential; missing or lacking data hinders data usage and reusability and has a high impact on the quality of applications that reuse the data [15]. The data quality literature proposes several data quality sub-dimensions. In this work, we propose to evaluate three main sub-dimensions: accuracy, completeness and timeliness [21]. For this aim, two indicators were identified: the Accuracy and Completeness Indicator (ACI) and the Timeliness Indicator (TI).

Accuracy and Completeness Indicator (ACI)
This indicator evaluates data accuracy and completeness for each published resource. For each resource Rn, a score ACIn is given based on the accuracy and completeness of the resource fields; it is calculated as the number of accurate values over the total number of values, where all null (incomplete) values are considered non-accurate. ACI is the average of the ACIn of the portal's published resources:


ACI = \frac{\sum_{n=1}^{N} ACI_n}{N} \quad (6)

Timeliness Indicator (TI)
This indicator evaluates data timeliness for the published datasets. It is based on temporal metadata, especially the dataset creation date and the dataset update date, and rests on the following hypothesis: if the dataset creation date equals the dataset update date, the dataset is assumed never to have been updated. TI is calculated as the number of datasets that have been updated over the total number of datasets.
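A minimal sketch of the TI computation under the hypothesis stated above (equal creation and update dates mean the dataset was never updated); the example records are fabricated.

```python
# Sketch of TI: share of datasets whose update date differs from their creation date.
from datetime import date

def ti_portal(datasets: list) -> float:
    updated = sum(1 for d in datasets if d["updated"] != d["created"])
    return updated / len(datasets)

datasets = [
    {"created": date(2017, 1, 1), "updated": date(2018, 3, 1)},   # updated at least once
    {"created": date(2016, 5, 1), "updated": date(2016, 5, 1)},   # presumed never updated
]
print(ti_portal(datasets))  # 0.5
```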

4 Case Study

In this section, we present an evaluation of four OGD portals according to our proposed evaluation model. These portals are the Moroccan portal [22], which is the case study of our research lab, and the portals of three leading countries: Canada [23], France [24] and Australia [25].

Fig. 2. Results after applying the evaluation model to four countries


The use case results are presented in Fig. 2 and Table 2. To prepare the calculation of our evaluation indicators, data was collected from the selected portals in two main steps: we first performed an analysis of the selected portals' content, features and functionalities, and then extracted metadata using the API when available, web scraping, or manual extraction. The Data Quality dimension was not evaluated at this stage, since it requires substantial analysis of its sub-dimensions, in particular the presence of experts to evaluate data accuracy for all published datasets.

Table 2. Indicators calculation for selected portals

Portal    | Richness of information | Reusability      | Discoverability | Interactivity
          | DQI   RDI   PPI         | LOI   FOI   DRI  | MCI   DAI       | VI    CFI
Australia | 0.75  1     1           | 0.92  0.64  1    | 0.96  1         | 1     0.66
Canada    | 1     1     1           | 1     0.83  1    | 0.96  1         | 1     1
France    | 0.75  1     0           | 0.96  0.83  1    | 1     1         | 1     0.33
Morocco   | 0.25  0.8   0           | 1     0.4   0    | 0.8   1         | 0     0

In terms of Richness of Information, the DQI indicator varies from 1 for Canada to 0.25 for Morocco, which has only 136 datasets. The RDI indicator is almost identical for all portals and equals 1, as the required data categories are present in almost all portals, except 'Environment' for Morocco. The PPI indicator is null for France and Morocco, since these portals do not offer data request features. In terms of Discoverability, the MCI indicator varies from 1 for the French portal, which publishes complete descriptive metadata for all datasets, to 0.8 for Morocco, because of its incomplete metadata records. The DAI indicator equals 1 for all countries: they all offer features allowing data access and navigation. In terms of Reusability, the LOI is close to 1 for all portals; datasets are published under an open license, with few exceptions for Australia and France. The FOI indicator varies from 0.83 for France and Canada to 0.4 for Morocco, since several datasets are published in non-open formats that do not enable their reuse by all without technological restrictions. The DRI indicator equals 1 for all portals except the Moroccan portal, which offers neither an API nor applications. In terms of Interactivity, the CFI indicator equals 1 only for the Canadian portal, which offers all feedback and collaboration features, whereas none of these features are provided by the Moroccan portal. The VI indicator equals 1 for all portals except Morocco, which does not provide visualization features and allows only data preview.

5 Conclusion

The main challenge of OGD portals is enabling their effective use and appropriation by users. In this paper, we presented an evaluation model for OGD portals. We identified five main dimensions that have a high impact on the compliance of a portal with users' needs and expectations (Richness of Information, Discoverability, Reusability, Interactivity and Data Quality) and proposed indicators for these dimensions. We finally evaluated four national OGD portals according to our evaluation model.


As future work, we aim to define a global index for evaluating OGD portals and to propose solutions that help OGD portal maintainers gain maturity over the identified dimensions.

References

1. Ubaldi, B.: OGD: towards empirical analysis of OGD initiatives. OECD Working Papers on Public Governance 22, 0_1 (2013)
2. The Open Definition: Defining Open in Open Data, Open Content and Open Knowledge. opendefinition.org
3. Janssen, M., Charalabidis, Y., Zuiderwijk, A.: Benefits, adoption barriers and myths of open data and open government. Inf. Syst. Manag. 29(4), 258–268 (2012)
4. Nafis, F., Yousfi, S., Chiadmi, D.: How big open data can improve public services. In: Proceedings of the Mediterranean Conference on Information and Communication Technologies 2015, pp. 607–612. Springer, Cham (2016)
5. Máchová, R., Lnénicka, M.: Evaluating the quality of open data portals on the national level. J. Theor. Appl. Electr. Commer. Res. 12(1), 21–41 (2017)
6. Gomes, Á., Soares, D.: Open government data initiatives in Europe: northern versus southern countries analysis. In: Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance, pp. 342–350. ACM, October 2014
7. Oliveira, M.I.S., de Oliveira, H.R., Oliveira, L.A., Lóscio, B.F.: Open government data portals analysis: the Brazilian case. In: Proceedings of the 17th International Digital Government Research Conference on Digital Government Research, pp. 415–424. ACM, June 2016
8. Ciancarini, P., Poggi, F., Russo, D.: Big data quality: a roadmap for open data. In: 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService), pp. 210–215. IEEE, March 2016
9. Neumaier, S., Umbrich, J., Polleres, A.: Automated quality assessment of metadata across open data portals. J. Data Inf. Q. (JDIQ) 8(1), 2 (2016)
10. Reiche, K.J., Höfig, E.: Implementation of metadata quality metrics and application on public government data. In: 2013 IEEE 37th Annual Computer Software and Applications Conference Workshops (COMPSACW), pp. 236–241. IEEE, July 2013
11. Open Knowledge: Tracking the State of OGD. Global Open Data Index. index.okfn.org/
12. Open Data Barometer: The Open Data Barometer 2013 Global Report. World Wide Web Foundation and Open Data Institute (2013)
13. Veljković, N., Bogdanović-Dinić, S., Stoimenov, L.: Benchmarking open government: an open data perspective. Gov. Inf. Q. 31(2), 278–290 (2014)
14. Attard, J., Orlandi, F., Scerri, S., Auer, S.: A systematic review of open government data initiatives. Gov. Inf. Q. 32(4), 399–418 (2015)
15. Data on the Web Best Practices. www.w3.org/2013/dwbp/
16. World Wide Web Consortium: Data Catalog Vocabulary (DCAT) (2014)
17. Susha, I., Grönlund, Å., Janssen, M.: Organizational measures to stimulate user engagement with open data. Transforming Gov. People Process Policy 9(2), 181–206 (2015)
18. Tauberer, J., Lessig, L.: The 8 principles of open government data (2007). http://www.opengovdata.org/home/8principles
19. Alexopoulos, C., Zuiderwijk, A., Charapabidis, Y., Loukis, E., Janssen, M.: Designing a second generation of open data platforms: integrating open data and social media. In: International Conference on Electronic Government, pp. 230–241. Springer, Heidelberg, September 2014


20. Graves, A., Hendler, J.: Visualization tools for open government data. In: Proceedings of the 14th Annual International Conference on Digital Government Research, pp. 136–145. ACM, June 2013
21. Wang, L., Wang, G., Alexander, C.A.: Big data and visualization: methods, challenges and technology progress. Dig. Technol. 1(1), 33–38 (2015)
22. Batini, C., Cappiello, C., Francalanci, C., Maurino, A.: Methodologies for data quality assessment and improvement. ACM Comput. Surv. (CSUR) 41(3), 16 (2009)
23. Open Data Maroc. http://www.data.gov.ma
24. Open Data - Government of Canada. https://open.canada.ca/en/open-data
25. Plateforme ouverte des données publiques françaises. https://www.data.gouv.fr/fr/
26. Australian government. https://data.gov.au/

NoSQL Scalability Performance Evaluation over Cassandra

Maryam Abbasi¹, Filipe Sá², Daniel Albuquerque², Cristina Wanzeller², Filipe Caldeira¹,², Paulo Tomé², Pedro Furtado¹, and Pedro Martins²

¹ Department of Computer Sciences, University of Coimbra, Coimbra, Portugal
{maryam,pnf}@dei.uc.pt
² Department of Computer Sciences, Polytechnic Institute of Viseu, Viseu, Portugal
{filipe.sa,dfa,cwanzeller,caldeira,ptome,pedromom}@estv.ipv.pt

Abstract. The implementation of Smart Cities is growing all over the world. From big cities to small villages, information that can support better and more efficient urban management is collected from multiple sources (sensors). Such information has to be stored, queried, analyzed and displayed, aiming to contribute to a better quality of life for citizens and a more sustainable environment. In this context, it is important to choose the right database engine for this scenario. NoSQL databases are now generally accepted by the database community to support application niches. They are known for their scalability, simplicity, and key-indexed data storage, thus allowing easy data distribution and balancing over several nodes. In this paper a NoSQL engine is tested, Cassandra, which is one of the most scalable NoSQL engines and therefore a candidate for use in our application scenario. The paper focuses on horizontal scalability, which means that, by adding more nodes, it is possible to respond to more requests with the same or better performance, i.e., more nodes mean reduced execution time. However, adding more computational resources does not always result in better performance. This work assesses how each workload parameter (e.g., data volume, simultaneous users) influences scalability performance. An overview of the Cassandra database engine is presented in the paper; it is then tested and evaluated using the Yahoo Cloud Serving Benchmark (YCSB).

Keywords: NoSQL · Cassandra · Horizontal scalability · Performance · Database · YCSB · BigData · Distributed · Smart-City · CityAction

1

· Parallel

Introduction

Traditional relational databases occupy a distinct position when compared with NoSQL databases, which represent, innovative solutions for emerging BigData c Springer Nature Switzerland AG 2019  ´ Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 512–520, 2019. A. https://doi.org/10.1007/978-3-030-03577-8_56

NoSQL Scalability Performance Evaluation over Cassandra

513

This leads to the need for efficient execution of requests and efficient data management. With the evolution of new technologies and web-oriented paradigms, unstructured data is growing exponentially, driving the evolution of several technologies. Data storage and its processing requirements result in the 3V concept, characterized by data Volume, Velocity, and Variety, which has affected the industry and opened a new window of opportunity for improving management tools and data storage [12]. In NoSQL, data distribution, i.e., placing parts of the same database across different nodes, solves problems related to data storage, access performance, and fast query execution, although distributing data might also increase communication delays between nodes. With the improvements brought by NoSQL horizontal scalability, the industry is motivated to scale data in order to serve more clients and improve execution speed. Data scalability for system speedup is tightly connected to two main ideas: first, allowing parallel execution of requests, so that more operations can be performed per time unit and more requests are served; second, adding more servers, which contributes to a direct performance improvement while also offering better availability and fault tolerance. Efficient client management is essential for the business of small and large companies, raising the need to reduce waiting time as much as possible, since performance problems and badly implemented systems directly affect company profits [1,2]. With an applied interest in mind, two questions emerge: "how many clients can be supported/served?" and "how fast can it be done?". These two topics relate directly to the ability of the database to scale. Response time is influenced by the number of servers; however, oversizing can positively or negatively influence performance due to communication overheads. Thus, a reasonable size, i.e., an optimal number of servers, must be found. The research community's interest in non-relational databases has increased, contributing to a large number of studies on NoSQL databases. However, the major research conclusions are directed at large industrial environments and cannot be applied to smaller companies. This research study concentrates on the scalability and architecture of the NoSQL database Cassandra. Cassandra's capability to distribute data is examined, together with its capability to manage high load, characterized by a high number of simultaneous requests. First, the architecture of Cassandra is reviewed, including the most relevant mechanisms that influence performance. Secondly, by increasing the number of servers, the data size and the number of threads, different execution times are tested and compared. To evaluate Cassandra's performance, we used the data generator and workloads of the YCSB benchmark [4]. This document is organized as follows: Sect. 2 introduces Cassandra's mechanisms related to performance. Section 3 describes the experimental setup. Section 4 presents the experimental results and respective conclusions.


Section 6 briefly discusses the most relevant related topics. Finally, Sect. 7 presents the conclusions and future work.

2 Architecture

Cassandra is a column-family NoSQL database that stores data as a set of vertically oriented tables. It was first developed by Facebook and is currently developed and maintained by Apache [3]. Its storage architecture is appropriate for write/read-intensive operations (e.g., event logging). Cassandra has a set of characteristics that increase performance, such as indexing and memory mapping; this section describes these mechanisms. Internally, Cassandra follows a Staged Event-Driven Architecture (SEDA) for each request stage, which allows handling several simultaneous requests [13]. Cassandra's capability to increase the number of threads according to demand is provided by the mechanisms present in SEDA, which limit the necessary system variations. However, the maximum number of allowed threads is limited by the operating system and the underlying hardware. Despite this limitation, which affects all database engines, Cassandra keeps showing superior performance when scaling out. As already mentioned, volatile memory usage affects performance and can significantly reduce execution time, leading Cassandra to take advantage of memory mapping [2] instead of accessing the standard disk. In this context, Cassandra relies on two mechanisms: the row cache and the key cache. Figure 1 shows the key cache, which maps into RAM all stored keys, i.e., primary keys that allow a fast key-value search. Besides memory mapping, indexing is also essential to achieve better performance. In Cassandra, the primary key index is generated automatically: because all hash keys are unique, whenever new data is added the index is refreshed and the records are ready to be retrieved. YCSB data retrieval is performed by generating random records and accessing those values (data).

Fig. 1. Cassandra caching


request, it communicates with the respective node and serves the client. For data distribution, each record is assigned to a specific server according to its hashed key. Each node is responsible for all the records that it manages and is also aware of the hash ranges stored on the other servers [9]. This approach allows for efficient request management: since any of the Cassandra nodes may receive a request, the receiving node redirects it, if needed, to the node that stores the requested data.
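As a rough illustration of this hash-based placement (a simplified sketch, not Cassandra's actual partitioner, which assigns Murmur3 tokens on a ring), the following Python snippet maps records to servers by hashing their primary keys and lets any coordinator node redirect a request to the owner; the node names are hypothetical:

```python
import hashlib

NODES = ["node1", "node2", "node3", "node4", "node5"]  # hypothetical cluster

def owner(key: str) -> str:
    # Hash the primary key and map it onto one of the nodes.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def serve(coordinator: str, key: str) -> str:
    # Any node may receive the request; it forwards it to the owner if needed.
    target = owner(key)
    if target != coordinator:
        print(f"{coordinator} redirects key '{key}' to {target}")
    return target

serve("node2", "user:42")
```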

3 Experimental Setup

As previously mentioned, YCSB was used for the experimental setup. YCSB comprises two modules: a data generator and a group of operations to test the database. Data is generated randomly and organized in records with 10 fields. The workloads consist of tests where several parameters can be configured, such as the proportion of read/write operations, the total number of operations, and the number of records. For the purpose of our tests, two YCSB workloads were executed: Workload C, consisting of 100% read operations, and Workload E, consisting of 95% scan and 5% insert operations. These workloads were chosen to test Cassandra's read performance and to study how the database performs while serving read requests from several users simultaneously. The number of YCSB client threads was also varied during Cassandra's evaluation, from 1 to 5000. Different scenarios were created regarding the database size (1GB, 10GB, 100GB), allowing Cassandra's scalability to be evaluated using 1 to 10 servers. The results allow us to study how Cassandra scales and how the execution time improves as the number of servers increases. The 10 nodes used (all identical) had the following characteristics: i7, 3.4GHz, 16GB RAM. The tests evaluate Cassandra's scalability while executing workloads C and E, with 1, 5 and 10 nodes, and database sizes of 1GB, 10GB and 100GB. While Cassandra is replying to requests, the number of threads is increased from 500 to 1500 and 5000.
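A minimal sketch of how such a test matrix can be driven from Python with the YCSB command-line client is shown below; the contact points, paths and property values are assumptions to be adapted to the actual cluster, and the exact binding name (cassandra-cql) and flags may vary between YCSB versions:

```python
import itertools
import subprocess

HOSTS = "10.0.0.1,10.0.0.2"              # assumed cluster contact points
THREADS = [1, 500, 1500, 5000]            # client thread counts used in the tests
WORKLOADS = ["workloadc", "workloade"]    # 100% read / 95% scan + 5% insert
OPERATIONS = 10000

for wl, threads in itertools.product(WORKLOADS, THREADS):
    cmd = [
        "bin/ycsb", "run", "cassandra-cql",
        "-P", f"workloads/{wl}",
        "-p", f"hosts={HOSTS}",
        "-p", f"operationcount={OPERATIONS}",
        "-threads", str(threads),
    ]
    # Each run reports throughput and latencies, from which execution time is derived.
    subprocess.run(cmd, check=True)
```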

4 Scalability Results

This section shows the experimental results concerning Cassandra's scalability. All execution times are in seconds and were obtained from the execution of 10,000 operations. Scalability was evaluated by executing workload C (100% read) and workload E (95% scan and 5% insert) with different numbers of nodes (1, 5 and 10), while the database size was varied between 1GB, 10GB and 100GB. Figure 2 shows that with a small database (1GB), the execution time increases continuously. As the data volume increases (10GB and 100GB), the clusters with 5 and 10 nodes show better performance, while the single node keeps getting worse results. The change in database size explains the obtained


results: from that point on, 5 nodes started showing better results. However, the 100GB dataset is considered too small to demonstrate the higher performance of the 10-node cluster. Figure 3 shows workload E (95% scan and 5% insert). When compared with workload C (Fig. 2), there are some resemblances. In workload C, the overall execution time using a single server is worse than expected due to the workload type (100% read requests). With 1 server and 100GB of data, a very long execution time was observed before all requests were served. In the execution of workload C there was a total of 10,000 read operations, where one read operation consists of a simple get. In workload E, scan operations are performed, where one scan operation consists of reading between 1 and 100 records. Considering the results for workloads C and E, it is possible to conclude that Cassandra is optimized to perform simple operations (get/put), and is not as efficient when performing scan operations.

5 Load Management

In this section, load management in Cassandra is evaluated by performing multiple simultaneous requests, allowing us to test whether Cassandra is capable of handling high load and to analyze the impact on execution time in the database cluster. For this purpose, the tests were performed using 1 server with 10GB of data, and a 5-node cluster with 50GB. Figure 4 shows the results obtained for workload C. It is possible to conclude that, when using a single thread with a smaller dataset, a single server can perform faster. However, as the number of threads/requests increases, parallel query execution has better performance only up to a certain limit. In Fig. 4, once there are more than 600 threads, the system is no longer able to improve its performance. Moreover, with 1200 and 6000 threads, execution times might still be acceptable; nevertheless, performance is degrading.

Fig. 2. Workload C - 100% read - Cassandra Scalability


Fig. 3. Workload E - 95% scan short ranges and 5% insert - Cassandra Scalability

Fig. 4. Workload C

Fig. 5. Workload E


Figure 5 shows how Cassandra handles workload E. Cassandra is capable of scaling and improving performance, since the execution time decreases continuously. It is important to note that it was not possible to execute workload E using a single server with 100 threads: since query execution is not performed in parallel in that configuration, response times were too long, leading to "timeout" errors.

6 Summary of Related Work

In [10], the authors analyze HBase and Riak throughput, scalability and hardware usage to study their performance in different scenarios. Other related work in the field [11] evaluates and compares the performance of the NoSQL databases Voldemort, Cassandra, and HBase. The authors concentrate on the execution of scan operations with the objective of understanding their performance impact on NoSQL databases. Like the work proposed in this paper, scan operations are executed on Cassandra; however, our focus is on understanding how scalability can improve performance. In [8], the authors introduced a method for reducing the execution time of simple operations and range queries in Cassandra. Another technique, CCIndex for Cassandra, was proposed in [7]. This method uses indexes that allow faster execution of requests over Cassandra. The authors advocate that Cassandra should enable such indexes and that CCIndex significantly increases performance. In comparison with these approaches, our performance study focuses on the standard Cassandra version, without modifications. Our research shows how Cassandra would behave in real enterprise environments. Further performance improvements in Cassandra were proposed in [6]; the approach was to combine Cassandra with Hadoop (a MapReduce-oriented framework) [5]. The reported experiments show that it is possible to improve Cassandra's performance by combining it with other mechanisms. In this work, the performance impact of cluster size on Cassandra is evaluated; such an evaluation is not addressed in any of the referenced related work.

7 Conclusions and Future Work

This work evaluates one possibility for storing a significant amount of data collected from sensors in the context of Smart Cities for the CityAction project. The study of Cassandra allowed us to understand the internal architecture and the mechanisms that affect performance under stress scenarios. The use of YCSB get and scan operations during the experimental study aimed to understand how efficiently the studied database can execute simultaneous queries in parallel, assuming the database was already loaded. These multiple requests can be interpreted as many users accessing data simultaneously and


continuously. The obtained results allow us to study how high load is managed by the database. The findings allow us to conclude that Cassandra is highly scalable and that performance improves when data is distributed and queries are executed in parallel. Moreover, it is also capable of efficiently managing multiple requests while, at the same time, the overall execution time decreases. Other results also show that Cassandra can scale up only to a certain point, becoming overloaded when the cluster characteristics (hardware) are not sufficient to respond to the demand. When executing simple operations (gets) and key-value data accesses, Cassandra shows fast performance. However, when executing scans, performance decreases and the system is not as fast as with get operations. Note that, regardless of the type of operation, the Cassandra database can always scale to adapt to the current load. By doing so, it distributes data across nodes, creating some network overhead. As future work, in the context of the Smart-Cities CityAction project, other NoSQL databases will be tested to accommodate large amounts of data arriving from multiple sensor sources. Acknowledgements. "This article is a result of the CityAction project CENTRO-01-0247-FEDER-017711, supported by Centro Portugal Regional Operational Program (CENTRO 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF), and also financed by national funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., under the project UID/Multi/04016/2016. Furthermore, we would like to thank the Instituto Politécnico de Viseu for their support." Special thanks to Maryam Abbasi and Pedro Martins for their persistence and availability in making this paper possible.

References 1. Beernaert, L., Gomes, P., Matos, M., Vila¸ca, R., Oliveira, R.: Evaluating Cassandra as a manager of large file sets. In: Proceedings of the 3rd International Workshop on Cloud Data and Platforms, pp. 25–30. ACM (2013) 2. Carpenter, J., Hewitt, E.: Cassandra: The Definitive Guide: Distributed Data at Web Scale. O’Reilly Media, Inc. (2016) 3. Cassandra, A.: The apache software foundation. The Apache Cassandra project (2013) 4. Cooper, B.F., Silberstein, A., Tam, E., Ramakrishnan, R., Sears, R.: Benchmarking cloud serving systems with YCSB. In: Proceedings of the 1st ACM Symposium on Cloud Computing, pp. 143–154. ACM (2010) 5. Dean, J., Ghemawat, S.: MapReduce: a flexible data processing tool. Commun. ACM 53(1), 72–77 (2010) 6. Dede, E., Sendir, B., Kuzlu, P., Hartog, J., Govindaraju, M.: An evaluation of Cassandra for Hadoop. In: 2013 IEEE Sixth International Conference on Cloud Computing (CLOUD), pp. 494–501. IEEE (2013) 7. Feng, C., Zou, Y., Xu, Z.: Ccindex for Cassandra: a novel scheme for multidimensional range queries in Cassandra. In: 2011 Seventh International Conference on Semantics Knowledge and Grid (SKG), pp. 130–136. IEEE (2011)


8. Fukuda, S., Kawashima, R., Saito, S., Matsuo, H.: Improving response time for Cassandra with query scheduling. In: 2013 First International Symposium on Computing and Networking (CANDAR), pp. 128–133. IEEE (2013) 9. Garefalakis, P., Papadopoulos, P., Manousakis, I., Magoutis, K.: Strengthening consistency in the Cassandra distributed key-value store. In: IFIP International Conference on Distributed Applications and Interoperable Systems, pp. 193–198. Springer (2013) 10. Konstantinou, I., Angelou, E., Boumpouka, C., Tsoumakos, D., Koziris, N.: On the elasticity of NoSQL databases over cloud management platforms. In: Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pp. 2385–2388. ACM (2011) 11. Pirzadeh, P., Tatemura, J., Po, O., Hacıg¨ um¨ u¸s, H.: Performance evaluation of range queries in key value stores. J. Grid Comput. 10(1), 109–132 (2012) 12. Talia, D.: Clouds for scalable big data analytics. Computer 46(5), 98–101 (2013) 13. Welsh, M., Culler, D., Brewer, E.: SEDA: an architecture for well-conditioned, scalable internet services. In: ACM SIGOPS Operating Systems Review, vol. 35, pp. 230–243. ACM (2001)

A Novel Filter Approach for Band Selection and Classification of Hyperspectral Remotely Sensed Images Using Normalized Mutual Information and Support Vector Machines Hasna Nhaila(&), Asma Elmaizi, Elkebir Sarhrouni, and Ahmed Hammouch Laboratory LRGE, ENSET, Mohammed V University, B.P. 6207 Rabat, Morocco [email protected], [email protected], [email protected], [email protected]

Abstract. Band selection is a challenging task in the classification of hyperspectral remotely sensed images (HSI). This results from the high spectral resolution, the many class outputs and the limited number of training samples. For this purpose, this paper introduces a new filter approach for dimension reduction and classification of hyperspectral images using information theory (normalized mutual information) and support vector machines (SVM). The method consists of selecting a minimal subset of the most informative and relevant bands from the input datasets for better classification efficiency. We applied the proposed algorithm to two well-known benchmark datasets gathered by NASA's AVIRIS sensor over Indiana and Salinas Valley in the USA. The experimental results were assessed based on different evaluation metrics widely used in this area. The comparison with state-of-the-art methods shows that our method produces good performance with a reduced number of selected bands in a short time. Keywords: Dimension reduction · Hyperspectral images · Band selection · Normalized mutual information · Classification · Support vector machines

1 Introduction
Recently, hyperspectral imagery (HSI) has become a principal source of information in many applications such as astronomy, food processing, mineralogy and especially land cover analysis [1, 2]. Hyperspectral sensors provide more than a hundred contiguous and regularly spaced bands, from visible to near-infrared light, of the same observed region. These bands are combined to produce a three-dimensional data structure called a hyperspectral data cube. Thus, an entire reflectance spectrum is captured at each pixel of the scene. This large amount of information increases the discrimination between the different objects of the scene. Unfortunately, it raises many challenges in storage, processing time and especially in the classification schemes, due to the many class outputs and the limited number of training samples, which is known as the curse of dimensionality [3]. Also, the presence of irrelevant and redundant bands complicates the


learning algorithms. To overcome these problems, dimensionality reduction (DR) techniques based on feature selection or extraction have become an essential preprocessing step that significantly enhances classification performance. This article focuses on feature selection methods, which include filter and wrapper approaches depending on whether the learning algorithm is used in the selection process. Our proposed method is a filter approach. The rest of this paper is organized as follows: Sect. 2 presents related work on feature selection based methods for dimension reduction of hyperspectral images. In Sect. 3, we describe the proposed approach. Section 4 presents the datasets and discusses the experimental results in comparison with state-of-the-art methods. Finally, some conclusions of our work are drawn in Sect. 5.

2 Related Works
Feature selection approaches have attracted increasing international interest in the last decades, and various methods have been proposed to overcome the HSI classification challenges. The maximal statistical dependency criterion based on mutual information (MRMR) was used in [4] to select good features for HSI classification. In [5], a greedy optimization strategy was applied to select features from HSI data. In [6], the authors proposed an adaptive clustering for band selection. Additionally, in [7], an unsupervised method for band selection by dominant set extraction (DSEBS) was proposed using structural correlation. In [8], the Gray Wolf Optimizer (GWO) was used to reformulate feature selection as a combinatorial problem based on class separability and accuracy rate by modeling five objective functions. New methods are still appearing in the literature. In [9], a new method for dimension reduction of hyperspectral images (GLMI) was proposed using GLCM features and mutual information. In [10], the authors proposed a semi-supervised local Fisher discriminant analysis using pseudo-labeled samples for dimensionality reduction of HSI. In our work, we propose a new filter approach called NMIBS based on information theory; we use normalized mutual information with support vector machines (SVM) to address the curse of dimensionality in HSI classification. To confirm the effectiveness of the proposed approach, experiments are carried out on the NASA AVIRIS Indian Pines and Salinas hyperspectral datasets, with comparison to several band selection and classification techniques for hyperspectral images.

3 Methodology
The main aim of this work is to improve the classification performance of hyperspectral images by introducing a new filter approach for band selection. It consists of selecting the optimal subset of relevant bands and removing the noisy and redundant ones using normalized mutual information (NMI).


According to the general principle of feature selection methods [11], our algorithm comprises four main steps:
• The generation procedure of the candidate bands, using sequential feature selection starting with an empty set.
• The evaluation function used to judge the goodness of the current subset. In this step, we measure information and dependence using NMI.
• The stopping criterion, which decides when to stop the search. It depends on the number of iterations and the number of features to be selected.
• The validation procedure, which tests the effectiveness of the retained subset of bands. In this step, we apply the SVM classifier to two real-world benchmark datasets and compare the obtained results with state-of-the-art methods.
The remainder of this section gives a brief explanation of band selection using normalized mutual information, defines support vector machines, and presents the complete selection process of the proposed algorithm.

3.1 Normalized Mutual Information for Band Selection

Mutual information has been widely studied and successfully applied in hyperspectral remote sensing imagery to select the optimal subset of features [4, 5, 9]. It measures the dependence between two random variables, which are, in our case, the ground truth, noted GT, and each candidate band of the input datasets, noted B. In this work, we use the normalized mutual information given as:

NMI(GT, B) = (H(GT) + H(B)) / H(GT, B)   (1)

This measure represents the ratio of the sum of the entropies of the ground truth GT and of each band B to the joint entropy of GT and B. It is higher when there is a strong similarity between GT and the band; a low value means a small similarity, and a zero value shows that they are independent, which allows the noisy bands to be eliminated. The NMI is used in the generation and evaluation steps of the proposed methodology; see the proposed algorithm in Sect. 3.3.
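A minimal Python sketch of Eq. (1), assuming the ground truth and the candidate band have been flattened into 1-D arrays and the band has been quantized into discrete levels beforehand:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy (base 2) of a discrete 1-D label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    # Entropy of the joint distribution of two discrete 1-D arrays.
    _, counts = np.unique(np.stack([x, y], axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def nmi(gt, band):
    # Eq. (1): NMI(GT, B) = (H(GT) + H(B)) / H(GT, B)
    return (entropy(gt) + entropy(band)) / joint_entropy(gt, band)
```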

3.2 Support Vector Machines

The Support Vector Machine (SVM) is applied in the validation step of our proposed method to generate the classified maps using the selected bands. It is one of the most widely used supervised classifiers in works related to hyperspectral remotely sensed image applications [12, 13]. Its principle consists of constructing an optimal hyperplane separating two classes by maximizing the margin between them. SVM is adopted in our work since it is able to work with a limited number of training samples. In our experiments, we use the radial basis function (RBF) as a kernel to map the input data to a higher-dimensional space. Three training sets (10%, 25% and 50%) are randomly constructed to train the classifier, in order to show the impact of the training sample size on the classification rate.
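The validation step can be reproduced with any standard SVM implementation; the sketch below uses scikit-learn, with illustrative parameter values that are not necessarily those used in the paper:

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate(selected_bands, labels, train_ratio=0.10):
    # selected_bands: (n_pixels, n_selected) matrix of the reduced data
    # labels: ground-truth class of each pixel
    X_train, X_test, y_train, y_test = train_test_split(
        selected_bands, labels, train_size=train_ratio, stratify=labels)
    clf = SVC(kernel="rbf", C=100, gamma="scale")  # RBF kernel, illustrative C
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))  # overall accuracy (OA)
```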

3.3 Proposed Algorithm

The complete selection process of our proposed methodology is as follows:


In this algorithm, to generate the optimal subset of reduced bands, the set of selected bands is initialized with the band that has the largest NMI with the ground truth. An approximated reference map, called GTest (initially GTest0), is then built by averaging the previously selected bands with the candidate band. The current band is retained if it increases the last value of NMI(GT, GTest), used as the evaluation function; otherwise, it is rejected. The threshold Th is introduced to control the permitted redundancy. The stopping criterion is tested depending on the number of bands to be selected k and the number of iterations z. Finally, the validation step is achieved using the SVM classifier to produce the classified maps as the output of the algorithm. Several evaluation metrics are then calculated based on the confusion matrix for comparison with various other techniques.
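Since the algorithm listing itself is not reproduced here, the following Python sketch is only our reading of the description above (it reuses the nmi helper from the earlier sketch; the candidate ordering and the way GTest is rounded back to discrete labels are assumptions):

```python
import numpy as np

def select_bands(bands, gt, k, th=0.0):
    # bands: list of quantized 1-D band arrays; gt: ground-truth labels; k: target size
    scores = [nmi(gt, b) for b in bands]          # NMI of every band with GT
    first = int(np.argmax(scores))                # start with the most informative band
    selected, gtest = [first], bands[first].astype(float)
    best = nmi(gt, np.round(gtest).astype(int))
    for idx in np.argsort(scores)[::-1]:          # candidates in decreasing NMI order
        if len(selected) == k:
            break
        if idx in selected:
            continue
        candidate = (gtest * len(selected) + bands[idx]) / (len(selected) + 1)
        score = nmi(gt, np.round(candidate).astype(int))
        if score > best + th:                     # retain the band only if NMI improves
            selected.append(int(idx))
            gtest, best = candidate, score
    return selected
```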

4 Experimental Results and Discussion

4.1 Datasets Description

In order to evaluate the performance of the proposed approach, the experiments are conducted on two challenging hyperspectral datasets widely used in the literature [14, 15] and freely available at [16]. The first one was captured over the Indian Pines region in Northwestern Indiana. The second was gathered over Salinas Valley in California, USA. Both of them were collected by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Table 1 shows the characteristics of these datasets. The color composite and the corresponding ground truth reference with classes are presented in (a) and (b) of Fig. 1, respectively.

Table 1. Characteristics of the hyperspectral images used in this work.

Dataset  | Number of bands | Number of classes | Size of the images | Wavelength range | Spatial resolution
Indian   | 224             | 16                | 145 × 145          | 0.4–2.5 µm       | 20 m pixels
Salinas  | 224             | 16                | 217 × 512          | 0.4–2.5 µm       | 3.7 m pixels

4.2 Parameters Setting and Performance Comparisons

The proposed method is compared with various feature selection methods, including mutual information maximization (MIM) [17], MRMR [4] and GWO [8]. The SVM classification using all bands, without dimension reduction, is also included in the comparison. All experiments are implemented in Matlab on a 64-bit quad-core computer with a 2.1 GHz CPU and 3 GB of RAM. The libsvm package available at [18] was used for the SVM multiclass classifier with RBF kernel. The proposed algorithm stops when the desired number of selected bands is reached. The hyperspectral input datasets are randomly divided into training and testing sets; we consider three cases with ratios of 1:10, 1:4 and 1:2.


Fig. 1. The Color composite and the corresponding ground truth with class labels for: (a) Indian Pines and (b) Salinas dataset.

4.3 Results and Discussion

The experimental results on the Indian Pines and Salinas datasets using the proposed approach are presented in this subsection. The classification performances are assessed using two evaluation metrics widely used in hyperspectral remotely sensed image applications: the Average Accuracy (AA) and the Overall Accuracy (OA). The AA measures the average classification accuracy over all classes; it is calculated as the ratio of the sum of each class accuracy to the number of classes. The OA gives the number of correctly predicted pixels over all the test samples. The computational time is also reported. Tables 2 and 3 show the Overall Accuracy obtained for the Indian and Salinas datasets, respectively. The first column in each table gives the number of selected bands; the remaining columns show the OA obtained using different percentages of training samples (10%, 25% and 50%).

Table 2. The Overall Accuracy obtained using the proposed algorithm on the Indian Pines dataset for different numbers of selected bands and training sets.

Number of selected bands | 10% training | 25% training | 50% training
10                       | 55.2         | 56.72        | 57.33
20                       | 59.65        | 61.28        | 62.76
30                       | 68.23        | 71.61        | 73.98
40                       | 72.06        | 77.29        | 81.93
50                       | 74.00        | 79.65        | 84.84
60                       | 76.41        | 83.63        | 88.63
70                       | 77.60        | 84.56        | 90.24
80                       | 77.83        | 84.55        | 90.74
90                       | 80.90        | 86.98        | 93.90
100                      | 80.83        | 87.25        | 93.48
All bands                | 60.74        | 69.42        | 75.72
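The two metrics described above can be computed directly from the confusion matrix of a classified map; a minimal sketch:

```python
import numpy as np

def oa_aa(confusion):
    # confusion[i, j]: number of test pixels of class i predicted as class j
    confusion = np.asarray(confusion, dtype=float)
    oa = np.trace(confusion) / confusion.sum()              # overall accuracy
    per_class = np.diag(confusion) / confusion.sum(axis=1)  # per-class accuracy
    return oa, per_class.mean()                             # OA, AA
```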


Table 3. The Overall Accuracy obtained using the proposed algorithm on the Salinas dataset for different numbers of selected bands and training sets.

Number of selected bands | 10% training | 25% training | 50% training
10                       | 80.35        | 81.36        | 81.81
20                       | 88.13        | 88.54        | 88.90
30                       | 89.84        | 90.27        | 90.58
40                       | 90.66        | 91.29        | 91.63
50                       | 91.80        | 92.41        | 92.79
60                       | 92.26        | 92.86        | 93.27
70                       | 92.59        | 93.23        | 93.59
80                       | 92.65        | 93.28        | 93.80
90                       | 92.62        | 93.36        | 93.91
100                      | 92.65        | 93.48        | 94.08
All bands                | 87.31        | 88.77        | 90.02

From these results, we can make three main remarks. First, it is obvious that the number of pixels used for training affects the accuracy rate: the OA increases with the size of the training set for both the Indian and Salinas images. For example, with 70 selected bands in the Indian Pines scene, we obtain 77.60%, 84.55% and 90.24% for 10%, 25% and 50% training sets, respectively (see Table 2). For Salinas, we obtain OA values of 92.59%, 93.23% and 93.59%, respectively (see Table 3). The classified maps obtained for these values are illustrated in Fig. 2 for Indian Pines and in Fig. 3 for the Salinas scene.

Fig. 2. The ground truth map of Indian Pines dataset (a) and the classified maps using the proposed approach with different training sets: 50% (b), 25% (c) and 10% in (d).

Second, the combination of normalized mutual information and the SVM classifier in our proposed methodology produces good classification results even with a limited number of training pixels. In the case of a 10% training set, with just 40 bands selected out of 224, the OA reaches 72.06% on the Indian Pines dataset and 90.66% on the Salinas image. Third, it is clear that using a subset of relevant bands gives better classification results than using all bands. For Indian Pines (see Table 2, 50% training), the OA using all bands is equal to 75.72%, whereas it reaches 90.24% with a reduced subset of 70 bands.


Fig. 3. The ground truth map of Salinas dataset (a) and the classified maps using the proposed approach with different training sets: 50% (b), 25% (c) and 10% in (d).

For Salinas, we get 87.31% using all bands against 90.66% with just 40 selected bands, which confirms the effectiveness of the proposed methodology in selecting a reduced set of optimal bands and discarding the redundant and noisy ones that decrease the classification rate. In the next experiments, the proposed approach is compared with other methods from the literature using only 10% of the samples as a training set. The obtained results are presented in Table 4 and evaluated using AA, OA and the running time.

Table 4. The Average Accuracy AA (%), Overall Accuracy OA (%) and computational time (s) obtained by the proposed algorithm in comparison with different methods on the Indian Pines and Salinas datasets.

Methods         | Indian Pines AA | Indian Pines OA | Indian Pines Time | Salinas AA | Salinas OA | Salinas Time
All bands       | 42.67           | 60.74           | 42.83             | 91.45      | 87.31      | 397.47
MIM             | 56.06           | 73.54           | 12.05             | 93.54      | 88.91      | 126.24
MRMR            | 58.70           | 75.70           | 24.87             | 93.56      | 89.67      | 151.55
Gwo-J1          | 67.82           | 71.28           | 170.3             | 94.46      | 89.07      | 1166
Gwo-J2          | 62.57           | 67.44           | 1.7               | 94.68      | 89.25      | 1.05
Gwo-J3          | 64.10           | 70.29           | 0.48              | 94.89      | 89.41      | 5.34
Gwo-J4          | 73.89           | 73.67           | 250               | 97.37      | 95.38      | 1221
Gwo-J5          | 70.43           | 70.65           | 197               | 95.50      | 90.80      | 1198
Proposed NMIBS  | 70.41           | 77.90           | 8.77              | 96.47      | 92.54      | 84.67

It can be seen that our algorithm outperforms the other methods while requiring little running time. The lowest results are obtained by the SVM classification using all bands, which confirms the importance of dimension reduction as a preprocessing step of HSI classification to remove the irrelevant bands.


Furthermore, we can see from Table 4 that the running time increases with the size of the dataset used. Classification without dimension reduction needs significant time compared to the other feature selection methods. The MIM method outperforms the SVM using all bands, but it gives the lowest performance among the dimension reduction methods (MRMR, GWO and the proposed NMIBS), because it selects bands based only on mutual information maximization, without treating the redundancy between the selected bands. Gwo-J4 exceeds our method by about 3% in AA, but it requires much more time: 250 s against just 8.77 s for our method. On the Salinas dataset, the running time of Gwo-J4 is 1221 s against only 84.67 s for our proposed method.

5 Conclusion
In this paper, we proposed a new band selection method to address the curse of dimensionality challenge in hyperspectral image classification. Normalized mutual information was adopted to generate and evaluate the selected features using a filter approach. The validation was done using the supervised SVM classifier with an RBF kernel. The experiments were performed on two well-known benchmark datasets collected by NASA's AVIRIS hyperspectral sensor. Various training and testing sets were randomly constructed to run the proposed algorithm, with ratios of 1:10, 1:4 and 1:2. The obtained results were assessed using evaluation metrics widely used in this area. The comparison with other methods from the literature shows the effectiveness of our approach. Overall, the major advantage of the proposed method is that it is simple, fast and gives satisfactory results comparable to those of more complicated methods, which is what real-world applications need.

References 1. Kurz, T.H., Buckley, S.J.: A review of hyperspectral imaging in close range applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 41, 865 (2016) 2. Nhaila, H., Sarhrouni, E., Hammouch, A.: A survey on fundamental concepts and practical challenges of Hyperspectral images. In: 2014 Second World Conference on Complex Systems (WCCS), pp. 659–664. IEEE, Agadir (2014) 3. Hughes, G.: On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 14(1), 55–63 (1968) 4. Peng, H., Long, F., Ding, C.: Feature selection based on mutual information criteria of maxdependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1226–1238 (2005) 5. Guo, B., Damper, R.I., Gunn, S.R., Nelson, J.D.: A fast separability-based feature-selection method for high-dimensional remotely sensed image classification. Pattern Recogn. 41(5), 1653–1662 (2008)


6. Ren, J., Kelman, T., Marshall, S.: Adaptive clustering of spectral components for band selection in hyperspectral imagery. In: Hyperspectral Imaging Conference, Glasgow, United Kingdom, pp. 90–93 (2011) 7. Zhu, G., Huang, Y., Lei, J., Bi, Z., Xu, F.: Unsupervised hyperspectral band selection by dominant set extraction. IEEE Trans. Geosci. Remote Sens. 54(1), 227–239 (2016) 8. Medjahed, S.A., Saadi, T.A., Benyettou, A., Ouali, M.: Gray wolf optimizer for hyperspectral band selection. Appl. Soft Comput. 40, 178–186 (2016) 9. Nhaila, H., Sarhrouni, E., Hammouch, A.: A new filter for dimensionality reduction and classification of hyperspectral images using GLCM features and mutual information. Int. J. Signal Imag. Syst. Eng. 11(4), 193–205 (2018) 10. Wu, H., Prasad, S.: Semi-supervised dimensionality reduction of hyperspectral imagery using pseudo-labels. Pattern Recogn. 74, 212–224 (2018) 11. Dash, M., Liu, H.: Feature selection for classification. Intell. Data Anal. 1(3), 131–156 (1997) 12. Nhaila, H., Elmaizi, A., Sarhrouni, E., Hammouch, A.: New wrapper method based on normalized mutual information for dimension reduction and classification of hyperspectral images. In: 2018 4th International Conference on Optimization and Applications (ICOA), pp. 1–7. IEEE, Mohammedia (2018) 13. Xie, L., Li, G., Xiao, M., Peng, L., Chen, Q.: Hyperspectral image classification using discrete space model and support vector machines. IEEE Geosci. Remote Sens. Lett. 14(3), 374–378 (2017) 14. Fang, L., Li, S., Duan, W., Ren, J., Benediktsson, J.A.: Classification of hyperspectral images by exploiting spectral–spatial information of superpixel via multiple kernels. IEEE Trans. Geosci. Remote Sens. 53(12), 6663–6674 (2015) 15. Yu, S., Jia, S., Xu, C.: Convolutional neural networks for hyperspectral image classification. Neurocomputing 219, 88–98 (2017) 16. Hyperspectral datasets for classification. http://lesun.weebly.com/hyperspectral-data-set.html . Accessed 22 June 2018 17. Viola, P., Wells III, W.M.: Alignment by maximization of mutual information. Int. J. Comput. Vision 24(2), 137–154 (1997) 18. Libsvm package for multiclass classification. https://www.csie.ntu.edu.tw/*cjlin/libsvm/. Accessed 22 June 2018

Differences Between Clusterings as Distances in the Covering Graph of Partitions Giovanni Rossi(B) Department of Computer Science and Engineering- DISI, University of Bologna, Bologna, Italy [email protected]

Abstract. Cluster analysis in data mining often requires to quantify the difference between alternative clusterings or partitions of a fixed data set: typically, when varying the input in multiple runs of a local-search clustering algorithm, it is important to measure how apart the resulting outputs actually are. This work addresses the issue by means of paths in the covering graph of partitions, hence dealing with these latter as lattice elements. After paralleling the standard Hamming distance between subsets by counting how many atoms of the lattice are included in a symmetric difference between partitions, the approach is generalized by weighting the edges in the covering graph of the lattice through arbitrary symmetric, order-preserving/inverting and super/submodular partition functions, examples being rank and entropy. The induced metrics then obtain as the overall weight of lightest paths in such a weighted graph.

Keywords: Partition Path · Rank

1

· Metric · Atomic lattice · Covering graph

Introduction

Cluster analysis plays a central role in data mining, where partitions of a data set are clusterings. The issue addressed here, namely how to measure a distance between any two partitions of a given set, arises in applicative scenarios where different clusterings of a fixed data set have to be compared. Specifically, a local-search clustering algorithm shall generally output different data partitions for different initializations (and/or different parametrizations, when applicable), while of course alternative clustering algorithms most likely partition the same set in alternative manners. These situations all require a partition distance function, i.e. a metric on the partition lattice [11,12,14]. The topic began receiving attention from the theoretical perspective in the mid 60s [10,15,16], while more recently has acquired increasing relevance for sibling relationship reconstruction in bioinformatics, where the focus is mostly on a distance relying on maximum matching [7] and computable via the assignment c Springer Nature Switzerland AG 2019  ´ Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 531–541, 2019. A. https://doi.org/10.1007/978-3-030-03577-8_58


problem [4,6,8]. However, another distance employed for the same purpose [3] obtains axiomatically from information theory by means of the entropy of partitions, and is called variation of information VI [11]. This metric VI is shown below to correspond to a (i.e. any) lightest path in the covering graph of the partition lattice, when weights on edges are quantified by entropy. In fact, the aim of the present paper is precisely to detail how meaningful partition distances obtain as lightest paths in the covering graph of the lattice, when the weight of edges is determined as the difference between the two real values taken at endpoints by a suitable partition function. In other words, rather than focusing exclusively on path length in the covering graph of the lattice [5], the concern is more generally with path weight, for alternative suitable weighting functions. The idea to place weights on the edges of covering graphs is not new, of course, especially insofar as distributive lattices are involved, in which case metrics may be defined in terms of valuations or modular functions [2,9,13]. In this view, as valuations of the geometric indecomposable lattice of partitions are constant functions [1], the proposed method relies on symmetric, order-preserving/inverting and super/submodular functions.

2 Preliminaries

Consider an n-set of data, and denote by N = {1, . . . , n} the set of indices (or labels) of these data. The Boolean lattice of subsets of N and the geometric lattice of partitions of N respectively are (2^N, ∩, ∪) and (P^N, ∧, ∨). The former is ordered by inclusion ⊇, while the latter is ordered by coarsening ≥ [1]. Denote generic subsets and partitions respectively by A, B ∈ 2^N and P, Q ∈ P^N. Recall that a partition P = {A1, . . . , A|P|} is a collection of (non-empty) pair-wise disjoint subsets, called blocks, whose union is N. For any P, Q ∈ P^N, if P ≥ Q, then every block B ∈ Q is included in some block A ∈ P, i.e. A ⊇ B. The bottom partition thus is P⊥ = {{1}, . . . , {n}} (like the bottom subset is ∅), while the top one is P⊤ = {N} (like N is the top subset). Also, ∧ is the coarsest-finer-than operator or meet (of any two partitions), and ∨ is the finest-coarser-than operator or join. Intersection ∩ and union ∪ respectively are the meet and join of subsets. The (Bell [1]) number |P^N| = B_n of partitions of a finite set N obtains recursively by B_n = Σ_{0≤k<n} (n−1 choose k) B_k. Partitions [ij] with a unique non-singleton block {i, j} are the atoms of the lattice; the size s(P) of a partition is the number of atoms finer than P, i.e. s(P) = Σ_{A∈P} (|A| choose 2), and the rank is r(P) = n − |P|. The strict ordering is P > Q ⇔ P ≥ Q, P ≠ Q.
Definition 1. A partition function f is:
• strictly order-preserving if P > Q entails f(P) > f(Q),
• strictly order-inverting if P > Q entails f(P) < f(Q),


• supermodular, if f(P ∨ Q) + f(P ∧ Q) ≥ f(P) + f(Q) for all P, Q,
• submodular, if f(P ∨ Q) + f(P ∧ Q) ≤ f(P) + f(Q) for all P, Q.
Finally, f is modular if it is both supermodular and submodular. As for symmetry of partition functions f (and set functions g), recall that the class [17] (or type) c^P = (c^P_1, . . . , c^P_n) ∈ Z^n_+ of P specifies the number of k-cardinal blocks of P for 1 ≤ k ≤ n, i.e. c^P_k = |{A : A ∈ P, |A| = k}|. In other terms, the class c^P identifies the number-partition λ of the integer n corresponding to the set-partition P of N. That is, for every partition λ = (λ1, . . . , λn) ∈ Z^n_+ of n, i.e. Σ_{1≤k≤n} λk = n, the number of distinct partitions P of N with class c^P_k = λk, 1 ≤ k ≤ n, is n! Π_{1≤k≤n} (k!)^{−c^P_k} (c^P_k!)^{−1} [18, Volume 1, p. 319].
Definition 2. Partition functions f and set functions g are symmetric if
• c^P = c^Q entails f(P) = f(Q),
• |A| = |B| entails g(A) = g(B).
These f and g are indeed invariant under the action of the symmetric group S_n of all n! permutations π : {1, . . . , n} → {1, . . . , n} of the indices i ∈ N. The rank r(P) of partitions is well-known to be symmetric, strictly order-preserving and submodular, while the size s(P) is symmetric, strictly order-preserving and supermodular. This is shown below. As for subsets A ∈ 2^N, the rank r(A) = |A| is symmetric, strictly order-preserving and modular.
Lemma 1. The size is strictly order-preserving: if P > Q, then s(P) > s(Q).
Proof. If P > Q, then every A ∈ P is the union of some B1, . . . , B_kA ∈ Q, i.e. A = B1 ∪ · · · ∪ B_kA, with kA > 1 for at least one A ∈ P. The union B ∪ B′ of any B, B′ ∈ Q increases the size by
(|B| + |B′| choose 2) − [(|B| choose 2) + (|B′| choose 2)] = |B||B′|,
which is strictly positive as blocks are non-empty. □
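As a quick numerical check of the counting formula above, the following Python sketch (an illustration, not part of the original paper) enumerates the classes of n, counts the partitions in each class and verifies that the counts sum to the Bell number B_n:

```python
from math import comb, factorial

def bell(n):
    # Bell numbers via the recursion B_m = sum_k C(m-1, k) B_k.
    b = [1]
    for m in range(1, n + 1):
        b.append(sum(comb(m - 1, k) * b[k] for k in range(m)))
    return b[n]

def partitions_of_class(c):
    # Number of set partitions with c[k] blocks of cardinality k (k >= 1).
    n = sum(k * ck for k, ck in c.items())
    denom = 1
    for k, ck in c.items():
        denom *= factorial(k) ** ck * factorial(ck)
    return factorial(n) // denom

def integer_partitions(n, max_part=None):
    # All number-partitions of n as dictionaries {part: multiplicity}.
    max_part = max_part or n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in integer_partitions(n - k, k):
            out = dict(rest)
            out[k] = out.get(k, 0) + 1
            yield out

n = 5
assert sum(partitions_of_class(c) for c in integer_partitions(n)) == bell(n)  # both 52
```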

 

In order to reproduce expressions (1–2) above, SDD has to count the number of atoms finer than either one of any two partitions but not finer than both. Thus, in terms of cardinalities of subsets of atoms, the distance SDD : P^N × P^N → Z_+ is
SDD(P, Q) = |{[ij] : P ≥ [ij], Q ≱ [ij]}| + |{[ij] : Q ≥ [ij], P ≱ [ij]}|.
The size and the indicator function allow SDD to be obtained as follows:
SDD(P, Q) = s(P) + s(Q) − 2s(P ∧ Q) = ⟨I_P, I_P⟩ + ⟨I_Q, I_Q⟩ − 2⟨I_P, I_Q⟩.   (3)
Also note that P ∧ Q = ∨_{P ≥ [ij] ≤ Q} [ij], and this is the maximal decomposition of P ∧ Q as a join of atoms, namely that involving s(P ∧ Q) atoms. Therefore,
SDD(P, Q) = ⟨I_P, I_P⟩ + ⟨I_Q, I_Q⟩ − 2⟨I_{P∧Q}, I_{P∧Q}⟩.   (4)


Expressions (1–4) show clearly that SDD(P, Q) is the analog, in terms of partitions P, Q, of the traditional Hamming distance |AΔB| between subsets A, B. As already mentioned, another important distance between partitions P, Q is the variation of information VI(P, Q) (see [11, Expressions (15)–(22), pages 879–80]). The entropy
e(P) = −Σ_{A∈P} (|A|/n) log(|A|/n) = −Σ_{1≤k≤n} c^P_k (k/n) log(k/n)
of partitions P (binary logarithm) is symmetric, strictly order-inverting and submodular, with e(P⊥) = log(n) and e(P⊤) = 0. To see submodularity, let again N = {1, 2, 3} with P = [12] and Q = [23], yielding P ∧ Q = P⊥ and P ∨ Q = P⊤. Then,
e(P ∨ Q) + e(P ∧ Q) − e(P) − e(Q)
= −1 · log(1) − 3 · (1/3) log(1/3) + 2 [(2/3) log(2/3) + (1/3) log(1/3)]
= 4/3 − log(3) = 1.33 − 1.585 < 0.
Finally observe that −e(·), in turn, is conversely strictly order-preserving, symmetric and supermodular. The entropy-based distance, or variation of information, between P and Q is
VI(P, Q) = 2e(P ∧ Q) − e(P) − e(Q).   (5)

3.1 Axioms Characterizing both SDD and VI

The following proposition is meant to be compared with Property 1 in [11].
Proposition 1. SDD is a metric: for all P, P′, Q ∈ P^N,
1. SDD(P, Q) = SDD(Q, P),
2. SDD(P, Q) ≥ 0, with equality if and only if P = Q,
3. SDD(P, P′) + SDD(P′, Q) ≥ SDD(P, Q), i.e. triangle inequality.
Proof. The first condition is obvious. In view of Lemma 1 above, the second one is also immediate as min{s(P), s(Q)} ≥ s(P ∧ Q). In fact, SDD(P, Q) is the sum [s(P) − s(P ∧ Q)] + [s(Q) − s(P ∧ Q)] of two positive integers, while min_{P≠Q} SDD(P, Q) = SDD(P⊥, [ij]) = 1 = s([ij]) (for any atom [ij]). Concerning the triangle inequality, the difference
SDD(P, P′) + SDD(P′, Q) − SDD(P, Q) = 2[s(P′) − s(P ∧ P′) − s(P′ ∧ Q) + s(P ∧ Q)]
must be shown to be positive for all triples P, P′, Q ∈ P^N. For P, Q ∈ P^N chosen arbitrarily but fixed, the size s(P ∧ Q) is given, hence s(P′) − [s(P ∧ P′) + s(P′ ∧ Q)] has to be minimized, and still found positive, by suitably choosing P′. Firstly, the sum s(P ∧ P′) + s(P′ ∧ Q) is maximized when both P ∧ P′ = P (or P′ ≥ P) and P′ ∧ Q = Q (or P′ ≥ Q) hold. Secondly, if P′ ≥ P, Q, then the whole difference is minimized when P′ = P ∨ Q. Thus, SDD satisfies the triangle inequality


if the size satisfies s(P ∨ Q) − s(P) − s(Q) + s(P ∧ Q) ≥ 0 for all P, Q ∈ P^N, i.e. supermodularity. The most straightforward way to see that this is indeed the case is via Möbius inversion of lattice functions [1,17]. By definition, the size s(·) has Möbius inversion μ_s : P^N → {0, 1} taking values μ_s(P) = 1 if P is an atom and μ_s(P) = 0 otherwise. In fact, s(P) = Σ_{Q ≤ P} μ_s(Q) for all P ∈ P^N. Therefore, the size satisfies a stronger condition, namely one sufficient but not necessary for supermodularity, in that its Möbius inversion takes only positive values. This completes the proof. □
The triangle inequality is satisfied with equality by both SDD and VI (see Properties 6 and 10(A.2) in [11]) when P′ = P ∧ Q.
Proposition 2. SDD satisfies horizontal collinearity: SDD(P, P ∧ Q) + SDD(P ∧ Q, Q) = SDD(P, Q) for all P, Q ∈ P^N.
Proof. SDD(P, P ∧ Q) + SDD(P ∧ Q, Q) = [s(P) − s(P ∧ Q)] + [s(Q) − s(P ∧ Q)], as well as SDD(P, Q) = s(P) + s(Q) − 2s(P ∧ Q). □
Horizontal collinearity may also be conceived by replacing the meet with the join, as it is not hard to define distances d(P, Q) satisfying the triangle inequality with equality when P′ = P ∨ Q, i.e. d(P, P ∨ Q) + d(P ∨ Q, Q) = d(P, Q) for all P, Q ∈ P^N. This is the "betweenness" relation B∨ proposed in [13, p. 176]. Collinearity also applies to distances between partitions P, Q that are comparable, i.e. either P ≥ Q or Q ≥ P. Firstly consider the case involving the top P⊤ and bottom P⊥ (for VI, see Property 10(A.1) in [11]).
Proposition 3. SDD satisfies vertical collinearity: SDD(P⊥, P) + SDD(P, P⊤) = SDD(P⊥, P⊤) for all P ∈ P^N.
Proof. SDD(P⊥, P) + SDD(P, P⊤) = s(P) + s(P⊤) − s(P) = s(P⊤), independently from P, as well as SDD(P⊥, P⊤) = s(P⊤) = (n choose 2). □
Vertical collinearity may be generalized to arbitrary comparable partitions P⊤ ≥ P > Q ≥ P⊥, in that SDD(Q, P′) + SDD(P′, P) = SDD(Q, P) for all P′ ∈ [Q, P], where [Q, P] = {P′ : Q ≤ P′ ≤ P} is a segment [17] of (P^N, ∧, ∨). This is the "interval betweenness" property considered in [13, p. 179].

4 Lightest Paths in the Covering Graph of Partitions

In the covering graph G = (P N , E) of partitions, vertices are partitions themselves, while edges correspond to the covering relation, i.e. {P, Q} ∈ E if either [Q, P ] = {P, Q} or else [P, Q] = {P, Q}. Concerning weights on edges, let F ⊂ RBn be the vector space of strictly order-preserving/inverting and symmetric partition functions f : P N → R. Hence entropy, rank and size are in


F, and the former is order-inverting, while the latter two are order-preserving. Given any f ∈ F, define weights w_f : E → R_{++} on edges {P, Q} ∈ E by w_f({P, Q}) = max{f(P), f(Q)} − min{f(P), f(Q)}. For all pairs P, Q ∈ P^N, let Path(P, Q) contain all P–Q-paths in graph G, noting that this latter is highly connected or dense, as every partition P is covered by (|P| choose 2) partitions Q′ and covers Σ_{A∈P} (2^{|A|−1} − 1) partitions Q″, hence |Path(P, Q)| ≫ 1 for all P, Q. Recall that a path p(P, Q) ∈ Path(P, Q) is a subgraph p(P, Q) = (V^p_{P,Q}, E^p_{P,Q}) ⊂ G where V^p_{P,Q} = {P = P0, P1, . . . , Pm = Q} and E^p_{P,Q} = {{P0, Q0}, {P1, Q1}, . . . , {Pm−1, Qm−1}}, with Pk+1 = Qk, 0 ≤ k < m. Also, the weight of a path p(P, Q) is w_f(p(P, Q)) = Σ_{0≤k<m} w_f({Pk, Pk+1}). Consider now a path through some partition P′ with either P′ > P, Q or else P, Q > P′. Then the path is a union p(P, Q) = p(P, P′) ∪ p(P′, Q), with E^p_{P,P′} ∩ E^p_{P′,Q} = ∅, for some P–P′-path p(P, P′) and some P′–Q-path p(P′, Q), entailing that its weight obtains as the sum w_f(p(P, Q)) = w_f(p(P, P′)) + w_f(p(P′, Q)). Finally, since f is strictly order-preserving/inverting and symmetric, P′ = P ∨ Q minimizes w_f(p(P, P′)) + w_f(p(P′, Q)) over all partitions P′ > P, Q, while P′ = P ∧ Q minimizes w_f(p(P, P′)) + w_f(p(P′, Q)) over all P′ < P, Q.
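For very small n, the construction above can be checked by brute force. The sketch below (an illustration under our own representation choices, not the paper's algorithm) enumerates all partitions of a 4-set, builds the covering relation and runs Dijkstra with edge weights |f(P) − f(Q)|; with f equal to the size, the lightest-path weight coincides with SDD:

```python
import heapq
from itertools import count

def all_partitions(elems):
    # Enumerate all partitions of a list as tuples of frozenset blocks.
    if not elems:
        yield ()
        return
    first, rest = elems[0], elems[1:]
    for sub in all_partitions(rest):
        for i, block in enumerate(sub):
            yield sub[:i] + (block | {first},) + sub[i + 1:]
        yield sub + (frozenset({first}),)

def covers(P, Q):
    # P covers Q in the coarsening order iff P merges exactly two blocks of Q.
    return len(P) == len(Q) - 1 and all(any(B <= A for A in P) for B in Q)

def lightest(nodes, f, src, dst):
    # Dijkstra on the covering graph with edge weight |f(P) - f(Q)|.
    tie, dist = count(), {src: 0.0}
    heap = [(0.0, next(tie), src)]
    while heap:
        d, _, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in nodes:
            if covers(u, v) or covers(v, u):
                nd = d + abs(f(u) - f(v))
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, next(tie), v))
    return float("inf")

size = lambda P: sum(len(A) * (len(A) - 1) // 2 for A in P)   # atoms finer than P

nodes = {frozenset(p) for p in all_partitions([1, 2, 3, 4])}  # the 15 partitions of a 4-set
Pa = frozenset({frozenset({1, 2}), frozenset({3}), frozenset({4})})
Pb = frozenset({frozenset({1}), frozenset({2, 3}), frozenset({4})})
print(lightest(nodes, size, Pa, Pb))  # 2.0 = s(Pa) + s(Pb) - 2 s(Pa ∧ Pb)
```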

Whether an f-lightest path visits the join or else the meet of any two incomparable partitions clearly depends on f. A generic f ∈ F may have associated lightest paths visiting the meet of some incomparable partitions P, Q and the join of some others P′, Q′. In fact, whether lightest paths always visit the meet or else the join of any two incomparable partitions depends on whether f or else −f is supermodular. As already observed, if f is supermodular, then −f is submodular, i.e. −f(P ∧ Q) − f(P ∨ Q) ≤ −f(P) − f(Q) (and vice versa).
Proposition 5. For any strictly order-preserving f ∈ F, if f is supermodular, then the f-lightest distance is δ_f(P, Q) = f(P) + f(Q) − 2f(P ∧ Q), while if f is submodular, then the f-lightest distance is δ_f(P, Q) = 2f(P ∨ Q) − f(P) − f(Q).


Proof. Supermodularity and submodularity entail, respectively, 2f (P ∨ Q) − f (P ) − f (Q) ≥ f (P ∨ Q) − f (P ∧ Q) ≥ f (P ) + f (Q) − 2f (P ∧ Q), 2f (P ∨ Q) − f (P ) − f (Q) ≤ f (P ∨ Q) − f (P ∧ Q) ≤ f (P ) + f (Q) − 2f (P ∧ Q), for all P, Q ∈ P N .

 

Proposition 6. For any strictly order-inverting f ∈ F, if f is supermodular, then the f -lightest distance is δf (P, Q) = f (P ) + f (Q) − 2f (P ∨ Q), while if f is submodular, then the f -lightest distance is δf (P, Q) = 2f (P ∧ Q) − f (P ) − f (Q) (Table 2). Proof. Supermodularity and submodularity entail, respectively, 2f (P ∧ Q) − f (P ) − f (Q) ≥ f (P ∧ Q) − f (P ∨ Q) ≥ f (P ) + f (Q) − 2f (P ∨ Q), 2f (P ∧ Q) − f (P ) − f (Q) ≤ f (P ∧ Q) − f (P ∨ Q) ≤ f (P ) + f (Q) − 2f (P ∨ Q), for all P, Q ∈ P N .

 

Table 2. Summary of the f-lightest distance δ_f(P, Q) from Propositions 5 and 6.

f (always symmetric) | f strictly order-preserving | f strictly order-inverting
f supermodular       | f(P) + f(Q) − 2f(P ∧ Q)     | f(P) + f(Q) − 2f(P ∨ Q)
f submodular         | 2f(P ∨ Q) − f(P) − f(Q)     | 2f(P ∧ Q) − f(P) − f(Q)
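Under a block-list representation of partitions, the entries of Table 2 can be evaluated numerically; the following sketch (our own illustration, not the paper's code) computes the meet, join, size, rank and entropy, and hence the three lightest distances δ_s = SDD, δ_r and δ_e = VI, for the example partitions used in the example that follows:

```python
import math

def size(P):
    # Number of atoms finer than P: pairs of elements lying in a same block.
    return sum(len(A) * (len(A) - 1) // 2 for A in P)

def meet(P, Q):
    # P ∧ Q: non-empty pairwise intersections of blocks.
    return [A & B for A in P for B in Q if A & B]

def join(P, Q):
    # P ∨ Q: merge blocks of P and Q sharing elements until none overlap.
    blocks = [set(A) for A in P] + [set(B) for B in Q]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return [frozenset(b) for b in blocks]

def rank(P, n):
    return n - len(P)

def entropy(P, n):
    return -sum(len(A) / n * math.log2(len(A) / n) for A in P)

def delta(f, P, Q, n, via_meet):
    # Lightest distance per Table 2: through the meet or through the join.
    M = meet(P, Q) if via_meet else join(P, Q)
    return abs(f(P, n) + f(Q, n) - 2 * f(M, n))

P = [frozenset({1, 3, 5}), frozenset({2, 7}), frozenset({4, 6})]
Q = [frozenset({1}), frozenset({2, 3}), frozenset({4, 7}), frozenset({5, 6})]
print(delta(lambda X, n: size(X), P, Q, 7, via_meet=True))   # SDD = 8
print(delta(rank, P, Q, 7, via_meet=False))                  # delta_r = 5
print(round(delta(entropy, P, Q, 7, via_meet=True), 3))      # VI ≈ 2.108
```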

The size being supermodular and order-preserving, SDD is the s-lightest distance: SDD(P, Q) = δ_s(P, Q) for all P, Q. The rank being submodular and order-preserving [1, pp. 259, 265, 274], δ_r(P, Q) = 2r(P ∨ Q) − r(P) − r(Q), or |P| + |Q| − 2|P ∨ Q|, is the r-lightest distance. In fact, w_r({P, Q}) = 1 for all edges {P, Q} ∈ E, thus δ_r is the common shortest-path distance, as detailed hereafter by means of an example. Finally, the entropy is order-inverting and submodular, hence the e-lightest distance δ_e is VI, i.e. VI(P, Q) = 2e(P ∧ Q) − e(P) − e(Q).
Example: Rank-Based Shortest Path Distance. Let N = {1, 2, 3, 4, 5, 6, 7} and consider partitions P = 135|27|46 and Q = 1|23|47|56 (vertical bars separating blocks). As P ∧ Q = 1|2|3|4|5|6|7 = P⊥ and P ∨ Q = 1234567 = P⊤, then: δ_r(P, Q) = 2r(P ∨ Q) − r(P) − r(Q) = |P| + |Q| − 2|P ∨ Q| = 3 + 4 − 2 = 5, while r(P) + r(Q) − 2r(P ∧ Q) = 2|P ∧ Q| − |P| − |Q| = 14 − 3 − 4 = 7, and |P| + |Q| − 2|P ∨ Q| = 5 is the length of a shortest path between P and Q. Such a path visits P ∨ Q = P⊤ and for instance may be across edges {P, 12357|46}, {12357|46, P⊤}, {P⊤, 123|4567}, {123|4567, 1|23|4567} and finally {1|23|4567, Q}. On the other hand, a P–Q-path forced to visit P ∧ Q = P⊥ minimally has length 7 and for instance may be


across edges {P, 1|35|27|46}, {1|35|27|46, 1|2|35|46|7}, {1|2|35|46|7, 1|2|3|46|5|7}, {1|2|3|46|5|7, P⊥ }, {P⊥ , 1|23|4|5|6|7}, {1|23|4|5|6|7, 1|23|47|5|6}, {1|23|47|5|6, Q}. Discussion of the Results. While the traditional Hamming distance between subsets is undoubtedly the natural way of measuring distances between elements of a Boolean lattice, when focusing on partitions the setting becomes far more complex. Existing extensions of metrics from Boolean to generic distributive lattices can rely on valuations [9,13], which conversely are useless for the geometric indecomposable lattice of partitions [1]. The main contribution of this work is thus to provide a comprehensive theoretical framework for classifying metrics on geometric lattices, including both: (i) the entropy-based VI partition-distance [11], which finds neat inclusion in a larger family, and (ii) the proper Hamming partition-distance, denoted by SDD, which is combinatorially identified via the symmetric difference between partitions (counting included/excluded atoms).

5 Conclusion

Given the ever-increasing amount of data to be processed in contemporary societies, cluster analysis has become essential and is witnessing a widespread proliferation of novel tools, methods and objectives. Data sets are thus frequently partitioned in alternative ways, thereby requiring instruments for quantitative clustering comparison. To that purpose, this work provides a classification method for metrics on the partition lattice, based on lightest paths in the associated covering graph when edge weights are suitably quantified by a symmetric, order-preserving/inverting and super/submodular partition function. The three weighting functions detailed here are the rank, the entropy and the size of partitions. By counting the number of atoms finer than any given partition, the size allows for a Boolean representation of partitions that might best fit the computational needs of many applications, while other weighting functions can easily be introduced. As for future work, the whole method shall be analyzed in terms of complementation: the partition lattice is indecomposable, i.e. its center is the whole lattice [1], and thus the results in [5] might find interesting parallels.

References 1. Aigner, M.: Combinatorial Theory. Springer, Heidelberg (1997). (reprint of the 1979 edn.) 2. Barth´elemy, J.P., Leclerc, B., Monjardet, B.: On the use of ordered sets in problems of comparison and consensus. J. Classif. 3, 187–224 (1986) 3. Brown, D.G., Dexter, D.: Sibjoin: a fast heuristic for half-sibling reconstruction. In: Algorithms in Bioinformatics. LNCS, vol. 7534, pp. 44–56 (2012) 4. Day, W.H.E.: The complexity of computing metric distances between partitions. Math. Soc. Sci. 1(3), 269–287 (1981) 5. Duffus, D., Rival, I.: Path length in the covering graph of a lattice. Discret. Math. 19, 139–158 (1977)


6. Gusfield, D.: Partition-distance: a problem and class of perfect graphs arising in clustering. Inf. Process. Lett. 82, 159–164 (2002) 7. Konovalov, D.A., Bajema, N., Litow, B.: Modified Simpson O(n3 ) algorithm for the full sibship reconstruction problem. Bioinformatics 21(20), 3912–3917 (2005) 8. Konovalov, D.A., Litow, B., Bajema, N.: Partition-distance via the assignment problem. Bioinformatics 21(10), 2463–2468 (2005) 9. Leclerc, B.: Lattice valuations, medians and majorities. Discret. Math. 111, 345– 356 (1993) 10. Lerman, I.C.: Classification et Analyse Ordinale des Donn´ees. Dunod (1981) 11. Meila, M.: Comparing clusterings - an information based distance. J. Multivar. Anal. 98(5), 873–895 (2007) 12. Mirkin, B.: Clustering - A Data Recovery Approach, 2nd edn. CRC Press, Boca Raton (2013) 13. Monjardet, B.: Metrics on partially ordered sets - a survey. Discret. Math. 35, 173–184 (1981) 14. Pinto Da Costa, J.F., Rao, P.R.: Central partition for a partition-distance and strong pattern graph. REVSTAT - Stat. J. 2(2), 127–143 (2004) 15. Rand, W.M.: Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 66, 846–850 (1971) 16. R´enier, S.: Sur quelques aspects math´ematiques des probl´emes de classification automatique. ICC Bull. 4, 175–191 (1965) 17. Rota, G.C.: On the foundations of combinatorial theory I: theory of M¨ obius functions. Z. Wahrscheinlichkeitsrechnung u. verw. Geb. 2, 340–368 (1964) 18. Stanley, R.: Enumerative Combinatorics, 2nd edn. Cambridge University Press, Cambridge (2012)

A New Supervised Learning Based Ontology Matching Approach Using Neural Networks Meriem Ali Khoudja(&), Messaouda Fareh, and Hafida Bouarfa LRDSI Laboratory, Faculty of Science, University of Blida 1, Soumaa BP 270, Blida, Algeria [email protected], [email protected], [email protected]

Abstract. Ontology matching is an effective method to establish interoperability between heterogeneous ontologies. Artificial neural networks are powerful computational models that have proved their efficiency in many fields. In this paper, we propose a new ontology matching approach based on supervised learning, particularly on neural networks. It consists of combining the top-ranked matching systems by means of a single-layer perceptron, in order to define a matching function that generates a better set of alignments between ontologies. Experimental results, obtained by adopting a cross-validation procedure on the Conference test set of the OAEI 2017 Campaign and by comparing our approach with the top-ranked ontology matching systems for the same track, show that the proposed approach gives high values of precision, recall and three variants of F-measure. It significantly increases the performance of the ontology matching task. Keywords: Ontology matching · Artificial neural networks · Machine learning · Supervised learning · Concepts alignment · Ontology matching tools

1 Introduction
In computer science, knowledge engineering is a field dedicated to collecting information about the world, modeling this information and representing it in a form that a computer system can use to solve complex tasks. Knowledge representation is defined by the set of tools whose objective is to organize human knowledge so that it can be used and shared. Ontologies are such representation methods. They allow a given domain to be represented so that its knowledge can be used and unified across applications developed in different ways. The notion of 'ontology' appeared in the 1990s in several research areas, helping to solve several problems and improve the knowledge engineering process. An ontology [1] is a specification of a conceptualization, that is, a description of the concepts and relationships that may exist for a particular domain. However, most applications need to use information from different data sources. They often use multiple ontologies from different domains, and sometimes for the same field. Also, ontology construction is a very complex and critical task, because the main goal is to represent the real world. So, it is reasonable to think that two persons can


have different points of view about the world and about how to represent it. Therefore, due to the rapid development of the semantic web, the construction of ontologies by various experts causes heterogeneity at different levels. Thereby, it is interesting to establish semantic correspondences between ontologies, so as to allow agents using different ontologies to inter-operate. These correspondences, called alignments, are the core unit of the ontology matching task, which is the best solution to this problem of semantic heterogeneity between ontologies. It is generally based on the calculation of similarity between the heterogeneous ontologies to be matched, in order to find semantic equivalences between them. Artificial neural networks, often simply called neural networks, are one of the main tools used in machine learning. They are biologically inspired by the human brain and attempt to replicate the way human brains learn. In recent decades, they have become a major part of artificial intelligence, and have been used for various tasks such as image processing, speech recognition, natural language processing, and many others. This is due to their excellent ability to solve nonlinear problems by learning, which is a complicated and difficult task. The Ontology Alignment Evaluation Initiative (OAEI) is an international initiative for the evaluation of ontology matching systems, using different types of test ontologies. Its purpose is to compare different systems on the same basis, to identify their advantages and limits. Each year since 2004, OAEI has organized new sections and introduced new challenges for evaluation. In this paper, we evaluate our approach according to the OAEI Campaign.1 We address these challenges and propose a new automatic ontology matching approach based on artificial neural networks. It consists of combining the most effective ontology matching systems through a single layer perceptron in order to define a matching function that generates a set of alignments between ontologies. The main contribution of the approach proposed in this work is that it combines, according to a very detailed state of the art on the existing ontology matching techniques, the best matching systems, with the aim of refining their results, which have been validated through various test cases. As we aim not just to generate alignments between ontologies, but to match them ideally, we refine these results to obtain a better ontology matching. The rest of this paper is organized as follows. We first review existing work in the area of ontology matching, with a focus on the techniques based on neural networks. In Sect. 3 we describe the ontology matching approach that we propose in this study. In Sect. 4, to evaluate our technique, the experimental results of testing it according to the OAEI campaign are presented, and the performance of our approach is discussed. Finally, in Sect. 5, we conclude this paper and outline our future work.

1 http://oaei.ontologymatching.org/.


2 Related Work
Ontology matching consists of determining semantic correspondences between ontologies. For that purpose, several methods have been proposed in the literature. The ontology matching process is based on computing similarity measures, and it is on this basis that the various ontology matching approaches are classified. Different classifications of these techniques are given in [2–4]. We can distinguish terminological approaches, which are based on the comparison of terms according to the names and descriptions of the entities in the ontologies, such as the work of [5], or which use some linguistic knowledge, like the approach proposed in [6]. Structural approaches treat the ontology matching problem based on the internal and external structural information of the ontologies; an example of these techniques can be found in the paper of [7]. Extensional approaches exploit the extension of classes in the ontologies; the work of [8] is an example of this category. Semantic approaches can be divided into different sub-categories: techniques based on external formal and informal resources, such as the work presented in [9], and techniques that exploit the semantic interpretation linked to the input ontologies, as in [10]. Machine learning techniques, particularly neural networks, have been widely used in the field of ontology matching. Some of them are used for function approximation, whereas others aim at classifying concepts of ontologies to generate alignments between them. Some of these approaches include:
• The work in [11], where the knowledge of how to match equivalent data elements is discovered rather than pre-programmed, presents a procedure that uses a classifier to categorize attributes according to their field specifications and data values, and then trains a neural network to recognize similar attributes.
• Curino et al. presented X-SOM [12], an extensible ontology mapper that combines various matching algorithms by means of a feed-forward neural network. It exploits logical reasoning and local heuristics to improve the quality of mappings while guaranteeing their consistency. The architecture of the X-SOM Ontology Mapper is composed of three subsystems: a matching subsystem, a mapping subsystem and an inconsistency resolution subsystem.
• The study in [13] presents an Artificial Neural Network-based ontology matching model for improving web knowledge resource discovery on the Semantic Web based on recently developed intelligent techniques. This method takes into account both schema-level and instance-level information from ontologies, as well as semantic annotations, and combines agent-based technologies with a machine-learning classifier to propose a solution to the ontology matching problem.
• The authors in [14] propose a new generic and adaptive ontology mapping approach, called PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules: the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural-network-based constraint satisfaction solver.
• The authors in [15] proposed CIDER, a schema-based ontology alignment system that performs semantic similarity computations among terms of two given ontologies. It first extracts, for each pair of ontology terms, the ontological contexts


up to a certain depth and enriches them by applying lightweight inference rules, and then combines the different elementary ontology matching techniques using artificial neural networks in order to generate alignments between ontologies.
• The OMNN system [16] is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network's ability to represent and map complex relationships. The learning dynamics of simultaneously training similar tasks interact at the shared connections of the networks, and the output of one network in response to a stimulus to another network can be interpreted as an analogical mapping. OMNN has proved its performance on ontology mapping by participating in several OAEI benchmark test cases.
The study of these related works shows that some of them use neural networks to classify the concepts of the ontologies to be matched, while others apply these machine learning techniques for other particular purposes. The works closest to our matching approach use neural networks to approximate a matching function; however, they learn weights for similarity measure functions, whereas we adjust weights for the best matching tools, which gives finer and more accurate values.

3 Proposed Approach
In this work, we propose a new, fully automatic ontology matching approach based on neural networks. The processing flow of our approach is illustrated in Fig. 1. It can be described in the following major steps. Since the approach proposed in this paper is an ontology matching approach, the input is two ontologies to be matched, Ontology1 and Ontology2, and the output is a set of correspondences between them. We define such a correspondence, called an alignment, as a triplet:

A = {C, C', V}

Where C is a concept from Ontology1, C' a concept from Ontology2, and V the similarity value between C and C' given by our technique.
Step 1. Running several matching tools
The first step of our approach consists of, after a very detailed state of the art on the different ontology matching systems developed in the literature, loading the ontologies to be matched, Ontology1 and Ontology2, and then applying the N most efficient matching tools to them. N depends on the choice criteria. The result of this step is N sets of alignments between the two ontologies, each given by one of the N matching systems using its own specific matching technique. After that, the N matching results are combined and undergo a refinement procedure, in order to perfect the matching process while respecting the environmental conditions of our approach.


Fig. 1. Processing flow of the proposed approach

Step 2. Using neural networks: weights learning & matching ontologies
This step is the core process of our matching approach. It consists of applying a supervised learning procedure based on neural networks, in order to learn the matching function that generates alignments between Ontology1 and Ontology2. Following the usual neural network workflow, our machine learning process is carried out in two main tasks:
1. Training phase
The goal of our supervised learning process is to learn the tools' weights, so as to assign an importance value to each system. To this end, for each combination of concepts C and C', one from each ontology, we use a single layer perceptron with N inputs and one output. Figure 2 illustrates this network. In neural networks, the output is chosen according to the type of the target variable. In our approach, since the output, which is in fact a similarity value, should be a number in the range [0, 1], we use the sigmoid as activation function. The sigmoid is a mathematical function used extensively in neural networks. It is defined by:

f(x) = e^x / (e^x + 1) = 1 / (1 + e^(-x)),  with f(x) ∈ [0, 1].   (1)
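As a concrete illustration of this training phase, the following minimal Python sketch (not taken from the actual implementation; the learning rate and weight initialization are placeholders) shows the sigmoid of Eq. (1), the forward pass of the single layer perceptron over the N tool scores, and one gradient-descent update of the kind performed by the back-propagation procedure described below.

```python
import numpy as np

def sigmoid(x):
    # Eq. (1): f(x) = 1 / (1 + e^-x), output in [0, 1]
    return 1.0 / (1.0 + np.exp(-x))

def forward(v, w):
    # v: the N similarity values returned by the matching tools for (C, C')
    # w: one weight per tool (the importance values being learned)
    return sigmoid(np.dot(v, w))

def train_step(v, target, w, lr=0.1):
    # one gradient-descent update for a single training sample (v, target),
    # where target is the reference alignment value for the concept pair
    out = forward(v, w)
    grad = (out - target) * out * (1.0 - out) * v  # d(error)/dw for sigmoid(v.w)
    return w - lr * grad
```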


Fig. 2. Neural network structure

For network learning, we use the backward propagation learning method. It uses a gradient descent procedure to modify the weights so as to minimize the error between the desired output and the output calculated by the perceptron. As we aim with this network to fix an importance value for each tool, the N weights are first initialized according to a detailed analysis of our state of the art on the different matching systems of the scientific literature, and then updated for each sample of the training set S, using a learning rate ε which is fixed by trial and error. The training dataset is a set of pairs {input, output}, where the input is composed of N similarity values between C and C', each given by a different ontology matching tool, and the output is the reference alignment value between the two concepts. It is obtained from the open Ontology Alignment Evaluation Initiative (OAEI), supplemented by our own expertise. Once the learning phase is completed, the final N values of the weights are fixed.
2. Matching phase
Training the neural network using the back-propagation learning method described above allows us to measure the effectiveness of the tools used in this work, which permits us to define the matching function that generates, from the different matching results of the first step of our approach, each output alignment's similarity value between the two concepts C and C' as:

V = (Σ_{i=1}^{N} v_i · w_i) / N.   (2)

Where v_i ∈ [0, 1] is the similarity value between C and C' given by tool_i, and w_i ∈ [0, 1] is the weight which reflects its performance.
Step 3. Filtering alignments to get the final alignment set
Finally, we define a threshold T in order to extract the final alignment set from the results of the previous step. Our aim in filtering the obtained alignments is to improve the matching precision: this process eliminates irrelevant alignments and keeps only the appropriate ones.
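A minimal Python sketch of these matching and filtering steps could look as follows; the threshold value used here is a hypothetical placeholder, and the learned weights are assumed to come from the training phase above.

```python
def match_value(v, w):
    # Eq. (2): V = (sum_i v_i * w_i) / N
    n = len(v)
    return sum(vi * wi for vi, wi in zip(v, w)) / n

def filter_alignments(candidates, w, threshold=0.6):
    # candidates: (concept1, concept2, [v_1, ..., v_N]) triples built from the
    # results of the N matching tools; only pairs whose combined value
    # reaches the threshold T are kept in the final alignment set
    kept = []
    for c1, c2, v in candidates:
        value = match_value(v, w)
        if value >= threshold:
            kept.append((c1, c2, value))
    return kept
```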


Example. We pick out the systems with the best results in the OAEI 2017 Conference track. They are three: AML, LogMap and XMap. We train a neural network with three inputs and one output. Each input corresponds to a matching system, and the output value is the desired output for this alignment. An initial weight is assigned to each matching tool according to our analysis of their results. In order to update these weights, we train this network using the back-propagation learning method and a training set of correspondences. The weights are fixed after training, and finally used to generate our alignment set using the proposed alignment function (Eq. 2).
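For illustration only, with hypothetical similarity values v = (0.90, 0.85, 0.70) returned by AML, LogMap and XMap for one concept pair, and hypothetical learned weights w = (0.8, 0.9, 0.6), Eq. (2) gives V = (0.90 × 0.8 + 0.85 × 0.9 + 0.70 × 0.6) / 3 = 1.905 / 3 = 0.635; this correspondence is kept in the final alignment set only if 0.635 exceeds the threshold T.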

4 Experiments
In order to evaluate our ontology matching approach, we conducted an experimental procedure described as follows.
4.1 Experimental Design

We adopt cross validation to effectively control the network while training and testing. It consists of partitioning the training data into P sets of equal size. The algorithm is run P times; each time, the corresponding partition is used for learning, while the rest of the dataset is used for testing. The global validation result is the average of the individual validation results of the independent partitions. We set up our experiments on the Conference2 test set from the OAEI Campaign 2017. The goal of the track is to find alignments within a collection of 16 ontologies describing the same domain: conference organization. We perform our tests on 21 combinations of these ontologies, which are shortly described in Table 1. We use the standard evaluation measures set by the campaign: precision, recall, as well as three variants of their harmonic mean, the Fβ-measure, against the reference alignments. The Fβ-measure is a balanced score of precision and recall. It is given by formula (3):

Fβ-measure = (1 + β²) · (precision · recall) / (β² · precision + recall).   (3)
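As a small check of formula (3), the following Python snippet computes the three Fβ variants from a precision/recall pair; for illustration it uses the precision and recall values later reported for our approach in Table 2.

```python
def f_beta(precision, recall, beta):
    # formula (3): F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

p, r = 0.871950342, 0.729208986   # "Our approach" column of Table 2
for beta in (0.5, 1, 2):
    print(f"F{beta}-measure = {f_beta(p, r, beta):.4f}")
# prints approximately 0.8390, 0.7941 and 0.7539, matching Table 2
```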

In order to choose the N efficient systems to be used in the first step of our approach, we relied on the F1-measure, because it combines both precision and recall evenly. We picked the systems that maximize this score according to the results of the OAEI Conference track 2017. They are: AgreementMakerLight [17], LogMap [18] and XMap [19]. These systems have demonstrated their efficiency over the years by participating in several editions of the OAEI since their creation.

2 http://oaei.ontologymatching.org/2017/conference/index.html.


Table 1. Overview of OAEI Conference test ontologies.

Ontology name | # of classes
Ekaw | 74
Sofsem | 60
Sigkdd | 49
Iasted | 140
Micro | 32
Confious | 57
Pcs | 23
OpenConf | 62
ConfTool | 38
Crs | 14
Cmt | 36
Cocus | 55
Paperdyne | 47
Edas | 104
MyReview | 39
Linklings | 37

4.2 Experimental Results

Aiming to study the effectiveness of our approach, we compare its results with the results of the three top ranked ontology matching systems for the same test set, adopting the same cross validation procedure. Table 2 summarizes these results and this comparison. The evaluation results of our approach, reported in the second column of Table 2, are illustrated in Fig. 3.

Table 2. Evaluation measure values of our approach and three efficient matching systems.

Evaluation measure | Our approach | AML | LogMap | XMap
Precision | 0.871950342 | 0.924690632 | 0.927267818 | 0.92305462
Recall | 0.729208986 | 0.657767921 | 0.607171467 | 0.571448106
F0.5-Measure | 0.839048287 | 0.855234852 | 0.838745021 | 0.82185253
F1-Measure | 0.79414871 | 0.768670621 | 0.733750658 | 0.705832301
F2-Measure | 0.753854533 | 0.698046604 | 0.6521633 | 0.618547898

Fig. 3. Precision, Recall and F-measures values obtained by our approach.

The graph in Fig. 4 compares the values of precision, recall and F-measures obtained by the ontology matching approach proposed in this paper and by the most effective ontology matching systems for the OAEI Conference track 2017.


Fig. 4. Comparison of our results with those of three efficient ontology matching systems.

4.3 Discussion of Results

It is observed from Fig. 3 that the ontology matching approach proposed in this paper gives very significant values for the five evaluation measures adopted in the experimental procedure presented above. All the obtained values exceed 0.7, especially in terms of precision, where our approach gives a value close to 0.9. This indicates that the approach behaves well. Comparing with other ontology matching systems, it is clearly seen from Fig. 4 that, according to the evaluation design adopted in these experiments, our approach is competitive, in terms of precision and F0.5-measure, with the most effective matching systems (AML, LogMap and XMap), which have proved their efficiency through several participations in different evaluation tracks of the OAEI. In terms of recall, F1-measure and F2-measure, our results exceed those of the three systems. According to these results, we conclude that our ontology matching approach brings a significant improvement, which increases the performance of the ontology matching task.

5 Conclusion
Artificial neural networks have always been useful for ontology matching. In this paper, we propose a new automatic ontology matching approach based on supervised learning, particularly on neural networks. The experimental results of its evaluation on 16 ontologies from the Conference test set of the OAEI Campaign 2017 show that the proposed approach can increase the performance of the ontology matching task. Although our experimental results are encouraging, we aim, as future work, to adapt this ontology matching approach to effectively match large-scale ontologies. We also plan to follow a more detailed evaluation procedure, based on other, more complicated test cases.


References
1. Gruber, T.R.: A translation approach to portable ontology specifications. Knowl. Acquis. 5(2), 199–220 (1993)
2. Shvaiko, P., Euzenat, J.: Ontology matching: state of the art and future challenges. IEEE Trans. Knowl. Data Eng. 25(1), 158–176 (2013)
3. Otero-Cerdeira, L., Rodríguez-Martínez, F.J., Gómez-Rodríguez, A.: Ontology matching: a literature review. Expert Syst. Appl. 42(2), 949–971 (2015)
4. Ardjani, F., Bouchiha, D., Malki, M.: Ontology-alignment techniques: survey and analysis. Int. J. Mod. Educ. Comput. Sci. 7(11), 67 (2015)
5. Akbari, I., Fathian, M., Badie, K.: An improved MLMA+ and its application in ontology matching. In: Innovative Technologies in Intelligent Systems and Industrial Applications, CITISIA (2009)
6. Shah, G., Syeda-Mahmood, T.: Searching databases for semantically-related schemas. In: Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2004)
7. Joslyn, C.A., Paulson, P., White, A.: Measuring the structural preservation of semantic hierarchy alignments. In: Proceedings of the 4th International Conference on Ontology Matching, vol. 551 (2009). CEUR-WS.org
8. Loia, V., et al.: Hybrid methodologies to foster ontology-based knowledge management platform. In: IEEE Symposium on Intelligent Agent (IA) (2013)
9. Mascardi, V., Locoro, A., Rosso, P.: Automatic ontology matching via upper ontologies: a systematic evaluation. IEEE Trans. Knowl. Data Eng. 22(5), 609 (2010)
10. Fareh, M., Boussaid, O., Chalal, R.: Mapping system for merging ontologies. In: Modeling Approaches and Algorithms for Advanced Computer Applications, pp. 205–216. Springer (2013)
11. Li, W.-S., Clifton, C.: Semantic integration in heterogeneous databases using neural networks. In: VLDB (1994)
12. Curino, C., Orsi, G., Tanca, L.: X-SOM: a flexible ontology mapper. In: 18th International Workshop on Database and Expert Systems Applications, DEXA 2007 (2007)
13. Rubiolo, M., et al.: Knowledge discovery through ontology matching: an approach based on an Artificial Neural Network model. Inf. Sci. 194, 107–119 (2012)
14. Mao, M., Peng, Y., Spring, M.: An adaptive ontology mapping approach with neural network based constraint satisfaction. J. Web Semant. (2010)
15. Gracia del Río, J., Bernad, J., Mena, E.: Ontology matching with CIDER: evaluation report for OAEI 2011 (2011)
16. Peng, Y., Munro, P., Mao, M.: Ontology mapping neural network: an approach to learning and inferring correspondences among ontologies. In: Polleres, A., Chen, H. (eds.) 9th International Semantic Web Conference ISWC 2010, Shanghai, China, pp. 65–68 (2010)
17. Faria, D., et al.: Results of AML in OAEI 2017. In: Proceedings of the Twelfth International Workshop on Ontology Matching, OM-2017 (2017)
18. Jiménez-Ruiz, E., Cuenca Grau, B., Cross, V.: LogMap family participation in the OAEI 2017. In: Proceedings of the Twelfth International Workshop on Ontology Matching, OM-2017 (2017)
19. Djeddi, W.E.D., Khadir, M.T., Ben Yahia, S.: XMap: results for OAEI 2017. In: Proceedings of the Twelfth International Workshop on Ontology Matching, OM-2017 (2017)

VACIT: Tool for Consumption, Analysis and Machine Learning for LOD Resources on CKAN Instances Álvaro Varón-Capera(&), Paulo Alonso Gaona-García, Jhon Francined Herrera-Cubides, and Carlos Montenegro-Marín Grupo de Investigación GIIRA, Facultad de Ingeniería, Universidad Distrital Francisco José de Caldas, Carrera 7 No. 40B - 53, C. P. 110231 Bogotá D.C., Colombia [email protected], {pagaonag,jfherrerac,cemontenegrom}@udistrital.edu.co

Abstract. Linked Open Data is a concept that has become stronger in recent years, providing a series of principles for the interconnection of data through machine-readable structures and knowledge representation schemes defined by ontologies. Nowadays, there are platforms that standardize LOD resource consumption processes, with CKAN being one of the most relevant open-source options for the management of these resources, gathering around 146 instances among government organizations, communities and NGOs. However, the consumption of these resources lacks minimum criteria to determine their validity, such as level of trust, quality, linkage and usability of the data; aspects that require a previous systematic analysis of the set of published data. To support this process of analysis and determination of the mentioned criteria, the purpose of this article is to build the VACIT tool (Visual Analytics for CKAN Instances Tool), which provides a series of visual analytics about the current state of the datasets obtained from the different instances published in CKAN. Finally, it presents results, conclusions and future work from the use of the tool for the consumption of datasets that belong to certain instances ascribed to the CKAN platform.
Keywords: Linked Open Data · CKAN · Data analytics · Open Data · TensorFlow · Visual analytics · Machine learning

1 Introduction
Linked Data is presented as a set of design principles to share interconnected, machine-readable data. These principles generally focus on tools that provide meaning to the data, and on ontologies that provide meaning to the terms [1]. For its part, Open Data focuses on the use, reuse and redistribution of data; therefore, this data must be available and must be in the public domain or have licensing conditions that allow users to use the data as they wish, without restrictions [1]. The conjunction of these two concepts gives rise to what is now known as Linked Open Data (LOD). LOD allows the user to link and exploit data from various sources, freely and without licensing restrictions [2].


In LOD, a 5-star scheme is proposed for the publication of Linked Data [3], a scheme in which most Open Data systems meet the first 3 stars, but do not necessarily reach the last 2 stars, where linking really happens. This suggests that not all linked data are open data, while not all open data can be linked [4]. It is important to bear in mind that a repository, as an LOD data management platform, holds a collection of resources that, together with the metadata [5, 6] describing each resource, facilitate the exploitation of data (in very varied knowledge domains) by interested persons [7]. If the contents can be accessed freely, these repositories are configured as Open Access repositories [8]. Additionally, these repositories are based on platforms for data management such as CKAN (Comprehensive Knowledge Archive Network) [9, 10], one of the most widely used tools for managing and publishing datasets in a web environment. This tool is used by different "organizations", such as national and local governments and research institutions. Through the services offered by CKAN, users can connect to and consume the necessary datasets, according to the knowledge domain of the searches carried out. When consuming datasets through the services offered by platforms such as CKAN, users face two key issues with the information obtained: (a) the queries made return information in a language that is not very comprehensible to the common user, for example RDF; and (b) the published data present challenges such as formats, licensing, trust, interoperability and access, among others [11]. For the use of LOD data, tools are required to detect possible quality problems and ambiguities produced by redundancy, inconsistencies and lack of completeness of data and links [12]. Within the literature there are some related works, such as [13, 14], which describe the problem of open links and strategies to solve it. For its part, [15] shows the amounts of linked data available as of July 2009 and the number of links between RDF data sets, and [16] presents a study of statistics about the structure and content of the LOD cloud. Under this perspective, and taking into account that, with the availability of large datasets, a considerable challenge is how to consume them quickly and efficiently [17], the need arises to provide data users with a tool to evaluate the trust of the consumed data, based on the basic information of the resources [5], such as licensing and formats, among others, and to be able to make decisions about the consumed datasets. Using a visual data analytics perspective, this study presents the development of a framework that allows the consumption of datasets from different instances of CKAN [9, 10], in order to recover the pertinent information for the display process. The purpose of this framework is to provide elements that allow analyzing and evaluating the quality of metadata, thus promoting the generation of trust in the consumption of data. The rest of the article is organized as follows: Sect. 2 presents the state of the art, where references for the proposed theme are reviewed. Subsequently, Sect. 3 presents the methodology and methodological design used to explore the data sets. Sect. 4 presents the methodological development, which includes a brief description of the proposed tool for the analysis of resources. In Sect. 5, the obtained results are presented.
Section 6 presents the analysis of results and discussions. Finally, Sect. 7 presents the conclusions and future work.


2 Background
Under the LOD principles, framed in reuse and free redistribution, with that degree of freedom defined by a license [18], several software tools have been created for the consumption of this data. These tools analyze: (a) a specific repository, such as the tool "Data Hub LOD Datasets" [19], which provides an integrity level for each data set of the metadata description of a specific instance of CKAN; or (b) whether a data set is active, representing it graphically by adding its link address, such as LOD-Vader [20]. On the other hand, there are tools that use machine learning to determine prediction and scalability models in order to increase the accuracy of their models, such as the system developed by [21]. This system has scalability characteristics for linked-data recommender systems through parallelization and stacking using MapReduce. Similarly, there are models developed for electronic acquisitions through data mining [22], and some, such as the one proposed by [23], which mines LOD through semi-supervised learning methods based on self-training, for the description of tags in SPARQL resources.
2.1 Identified Problems

As observed in the review carried out, the lack of standardization in the processing of LOD metadata information leads to problems related to the instances of stored data and, therefore, to trust problems in the linked data. To this end, tools have been created to review the structure and validation of the datasets. In this early review process, there is a lack of visual analysis of the quality of the open data coming from the instances of the data management platform, which in the case of this investigation corresponds to CKAN [24]. In the absence of a support tool that allows determining the level of trust for the first 3 levels of the opening scheme, verifiable in the metadata of the published data, this process must be done manually: after downloading the metadata of the instance, the description must be analyzed by hand to assess the richness of its data. Moreover, machine learning [25] is used for LOD applications or their derivations, but for consuming resources managed by the CKAN platform no tools of this type are identified that use said technology to determine the trust of the data. Taking into account the described problems, this research focuses on presenting the tool VACIT (Visual Analytics for CKAN Instances Tool), based on:
• Visual Analytics: rational analysis techniques supported by a visual and interactive interface [26], which allow decision making by combining human flexibility, creativity and expertise with the storage and processing capacity of computers, in order to find solutions to the most complex problems. Therefore, using advanced visual information systems, people can interact with them to make better informed decisions [27].
• Machine Learning [28]: based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention, it allows a method of data analysis that automates the construction of analytical models.


These technologies allow designing a graphic representation of the information contained in the metadata of the datasets stored in different instances of CKAN. Additionally, they support the analysis of the data, providing elements for the determination of trust levels, associated with the characterization of variables of the first 3 levels of the linking scheme, and with the linking behavior of the datasets on the selected instances.

3 Methodology
To carry out the present study, a research method based on a quasi-experimental design was used, in order to support the process of determining the levels of trust and quality of LOD instances through the consumption of resources via the CKAN manager. Applying this methodology, the methodological design starts from a preliminary stage (inputs) where different exercises were carried out: (a) theoretical review of CKAN accessibility, (b) connection with CKAN repositories, and (c) consumption of resources from the selected instances. These exercises were carried out to obtain information about the structure of a dataset when consumed, generating a standard for the exploitation of the datasets of each organization in CKAN. Thanks to its open code and the inclusion of governmental entities, NGOs, academic institutions and other types of institutions that have established LOD policies, CKAN has undergone a transformation, maturing remarkably in recent years and becoming more stable and faster, with more resources and instances. For the deployment of the proposed tool, 4 stages were proposed (Fig. 1), in which the following tasks are considered:

Fig. 1. Proposed methodological design. Source: Authors

1. Download of the metadata of the CKAN instances, through the use of the API, for the consumption of LOD.
2. Creation of a REST service, which allows connection between the frontend of the tool and the data of the instances (a minimal sketch of such a service appears after this list).
3. Implementation of the Machine Learning module, for the level of concordance of tags: through the consumption of libraries for unsupervised machine learning, which allows determining a level of concordance of the tags corresponding to each dataset of an instance.


4. Implementation of the Visual Analytics module, through the use and implementation of graphic libraries for representing the results of the analysis of the metadata from the CKAN instances.
The subsequent stages correspond to: (a) visual representation of the statistical analysis of the results; and (b) execution of tests corresponding to the consumption and visualization of resources, datasets and instances through the web application, the backend of the tool and the metadata download process.
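A minimal sketch of the REST layer mentioned in stage 2 is given below; it assumes Flask and a local data directory, neither of which is specified in this work, so it should be read as an illustration of the idea rather than the actual VACIT backend.

```python
import json
from pathlib import Path

from flask import Flask, jsonify

app = Flask(__name__)
DATA_DIR = Path("data")  # folder holding the downloaded .json metadata files (assumed layout)

@app.route("/instances/<instance>/datasets")
def datasets(instance):
    # serve the locally stored metadata of one CKAN instance to the front-end
    packages = json.loads((DATA_DIR / f"{instance}.json").read_text())
    return jsonify({"result": packages})

if __name__ == "__main__":
    app.run(port=5000)
```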

4 Methodological Development
4.1 Proposed Framework

a. Back-End: Metadata Exploitation of CKAN Instances: To make the connection with the CKAN API that allows consumption of LOD resources, an application was developed in Python, under a Linux environment. This application, making use of the services of the API, stores .json files locally, with the data obtained from the queries made to the repositories (Fig. 2). Local storage is carried out in order to optimize the processing time of the tool, since when consuming an instance online, its quantity of data would generate an exponential behavior in the variable “response time”.

Fig. 2. Obtaining data in Minnesota Geospatial Commons instance [29]. Source: Authors
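A minimal sketch of this download step, using the public CKAN Action API (package_list and package_show), is shown below; the instance URL, output folder and download limit are illustrative and not necessarily those used by the tool.

```python
import json
from pathlib import Path

import requests

INSTANCE = "https://gisdata.mn.gov"        # e.g. Minnesota Geospatial Commons (illustrative)
OUT = Path("data/minnesota")
OUT.mkdir(parents=True, exist_ok=True)

# list the dataset names published by the instance
names = requests.get(f"{INSTANCE}/api/3/action/package_list").json()["result"]

for name in names[:50]:                    # limited here for the example
    pkg = requests.get(f"{INSTANCE}/api/3/action/package_show",
                       params={"id": name}).json()["result"]
    # store each dataset description locally as a .json file
    (OUT / f"{name}.json").write_text(json.dumps(pkg, indent=2))
```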

b. Back-End: Machine Learning Module for Tags Association Level: A common vision of the intersection of machine learning and linked data is that machine learning can provide inference when traditional logic-based methods fail [30]. For this research, an automatic learning perspective is adopted: the linked data are seen as simply a new form of data. In classical machine learning, the complexity and divergence of the field are controlled by the “black box principle”. It is expected that each machine learning method adapts to a simple mold: the input is a table of instances, described by several characteristics with a target value to predict, and the output is a model that predicts the target value [31]. Moreover, tags are a set of keywords that manage to describe the dataset and specify its content. These tags are a relevant field in the consumption of LOD resources [19], since they allow to relate the dataset of different instances with a knowledge domain. For this reason, an unsupervised machine learning module was implemented (Fig. 3), which


determines the level of accuracy of the tags depending on their description. Through the use of TensorFlow [32], a framework for machine learning, the proposed model is implemented based on a specific instance of the platform (the training base), with which it is sought to determine the concordance of the tags of other instances with respect to this base instance. As a training base, Datahub [33] was chosen, as it is the instance with the greatest richness in the description of its domain tags.

Fig. 3. Machine Learning module for the tags matching level.
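The sketch below is a deliberately simplified stand-in for this module: instead of the TensorFlow model trained on the Datahub base, it only scores the overlap (Jaccard) between a dataset's tags and the tags of a training base, which conveys the idea of a concordance level without reproducing the actual implementation.

```python
def concordance(dataset_tags, base_tags):
    # Jaccard overlap between a dataset's tags and the training-base tags,
    # used here as a crude proxy for a concordance level in [0, 1]
    a = {t.lower() for t in dataset_tags}
    b = {t.lower() for t in base_tags}
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# illustrative tags only
base = {"government", "geospatial", "open-data", "statistics"}
print(concordance(["Geospatial", "boundaries"], base))   # 0.2
```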

c. Front-End: Module of Visual Analytics: After the exploitation of the data, in order to obtain values and results that can be represented graphically, and to support the identification of trust in the datasets exposed in the consulted instances, the visual analytics module is included. VACIT uses bar charts, pie charts, data tables and other visual analytics elements for components considered relevant to the richness of a dataset: description of the organization, author, licenses, dataset resource formats, relationships as object and subject, and links of resources. To carry out this implementation, the Angular framework is used [34], and Javascript libraries are consumed, which allow the data to be represented graphically after a statistical process.
4.2 Tool Architecture

Figure 4 describes the components implemented inside the VACIT tool.


Fig. 4. VACIT tool deployment diagram architecture. Source: Authors

5 Obtained Results
The VACIT tool (Visual Analytics for CKAN Instances Tool) addresses the need for tools that allow automated analysis processes for the LOD resources assigned to the platform, in order to facilitate systematic and analytical study. By selecting each of its options (Fig. 5), VACIT represents information through statistical diagrams, complemented with tables, which facilitates the processes of analysis and decision making regarding the levels of trust, linkage, usability and quality of each of the open data sets provided by the organizations attached to CKAN.

Fig. 5. Menu (instance selection and display options). Source: Authors


Figure 6 shows examples of the Visual Analytics module. Thanks to the standardization of the consumption of resources by the platform, a dispersion diagram can be generated, which indicates the level of relationship of the tags with respect to those already defined in the platform. Figure 6 shows the result obtained after the process of determining the level of concordance of each dataset of a specific instance and after the process of determining the concordance level of tags.

Fig. 6. Diagrams belonging to the visual analytics module: on the right, "Resource format" for an instance; on the left, a dispersion diagram with the level of concordance of the domain tags of each dataset. Source: Authors

The scatter plot represents the relationship between the match percentage of a dataset belonging to an instance (Y-axis) and the identifier of each dataset (X-axis). The size of the marker (blue square) represents the number of resources the dataset has; however, it is omitted from the analysis since it does not play a relevant role in the determination of the level of concordance. The further a dataset lies from the X-axis, the higher the level of concordance of its domain tags with respect to the training base.

6 Analysis of Results and Discussion
The local download of the LOD resources is carried out through .json flat files, which are later consumed by the front-end of the tool, where they are ordered and structured. The representation of the downloaded data makes them legible and descriptive (Fig. 7), allowing a structured and ordered visualization with a high degree of utility for the user of the tool.


Fig. 7. JSON file with metadata of a Ckan instance. Source: Authors
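As an illustration of what such a locally stored file contains, the snippet below reads one downloaded package description and prints the CKAN fields most relevant to this analysis (title, license, tags, resource formats); the file path is hypothetical.

```python
import json

# path to one locally stored dataset description (illustrative)
with open("data/minnesota/some_dataset.json") as f:
    pkg = json.load(f)

print(pkg.get("title"))                                     # dataset title
print(pkg.get("license_id"))                                # licensing info
print([t.get("name") for t in pkg.get("tags", [])])         # domain tags
print([r.get("format") for r in pkg.get("resources", [])])  # resource formats
```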

On the other hand, Fig. 8 represents the previous dataset, structured under the Visual Analytics module of the VACIT tool.

Fig. 8. Information table of an instance. Source: Authors

In the exploration carried out, the tools that implement LOD and machine learning have objectives other than the validation of data on platforms or managers such as CKAN. These tools generally focus on determining certain types of dataset behavior. For example, the tool in [19] classifies the dataset in a conformance scheme between levels 1–4 for the Datahub instance, limiting itself to determining the level of conformity for a single instance of CKAN; for its part, VACIT is able to consume resources from all the organizations attached to CKAN. Additionally, the tool in [20] offers a status analysis of the dataset (active or inactive), allowing the tags and the number of resources attached to this dataset to be visualized, along with their links, but it likewise runs only on the Datahub instance and is therefore considerably limited, taking into account that the organizations attached to CKAN currently exceed 140. This factor is exploited by the VACIT tool, which significantly exceeds [20] in the number of instances covered and in the description of metadata and resources of each dataset, except for the functionality of relating datasets based on their tags.


Moreover, the tools that cut across both LOD and machine learning are few. Among these there is only one tool [23] that intends to work with LOD resources related to CKAN, implementing data mining and semi-supervised learning to obtain datasets that relate to each other. However, this tool focuses on the LOD-CLOUD platform for the consumption of the datasets, a platform that is linked to the Datahub instance, and therefore it will only query this instance; for this reason the VACIT tool remains more productive, since it can be used for different instances of CKAN. Other tools that were identified [21, 22] do not focus on the analysis of platforms, instances, datasets or LOD resources; instead, they were created for commercial purposes and are used for recommendation systems and electronic acquisitions, respectively. That is why VACIT turns out to be a practical, complete and transversal tool for the analysis of LOD resources, which goes beyond the analysis of a single instance, considering specific fields of its metadata such as domain tags, authors and resource format, among others. This allows the user to perform a deep analysis of each instance, dataset or resource, in order to determine levels of trust, quality, usability, linkage and other items that may offer indicators of this type for LOD resources.

7 Conclusions and Future Work
The determination of the level of trust of the resources published through LOD is a process that, at its origin, is concentrated in and affected by the processes carried out in the modeling and description phases of the metadata corresponding to each dataset, both in the richness of its descriptions and usability, and in its linkage. To address the generation of elements with which to establish trust criteria, this research provides, as a key component for the user, a tool with consistent characteristics for the determination of the mentioned levels, but which goes beyond them, facilitating the processes of analysis at the instance - dataset - resource level; it is the user who, supported by the different modules of the VACIT tool, can establish indicators of linkage, trust, usability or other aspects considered relevant for the consumption of these resources. As for the obtained indicators, problems can be evidenced regarding the linking process: only two of the instances consulted through the tool handle linking processes, either as subject or object. This shows that the linkage of resources in these instances does not yet receive an adequate treatment that would allow it to advance to the last two levels of the data linkage scheme. On the other hand, when applying the different elements of visual analytics on the consulted instances, a proliferation of publication formats and little standardization in the use of licensing, among other aspects, are observed. These factors show that, even if there are recommendations and good practices for the linking of resources, there is still a way to go to achieve adequate compliance with the characteristics proposed by both Open Data and LOD. In this context, the VACIT tool is expected to become a focal point for the creation of new modules that contribute to the insertion of new indicators, as well as the linking of other platforms for the publication and consumption of LOD resources, and further studies and research on the CKAN platform, in order to increase the methods and


indicators of the different levels of trust, usability, provenance, quality and open data linkage, which are necessary given the exponential growth of the semantic web [18]. In the same way, it is possible to implement a fuzzy logic module, as shown in [35], which allows determining the level of trust and quality of the data, as well as security studies that include all instances of CKAN, as in the case of [36], but in a general way for this platform. As future work, it is proposed to abstract the Machine Learning module in terms of language, going beyond the main language (English), and allowing each dataset to be related to others that do not belong to the same instance but are under the same domain label. Additionally, as mentioned in the implementation of the Machine Learning module, the level of concordance works with a training base, previously analyzed and obtained from a specific instance [25]. This base can be generalized by performing a statistical study on all the domain tags used by the platform, crossing these tags with other languages for simplification, in order to obtain more accurate results.
Acknowledgment. This research has been developed within the framework of the doctoral research project on Linked Data, at the Universidad Distrital Francisco José de Caldas. In the same way, the topic is being worked on as a line of the GIIRA Research Group.

References 1. Open Knowledge. Open Data HandBook. http://opendatahandbook.org/ 2. BCN. Linked open Data: ¿Qué es?. s.f. https://datos.bcn.cl/es/informacion/que-es 3. Berners-Lee, T., et al.: Linked data-the story so far. Int. J. Semant. Web Inf. Syst. 5, 1–22 (2009) 4. Bizer, C., Heath, T.: Linked Data. Evolving the Web into a Global Data Space. The Semantic Web: Theory and Technology. Morgan & Claypool Publishers, San Rafael (2011) 5. Schmachtenberg, M., Bizer, C., Paulheim, H.: State of the LOD Cloud 2014. University of Mannhein (2014). http://lod-cloud.net/state/state_2014/ 6. LODStats. State of LOD Cloud. http://stats.lod2.eu/ 7. Guzmán-Luna, J., Durley-Torres, I., López-Bonilla, M.: Semántica para repositorios de objetos de aprendizaje. Scientia Et Technica 19(4), 425–432 (2014). http://www.redalyc.org/ pdf/849/84933912011.pdf 8. Melero, R.: Repositorios. Universidad de Costa Rica. Vicerrectoría de Investigación. San José (2014). https://ucrindex.ucr.ac.cr/docs/repositorios_2014.pdf 9. CKAN, CKAN API Guide. S. f. http://docs.ckan.org/en/latest/api/ 10. Winn, J.: Open data and the academy: an evaluation of CKAN for research data management. In: IASSIST 2013. http://eprints.lincoln.ac.uk/9778/1/CKANEvaluation.pdf 11. W3C. Data on the Web. Best Practices. DWBP Use Cases and Requirements. s.f. https:// w3c.github.io/dwbp/bp.html 12. Ruckhaus, E., Vidal, M., Castillo, S., Burguillos, O., Baldizan, O.: Analyzing linked data quality with LiQuate. In: European Semantic Web Conference ESWC 2014. Universidad Simón Bolívar, Venezuela (2014). https://link.springer.com/chapter/10.1007/978-3-31911955-7_72


13. Herrera-Cubides, J., Gaona-García, P., Gordillo-Orjuela, K.: A view of the web of data. case study: use of services CKAN. Ingeniería 22(1), 111–124 (2017). ISSN 2344-8393.https:// revistas.udistrital.edu.co/ojs/index.php/reving/article/view/10542. https://doi.org/10.14483/ udistrital.jour.reving.2017.1.a07 14. Rajabi, E., Sanchez-Alonso, S., Sicilia, M.-A.: Analyzing broken links on the web of data: an experiment with DBpedia. J. Assoc. Inf. Sci. Technol. 65(8), 1721–1727 (2014). http:// onlinelibrary.wiley.com/doi/10.1002/asi.23109/abstract 15. Bizer, C.: The emerging web of linked data. IEEE Intell. Syst. 24(5): 87–92 (2009). http:// lpis.csd.auth.gr/mtpx/sw/material/IEEE-IS/IS-24-5.pdf 16. HPI Institut. State of LOD Cloud (2011). http://lod-cloud.net/state/ 17. Linked Science. Tutorial on Visual Analytics (2013). http://linkedscience.org/events/ vislod2014/ 18. Berners, T., Hendler, J., Lassila, O.: The semantic web. Sci. Am. 284(5), 29–37 (2001) 19. Hasso Plattner Institut. Data Hub LOD Datasets. http://validator.lod-cloud.net/index.php 20. Baron Neto, C., Müller, K., Brümmer, M., Kontokostas, D., Hellmann, S.: Lodvader: an interface to lod visualization, analytics and discovery in real-time. In: 25th WWW conference (2016). http://gdac.uqam.ca/WWW2016-Proceedings/companion/p163.pdf 21. Ruhland, J., Wenige, L.: Scalable property aggregation for linked data recommender systems. In: 2015 3rd International Conference on Future Internet of Things and Cloud. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300852&isnumber=7300539 22. Mencía, E.L., Holthausen, S., Schulz, A., Janssen, F.: Using data mining on linked open data for analyzing e-procurement information. In: Proceedings of the 2013 International Conference on Data Mining on Linked Data, DMoLD 2013, vol. 1082, pp. 50–57 (2013). http://ceur-ws.org/Vol-1082/paper4.pdf. Accessed 11 Dec 2017 23. Fanizzi, N., d’Amato, C., Esposito, F.: Mining linked open data through semi-supervised learning methods based on self-training. In: ICSC 2012, pp. 277–284 (2012). https://pdfs. semanticscholar.org/39e6/0cad2b9d866fd324e8de4a547589fc40fed0.pdf 24. Ckan org. What is ckan? http://docs.ckan.org/en/latest/user-guide.html#what-is-ckan 25. SAS Institute Inc. Machine learning: What it is and Why it Matters. https://www.sas.com/es_ co/insights/analytics/machine-learning.html 26. Thomas, J., Cook, K.: Illuminating the Path: Research and Development Agenda for Visual Analytics. IEEE-Press. National Visualization and Analytics Center (NVAC) (2005). http:// vis.pnnl.gov/pdf/RD_Agenda_VisualAnalytics.pdf 27. IEBS, la Escuela de Negocios de la Innovación y los Emprendedores. Visual Analytics in Business Intelligence & Big Data. https://comunidad.iebschool.com/ visualanalyticsbusinessintelligencebigdata/que-es-el-visual-analytics/. Accessed 24 Jan 2018 28. Gaitho, M.: Machine Learning: what it is and why it matters (2018). https://www. simplilearn.com/what-is-machine-learning-and-why-it-matters-article. Accessed 8 Mar 2018 29. Minnesota geospatial commons. About the Minnesota Geospatial Commons. https://gisdata. mn.gov/content/?q=about. Accessed 8 Mar 2018 30. Rettinger, A., Lösch, U., Tresp, V., d’Amato, C., Fanizzi, N.: Mining the semantic web— statistical learning for next generation knowledge bases. Data Min. Knowl. Discov. 24(3), 613–662 (2012) 31. Bloem, P., de Vries, G.K.D.: Machine learning on linked data, a position paper. In: Proceedings of the 1st International Conference on Linked Data for Knowledge Discovery, LD4KD 2014, vol. 
1232, pp. 64–68 (2014) 32. Tensorflow. TensorFlow API Documentation. https://www.tensorflow.org/api_docs/. Accessed 5 Nov 2017 33. Datahub org. About Datahub. https://datahub.io/docs/about. Accessed 11 Oct 2017 34. Google inc. What is angular? https://angular.io/docs. Accessed 10 Nov 2017


35. Gaona-García, P., Herrera-Cubides, J., Alonso-Echeverri, J., Riaño-Vargas, K., GómezAcosta, A.: Fuzzy logic system to evaluate levels of trust on linked open data resources. Revista Facultad de Ingeniería (86), 40–53 (2018). http://dx.doi.org/10.17533/udea.redin. n86a06 36. Gaona-Garcia, P., Gordillo, K., Montenegro-Marin, C., Acosta, G.A.: Visualizing security principles to access resources based on linked open data: case study Dbpedia. International Information Institute (Tokyo). Information, Koganei, vol. 21(1), pp. 109–122, January 2018

Selecting Best Machine Learning Techniques for Breast Cancer Prediction and Diagnosis Youness Khourdifi(&) and Mohamed Bahaj(&) Faculty of Sciences and Techniques, Hassan 1st University, Settat, Morocco {ykhourdifi,mohamedbahaj}@gmail.com

Abstract. In this article, we present an overview of the evolution of big data in the health system and apply four learning algorithms to a medical data set. The aim of this research work is to predict breast cancer, which is the second leading cause of death among women worldwide and whose early detection and prevention can dramatically reduce the risk of death, using several machine-learning algorithms, namely Random Forest, Naïve Bayes, Support Vector Machines (SVM) and K-Nearest Neighbors (K-NN), and to choose the most effective one. The experimental results show that SVM gives the highest accuracy, 97.9%. This finding will help to select the best classification machine-learning algorithm for breast cancer prediction.
Keywords: Machine learning · Classification · K-NN · Naïve Bayes · Random Forest · Breast cancer · SVM

1 Introduction
Knowledge creation and the management of large amounts of heterogeneous data has become a major research area, namely data mining. Data mining is a process of identifying new, potentially useful, valid and ultimately understandable models in data [1]. Data mining techniques can be classified into supervised and unsupervised learning techniques. The unsupervised learning technique is not guided by variables and does not create hypotheses before analysis; based on the results, a model is constructed. A common unsupervised technique is clustering [2]. The supervised learning technique requires the construction of a model that is used in the analysis of past performance. The supervised learning techniques used in medical and clinical research are classification, statistical regression and association rules [3]. This study focuses on the use of classification techniques in medical science and bioinformatics. The main objective of this paper is the prediction of breast cancer using the WEKA data-mining tool and its use for classification in the field of medical bioinformatics. It first classifies the data set and then determines the best algorithm for the diagnosis and prediction of breast cancer. Prediction begins with identifying symptoms in patients, then identifying sick patients from a large number of sick and healthy patients [4]. Thus, the primary objective of this paper is to analyze data from a breast cancer data set using a classification technique to accurately predict the class in each case. Many authors have used the WEKA tool in their work to compare the performance of different classifiers applied to different datasets. But none of the authors


worked on predicting the accuracy of the breast cancer data set. Here, we consider four classifiers and study their performance according to various parameters obtained by applying them to the data set. The rest of the paper is arranged as follows: recent work in this area is discussed in Sect. 2. Section 3 gives a detailed description of the proposed methodology. Section 4 explains in detail the experiments using the proposed machine learning models. Section 5 presents conclusions and future research directions.

2 Related Works
Several experiments have been conducted on medical data sets using multiple classifiers and feature selection techniques, and much research on breast cancer datasets can be found in the literature; many of these studies show good classification accuracy. Sivaprakasam et al. [5] compared the performance of C4.5, Naïve Bayes, Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) to find the best classifier; SVM turned out to be the most accurate, with an accuracy of 96.99%. In our project, SVM proved to be the best classifier with an accuracy of 97.9%. Guo et al. [6] proposed a Multilayer Perceptron (MLP) classifier trained with the error back-propagation algorithm and obtained an accuracy of 96.21%, while we obtained an accuracy of 97.89% with 5 layers and 10-fold cross-validation using an MLP. Karabatak et al. [7] presented an automatic diagnostic system for breast cancer detection based on association rules (AR) and neural networks (NN), obtaining a classification accuracy of 97.4%. Chaurasia et al. [8] compared the performance criteria of supervised learning classifiers such as Naïve Bayes, SVM with RBF kernel, RBF neural networks, decision trees (J48) and simple CART to find the best classifier for breast cancer data sets; the experimental results showed that the SVM-RBF kernel is more accurate than the other classifiers, obtaining 96.84% accuracy on the (original) Wisconsin breast cancer data set. Djebbari et al. [9] considered the effect of an ensemble of machine learning techniques to predict survival time in breast cancer; their technique shows better accuracy on their breast cancer dataset compared to previous results. Aruna et al. [10] achieved an accuracy of 69.23% using the decision tree classifier (CART) on breast cancer data sets. Liu et al. [11] experimented on breast cancer data using the C5 algorithm, generating additional training data from the original set using combinations with repetitions to produce multiple sets of the same size as the original data, in order to predict breast cancer survivability. Delen et al. [12] used 202,932 breast cancer patient records, which were pre-classified into two groups of "survivors" (93,273) and "did not survive" (109,659); survivability prediction results were in the range of 93% accuracy. In recent work, Latchoumi et al. [13] proposed a weighted particle swarm optimization (WPSO) with a smooth support vector machine (SSVM) for classification, reaching 98.42%. Asri et al. [14] showed that SVM can predict breast cancer better than Naïve Bayes. Osman et al. [15] presented a two-step SVM algorithm combining a two-step clustering algorithm with an efficient probabilistic support vector machine to analyze the Wisconsin Breast Cancer Diagnosis (WBCD) data set, with a classification accuracy of 99.10%.


3 Methodology

3.1 Data Set and Attributes

Our research uses a publicly available data set from the University of Wisconsin Hospitals, Madison, Breast Cancer Database [16]. There are 11 attributes for each sample, of which attributes 2 to 10 represent the measured features of each instance. The number of cases is 699; however, some instances are removed because of missing attribute values. In addition to the 9 feature attributes there is one class attribute, and the remaining numeric column is the instance ID. Each instance belongs to one of two classes: benign (B) or malignant (M). After further analysis we worked with 569 instances described by 30 attributes.
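For illustration only, the Python snippet below loads the 569-instance, 30-attribute diagnostic version of the Wisconsin breast cancer data that ships with scikit-learn; the paper itself works with the UCI export of the database, so the loader and class encoding used here are assumptions of the sketch rather than part of the original experimental setup.

```python
# Illustrative data loading (not the paper's original WEKA/ARFF workflow):
# scikit-learn bundles the Wisconsin Diagnostic Breast Cancer data,
# 569 instances described by 30 numeric attributes, classes benign/malignant.
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data, data.target            # X has shape (569, 30)
print(X.shape)                           # (569, 30)
print(dict(zip(data.target_names,        # class distribution
               [(y == 0).sum(), (y == 1).sum()])))
```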

4 Experiments and Results

In this section we discuss the breast cancer dataset, the experiments and the evaluation scheme. In this study we use WEKA [17], which implements many data mining algorithms for clustering, classification, regression and analysis of results. The proposed architecture is shown in Fig. 1.

Fig. 1. The proposed architecture

4.1 Experimental Setup

This section describes the parameters and discusses the results of the assessment of the implemented machine learning methods.

Accuracy: The accuracy of detection is measured as the percentage of correctly identified instances, i.e. the number of correct predictions divided by the total number of instances in the dataset. It should be noted that the accuracy depends strongly on the threshold chosen by the classifier and may therefore vary between different test sets; it is thus not the optimal way to compare different classifiers, but it gives an overview per class. The accuracy can be calculated using the following equation:

Accuracy = (TP + TN) / (TP + FP + TN + FN)

where TP = true positives, FN = false negatives, FP = false positives and TN = true negatives. Similarly, P and N represent the positive and negative populations of malignant and benign cases, respectively.


Recall: Recall, also commonly known as sensitivity, is the rate of positive observations that are correctly predicted as positive. This measure is particularly desirable in the medical field, because it expresses how many of the positive observations are correctly diagnosed. The sensitivity, or true positive rate (TPR), is defined by TP/(TP + FN), while the specificity, or true negative rate (TNR), is defined by TN/(TN + FP).

Precision: The percentage of correctly classified elements among those assigned to a given class: Precision = TP/(TP + FP).
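The measures defined above can be computed directly from the four confusion-matrix counts, as in the minimal Python sketch below; the function name and the example counts (the SVM entries of Table 3, with malignant taken as the positive class) are illustrative only.

```python
# Evaluation measures from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    recall      = tp / (tp + fn)          # sensitivity, true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision   = tp / (tp + fp)
    return accuracy, recall, specificity, precision

# SVM confusion matrix from Table 3, malignant as the positive class:
# 201 TP, 11 FN, 1 FP, 356 TN  ->  accuracy close to 0.979
print(classification_metrics(tp=201, fp=1, tn=356, fn=11))
```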

4.2 Results

To apply and evaluate our classifiers, we use the 10-fold cross-validation test, a technique for evaluating predictive models that splits the original set into a training sample used to build the model and a test set used to evaluate it. We evaluate the effectiveness of all classifiers in terms of time to build the model, correctly classified instances, incorrectly classified instances and accuracy (Table 1).

Table 1. Classifiers performance

Evaluation criteria               K-NN    SVM     RF      NB
Time to build model (s)           0.01    0.08    0.28    0.01
Correctly classified instances    547     557     546     527
Incorrectly classified instances  22      12      23      42
Accuracy (%)                      96.1    97.9    96      92.6
TP rate                           0.961   0.979   0.960   0.926
FP rate                           0.046   0.034   0.055   0.086
Recall                            0.961   0.979   0.960   0.926
Precision                         0.961   0.979   0.960   0.926
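As a hedged illustration of this experimental setup, the snippet below mirrors the WEKA comparison of Table 1 in scikit-learn: the same four classifier families evaluated with 10-fold cross-validation. The hyper-parameters are default choices assumed for the sketch, not the exact WEKA configurations, so the scores will not match Table 1 exactly.

```python
# Rough scikit-learn analogue of the WEKA experiment summarised in Table 1.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "K-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF":   RandomForestClassifier(n_estimators=100, random_state=0),
    "NB":   GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```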

In order to improve the measurement of classifier performance, the simulation error is also taken into account in this study. To do this, we evaluate the effectiveness of each classifier in terms of: the Kappa statistic, a chance-corrected measure of agreement between the predicted and actual classes; the Mean Absolute Error, which measures how closely the predictions approximate the eventual outcomes; the Root Mean Squared Error; the Relative Absolute Error; and the Root Relative Squared Error. The results are presented in Table 2. Figure 2 shows the ROC curve of the different classifiers in terms of the accuracy of each classifier; the ROC curve provides a graph that illustrates the performance of the different classifiers, and from the plot we can easily select the optimal models and reject the others. Since confusion matrices are a useful way of evaluating a classifier, Table 3 is also given: each row represents the counts for an actual class, while each column shows the predictions.


Table 2. Training and simulation error

Evaluation criteria              K-NN     SVM      RF       NB
Kappa statistic                  0.9171   0.9545   0.9128   0.8418
Mean absolute error              0.0405   0.0211   0.0757   0.0732
Root mean squared error          0.1963   0.1452   0.1731   0.2648
Relative absolute error (%)      8.6513   4.5095   16.1855  15.6565
Root relative squared error (%)  40.591   30.0354  35.8076  54.7597

Fig. 2. ROC curve

Table 3. Confusion matrix

Classifier   Actual class   Predicted malignant   Predicted benign
K-NN         Malignant      200                   12
K-NN         Benign         10                    347
SVM          Malignant      201                   11
SVM          Benign         1                     356
RF           Malignant      196                   16
RF           Benign         7                     350
NB           Malignant      190                   22
NB           Benign         20                    337

4.3 Discussion

After creating the predictive model, we can now analyze the results obtained when evaluating the effectiveness of our algorithms. The per-class results (Table 3) show that SVM obtained the highest TP rate, 99.7% for the benign class and 94.6% for the malignant class; from these results we can understand why SVM outperformed the other classifiers. The ROC curve allows a better understanding of the power of a machine learning algorithm. We can easily observe in Fig. 2 that SVM is close to the perfect classifier, since its curve rises almost directly from the lower left corner to


the upper left corner and then to the upper right corner (99% sensitive and 99% specific). Now compare the actual class results with the predicted results using the confusion matrix, as shown in Table 3. SVM correctly predicts 557 of the 569 instances (356 benign instances that are actually benign and 201 malignant instances that are actually malignant) and incorrectly predicts 12 instances (11 malignant class instances predicted as benign and 1 benign class instance predicted as malignant). This is why the accuracy of SVM is better than that of the other classification techniques used, with a lower error rate. In summary, SVM has demonstrated its power in terms of effectiveness and efficiency based on accuracy and recall. Compared with the considerable amount of Wisconsin breast cancer research in the literature that compares the classification accuracy of data mining algorithms, our experimental results achieve a high accuracy of 97.9% in the classification of breast cancer data. It can be noted that SVM outperforms the other classifiers in terms of accuracy, sensitivity, specificity and precision in classifying breast cancer data.

5 Conclusion and Future Work

In this study we used four learning algorithms, SVM, Random Forest, Naïve Bayes and K-NN, applied to the breast cancer dataset, and compared them according to several criteria: accuracy, turnaround time, sensitivity and specificity. SVM proved superior to the others on several levels, in particular through the lowest error rate and the shortest turnaround time. For future work, we intend to conduct an in-depth study of these datasets by combining ML techniques with deep learning models, applying more complex deep learning architectures to achieve better performance. In addition, we will test our deep learning approach on larger data sets with more disease classes to achieve higher accuracy. Another future research direction is to adopt these ML techniques for constrained applications in medical e-health. The corresponding results will be published in future papers.

References 1. Witten, I.H., Frank, E., Hall, M.A., Pal, C.J.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann (2016) 2. Berkhin, P.: A survey of clustering data mining techniques. In: Kogan, J., Nicholas, C., Teboulle, M. (eds.) Grouping Multidimensional Data: Recent Advances in Clustering, pp. 25–71. Springer, Heidelberg (2006) 3. Chapelle, O., Scholkopf, B., Zien, A.: Semi-supervised learning (book reviews). IEEE Trans. Neural Netw. 20(3), 542 (2009) 4. Meesad, P., Yen, G.G.: Combined numerical and linguistic knowledge representation and its application to medical diagnosis. IEEE Trans. Syst. Man Cybern. - Part A Syst. Hum. 33(2), 206–222 (2003)


5. Christobel, A., Sivaprakasam, Y.: An empirical comparison of data mining classification methods. Int. J. Comput. Inf. Syst. 3(2), 24–28 (2011) 6. Guo, H., Nandi, A.K.: Breast cancer diagnosis using genetic programming generated feature. Pattern Recognit. 39(5), 980–987 (2006) 7. Karabatak, M., Ince, M.C.: An expert system for detection of breast cancer based on association rules and neural network. Expert Syst. Appl. 36(2, Part 2), 3465–3469 (2009) 8. Chaurasia, V., Pal, S.: Data mining techniques: to predict and resolve breast cancer survivability. Int. J. Comput. Sci. Mob. Comput. IJCSMC 3(1), 10–22 (2017) 9. Djebbari, F., Liu, Z., Phan, S., Famili, F.: An ensemble machine learning approach to predict survival in breast cancer. Int. J. Comput. Biol. Drug Des. 1(3), 275–294 (2008) 10. Aruna, S., Rajagopalan, S.P., Nandakishore, L.V.: Knowledge based analysis of various statistical tools in detecting breast cancer. Comput. Sci. Inf. Technol. 2, 37–45 (2011) 11. Liu, Y., Wang, C., Zhang, L.: Decision tree based predictive models for breast cancer survivability on imbalanced data. In: 2009 3rd International Conference on Bioinformatics and Biomedical Engineering, pp. 1–4 (2009) 12. Delen, D., Walker, G., Kadam, A.: Predicting breast cancer survivability: a comparison of three data mining methods. Artif. Intell. Med. 34(2), 113–127 (2005) 13. Latchoumi, T.P., Parthiban, L.: Abnormality detection using weighed particle swarm optimization and smooth support vector machine. Biomed. Res. 28(11) (2017) 14. Asri, H., Mousannif, H., Al Moatassime, H., Noel, T.: Using machine learning algorithms for breast cancer risk prediction and diagnosis. Procedia Comput. Sci. 83, 1064–1069 (2016) 15. Osman, A.H.: An enhanced breast cancer diagnosis scheme based on two-step-SVM technique. Int. J. Adv. Comput. Sci. Appl. 8(4), 158–165 (2017) 16. Lichman, M.: UCI Machine Learning Repositry (2013). https://archive.ics.uci.edu/ 17. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: an update. SIGKDD Explor. Newsl. 11(1), 10–18 (2009)

Identification of Human Behavior Patterns Based on the GSP Algorithm

Hector F. Gomez A1(&), Edwin Fabricio Lozada T.2, Luis Antonio Llerena2, Jorge Alonso Benitez Hurtado3, Richard Eduardo Ruiz Ordoñez4, Freddy Giancarlo Salazar Carrillo5, Joselito Naranjo-Santamaria6, and Teodoro Alvarado Barros3

1 Facultad de Ciencias Humanas y de la Educacion, Universidad Técnica de Ambato, Ambato, Ecuador [email protected]
2 Universidad Regional Autonoma de los Andes-UNIANDES, Carrera de Sistemas, Km. 51/2 via a Baños, Ambato, Ecuador {ua.edwinlozada,ua.luisllerena}@uniandes.edu.ec
3 Departamento de Ciencias Juridicas, Universidad Tecnica Particular de Loja, Loja, Ecuador
4 Instituto Tecnologico Superior Beatriz Cueva de Ayora, Rectorado, Loja, Ecuador
5 Universidad Técnica de Ambato, Direccion de Posgrados, Ambato, Ecuador
6 Facultad de Contabilidad y Auditoria, Universidad Técnica de Ambato, Ambato, Ecuador

Abstract. The analysis of the algorithms dedicated to the identification of sequential patterns described in the literature shows that not all of them are suitable for the type of scene that video surveillance often deals with, in particular for the recognition of suspicious behavior patterns: to classify human behavior as normal or suspicious, it is necessary to analyze all the monitored actions. For this reason, the main proposal of this study is a modification of the Generalized Sequential Patterns algorithm, which we call Generalized Sequential Patterns + memory; it mainly incorporates a module that manages the number of repetitions and combinations of actions (and not only their sequence) that make up patterns. For the experimentation, scenes of theft in supermarkets were recorded and labeled with states that we assume can be recognized by an artificial vision system. The results obtained were analyzed and their performance was evaluated by comparing them with the results obtained from the GSP application.

Keywords: States · Frequent sequences · Items · Patterns · Theft in supermarkets · Human behavior

1 Introduction

The characterization and modeling of human behavior have been especially enhanced in recent years by the development of technologies applied to surveillance, through video or other types of sensors. This is clearly due to the need for, and urgency of, the


security that our society increasingly demands [1–3]. The application domain of this investigation is the analysis of videos in which people commit shoplifting in supermarkets, because it is a clear example of the type of scene in which human actions that are individually normal in appearance define the suspicious situations that are recorded. For example, if a person takes a product while repeatedly checking that nobody is watching, or walks (and turns) continuously through the supermarket, the constant repetition of those actions (states) generates suspicion in the person watching the video. This means that not only the sequence of the states but also their repetition and combination must be analyzed in order to establish the difference between normal and suspicious behavior. The analysis of the algorithms dedicated to the identification of sequential patterns described in the literature shows that not all of them are suitable for this type of scenario. For this reason, the main proposal of this study is a modification of the GSP algorithm (Generalized Sequential Patterns) [1], which we call GSP_M (Generalized Sequential Patterns + memory), which mainly incorporates a module that manages the number of repetitions and combinations of actions (and not only their sequence) that make up patterns. To describe the development of the research, this article is structured as follows: it begins with a review of research related to the topic of study (Sect. 2). Next (Sect. 3), the GSP algorithm and its modification GSP_M are described. Section 4 describes the experiments conducted in the domain of people who commit shoplifting in supermarkets and an analysis of the results. The last section of the article presents the conclusions of the study, as well as the lines of new work opened by the research.

2 State of the Art: Identification of Patterns from Bases of Sequences of States

In this investigation, classifying human behavior as normal or suspicious in video scenes requires a study of all the actions (states) that people register in them. These actions are sequential [4, 5] and are the input data for the algorithms that attempt to classify human behavior. Algorithms based on Bayesian networks and probabilistic Petri nets can give acceptable classification results as long as the table of transition probabilities between states (TTE) [6, 7] is known (or learned) beforehand. Oliver [6] uses artificial vision algorithms to identify the new states of people in a video and applies Bayesian networks and Petri nets to compare the newly learned transitions with the previously learned TTE. Hu [8] analyzed human behavior by identifying vehicle trajectory patterns through unsupervised neural networks. Hu proposes that the input vector for the unsupervised neural network is the trajectory of movement of the vehicle observed in the video. The unsupervised neural network is trained with the trajectories of the vehicles, and the result is the set of possible trajectories Tv that vehicles can register in the area from which the videos were taken. The generation of alerts in Hu's proposal consists of recording the trajectory of a new vehicle and comparing it with the trajectories previously obtained in Tv; if the result of the comparison is negative, an alert is generated immediately [9–11]. In order to satisfy the needs of this domain, the appropriate modifications are introduced into the GSP algorithm, obtaining what is called the GSP_M algorithm. GSP has been selected as the starting algorithm because it performs an examination of the whole database (BD),


analyzing the common states [1], although it requires adaptation to deal with the repetition of items and the dynamic updating of patterns. The experimental database is not extensive, which also favors this selection, since GSP obtains better results on databases of medium size.

3 Methodology: Adaptation of the GSP Algorithm to the Type of Problems Treated: The GSP_M Algorithm

A scenario is the place where a set of actions occurs. A state is an action that a person registers in a video (walk, sit, stop, etc.). The lines that connect the scenes with the scenarios, and the scenarios with the actions, define the taxonomy of the domain [1] and also the basic principles for finding sequential patterns. An item i_n = {e_1, e_2, …, e_n} is a set of states grouped consecutively over the course of a time window. An itemset I_n = {i_1, i_2, …, i_n} is the set of all the items grouped consecutively over the course of a time window. A sequence s is composed of the itemsets grouped over the course of a time window. The terms item, itemset and sequence are used because of the way taxonomies are obtained through sequential patterns and because of the horizontal and vertical analyses proposed by Srikant [1] for GSP, which are detailed in the following steps:

With minimum support ≥ 50%, T4 = (observing, observing, observing, stopping) becomes a pattern, since it appears in 50% of the transactions. This result is obtained by checking that all of T4 is included in the rest of the transactions (horizontal comparison). However, the horizontal comparison cannot find patterns between transactions (vertical analysis); for this reason, in Step 2 the previous transactions are grouped in pairs.
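A minimal sketch of this horizontal comparison is given below; the transactions and state names are invented for the example (they are not the paper's labeled data), and a candidate becomes a pattern when its support over the transactions reaches the minimum support.

```python
# Horizontal comparison: count in how many transactions a candidate sequence
# of states occurs as an ordered subsequence.
def contains(transaction, candidate):
    it = iter(transaction)
    return all(state in it for state in candidate)   # ordered subsequence test

transactions = [
    ["walk", "observing", "observing", "observing", "stopping"],
    ["observing", "observing", "take_product", "observing", "stopping"],
    ["walk", "take_product", "walk", "stopping"],
    ["walk", "walk", "stopping"],
]
candidate = ["observing", "observing", "observing", "stopping"]

support = sum(contains(t, candidate) for t in transactions) / len(transactions)
print(f"support = {support:.0%}")   # 50% -> the candidate is a frequent pattern
```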


A video scene can be represented by a sequence s (obtained from the manual labeling of the video scenes; see the experimental section) or by a set of sub-sequences sj, depending on the size of the time window assigned to the configuration of the sequence. If the time window Vts has the same duration as the video scene, the scene is represented by a single sequence s; otherwise it is represented by a set of sj (sub-sequences of s that together represent the entire video scene). The use of a time window is known as a temporal restriction [12]. The s or sj sequences are stored in a database (BD); each sequence stored in the BD corresponds to the states registered by the behavior of a single person in the monitored scene. We add to the GSP algorithm a structure that consists of three modules: (1) a module for the modification of the input sequences that takes into account the repetition of itemsets, (2) a module for the memorization of the sequences that could not become patterns and (3) a module for memorizing the weight of each state per sequence. Figures 2 and 3 graphically describe the structure of GSP_M and its functionality; Fig. 2 describes the training phase of GSP_M.

Modification of the input sequences to take into account the repetition of itemsets. In the sequence s, each of its itemsets is repeated a certain number of times, as described in the previous section. In GSP_M, s is decomposed into its sub-sequences s1, s2 and s3, starting from the item that is repeated, in this case I1; the item that is repeated is called the predecessor item. If s1 is taken as the candidate sequence (scandidate), the number of times that this scandidate is included in the sub-sequences of s is 3 (since I1 is in the three sub-sequences of s). Up to this point the result is the same as that of the GSP application; however, for other scandidates the result is different, as can be seen in Table 1.

Table 1. Treatment of the repetition of items through GSP_M

Group                                            Scandidate   Cycles
((s1) ∪ (s2)), ((s2) ∪ (s3)), ((s1) ∪ (s3))      ⟨I1⟩         3
((s1) ∪ (s3)), ((s2) ∪ (s3)), ((s3) ∪ (s3))      ⟨…⟩          3
((s1) ∪ (s3)), ((s2) ∪ (s3))                     ⟨…⟩          2

In Table 1, the scandidate ⟨I1⟩ is repeated three times. To obtain this repetition count, it is verified whether there is any combination of the items of the sub-sequences of s that results in the scandidate. Thus, combining item I1 of s1 with item I1 of s2 it is possible to obtain the scandidate; combining item I1 of s2 with item I1 of s3 it is also possible to obtain the scandidate; and the same happens if we combine item I1 of s1 with item I1 of s3. As a result of these combinations, the candidate sequence ⟨I1⟩ is repeated 3 times, one more time than calculated by GSP in the previous section. With a minimum support ≥ 2, the sequential response patterns obtained would be s1, s4, s5 and s6, since their repetitions equal or exceed the value of the minimum support. In the s5 and s6


sequences the item I2 appears; the same item, analyzed individually as part of the sub-sequence s7, is discarded by both GSP and GSP_M, since it appears only once in the sequence s. Its combinations, however, are taken into account by GSP_M, since it is able to obtain frequent sequences in combination with s1 and with s3. After this modification, GSP is applied and the resulting sequential patterns are stored in a BD. GSP, however, discards the sequences that could not become patterns, and for GSP_M we want to keep open the possibility that these sequences become relevant patterns later [13–15]. The GSP algorithm discards the sequences that do not exceed the minimum support; with the sequences discarded by GSP, a new database called the Sequence Base Under Minimum Support Threshold (BSBU) is built, according to what is proposed in Algorithm 1. In Algorithm 2, when a new sequence s_new enters, it is verified whether it is included in some sequence sp ∈ BP. If s_new is not included in BP, the inclusion of s_new in some sequence sd of BSBU is verified. If the inclusion is confirmed, the frequency of sd in BSBU is increased. If the new frequency assigned to sd exceeds the minimum support, then a new pattern is generated, so sd is removed from BSBU and included in BP.

Algorithm 1. GSP with memory of sequences that do not exceed the threshold

In Algorithm 1, it is verified whether a sequence a is included in a sequence s ∈ BD, in order to increase its frequency and determine whether or not it can become a sequential pattern. To verify the inclusion of one sequence in another, it is necessary to decompose the sequences into their itemsets (sub-sequences). To clarify this process, consider checking whether the sequence s1 is included in the sequence s2 by decomposing the sequences s1 and s2 into their sub-sequences until 1-sequences are obtained.
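The inclusion test and the BSBU-to-BP promotion just described can be sketched as follows; the data structures (a list for BP, a frequency dictionary for BSBU) and the toy sequences are assumptions of this sketch, not the paper's implementation.

```python
# Memory of below-threshold sequences (Algorithms 1 and 2, sketched).
def included(a, s):
    """True if sequence `a` is included in sequence `s` (ordered itemsets)."""
    it = iter(s)
    return all(itemset in it for itemset in a)

def process_new_sequence(s_new, bp, bsbu, min_support):
    if any(included(s_new, sp) for sp in bp):         # already covered by BP
        return
    for sd in list(bsbu):                             # look for it in BSBU
        if included(s_new, sd):
            bsbu[sd] += 1
            if bsbu[sd] >= min_support:               # promotion: BSBU -> BP
                bp.append(sd)
                del bsbu[sd]
            return
    bsbu[tuple(s_new)] = 1                            # first occurrence

bp, bsbu = [], {}
for s in [("I1", "I2"), ("I1", "I2"), ("I1", "I3")]:
    process_new_sequence(s, bp, bsbu, min_support=2)
print(bp)     # [('I1', 'I2')]  -- promoted once its frequency reached 2
print(bsbu)   # {('I1', 'I3'): 1}
```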


4 Experimentation and Analysis of Results

Figure 1 shows a screen of the developed prototype, which we call HBP (Human Behavior Patterns), used to perform the manual labeling of the states from the images produced by the tracker in Matlab, in order to prepare the sequences and execute GSP, GSP_M and its variants.

Fig. 1. HBP application screen

In Fig. 1, the result of the manual labeling is shown in the first sub-window. Following the arrow that indicates the manual labeling, the coding of the states is started: each state was assigned a single number, whose value depends on the order of appearance of the states in the labeling. Video observation and manual annotation of the states: 694 videos recorded by video surveillance systems in supermarkets were made available. Each of the videos was watched by an assistant student for 40 s (the time window), who then manually recorded the states observed in the video. Each state corresponds to 1 s; the time window for an item is 6 s, so each item had 6 states; the time window for an itemset is 18 s, so each itemset had 3 items; and the time window for a sequence is 40 s, so each sequence had 2 itemsets. This configuration gave the best results in the execution of GSP_M. The following states, which are those used in the labeling, are assumed to be possible results of the application of artificial vision algorithms for tracking and for the recognition of certain gestures or body movements:
– Walk: indicates that a person is walking. Work related to obtaining this type of state through artificial vision can be found in [16, 17].
– Observe cameras: indicates that the person repeatedly searches for the security cameras in the supermarket or repeatedly turns his or her gaze towards them. Work related to obtaining this type of state through artificial vision can be found in [18].


– Take product: indicates that the person takes a product. Work related to obtaining this type of state through artificial vision can be found in [19, 20].
– Stop: indicates that the person has stopped. Work related to obtaining this type of state through artificial vision can be found in [17, 21].
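The grouping of the one-second labels into items, itemsets and sequences according to these window sizes can be sketched as follows; the 40-second state stream below is invented for the example.

```python
# Group 40 one-second state labels into items (6 states), itemsets (3 items)
# and a sequence (2 itemsets), following the window sizes described above.
def chunk(xs, size):
    return [xs[i:i + size] for i in range(0, len(xs) - size + 1, size)]

states = (["walk"] * 10 + ["observe_cameras"] * 8 + ["take_product"] * 4 +
          ["walk"] * 10 + ["stop"] * 8)        # 40 labels, one per second

items    = chunk(states, 6)       # 6 items of 6 states each (last 4 s unused)
itemsets = chunk(items, 3)        # 2 itemsets of 3 items each
sequence = chunk(itemsets, 2)[0]  # 1 sequence of 2 itemsets

print(len(items), len(itemsets), len(sequence))   # 6 2 2
```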

5 Conclusions

The GSP_M algorithm has been tested in the domain of shoplifting in supermarkets, with satisfactory results. From the results of Table 1 we conclude that the complete GSP_M is more efficient than GSP, and than the incomplete GSP_M, in obtaining true positives (TP) and false positives (FP). The GSP_M algorithm produces a database of patterns BP, based on the temporal sequences generated by the behavior of people; spatial relationships and the handling of contemporaneous items have not yet been taken into account, and this is intended to be addressed in future work. The patterns in BP are key to the construction of ontologies for the domain of shoplifting in supermarkets, through which it is intended in future work to obtain an ontology of surveillance scenarios and an ontology of the objects of those scenarios. In the ontology of surveillance scenarios, each scenario will contain situations to be identified, some by automatic learning and others by an expert's description. In the ontology of the objects of the scenarios, each scenario will be composed of objects (including people) and situations (patterns) that characterize it.

References 1. Sunico, J.: « Post sobre seguridad » , Reconocimiento de imágenes: usuarios, segmentos de usuarios, gestos, emociones y empatía. http://jm.sunico.org/4007/06/48/reconocimiento-deimagenes-usuarios-segmentos-de-usuarios-gestos-emociones-y-empatia/. Último acceso 21 Sep 2010 2. Hu, W., Tan, T., Wang, L., Maybank, S.: A survey on visual surveillance of object motion and behaviors. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 34(3), 334–354 (2004) 3. Srikant, R., Agrawal, R.: Mining sequential patterns: generalizations and performance improvements. In: International Conference on Extending Database Technology EDBT 1996. Advances in Database Technology, vol. 1057, pp. 1–17 (1996) 4. Yogameena, B., Komagal, E., Archana, M., Abhaikumar, S.R.: Support vector machinebased human behavior classification in crowd through projection and star skeletonization. J. Comput. Sci. 6(9), 1008–1013 (2010) 5. Martinez, J., Rincon, R., Bachiller, M., Mira, M.: On the correspondence between objects and events for the diagnosis of situations in visual surveillance tasks. Pattern Recognit. Lett. 49(8), 1117–1135 (2008) 6. Oliver, N., Rosario, B., Pentland, A.: A Bayesian computer vision system for modeling human interactions human interactions. In: International Conference on Computer Vision Systems, vol. 44, pp. 255–272 (2002)


7. Ghanem, N.: Petri Net Models for Event Recognition in Surveillance Vídeos, Departamento of Computer Science, University of Maryland (2007) 8. Hu, W., Xie, D., Maybank, S.: Learning activity patterns using fuzzy self-organizing neural network. IEEE Trans. Syst. Man Cybern. B Cybern. 34(3), 1618–1626 (2004) 9. Burdick, D., Calimlim, M., Gehrke, J.: MAFIA: a maximal frequent itemset algorithm for transactional databases. In: Proceedings of the 17th International Conference on Data Engineering, pp. 443–452 (2001) 10. De Amo, S.: Curso de Data Mining, Algoritmo PrefixSpan para Minería de Secuencias – Optimizacion y Experimentos, Universidad Federal de Uberlandia, Brasil 11. Bannister, W.: Associative and sequential classification with adaptive constrained regression methods. Dissertation, Dissertation, Arizona State University, EEUU (2008) 12. Fiot, C., Laurent, A., Teisseir, M.: Extended time constraints for sequence mining, time. In: 14th International Symposium on Temporal Representation and Reasoning, pp. 105–116 (2007) 13. Cabrera González, F.A.: Medidas de tendencia central - Estadística Económica, Monografías.com. http://www.monografias.com/trabajos43/medidas-tendencia-central/medidastendencia-central2.shtml. Último acceso 08 Febrero 2011 14. Honovich, J.: Top 3 Problmes Limiting the Use and Growth of Video Analytics.IP Video MarketInfo, IPVM, 18 Junio 2008. http://ipvideomarket.info/report/top_3_problems_ limiting_the_use_and_growth_of_video_analytics. Último acceso 12 Julio 2010 15. Fayyad, U., Piatetsky-Shapiro, G.: Advances in Knowledge Discovery and Data Mining: Towards a Unifying Framework, pp. 82–88. AAAI Press/The MIT Press (2000) 16. Seyfarth, A., Tausch, R., Stelzer, M., Lida, F., Karguth, A., Stryk, O.: Towards bipedal jogging as a natural result of optimizing walking speed for passively compliant threesegmented legs. In: CLAWAR 2006, Bruseels, pp. 12–14 (2006) 17. Adam, A., Amershi, S.: Identifying Humans by Their Walk and Generating New Motions Using Hidden Markov Models, CPSC 53A Topics in AI: Graphical Models and CPSC 526 Computer Animation (2004) 18. Grecu, V., Dumitru, N., Grecu, L.: Analysis of human arm joints and extension of the study to robot manipulator. In: Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 2, pp. 18–20 (2009) 19. Pablovic, V., Sharma, R., Huang, T.: Visual interpretation of hand gestures for humancomputer interaction: a review. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 677–695 (1997) 20. Gage, S.: MATLAB simulation of fixed-mass rigid-body. brothersoft.com, 24 Junio 2010. http://www.brothersoft.com/matlab-simulation-of-fixed-mass-rigid-body-6dof-379573.html. Último acceso 01 Septiembre 2012 21. Kwon, J., Park, F.C.: Natural movement generation using Hidden Markov models and principal components. In: International Conference on Intelligence Robots and Systems, vol. 38, no. 5, pp. 1990–1995 (2007) 22. Waters, S.: How to Identify Shoplifters, the balance, 27 Octubre 2016. https://www. thebalance.com/how-to-identify-shoplifters-2890263. Último acceso 10 Marzo 2017

Matchstick Games: On Removing a Matchstick Without Disturbing the Others

Godfried T. Toussaint(&)

New York University Abu Dhabi, Abu Dhabi, United Arab Emirates [email protected]

Abstract. It is shown that given any configuration of n ≥ 3 line segments (matchsticks) in the plane, there exist at least three segments that can each be translated to infinity without colliding with the other n − 1 segments. In addition, if n ≥ 4 and the line segments are restricted to be parallel to the axes, at least four segments can be moved without disturbing the others. Furthermore, both lower bounds are best possible. The proofs are elementary and suitable for teaching in lower-level undergraduate courses on discrete mathematics.

Keywords: Line segments · Translation · Collision avoidance · Spatial planning · Robotics · Artificial intelligence · Computational geometry · Discrete mathematics

1 Introduction Most children or adults have, at one time or another, played non-hazardous games using matchsticks. A wide variety of such games exist, ranging from one-person puzzles to create patterns by using a fixed number of matches restricted by constraints [1, 2], to two-person games of strategy [3]. Matchsticks have also been used as models of computation to solve geometric problems [4, 5], and have inspired research in graph theory [6]. Such puzzles, games, and computational models hone analytical skills. Another type of matchstick game also involves physical skill. In one such game, a group of matchsticks is first thrown on a flat surface, in a heap. Players then take turns removing a single matchstick, without disturbing any of the others. A player that removes a matchstick and causes at least one other matchstick to move, loses and is disqualified from the game. The winner is the player that stays in the game the longest. Experience indicates that the player that starts the game usually has little difficulty removing a matchstick successfully. A natural question that arises is whether there always exists at least one matchstick that can be removed from the heap without disturbing the others. In this paper we prove two theorems regarding an idealized geometric version of this game. The matchsticks are assumed to be geometric line segments not subject to gravity, friction or any other physical force acting on them. We may think of them as floating in space motionless, unless a player moves them. Each segment may be moved only by a single translation to infinity. In addition, the problem is restricted to the two-dimensional Euclidean plane.



2 Related Research

In a 1981 report Professor William Moser, of the Department of Mathematics at McGill University, made the following conjecture: given a finite set of solid spheres, prove that at least one of them can be moved without disturbing the others [7]. The challenge was then taken up by Robert Dawson, who proved that for any configuration of m hyperspheres (balls) in n-dimensional space that intersect each other (touch) at most at their boundaries, there exist at least min{m, n + 1} of them, each of which is movable by a single translation to infinity [8]. Therefore, for every configuration of n circular discs in the plane (where n ≥ 3) at least three discs have the property that they can each be translated to infinity without disturbing the others. This property of a set of objects will be referred to as the translation separability property. A closely related problem in robotics (assembly planning [20]) and computer graphics displays is that of translating a set of objects by a fixed vector, one at a time, without any collisions occurring during the entire process [9]. Guibas and Yao explored the problem for convex polygons and axis-parallel rectangles [10]. First they noted the equivalence between translating convex polygons and translating a suitable set of line segments. Consider the four polygons A, B, C, and D in Fig. 1 (left). If each polygon is replaced by a line segment connecting the vertices with highest and lowest y coordinates, then a polygon may be translated in the +x direction if, and only if, its corresponding line segment can be translated. Guibas and Yao [10] presented a relatively complicated proof, based on the theory of partial orders, that a translation ordering of the line segments exists in all oriented directions. They also presented an algorithm to compute a translation ordering in O(n log n) time. A simpler O(n log n) algorithm was later discovered by Ottmann and Widmayer [11]. For the case of n ≥ 4 axis-parallel rectangles in the plane, it has been shown that at least four rectangles can each be translated to infinity without disturbing the others [12]. A more transparent proof of the translation ordering theorem may be obtained via the idea of visibility [13]. Referring to Fig. 1 (left), assume that light is projected in the negative x direction from x = +infinity, and that the polygons prevent the light rays from passing through them. Of the four upper vertices, a, b, and d are illuminated, whereas c is not. A simple proof by contradiction shows that the line segment which has the illuminated upper endpoint with minimum y-coordinate (D in this case) can be translated first.

Fig. 1. Translating convex polygons is equivalent to translating line segments (left), and segments that intersect the convex hull boundary are free to translate (right).
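As a brief aside before the proofs, the translation question itself is easy to prototype. The Python sketch below checks by brute force which segments of a disjoint set can be translated to x = +infinity without collision, restricted to axis-parallel segments for simplicity; the endpoint-pair representation and the helper names are assumptions of this sketch, and it tests only the +x direction, whereas the theorems count segments that are movable in some direction.

```python
# Brute-force translation check for disjoint axis-parallel segments.
# A segment can be translated to x = +infinity iff no other segment
# meets the half-open strip it sweeps while moving in the +x direction.
def sweeps_into(seg, other):
    """True if `other` lies (even partly) in the region swept by `seg`."""
    (x1, y1), (x2, y2) = seg
    lo, hi = min(y1, y2), max(y1, y2)          # y-extent of `seg`
    (ox1, oy1), (ox2, oy2) = other
    olo, ohi = min(oy1, oy2), max(oy1, oy2)
    # Blocking requires overlapping y-extents and `other` reaching at least
    # as far right as `seg` (disjointness rules out the remaining cases).
    return max(lo, olo) <= min(hi, ohi) and max(ox1, ox2) >= max(x1, x2)

def movable_in_plus_x(segments):
    return [s for s in segments
            if not any(sweeps_into(s, t) for t in segments if t is not s)]

segs = [((0, 0), (0, 2)),     # vertical, leftmost
        ((3, 1), (5, 1)),     # horizontal, in the middle
        ((6, -1), (6, 3))]    # vertical, rightmost
print(movable_in_plus_x(segs))   # only the rightmost segment moves in +x
```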


3 Removing Matchsticks in the Plane

In this section it is proved that for n ≥ 3 line segments in the plane, there exist at least three that can each be translated to infinity without colliding with the others. Also, for the case of n ≥ 4 axis-parallel line segments, at least four segments have this property. It is assumed that no two segments intersect, and that they are in general position, in the sense that no two segments are collinear (lie on a common line).

Theorem 1: In every configuration S of n line segments in the plane (where n ≥ 3) at least three segments have the property that they can each be translated to infinity without disturbing the other n − 1 segments.

Proof: Consider the convex hull of S, denoted by CH(S) [14–16]. If three or more segments intersect the boundary of CH(S), denoted by bd-CH(S), we are done, for then each segment with an endpoint on the boundary of CH(S) can be translated in a direction obtained by extending the segment indefinitely through its appropriate endpoint towards the exterior of CH(S), as illustrated in Fig. 1 (right). Therefore it only remains to analyze the situation in which two segments touch bd-CH(S). In this case CH(S) is either a quadrilateral or a triangle. Consider first the case when CH(S) is a quadrilateral. If the two segments are parallel, then (without loss of generality) rotate the configuration so that these segments are parallel to the x-axis. Trivially, each of these two segments may be translated in the positive or negative x-direction. Furthermore, the remaining n − 2 segments lie in the horizontal strip determined by the parallel segments of bd-CH(S). Now consider parallel light rays from x = +infinity in the −x direction. The segment in the strip with the lowest illuminated upper endpoint can be translated in the +x-direction, yielding the third segment. If the CH(S) segments are not parallel, then rotate S so that one of the segments is parallel to the x-axis. Without loss of generality assume that the intersection point of the two lines containing the two segments lies to the left of the x-axis-aligned segment, and refer to Fig. 2 (left). Each of these two segments can be translated along one of the lines that contains it. Of the remaining n − 2 segments, the one with the lowest illuminated upper endpoint can be translated in the +x-direction, yielding the third segment.

Fig. 2. Two cases of two segments that intersect the convex hull.

Finally, consider the case when CH(S), the convex hull of the segments, is a triangle. Here one segment, denoted by E, determines an edge of bd-CH(S), whereas the other, denoted by V, determines only a vertex of bd-CH(S). Without loss of generality, rotate S so that edge E lies on the x-axis and all other segments lie below it, and refer to Fig. 2 (right).


Segment E can be translated in the positive or negative x-direction, and segment V can be translated in the direction along a ray containing V, towards the exterior of CH(S). It remains to show that one additional segment can be translated without disturbing the others. Assuming a light source at x = +infinity, the segment below the x-axis that has the lowest illuminated upper endpoint can translate in the +x-direction. If this segment is V, we repeat this procedure with a light source at x = −infinity. If this process yields a third segment, we are done, because it can translate in the −x-direction. If this process yields the same segment V, as is the case in Fig. 2 (right), we must still identify a third movable segment. However, now no segments can lie in the shaded horizontal strip determined by the horizontal lines containing the endpoints of segment V; otherwise one of these segments would contain the lowest illuminated top endpoint, contradicting the fact that V has the lowest illuminated top endpoint. Therefore in this case the remaining n − 2 segments must all lie in a horizontal strip determined by E and the line parallel to E that contains the upper endpoint of V. But now one of these segments may be translated in the +x-direction. Q.E.D.

Theorem 2: In every configuration S of n ≥ 4 axis-parallel line segments in the plane at least four segments can each be translated to infinity without disturbing the others.

Proof: If at least four segments intersect bd-CH(S), every one of these segments may be translated in the direction of the segment towards the exterior of CH(S). It is impossible for only one segment to intersect bd-CH(S), because this would imply that n = 1, which contradicts n ≥ 4. Therefore there remain two general cases: the number of segments that intersect bd-CH(S) is either three or two. Furthermore, a line segment L may intersect CH(S) in two ways: (1) L is an edge of CH(S), or (2) only one or both endpoints of L intersect CH(S). Consider first the possible types of configurations of S in which three segments intersect bd-CH(S), and refer to Fig. 3. There exists only one equivalence class in which all three segments are edges of bd-CH(S). Since no two segments are collinear, two segments must be parallel, and without loss of generality the three segments may be drawn as in diagram (a). Of the remaining segments contained in the interior of CH(S), the one with an endpoint that has smallest x-coordinate may be moved in the −x direction. Combining with the three segments on bd-CH(S) yields four segments that can be moved, and we are done. For the configurations in which three segments of S intersect bd-CH(S), but only two segments form edges of bd-CH(S), we can distinguish several equivalence classes, illustrated in diagrams (b)–(f) in Fig. 3. One class, illustrated in diagram (b), has a segment which is not an edge of bd-CH(S) but has both endpoints on bd-CH(S). Here the three segments must be parallel, and without loss of generality may be considered to be horizontal. Therefore the three parallel segments may be moved in either the positive or negative x directions, and of the remaining segments, at least one must lie in at least one of the two slabs determined by the three parallel segments. Of these, the segment with an endpoint that has largest x-coordinate may be moved in the +x direction. Together with the three parallel segments, this yields four that can be moved, and we are done.
When two segments in S form edges of bd-CH(S), and only one endpoint of the third edge intersects bd-CH(S), several cases arise, as illustrated in diagrams (c)–(f). In case (c) all three edges are parallel. There must exist at least one endpoint of at least


Fig. 3. Classes of configurations of S in which three segments intersect bd-CH(S).

one segment in at least one of the three regions delineated by dashed lines. The segment with an endpoint in quadrilateral cxbd with maximum x-coordinate (if it exists) may be moved in the +x direction. The same holds for the quadrilateral cdfy. Finally, the segment with an endpoint in rectangle axye with minimum x-coordinate (if it exists) may be moved in the −x direction. This segment plus the three parallel segments that may be translated in several directions, yields four, and we are done. If the segment that has only one endpoint on bd-CH(S) is not parallel to the other two, as in case (d), only two regions need to be searched. At least one endpoint of one segment must lie in one of the two regions. The segment with an endpoint in triangle bxd with maximum x-coordinate (if it exists) may be moved in the +x direction. On the other hand, the segment with an endpoint in pentagon axdfe with minimum x-coordinate (if it exists) may be moved in the −x direction. This segment, together with the three intersecting bd-CH(S) yields the required four. It is also possible that the segments that form edges of bd-CH(S) are orthogonal to each other, as illustrated in diagrams (e) and (f). Similar arguments establish that at least one segment contained in one of the two regions can be moved. If a segment straddles both regions then it can be moved in either a vertical or horizontal direction. For the configurations in which three segments of S intersect bd-CH(S), but only one segment forms an edge of bd-CH(S), we can distinguish two equivalence classes illustrated in diagrams (g) and (h) in Fig. 3. In case (g) there must exist at least one endpoint in at least one of the three regions shown. With similar arguments to those used in the previous cases, it may be shown that there must exist at least one segment contained in the CH(S) that may be translated in either the +x, −x, or −y directions. The version of (g) in which the non-convex hull edges are parallel, can be handled in the same way (the drawing is left out for lack of space). The last case in which three segments in S intersect bd-CH(S), occurs when none of the segments form an edge of bd-CH(S), as shown in diagram (h). Similar arguments to those used in the previous cases apply here as well, and are left as an exercise for the reader.


Finally, consider the possible types of configurations of S in which exactly two segments intersect bd-CH(S), and refer to Fig. 4. Diagrams (a) and (b) show the configurations in which both segments form edges of bd-CH(S). In this case bd-CH(S) must be a quadrilateral. In diagram (a) the quadrilateral abcd must contain at least two segments. Of these, the segment with the endpoint having largest x-coordinate can be translated in the +x direction. Similarly, the segment with the endpoint having least x-coordinate can be translated in the −x direction. If these two segments are distinct, we have identified four segments that can be moved. If they are the same segment, it cannot be vertical, for this would imply that n = 3, which contradicts n ≥ 4. Therefore the segment ef must be horizontal, implying that there must exist at least one segment in region uvfe or region efyx that can be translated in the +x direction. In diagram (b) the quadrilateral abdc must contain at least two segments, one of which can be translated in the +x direction. If this segment is vertical, as in diagram (b), then triangle xdy must be empty, implying there exists at least one segment in region byxca that may be translated in the +y direction. The case in which segment ef is horizontal is equivalent to first finding the segment with highest y coordinate, which can be translated in the +y direction. This implies that there must exist at least one more segment below the horizontal line containing endpoint f. If this segment is wholly contained in either triangle yxd or pentagon byxca, then it can be translated in the +y direction. On the other hand, if it straddles both regions, below the horizontal line containing endpoint e, then it can be translated in the +x direction.

Fig. 4. Classes of configurations of S in which two segments intersect bd-CH(S).

Diagrams (c) and (d) show the configurations in which only one of the two segments that intersect bd-CH(S), forms an edge of bd-CH(S). In this case CH(S) must be a triangle. At least two additional segments must be contained in triangle abc. The segment (other than cd) with endpoint having largest x-coordinate, contained in triangle cwb may be translated in the +x direction. Similarly, the segment (other than cd) with endpoint having smallest x-coordinate, contained in triangle cwa may be translated in the −x direction. If these two segments are distinct we have identified four segments that can move. If they are the same segment, as in diagram (c), then there must be at least one additional segment contained in rectangle xufe or pentagon cyefv that can translate to the right or left, establishing that four segments can be moved. In the case that no segments lie to one side of the line cw, as in diagram (d), and ef has the endpoint with largest x-coordinate, it follows that at least one segment is completely contained in


triangle cwb. If this segment lies completely in rectangle wufx it can be translated in either x direction. Otherwise it can be translated in the −y direction. If the segment ef is vertical, say on the line uv, then there exists at least one segment that lies completely in region cwuv, and the segment in this region with smallest y-coordinate may translate in the −y direction, yielding four segments that can be moved. Q.E.D.

4 Concluding Remarks

It was proved in Theorem 1 that in any planar configuration of n ≥ 3 arbitrary line segments, at least three can each be translated to infinity without disturbing any of the other n − 1 segments. Theorem 2 established that in any planar configuration of n ≥ 4 axis-parallel line segments, at least four can each be translated to infinity without disturbing the other n − 1 segments. That the results established by these theorems cannot be strengthened is demonstrated by the two self-explanatory configurations in Fig. 5. In the left diagram three relatively long line segments lie exterior to the shaded triangle, and all the remaining n − 3 segments lie in the interior of the shaded triangle. Here only the three segments ab, cd, and ef can be translated to infinity without disturbing the others. In the right diagram the line segments contained in the interior of the shaded quadrilateral in the central region are prevented from translating to infinity by the segments ab, cd, ef, and gh. The proofs of the theorems given here, in addition to being new, are elementary and ideal for assignments in freshman courses on discrete mathematics in computer science programs. The proof of Theorem 2 contains many cases, and it would be interesting to obtain a proof with fewer cases, or a totally different, simpler proof. The theorems suggest generalizing these results to problems in higher dimensions and different types of objects [17–19], as well as to other allowable motions such as multiple translations [20] and rotations [21].

Fig. 5. In these configurations only the line segments intersecting the convex hull boundary of the set can be moved without disturbing the others.

Acknowledgments. This research was supported by a grant from the Provost's Office, administered by the Faculty of Science of New York University Abu Dhabi, in Abu Dhabi, The United Arab Emirates.


References 1. Old Matchstick Puzzles - Fun For All Ages. www.learningtree.org.uk/stickpuzzles 2. Zoltán, T.: From matchstick puzzles to isoperimetric problems. Teach. Math. 16, 12–17 (2013) 3. Könemann, J.: Winning strategies for a matchstick game. In: Vöcking, B., et al. (eds.) Algorithms Unplugged, pp. 259–265. Springer, Heidelberg (2011) 4. Martin, G.E.: Sticks. In: Euclidean Constructions, pp. 109–121. Springer, New York (1998) 5. Dawson, T.R.: Match-stick geometry. Math. Gaz. 23, 161–168 (1939) 6. Sascha, K., Pinchasi, R.: Regular matchstick graphs. Am. Math. Mon. 118, 264–267 (2011) 7. Moser, W.: Research Problems in Discrete Geometry. Department of Mathematics, McGill University (1981) 8. Dawson, R.: On removing a ball without disturbing the others. Math. Mag. 57, 27–30 (1984) 9. Toussaint, G.T.: Collision avoidance problems with disks and balls: an introduction for freshman discrete mathematics courses. In: International Conference on Arts, Education and Interdisciplinary Studies, Dubai UAE, pp. 126–130 (2016) 10. Guibas, L.J., Yao, F.F.: On translating a set of rectangles. In: 12th ACM Symposium on the Theory of Computing, pp. 154–160 (1980) 11. Ottmann, T., Widmayer, P.: On translating a set of line segments. J. Comput. Vis. Graphi. Image Process. 24, 382–389 (1983) 12. Toussaint, G.T.: Motion planning problems with boxes: an introduction for undergraduate courses in discrete mathematics. In: International Conference on Robot Systems and Applications (ICRSA), 27–29 July, Shanghai, China (2018) 13. Toussaint, G.T.: Movable separability of sets. In: Computational Geometry, pp. 335–375. North-Holland Publishing, Amsterdam (1985) 14. Graham, R.L.: An efficient algorithm for determining the convex hull of a finite planar set. Inf. Process. Lett. 1, 132–133 (1972) 15. Akl, S.G., Toussaint, G.T.: A fast convex hull algorithm. Inf. Process. Lett. 7, 210–222 (1978) 16. Devroye, L., Toussaint, G.T.: A note on linear expected time algorithms for finding convex hulls. Computing 26, 361–366 (1981) 17. Toussaint, G.T., ElGindy, H.A.: Separation of two monotone polygons in linear time. Robotica 2, 215–220 (1984) 18. Toussaint, G.T.: On separating two simple polygons by a single translation. Discret. Comput. Geom. 4(3), 265–278 (1989) 19. Dehne, F., Sack, J.-T.: Translation separability of sets of polygons. Vis. Comput. 3(4), 227– 235 (1987) 20. Halperin, D., Wilson, R.H.: Assembly partitioning along simple paths: the case of multiple translations. In: Proceedings of the IEEE International Conference on Robotics and Applications, Nagoya, pp. 1585–1592 (1995) 21. Houle, M., Toussaint, G.T.: On the separability of quadrilaterals in the plane by translations and rotations. Contrib. Algebr. Geom. 58(2), 267–276 (2017)

Multi-cloud Resources Optimization for Users Applications Execution

Anas Mokhtari1(B), Mostafa Azizi1, and Mohammed Gabli2

1 MATSI Laboratory, ESTO, University Mohammed Ist, BP 473, 60000 Oujda, Morocco {a.mokhtari,azizi.mos}@ump.ac.ma
2 LARI Laboratory, FSO, University Mohammed Ist, BP 717, 60000 Oujda, Morocco [email protected]

Abstract. This paper presents a multi-cloud approach to optimize computing resources. We deal with an optimization problem with two objectives: the duration and the payment cost of application execution. Our goal is to propose a multi-cloud solution while ensuring equitability between the two objectives, and for that we use a Dynamic Genetic Algorithm (DGA) approach. Our approach can even produce solutions that combine resources of several clouds for running the same application, hence its multi-cloud character. The obtained results show that it is important to consider the multi-cloud setting in this kind of problem.

Keywords: Cloud computing · Multi-cloud · Resources management · Genetic algorithm

1 Introduction

The cloud computing model is categorized into three types of service models [1]: SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service). Resource management in a cloud environment, especially for the IaaS service model, is an important research topic, and much research has been done to find solutions to cloud resource optimization problems in order to achieve given objectives. According to Grozev and Buyya [2], multi-cloud denotes the usage of multiple, independent clouds by a client or a service; a multi-cloud environment does not imply voluntary interconnection and sharing of providers' infrastructures, and clients or their representatives are directly responsible for managing resource provisioning and scheduling. Our paper presents a multi-cloud optimization method for computing resources. It consists of determining which cloud resources a consumer should use to (i) execute its application, (ii) minimize the execution time and (iii) minimize the payment cost of this execution. Our solution takes into account the different


cloud resources offered by the cloud providers on the market. Our approach can even produce solutions that combine resources from several clouds to run the same application, hence its multi-cloud character. This paper is organized as follows: in Sect. 2 we present the mathematical model used for this problem; in Sect. 3 we describe our multi-cloud solution based on the DGA approach; in Sect. 4 we report the results of our experiments and analyze them; we conclude the paper in Sect. 5.

2 Problem Statement and Model Presentation

Consider the variables described in Table 1 [3].

Table 1. Variables definitions

Variable   Definition
P          Set of package types offered by a cloud provider during a set of time periods
TM         Maximum time
p          A package type from the set P (p ∈ P)
cp         The cost of purchasing the package p for one period of time
NM         Maximum limit of packages that a consumer can purchase at a period of time

For each p ∈ P, 1 ≤ i ≤ N_M and 1 ≤ t ≤ T_M, let the decision variable:

x_{pit} = \begin{cases} 1 & \text{if package } i \text{ of type } p \text{ is purchased at time } t, \\ 0 & \text{otherwise.} \end{cases}   (1)

Let the variable t_m be the last time period at which a package was purchased by the consumer. We denote by f_1 the function that represents the payment cost and by f_2 the execution time of user applications [4]:

f_1(x) = \sum_{p \in P} \sum_{i=1}^{N_M} \sum_{t \in T} c_p \, x_{pit}   (2)

and

f_2(t) = t_m   (3)

The mathematical formula that expresses the problem, which is minimizing the total cost and the execution time, is written as follows [4]:

\min_{x,t} \; \alpha(k) f_1(x) + (1 - \alpha(k)) f_2(t) \quad \text{subject to} \quad |\alpha(k) f_1(x) - (1 - \alpha(k)) f_2(t)| < \varepsilon,   (4)

where ε is a positive number in the vicinity of 0, k is a time step and α is a dynamic weight.
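The following is a minimal sketch (not the authors' code) of how the two objectives of Eqs. (2)-(3) can be evaluated for a candidate allocation; the representation of the allocation and all names are assumptions made only for illustration.

# Illustrative sketch: evaluating f1 and f2 for a candidate allocation.
# allocation[t][p] is the number of packages of type p purchased in period t;
# cost_per_period[p] is c_p.

def payment_cost(allocation, cost_per_period):
    """f1: total cost summed over periods and package types (Eq. 2)."""
    return sum(cost_per_period[p] * n
               for period in allocation
               for p, n in period.items())

def execution_time(allocation):
    """f2: last period in which at least one package is purchased (Eq. 3)."""
    last = 0
    for t, period in enumerate(allocation, start=1):
        if any(n > 0 for n in period.values()):
            last = t
    return last

# Example with two periods and two hypothetical package types.
alloc = [{"small": 3, "large": 1}, {"small": 0, "large": 2}]
prices = {"small": 0.1, "large": 0.5}
print(payment_cost(alloc, prices), execution_time(alloc))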


3 Multi-cloud DGA Approach

3.1 Dynamic Weights and Algorithm

We can express the objective function as f(y, k) = α(k) f_1(y) + (1 − α(k)) f_2(y). The variable y depends on both x and t. For the initialization step, the variable α(0) takes a random positive real value. In each iteration k of the GA [5] we take:

\alpha(k) = \frac{|f_2(y_{k-1})|}{|f_1(y_{k-1})| + |f_2(y_{k-1})|}   (5)

where y_{k−1} is the best solution of iteration (k − 1) of the GA. If f_1(y_{k−1}) = f_2(y_{k−1}) = 0, then we take α(k) = α(k − 1). So f(y, k) becomes:

f(y, k) = \frac{|f_2(y_{k-1})|}{|f_1(y_{k-1})| + |f_2(y_{k-1})|} f_1(y) + \left(1 - \frac{|f_2(y_{k-1})|}{|f_1(y_{k-1})| + |f_2(y_{k-1})|}\right) f_2(y)   (6)
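A small sketch of the dynamic-weight update of Eqs. (5)-(6) follows; it is illustrative only, not the authors' implementation, and the function names are assumptions.

# f1_prev and f2_prev are the objective values of the best solution of the
# previous GA iteration; alpha_prev is the previous weight.

def dynamic_alpha(f1_prev, f2_prev, alpha_prev):
    denom = abs(f1_prev) + abs(f2_prev)
    if denom == 0:              # f1 = f2 = 0: keep the previous weight (Eq. 5)
        return alpha_prev
    return abs(f2_prev) / denom

def weighted_objective(f1_val, f2_val, alpha):
    # f(y, k) = alpha(k) f1(y) + (1 - alpha(k)) f2(y)   (Eq. 6)
    return alpha * f1_val + (1.0 - alpha) * f2_val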

3.2 Multi-cloud Support

The treatment performed by the multi-cloud DGA approach is based on the characteristics of the instances offered by cloud providers. The proposed solution must take the form of a combination of instances to use. This solution is composed, for each period of time, of the number of instances to use for each instance type of each cloud. For l time periods, m clouds and n_i package types for cloud i, i ∈ {1, ..., m}, a solution has l × \sum_{i=1}^{m} n_i digits: {b_{111} ... b_{11n_1} b_{121} ... b_{12n_2} ... b_{1m1} ... b_{1mn_m} ... b_{l11} ... b_{lmn_m}}. Each digit b_{ijk} represents the number of instances of type k to be acquired from cloud j at time period i. Let us take the cloud providers Amazon, Google and Microsoft, which offer, respectively, five instance types [4], six instance types [4] and five instance types (see Table 4 for details). As an example, if the solution is expressed as 3551313001300001 2031300130000111 1011200110000001, it means that the application will be executed in three periods. Each block of digits represents a period of time. For the first block, the first five digits represent the five instance types offered by Amazon, the next six digits represent the six instance types offered by Google and the last five digits of the block are the instance types of Microsoft. The value of each digit specifies the number of instances to use for this type. A short decoding sketch is given below, and Table 2 illustrates this example solution.
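The sketch below (illustrative, not the authors' code) decodes such a solution string into per-period, per-cloud instance counts; the cloud order and per-cloud type counts follow the example in the text.

TYPES_PER_CLOUD = {"Amazon": 5, "Google": 6, "Microsoft": 5}

def decode_solution(solution):
    periods = []
    for block in solution.split():
        counts, pos = {}, 0
        for cloud, n_types in TYPES_PER_CLOUD.items():
            counts[cloud] = [int(d) for d in block[pos:pos + n_types]]
            pos += n_types
        periods.append(counts)
    return periods

for t, period in enumerate(decode_solution(
        "3551313001300001 2031300130000111 1011200110000001"), start=1):
    print("period", t, period)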



Table 2. Interpretation of the example solution

Period | Cloud provider | No. of instances per instance type (1st 2nd 3rd 4th 5th 6th)
1st    | Amazon         | 3 5 5 1 3 –
1st    | Google         | 1 3 0 0 1 3
1st    | Microsoft      | 0 0 0 0 1 –
2nd    | Amazon         | 2 0 3 1 3 –
2nd    | Google         | 0 0 1 3 0 0
2nd    | Microsoft      | 0 0 1 1 1 –
3rd    | Amazon         | 1 0 1 1 2 –
3rd    | Google         | 0 0 1 1 0 0
3rd    | Microsoft      | 0 0 0 0 1 –

4 Experimental Results

4.1 Data Description

Instances used to evaluate our approach are three real applications [3]: two algorithms that tackle biological sequence manipulation problems (ModelGenerator [6] and Segemehl [7]) and a typical analysis user job for the CMS experiment [8]. Characteristics of these instances are shown in Table 3.

Table 3. Applications description [3]

Application | Memory (GB) | Storage (GB) | GFLOP | Time (hour) | Max packages | Cost ($)

mod-gen

3,317,760

24

20

28,748,390

4

20

$192

324,000,000 24

45

$2,592

4

2

segemehl

64

600

cms-1500

2250

30

$100

We used three real cloud providers' market offers: Amazon EC2 [4], Google Compute Engine [4] and Microsoft Azure [9]. Table 4 details the features of the instances offered by MS Azure. Our simulation is implemented in Java and launched on a physical machine with an Intel Core i3-3120M processor running at 2.5 GHz and 2 GB of RAM.


Table 4. Azure cloud characteristics

Cloud provider: Microsoft Azure(a) [9]

Instance type | vCPU | Memory (GB) | Storage (GB) | Price ($/hour)
F1            | 1    | 2           | 128          | 0.0748
F2            | 2    | 4           | 256          | 0.147
F4            | 4    | 8           | 512          | 0.2914
F8            | 8    | 16          | 1024         | 0.4904
F16           | 16   | 32          | 2048         | 1.1231
(a) For Premium Managed Disks

4.2 Comparison Between Single-Cloud and Multi-cloud DGA

Results of the simulation are shown in Table 5. The first column gives the cloud, followed in the second by the application name. The payment cost in the next column represents the function f_1 in the minimization Eq. 4. Similarly, the fourth column represents f_2 in Eq. 4, that is, the execution time (in hours) required to complete the run of the application. The execution time of our simulation is indicated in the fifth column. The solution for each cloud-application pair is in the last column.

Table 5. Single-cloud vs multi-cloud results

Cloud          | Application | Payment cost (f1) | Execution time(a) (f2) | Execution time(b) (ms) | Solution
EC2            | mod-gen     | 9.099             | 2                      | 72                     | 34031 03001
EC2            | segemehl    | 15.8              | 2                      | 77                     | 10504 00302
EC2            | CMS-1500    | 11.76             | 3                      | 111                    | 33510 30110 10010
Compute Engine | mod-gen     | 14.86             | 2                      | 76                     | 005032 000021
Compute Engine | segemehl    | 11.76             | 2                      | 79                     | 010430 000330
Compute Engine | CMS-1500    | 16.76             | 3                      | 135                    | 355131 300130 000120
Azure          | mod-gen     | 8.74              | 2                      | 74                     | 03004 03003
Azure          | segemehl    | 8.62              | 2                      | 78                     | 30115 30110
Azure          | CMS-1500    | 17.53             | 3                      | 120                    | 40135 40115 20112
Multi-cloud    | mod-gen     | 7.93              | 1                      | 115                    | 0022012221123300
Multi-cloud    | segemehl    | 8.106             | 1                      | 120                    | 3110013101120330
Multi-cloud    | CMS-1500    | 10.358            | 1                      | 123                    | 0022012221123300
(a) Of the application, in hours. (b) Of our simulation.
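To make the cost column concrete, the sketch below (illustrative, not the authors' code) recomputes the payment cost f_1 of an Azure solution directly from the per-hour prices of Table 4; each block of the solution string is one period and each digit the number of instances of the corresponding F-series type.

AZURE_PRICES = [0.0748, 0.147, 0.2914, 0.4904, 1.1231]   # F1, F2, F4, F8, F16 ($/hour)

def azure_cost(solution):
    total = 0.0
    for block in solution.split():
        total += sum(int(d) * price for d, price in zip(block, AZURE_PRICES))
    return total

# The Azure mod-gen solution "03004 03003" gives roughly 8.74 $,
# matching the value reported in Table 5.
print(round(azure_cost("03004 03003"), 2))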

The comparison between the single-cloud option on one side and the multi-cloud option on the other shows that the multi-cloud gives the best solutions. For the payment cost, multi-cloud solutions are cheaper than the three clouds used in single-cloud mode. Moreover, the shortest application execution time is obtained in the multi-cloud mode, for all three applications.



These results also show that the treatment of the two objective functions is fair. For example, we found in the multi-cloud mode that (αf_1, (1 − α)f_2) is (0.797, 0.837) for the mod-gen application, (1.21, 1.03) for the segemehl application and (0.86, 0.91) for the CMS-1500 application.

5 Conclusion

In this paper we dealt with the problem of computing resources optimization for running user applications in the cloud. The optimization objectives are the payment cost and the execution time of the user application. For that, we used a dynamic approach based on genetic algorithms, then added multi-cloud support to this approach. After that, we launched simulations of this algorithm for the single-cloud and multi-cloud cases. Results showed that the multi-cloud mode gives better solutions than the single-cloud mode. The price needed to execute each application is the lowest in the multi-cloud case; likewise, the execution time of each application is the shortest in the multi-cloud case.

References 1. Mell, P., Grance, T.: The NIST definition of cloud computing (2011) 2. Grozev, N., Buyya, R.: Inter-cloud architectures and application brokering: taxonomy and survey. Softw. Pract. Exp. 44(3), 369–390 (2014) 3. Coutinho, R.D.C., Drummond, L.M., Frota, Y.: Optimization of a cloud resource management problem from a consumer perspective. In: European Conference on Parallel Processing, pp. 218–227. Springer, Berlin, Heidelberg, August 2013 4. Mokhtari, A., Azizi, M., Gabli, M.: Optimizing management of cloud resources towards best performance for applications execution. In: 2017 First International Conference on Embedded & Distributed Systems (EDiS), pp. 1–5. IEEE (2017) 5. Gabli, M., Jaara, E.M., Mermri, E.B.: A genetic algorithm approach for an equitable treatment of objective functions in multi-objective optimization problems. IAENG Int. J. Comput. Sci. 41(2), 102–111 (2014) 6. Keane, T.M., Creevey, C.J., Pentony, M.M., Naughton, T.J., Mclnerney, J.O.: Assessment of methods for amino acid matrix selection and their use on empirical data shows that ad hoc assumptions for choice of matrix are not justified. BMC Evol. Biol. 6(1), 29 (2006) 7. Hoffmann, S., Otto, C., Kurtz, S., Sharma, C.M., Khaitovich, P., Vogel, J., Stadler, P.F., Hackerm¨ uller, J.: Fast mapping of short sequences with mismatches, insertions and deletions using index structures. PLoS Comput. Biol. 5(9), e1000502 (2009) 8. Adolphi, R., Spanier, S.: The CMS experiment at the CERN LHC, CMS collaboration. J. Instrum. 3(08), S08004 (2008) 9. https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/ . Accessed 20 May 2018

An Algorithm of Conversion Between Relational Data and Graph Schema

Zakariyaa Ait El Mouden1(✉), Abdeslam Jakimi1, and Moha Hajar2

1 Software Engineering and Information Systems Engineering Team, Faculty of Sciences Technologies Errachidia, Moulay Ismail University, Meknes, Morocco
[email protected], [email protected]
2 Operational Research and Computer Science Team, Faculty of Sciences Technologies Errachidia, Moulay Ismail University, Meknes, Morocco
[email protected]

Abstract. This paper presents an approach for modeling relational data by graphs using a specific open source format named Graph Exchange XML Format (GEXF). This approach can take us from data stored in a relational database to a graph that models those data with nodes and edges, presenting the links between the data as a first step towards a graph-oriented database. The advantage of this approach is that it shows us the whole data stored in a relational model, and with a simple look we can decide which data are more useful and which data can be ignored. Using the visualization of the generated GEXF schema we can generate several graphs, where each SQL query on the input relational database generates a different graph.

Keywords: Graph · SQL · GEXF · Relational database · Graph-oriented database

1 Introduction

Relational databases have been the most used model since their first appearance in 1970; their support for multi-user environments and their ease of programming using SQL have given them a big market value. In recent years another model has been making its own place in the database market, the NoSQL (Not Only SQL) model, especially graph-oriented databases (GDB); Neo4j is one of the most popular GDBs, and since the appearance of its first version in 2010, applications and concepts using Neo4j keep growing [1]. XML (eXtensible Markup Language) is widely used to represent data for different uses; with this tool we can store and transport data independently of software and hardware. We present in this paper an algorithm to convert relational data to a graph schema, based on the XML representation of graphs. We also focus on the case where we deal with only a subset of the data, using selections on the relational data, and we will see how a different graph is generated for each selection. The paper is organized as follows: Sect. 2 presents some related works. In Sect. 3 we will present in brief our previous work and the studied approach of community



detection in graphs. Section 4 will present the algorithm of conversion between relational data and graph schemas. Then we will close this paper in Sect. 5 with conclusions and perspectives.

2 Related Works This paper is the continuation of our previous work concerning the approach of community detection in graphs using spectral clustering algorithms [2]. The goal of this study is to link the proposed approach with a relational database; in this case the input of the classification process will be the data stored in the linked database and a conversion algorithm will model those data as a graph. Graphs are one of the most performing data representations in computer science [3], recently this data structure was linked to social networks and it was one of the main reasons why those networks keep expanding, it is true that not all the social networks use the graph-oriented databases, but all of them disengaged with the relational databases, such as the case of Cassandra [4] for Facebook and BigTable [5] for Google. In addition to computer science, graphs are present in many other fields like biology and chemistry; we cite for example the metabolic networks, chemical structure graphs, gene clusters and genetic synthesis [3, 6]. The success of graphs’ applications in several fields has encouraged the development of new graph based database management systems; we cite for example SylvaDB [7] of De la Rosa Pérez et al. On another side, Machkour et al. [8] proposed an algorithm to link XML schema to object-relational model which was the inspiration of our algorithm to link relational data with graph schema based on XML.

3 Approach Overview

The studied community detection process (Fig. 1) has as input a set of data points, and the main objective is to generate a set of clusters to group those data points, using spectral clustering algorithms. This approach consists of five main steps.

Data Definition. As the first step of the approach, it is a collection of information regarding the heterogeneous individuals of the treated problem; each individual will be represented thereafter as a vertex of the graph that models the problem in the next step of the process.

Graphical Modeling. This step consists of building a graph that models all the individuals, using a formula to calculate the similarities s_ij between each pair of individuals x_i and x_j, then connecting the vertices v_i and v_j that represent respectively x_i and x_j if their similarity is higher than or equal to a prefixed threshold. The output of this part of the process is a graph G = (V, E), where V is the set of vertices and E the set of edges between the vertices of V; the order of the graph is equal to the size of the set of individuals defined in the first step.



Fig. 1. Approach overview.



Matrix Representation. This third sub-process is a matrix representation of the graph generated in the previous step; we have focused on the use of Laplacian matrices and their spectral analysis, which can give important information about the treated graph. In this part we deal especially with the normalized version of the Laplacian matrix (L_N) and its variant (L_abs).

Spectral Clustering (SC). An unsupervised classification method for graph clustering. The most used methods and algorithms can be found in [9, 10]. SC generally uses the spectral analysis of a matrix; in our case the input is the normalized Laplacian matrix. Based on the eigenvalues and the eigenvectors of this matrix we can build a prefixed number k of clusters to group the n vertices of the treated graph. We have based our analysis on one of the best-performing SC algorithms, Absolute Spectral Clustering [11], which uses the normalized matrix L_abs.

Classification and Results Interpretation. The last sub-process is the interpretation of the generated clusters. Each modification of the input parameters generates different clusters and thus gives a different classification of the defined individuals, and those clusters must give interpretable information about the individuals.
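A minimal sketch of the matrix-representation and clustering steps is given below, assuming the similarity matrix W has already been built as described above; it uses numpy and scikit-learn and is illustrative only, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(w, k=3):
    w = np.asarray(w, dtype=float)
    deg = w.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    l_norm = np.eye(len(w)) - d_inv_sqrt @ w @ d_inv_sqrt   # normalized Laplacian
    _, vecs = np.linalg.eigh(l_norm)                         # ascending eigenvalues
    embedding = vecs[:, :k]                                  # k smallest eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)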

4 Conversion Between Relational Data and Graph Schema

This section focuses on the transition between the data definition and the graphical modeling; the goal is to start from a table in a relational database and to finish with a graph schema that models the table, using the Graph Exchange XML Format (GEXF).

4.1 GEXF

Graph Exchange XML Format [12] is an extension of XML to describe graphs and complex networks. GEXF files are defined for Gephi [13], which is an open source platform for visualization and exploration of graphs and complex networks. The output graphs of this algorithm were visualized using Gephi.

4.2 Preprocessing

To test the performance of the proposed algorithm, we consider a table named Student where a student is defined by a code as a textual value with a primary key constraint, a first name, a last name, a gender (character: 'F' for female and 'M' for male), a graduation year, a system which defines whether the student studied in a bachelor ('L') system or a mastery ('M') system, the number of studied years to obtain the diploma, the mark (numeric value between 10 and 20) and the sector of study ("BCG": Biology, Chemistry and Geology; "MIP": Mathematics, Informatics and Physics; "MIPC": Mathematics, Informatics, Physics and Chemistry). We can summarize the preprocessing phase in three main functions.

LabelingNodes(). This function chooses the primary key value as the label of the nodes in the output graph. In the Student table, students' nodes will be labeled by their codes.



DetectClassificationFields(). This function returns the set of fields that do not give any classification information, starting with the primary key and all the other fields with a unique constraint, simply because a classification of n individuals based on a unique value would end with n clusters where every individual forms its own cluster. In our example the only unique field is the primary key, so it will not be considered when calculating similarities between nodes. We also propose to ignore all the textual fields with a high no-redundancy parameter f; we define the no-redundancy parameter as the frequency of appearance of values in the field. For a field field_name in a table table_name we execute an SQL query of the following form:
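A minimal sketch of such a query is shown below, assuming f_field_name is computed as the number of distinct values divided by the number of rows (the exact query used by the authors is not reproduced here); it is wrapped in Python/sqlite3 so that it can be run directly.

import sqlite3

def no_redundancy(conn, table_name, field_name):
    sql = (f"SELECT CAST(COUNT(DISTINCT {field_name}) AS REAL) / COUNT(*) "
           f"FROM {table_name}")
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Student (code TEXT PRIMARY KEY, gender TEXT)")
conn.executemany("INSERT INTO Student VALUES (?, ?)",
                 [("M1", "F"), ("M2", "M"), ("M3", "F")])
print(no_redundancy(conn, "Student", "gender"))   # 2 distinct values / 3 rows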

For the table Student (30 rows), we will have the no redundancy values (Table 1):

Table 1. No redundancy values for Student table's fields.

f_code | f_fname | f_lname | f_sector | f_gender | f_system
1.0000 | 0.7333  | 0.8000  | 0.1000   | 0.0667   | 0.0667

For a fixed threshold e = 0.3, the fields having f_field_name > e will be ignored in building the similarity matrix; thus fname and lname will not be considered as a classification fields in the Student table. ChangeTextualToNumeric(). To apply a similarity formula, it is important to have all values in numeric format; this function deals with the fields with a low no redundancy parameter, which means that the nodes share the same set of values. We start with calculating the number of distinct values with n_values = COUNT(DISTINCT (field_name)), and we replace each textual value by a numeric one in the range [1, n_values].

Table 2. Example for the function ChangeTextualToNumeric().

Field  | n_values | Old set of values | New set of values
gender | 2        | {F, M}            | {1, 2}
system | 2        | {L, M}            | {1, 2}
sector | 3        | {MIP, MIPC, BCG}  | {1, 2, 3}



In our example (Table 2), the fields considered by this function are gender, system and sector. After the execution of the three functions, we will have a table with labeled rows and fields with numeric values, which will help us build the similarity matrix. Note that it is important to save the initial state of the table before executing the preprocessing functions.

4.3 Similarity Matrix

After the preprocessing, each individual will be represented by a vector labeled with the primary key and having numeric values; an example of an individual in our Student table is M131 = [2 2005 2 4 12.88 1]. The next phase is to calculate the similarities s_ij between all pairs of individuals i and j, then store the values in the similarity matrix. We can use the Gaussian similarity as shown in formula (1):

s_{ij} = \exp\left( - \frac{\| x_i - x_j \|^2}{2\sigma^2} \right)   (1)

with || x_i − x_j || the Euclidean distance between x_i and x_j, and σ > 0 a parameter to control the size of the neighborhood. With σ = 5 we obtain the similarity matrix shown in Fig. 2.

Fig. 2. Similarity matrix.
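The sketch below applies formula (1) to two preprocessed student vectors; the first vector is the example from the text, while the second one and the function names are hypothetical, introduced only for illustration.

import numpy as np

def gaussian_similarity(xi, xj, sigma=5.0):
    d = np.linalg.norm(np.asarray(xi, float) - np.asarray(xj, float))
    return float(np.exp(-d ** 2 / (2 * sigma ** 2)))

m131 = [2, 2005, 2, 4, 12.88, 1]   # example individual from the text
m207 = [1, 2006, 2, 3, 14.50, 3]   # hypothetical second individual
print(gaussian_similarity(m131, m207))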


4.4 GEXF Exportation

In GEXF format, in addition to the type (directed or undirected) and the name, a graph is defined by a set of nodes and a set of edges between those nodes. Each node is defined by an id and a label, a numeric value, a 3D position (z = 0 for 2D) and an RGB color, and each edge is defined by a source node, a destination node and its weight. The GEXF exportation function starts by creating the nodes of the graph, writing one node element per data point.

Then it creates the edges between nodes using the similarity matrix, writing one edge element per connected pair; a small sketch of both element forms is given below.
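The following sketch builds node and edge elements as strings; it is an assumption based on the public GEXF 1.2 draft (with the viz extension for position and color), not the exact syntax used by the authors, and the helper names are made up for illustration.

def gexf_node(node_id, label, x, y, r=0, g=0, b=255):
    return (f'<node id="{node_id}" label="{label}">'
            f'<viz:position x="{x}" y="{y}" z="0"/>'
            f'<viz:color r="{r}" g="{g}" b="{b}"/>'
            f'</node>')

def gexf_edge(edge_id, source, target, weight):
    return (f'<edge id="{edge_id}" source="{source}" '
            f'target="{target}" weight="{weight}"/>')

print(gexf_node("M131", "M131", 12.3, -4.5))
print(gexf_edge(0, "M131", "M207", 0.82))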

After generating the whole GEXF file, it can be easily imported into Gephi to visualize the generated graph. Figure 3 shows some examples of the visualization of the generated graphs: Graph 1 is a fully connected graph, Graph 2 is an ε-neighborhood graph with ε = 0.75 and Graph 3 is an ε-neighborhood graph having ε = 0.6 in addition to a selection F = σ_{mark ≥ 12}(Student).



Fig. 3. Graph visualization.

5 Conclusion This paper presents an algorithm to link a relational data to graph schema based on GEXF format; we also presented the Gephi tool to visualize the generated data. Starting with a relational data, the algorithm starts with a preprocessing phase to detect the classification fields and change the textual values of classification fields to numeric. Then after calculating the similarity matrix, we start the GEXF exportation process, first by creating the nodes and then by creating the edges between those nodes. The output graphs are shown in the previous section, as we mentioned, we can add selections to the relational data to generate a specific graph for a treated situation. This algorithm can be adapted to several use cases of relational data, in our case, the student table has 30 rows, and it took 427 ns to generate the GEXF schema of a graph with 30 nodes and 435 edges described in a file with nearly 1000 rows for the



fully connected graph. And since we deal with semi-structured data, the running time does not cause any problem even with a high dimensional data. As perspective, we will work on linking this approach to a graph-oriented database, such as Neo4j which can give us the possibility to execute NoSQL queries and deal with complex networks with a large set of heterogeneous data (Big data).

References 1. Miller, J.J.: Graph database applications and concepts with Neo4j. In: Proceedings of the Southern Association for Information Systems Conference, vol. 2324, p. 36 (2013) 2. Ait El Mouden, Z., Moulay Taj, R., Jakimi, A., Hajar, M.: Towards for using spectral clustering in graph mining. In: Big Data, Cloud and Applications. BDCA 2018. Communications in Computer and Information Science, vol. 872. Springer (2018) 3. Vicknair, C., et al.: A comparison of a graph database and a relational database: a data provenance perspective. In: Proceedings of the 48th Annual Southeast Regional Conference, April 2010 4. Cassandra homepage. http://cassandra.apache.org/ 5. BigTable. https://cloud.google.com/bigtable/ 6. Pennerath, F., et al.: The model of most informative patterns and its application to knowledge extraction from graph databases. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Berlin, Heidelberg (2009) 7. De la Rosa, P., et al.: SylvaDB: a polyglot and multi-backend graph database management system. In: Proceedings of the 2nd International Conference of Data Technologies and Applications, pp. 285–292 (2013) 8. Machkour, M. et al.: A reversible conversion methodology: between XML and objectrelational models. In: 7th International Conference on Information and Communication Systems (ICICS), Irbid, pp. 270–275 (2016) 9. Schaeffer, S.E.: Graph clustering. Comput. Sci. Rev. 1, 27–64 (2007) 10. Aggarwal, C.C., Wang, H.: A survey of clustering algorithms for graph data. In: Managing and Mining Graph Data. Advances in Database Systems, vol. 40. Springer (2010) 11. Rohe, K., et al.: Spectral clustering and high-dimensional stochastic blockmodel. Ann. Stat. 94(4), 1878–1915 (2011) 12. GEXF. https://gephi.org/gexf/format 13. Gephi. https://gephi.org

Self-calibration of the Fundus Camera Using the Genetic Algorithm

Mostafa Taibi1(✉), Rabha Allaoui2, and Raja Touahni1

1 Laboratory Telecommunications Systems and Engineering of the Decision, Faculty of Sciences, Ibn Tofail University, Kenitra, Morocco
{mostafa.taibi,touahni.raja}@uit.ac.ma
2 Department of Computers and Telecoms, National School of Applied Sciences of Khouribga, Khouribga, Morocco
[email protected]

Abstract. In this article, we present a method for self-calibration of the fundus camera from images of the back of the eye. The data are limited to inter-view homographies between a reference image and the other images. Our first contribution is to propose a new self-calibration method for the fundus camera based on the idea of adopting, as an estimate of the movement of the camera, the movement associated with the eye. A second contribution is the application of the genetic algorithm to determine the internal parameters of the camera. Our method is robust and gives good results, as demonstrated by our experiments.

Keywords: Genetic algorithm · Self-calibration · Computer vision · Fundus camera

1 Introduction

The fundus camera is a device used to photograph the back of the eye. Widely used by ophthalmologists, its images are essential in consultations for diabetes, glaucoma and other pathologies. This technique is simple and manageable, but gives only a small field of observation. Moreover, it does not allow a vision of the relief. It should also be noted that, in order to reliably compare two examinations, the patient must be followed in the same place with the same device. To circumvent this drawback, the images of the fundus can be processed to produce a simple 3D volume rich in reconstructed information thanks to 3D reconstruction algorithms. This implies the need to calibrate the fundus camera. A calibrated fundus camera can extract more information about the 3D structure of the fundus components (position, shape, size, etc.), which gives ophthalmologists more opportunities to identify anomalies and helps them with the diagnosis. Camera calibration is an essential step for many computer vision algorithms and applications. It consists of estimating the intrinsic parameters (focal length, principal point) and extrinsic parameters (position and orientation) of the camera. These parameters are used to construct a three-dimensional model for visual inspection, detection and location of objects [1].



Calibration methods can be divided into two categories. The first one is based on a three-dimensional or two-dimensional reference object whose exact relationship between the metric coordinates of the object and its pixel coordinates is perfectly known [2–5]. The second category, called self-calibration, appeared in the early 1990s [6–8] and is used when no calibration object is available. The metric properties of the camera are computed from the uncalibrated images using constraints on the camera and/or the scene to be modeled [21, 22]. Most calibration methods are solved in two steps: the first step is to find an approximate solution using a linear method; in the second step the result obtained is refined iteratively by a nonlinear method, based on the minimization of a cost function. Several strategies have been proposed to determine the global minimum of a nonlinear function (quasi-Newton methods [12], gradient descent, Levenberg-Marquardt [13, 14], Powell, trust regions [15–17], …); these methods iteratively provide relative displacements in the parameter space to be optimized. In this work, we propose a method for self-calibration of a camera whose focal length can vary during the intervention, using the geometrical properties of the eye, more precisely its movement, to apply the self-calibration method by pure rotation. A second contribution consists of the use of the genetic algorithm, a stochastic global search optimization algorithm, to determine the intrinsic parameters of the rotating camera. To represent the camera, most calibration methods are based on the pinhole model, where all the rays pass through a single point (the optical center); this mathematically rich and easily exploitable model can represent most projective sensors.

2 Geometric Framework

We model a rotating camera as a projective camera. The 3 × 4 projection matrix H associated with this model groups the internal and external parameters of the camera:

H = A [R | t]

R is an orthogonal matrix and t is a vector of R^3, which represent respectively the orientation and the position of the camera; the calibration matrix A contains the intrinsic parameters, which represent the internal characteristics of the camera and are invariant to its position.

A = \begin{bmatrix} a & c & u_0 \\ 0 & b & v_0 \\ 0 & 0 & 1 \end{bmatrix}

where (u_0, v_0) are the coordinates of the principal point in the image, (a, b) represents the focal length of the camera expressed in pixels (in the vertical and horizontal directions), and c represents the skew parameter. The scale ratio s = a/b is another important parameter.



3 Methodology and Proposed Approach

In this part, we present the apparatus used for the acquisition of the images of the back of the eye, then we describe the physical and mathematical model proposed to calibrate these images.

3.1 Fundus Camera

Fundus examination is a painless examination, using a special camera to observe the retina and its vessels, the disc (optic disk), the macula (see Fig. 1). The images taken are interpreted by an ophthalmologist. This procedure can detect certain diseases that affect the retina, such as age-related macular degeneration, diabetic retinopathy or retinitis pigmentosa.

Fig. 1. Image of the fundus of the eye

The fundus examination takes place in a sitting position: the patient puts his face on a chin rest and has to fixate a point of light. The two eyes are examined successively; each shot causes a glare of a few seconds and the acquired images are recorded. Ophthalmology applications using smartphones currently include the examination of the fundus: economic and portable accessories clip onto the top of the smartphone and rest on the lens of the rear camera, allowing ordinary ophthalmic examinations.

3.2 Approach and Implementation of the Self-calibration Method

To carry out this work, a subset of images of the back of the eye is taken from different points of view. Our approach is inspired by the self-calibration method of Agapito [9]; the idea is to adopt, as an estimate of the movement of the camera, the movement associated with the eye. Indeed, the moving eye rotates around its center of rotation. These movements are caused by the eye muscles and are an integral part of the visual system. Translational movements are considered insignificant.



We consider that the first position of the camera is associated with the world frame; the other positions of the camera are expressed relative to the first position. To find the transformation between two images, points are detected and matched with points detected in a reference image. The image selection criterion is based on the number of matched points, which must be greater than a constant. We assume that the skew is zero; the calibration matrix we use has the form:

A = \begin{bmatrix} a & 0 & u_0 \\ 0 & b & v_0 \\ 0 & 0 & 1 \end{bmatrix}   (1)

Considering two points \tilde{m}_i = [u_i\; v_i\; 1]^T and \tilde{m}_j = [u_j\; v_j\; 1]^T, projections of the same point \tilde{M} = [X\; Y\; Z\; 1]^T in two images, one can write:

[u_i\; v_i\; 1]^T = A_i [R_i \mid 0]\, [X\; Y\; Z\; 1]^T   (2)

[u_j\; v_j\; 1]^T = A_j [R_j \mid 0]\, [X\; Y\; Z\; 1]^T   (3)

[u_i\; v_i\; 1]^T = A_i R_i [X\; Y\; Z]^T   (4)

[u_j\; v_j\; 1]^T = A_j R_j [X\; Y\; Z]^T   (5)

Two images can be linked by a homography; substituting \tilde{M} from Eq. (5) we obtain:

[u_j\; v_j\; 1]^T = A_j R_j R_i^{-1} A_i^{-1} [u_i\; v_i\; 1]^T   (6)

The equation is rewritten as follows:

[u_j\; v_j\; 1]^T = H_{ij} [u_i\; v_i\; 1]^T, \quad \text{with } H_{ij} = A_j R_j R_i^{-1} A_i^{-1} = A_j R_{ij} A_i^{-1}   (7)



As R_{ij} is a rotation, the matrix is orthogonal (R_{ij} R_{ij}^T = I), and the equation becomes:

A_j A_j^T = H_{ij} (A_i A_i^T) H_{ij}^T   (8)

Here we recognize the expression of the image of the absolute conic (IAC) [11]: w^* = A A^T [9],

w^* = \begin{bmatrix} a^2 + u_0^2 & u_0 v_0 & u_0 \\ u_0 v_0 & b^2 + v_0^2 & v_0 \\ u_0 & v_0 & 1 \end{bmatrix}

The absolute conic is an imaginary conic of the plane at infinity. Its image w has the characteristic of depending only on the intrinsic parameters of the camera. For each homography we therefore have the following equation:

H_{ij} w_i H_{ij}^T = w_j   (9)

The calculation of A_i can thus be reduced to the estimation of w_i. For a sequence of n images, there are theoretically (n − 1) independent homographies, since we have the relationships H_{ij} = H_{ji}^{-1} and H_{ij} = H_{ik} H_{kj}. A classic solution is to take a reference view and consider only the homographies H_{0k}, ∀ k ∈ {1, ..., n}. w_j can be expressed linearly by the components of the symmetric matrix, which depends only on the intrinsic parameters of the camera related to the reference image:

w_0 = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_2 & a_4 & a_5 \\ a_3 & a_5 & a_6 \end{bmatrix}

Knowing that the skew is equal to zero, in this case a_4 = 0. This condition gives us an equation for each homography. All of these equations can be grouped into a system of linear equations Ea = 0; with more than 5 homographies we have an overdetermined system whose solution is obtained with the linear least squares method by minimizing ||Ea|| such that ||a|| = 1. Once the matrix w_0 is found, we can get the elements of the calibration matrix A_0 [10]:

u_0 = w_0(3, 1), \quad v_0 = w_0(3, 2), \quad a = \sqrt{w_0(1, 1) - u_0^2}, \quad b = \sqrt{w_0(2, 2) - v_0^2}   (10)



This estimate is then refined using a genetic algorithm, where the parameters to be computed are the unknown intrinsic parameters of each calibration matrix, and the cost function to be minimized is:

\sum_{i=1}^{n} \left\| \frac{A_i A_i^T}{\|A_i A_i^T\|_F} - \frac{H_{0i} A_0 A_0^T H_{0i}^T}{\|H_{0i} A_0 A_0^T H_{0i}^T\|_F} \right\|_F   (11)

where the subscript F indicates the use of the Frobenius norm.
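A small Python sketch of Eqs. (10)-(11) follows (the paper's experiments use Matlab; this is illustrative only, and all function names are assumptions): the intrinsics are read off a matrix w = A A^T, and the cost compares each view's normalized w_i with the reference w_0 transported by the homography H_0i.

import numpy as np

def intrinsics_from_w(w):
    """Eq. (10): recover (a, b, u0, v0) from w = A A^T (zero skew, w[2,2] = 1)."""
    u0, v0 = w[2, 0], w[2, 1]
    a = np.sqrt(w[0, 0] - u0 ** 2)
    b = np.sqrt(w[1, 1] - v0 ** 2)
    return a, b, u0, v0

def calibration_cost(A_list, A0, H_list):
    """Eq. (11): sum of Frobenius norms of normalized differences over all views."""
    total = 0.0
    w0 = A0 @ A0.T
    for A_i, H in zip(A_list, H_list):
        w_i = A_i @ A_i.T
        t = H @ w0 @ H.T
        total += np.linalg.norm(w_i / np.linalg.norm(w_i) - t / np.linalg.norm(t))
    return total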

4 Genetic Algorithm An optimization problem is defined as the search, in a space of solutions, an optimal solution quantified by an objective function. This quantification leads to wanting to maximize or minimize the problem. In this section, we explain how to apply Genetic Algorithm methods to parameter estimation. Genitic Algorithm is a very prominent non-traditional optimization technique which resembles the theory of evolution. It is an adaptive search algorithm that works based on the methods of natural selection. Unlike the previous algorithms, GA works based on logic, not derivatives of a function and it can search for a population of solutions, not only one solution set. The logic it uses is based on the concept of ‘Survival of the fittest’ from Darwin’s theory [18] which means only the most competent individual will survive and generate other individuals that might perform better than the current generation. While a variety of explanations can be found about this algorithm in the literature, the most common way to explain GA is to look at it as a replica of biological chromosomes and genes, where the chromosome is a solution set or an individual containing the set of parameters to be optimized, and a gene represents single components of those parameters. New generations of chromosomes can be generated by manipulating the genes in the chromosomes. A collection of chromosomes is known as a population, and the population size is the exact number of chromosomes in the experiment. Two basic genetic operators ‘mutation’ and ‘crossover’ are used to manipulate genes in the chromosomes. The crossover operation combines genes from two parents to form an offspring, while the mutation operation is used to bring new genetic material into the population by interchanging the genes in the chromosome [19]. The following steps summarize how GA works in solving optimization problems. Step 1. Generate n population of chromosomes at random. Step 2. Compute the fitness function of each chromosome. Step 3. Generate a new population using the selected GA operator. Step 4. Run the algorithm using the newly generated population. Step 5. Stop if a particular stopping condition is satisfied or Step 6. Go back to step 2.



Chromosomes are selected as parents for step (3) based on some selected rule (GA operator) to produce new chromosomes for the next iteration. In this experiment, the stochastic universal sampling technique was adopted. Because original chromosomes are given randomly [20], this may induce getting different solution set, we have specified upper and lower bounds for all variables. Both mutation and crossover were applied concurrently to reproduce the fittest offspring.
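The following is a small, generic real-coded GA loop of the kind outlined in the steps above; it is an illustrative sketch, and the population size, bounds handling and operators are assumptions rather than the authors' settings.

import random

def run_ga(fitness, bounds, pop_size=40, generations=100, cx_rate=0.8, mut_rate=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                                  # elitism: keep the best individual
        while len(new_pop) < pop_size:
            p1, p2 = (min(random.sample(pop, 3), key=fitness) for _ in range(2))  # tournament
            if random.random() < cx_rate:                    # uniform crossover
                child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            child = [random.uniform(lo, hi) if random.random() < mut_rate else v
                     for v, (lo, hi) in zip(child, bounds)]  # mutation within bounds
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)
    return best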

5 Experimental Verification of the Approach

An experimental evaluation is necessary for the validation of our self-calibration approach for the fundus camera. We have implemented the method described in the previous section in Matlab on a Core 2 Duo computer. The tests are carried out on a snapshot composed of 5 images of size 345 × 405 pixels of the back of the eye. Our goal is to evaluate the applicability of the presented method. For these 5 fundus images we estimated the inter-view homography matrices. We suppose the camera moves around the eye such that its motion is a rotation. Table 1 shows the results found:

Table 1. Intrinsic parameters

Image   | a      | b      | s    | u0     | v0
Image 1 | 95.17  | 115.40 | 0.82 | 232.26 | 214.59
Image 2 | 34.22  | 19.49  | 1.75 | 128.79 | 294.11
Image 3 | 145.59 | 182.16 | 0.8  | 236.32 | 138.69
Image 4 | 78.36  | 98.67  | 0.79 | 286.13 | 333.44
Image 5 | 132.43 | 159.73 | 0.83 | 193.18 | 8.79

The solutions are obtained after 2 s of computation time for the genetic algorithm. The focal length of the camera is different for each view; its value lies in the interval [19, 183]. The scale ratio has an almost fixed value equal to 0.8, except for image 2, whose result will not be retained. The coordinates of the principal point vary with the change of focal length (Table 2).

610

M. Taibi et al.

After optimization, the intrinsic parameters obtained are remarkably accurate, the projection residual is close to zero for all the images. The result obtained is very satisfactory. It seems that the number of images as well as the number of variables that is equal to 4 contributed to the rapid convergence of our algorithms. The below Fig. 2 shows the behavior of the Genetic Algorithm for each image.

Fig. 2. Convergence process of the Genetic Algorithm for each image

Self-calibration of the Fundus Camera Using the Genetic Algorithm

611

6 Conclusion We have proposed in this paper a method of Self-calibration the Fundus camera from inter-image homography matrices that can be calculated easily. Our method exploits the rotational movement of the eye and we assume that the focal length is variable. To solve the problem of optimization the genetic algorithm allowed us to find satisfactory solutions according to the criteria of time and error, under real conditions, experiments have shown that this method is usable.

References 1. Zhang, Z., Han, Y., Zhou, Y., Dai, M.: A novel absolute localization estimation of a target with monocular vision. Optik 124(12), 1218–1223 (2013) 2. Feng, X.-f., Pan, D.-f.: A camera calibration method based on plane mirror and vanishing point constraint. Optik 154, 558–565 (2017) 3. Heikkilä, J., Silven, O.: A four step camera calibration procedure with implicit image correction. In: Computer Vision and Pattern Recognition, pp. 1106–1112 (1997) 4. Staranowicz, A.N., Brown, G.R., Morbidi, F., Mariottinia, G.L.: Practical and accurate calibration of RGB-D cameras using spheres. Comput. Vis. Image Underst. 137, 102–114 (2015) 5. Zhang, Z.: A flexible new technique for camera calibration. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000) 6. Faugeras, O.D., Luong, Q.-T., Maybank, S.J.: Camera self-calibration: theory and experiments. In: Proceedings of the Second European Conference on Computer Vision, ECCV 1992, pp. 321–334. Springer, London, UK (1992) 7. Maybank, S.J., Faugeras, O.D.: A theory of self-calibration of a moving camera. Int. J. Comput. Vis. 8(2), 123–151 (1992) 8. Fusiello, A.: Uncalibrated Euclidean reconstruction, a review. Image Vis. Comput. 18(6–7), 555–563 (2000) 9. Agapito, L., Hayman, E., Reid, I.: Self-calibration of rotating and zooming cameras. Int. J. Comput. Vis. 45, 107–127 (2001) 10. Ji, Q., Dai, S.: Self-calibration of a rotating camera with a translational offset. IEEE Trans. Robot. Autom. 20(1), 1–14 (2004) 11. Datta, A., Kim, J.-S., Kanade, T.: Accurate camera calibration using iterative refinement of control points. In: 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pp. 1201–1208. IEEE (2009) 12. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(3), 503–528 (1989) 13. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 1 edn. Cambridge University Press, Cambridge (2003) 14. Nielsen, H.B.: Damping parameter in marquardt’s method. Technical report, Informatics and Mathematical Modelling, Technical University of Denmark, DTU, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby (1999) 15. Powell, M.J.D.: A hybrid method for nonlinear equations. In: Rabinowitz, P. (ed.) Numerical Methods for Nonlinear Algebraic Equations, pp. 87–114 (1970) 16. Nocedal, J., Yuan, Y.: Combining trust region and line search techniques. Technical report, Advances in Nonlinear Programming, Kluwer (1992)

612

M. Taibi et al.

17. Argyros, M.L.A.: Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment? ICCV (2005) 18. Michalewicz, Z.: GAs: What are they? In: Genetic Algorithms + Data Structures = Evolution Programs, pp. 13–30. Springer, Berlin/Heidelberg, Germany (1994) 19. Heaton, J.: Introduction to Neural Networks with Java, 2nd edn. Heaton Research Inc., St. Louis (2008) 20. Göçken, M., Özçalıcı, M., Boru, A., Dosdoğru, A.T.: Integrating metaheuristics and artificial neural networks for improved stock price prediction. Expert Syst. Appl. 44, 320–331 (2016) 21. Boudine, B., Kramm, S., El Akkad, N., Bensrhair, A., Saaidia, A., Satoria, K.: A flexible technique based on fundamental matrix for camera self-calibration with variable intrinsic parameters from two views. J. Vis. Commun. Image Represent. 39, 40–50 (2016) 22. Sun, Q., Wang, X., Jiping, X., Wang, L., Zhang, H., Jiabin, Yu., Tingli, S., Zhang, X.: Camera self-calibration with lens distortion. Optik 127(10), 4506–4513 (2016)

Utilizing Faults and Time to Finish Estimating the Number of Software Test Workers Using Artificial Neural Networks and Genetic Programming Alaa Sheta1(&), Sultan Aljahdali2, and Malik Braik3 1

3

Department of Computing Sciences, Texas A&M University-Corpus Christi, Texas, TX 78412, USA [email protected] 2 Computer Science Department, College of Computers and Information Technology, Taif, Saudi Arabia [email protected] Department of Computer Science, Al-Balqa Applied University, Salt, Jordan [email protected]

Abstract. Time, effort and the estimation of number of staff desired are critical tasks for project managers and particularly for software projects. The software testing process signifies about 40–50% of the software development lifecycle. Faults are detected and corrected during software testing. Accurate prediction of the number of test workers necessary to test a software before the delivery to a customer will save time and effort. In this paper, we present two models for estimating the number of test workers required for software testing using Artificial Neural Networks (ANN) and Genetic Programming (GP). We utilize the expected time to finish testing and the rate of change of fault observation as inputs to the proposed models. The proposed models were able to predict the required team size; thus, supporting project managers in allocating the team effort to various project phases. Both models yielded promising estimation results in real-time applications. Keywords: Prediction of test workers  Software testing  Project management Artificial Neural Networks  Genetic Programming

1 Introduction Software testing process is defined as the process of implementing a program with the intent to find software bugs, errors or any defects [1]. It requires numerous efforts and might cost more than 50% of the project development effort [1]. This process should deliver software with minimum or no faults. The software development life cycle is all about people, methodologies and tools. This is evident from the software development process (see Fig. 1). People (i.e., staff) need to collect the project requirements, develop a project plan, make a design, deploy the project, test and validate the business requirements and finally detect and fix the bugs if any. The standard software development life cycle consists of multiple stages: requirements, analysis and design, coding, unit and system test and finally software evolution. The staff management process © Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 613–624, 2019. https://doi.org/10.1007/978-3-030-03577-8_67

614

A. Sheta et al.

for the project development involves the following five phases: Staff Planning, Staff Acquisition, Staff Training, Staff Tracking, and Staff Transition. This process appears in Fig. 2. Specific information related to staff must be collected, organized and updated during the project development life cycle. The project manager should be able to identify the size of the team required to test the software that primarily relies on the expected number of faults. Employing a large team with no need means money loss; while employing a small team with a lot of bugs in the software means a delay on the delivery day. Therefore, a compromise must be reached. This can be achieved by building a model that can estimate the required number of test workers to utilize the bugs or faults in the software.

Fig. 1. Software development process [4].

Fig. 2. Staff management process [4].

More recently, there has been a growing interest in the use of evolutionary computation and soft computing techniques such as Genetic Algorithms (GAs), Genetic Programming (GP), Artificial Neural Networks (ANNs), Fuzzy Logic (FL), Particle Swarm Optimization (PSO) and Grey Wolf Optimizer (GWO) to solve a variety of software engineering problems such as estimating software effort and detecting faults during software testing [2–5]. Sheta [2] presented two models using GAs to estimate the effort required to develop software projects. Several models in predicting the software cost using PSO, FL and other cost estimation models were presented in [3]. Sheta et al. [4] showed an estimate of the number of test workers necessary for a software testing process using ANN. In [5], the authors used GWO to estimate the parameters of the software reliability growth model in order to reduce the difference between the expected and the actual number of failures of the software system. In this paper, we use ANN and GP approaches to developing models for estimating the number of test workers required to test software utilizing the number of measured faults. Experiments are conducted for two different projects to evaluate the performance of the developed models. This paper is organized as follows. In Sect. 2, an overview of the artificial neural network is given with an insight into the multilayer Perceptron architecture and the learning algorithm used. Section 3 presents the basic concepts of genetic programming. The test data used for training and testing the models is presented in Sect. 4 with the proposed model structure given in Sect. 5. Section 6, lists the criteria used in assessing the models. Finally, the results of the developed models are presented in Sect. 7 with concluding comments in Sect. 8.

Utilizing Faults and Time to Finish

615

2 Artificial Neural Network ANN, a widely parallel distributed processor, has the apt to identify functions, examples or patterns so long as it is trained with prior knowledge. Basically, ANNs have emerged to simulate biological neural systems-particularly the human brain. The fundamental structure of the ANN system consists of a number of interconnected processing elements, referred to as neurons, which are organized in the input, output, and hidden layers. The neurons are joined to each other through a set of synaptic weights. Each neuron receives inputs from a single or multiple neurons, processes it through a specific activation function, and accordingly generates a transformed output to other neurons or external outputs. Although every single neuron performs its task somewhat incompletely or at a slow pace, jointly ANN structure can conduct an immense number of functions efficiently. To create an optimal design for an ANN with good fitting ability, there is a need to create a suitable configuration for the network, mainly in regards to hidden neurons, type of transfer function, number of input variables and the learning algorithm used to adjust the network’s weights. Actually, when the number of model parameters increases, this favors the learning over the network and therefore preferable fitting. The learning algorithm may be the most important factor among all in identifying the model, as it is an essential process for updating the network weights and estimating the model parameters that fit with the given data set so that the target function is met. ANN continually updates its weights during the learning process until sufficient knowledge is acquired. When the learning process is completed, it is needful to assess the network’s generalization capability using unknown samples of the problem. The importance of learning algorithms has spurred the development of many learning algorithms that are looking for an optimal computational effort that allows for finding optimal quality solutions. Several superb features of ANN have made it a potent computational tool to address a wide range of problems in a variety of areas [6, 7]. ANNs have an ability to learn from unseen examples that have not been formulated, viewed or used. In this context, ANN can be treated as a multivariate nonlinear statistical method and as a universal approximator to approximate nonlinear functions with the desired accuracy. 2.1

Multi-Layer Perceptron

Multi-Layer Perceptron (MLP), as a global approximator of functions, is the most familiar kind of ANN. It became prevailing with the evolution of the Back-Propagation (BP) learning algorithm [8]. The training process of MLP relies on an optimization scheme that looks for the best set of network parameters, or specifically the weights, in relation to input and output patterns to be fit by ANN, referred to as a supervised learning scheme. The MLP network is organized into three layers; the input layer, hidden layer, and output layer. The ANN is expanded when hidden neurons are added to a hidden layer, one by one until the ANN model is capable of achieving its functionality with the lowest possible error. This depends principally on the problem’s complexity that is allied to the complexity of the input-output datasets. The BP algorithm consists mainly of two phases. The first layer is ordinarily referred to as the forward pass and the second layer is known as the backward pass. In the forward

616

A. Sheta et al.

process: an external input vector X ¼ ðx1 ; x2 ;    ; xn Þ with dimension n is initially fed to the input neurons of the input layer; then the outputs from the input neurons are fed to the hidden neurons of the hidden layer; finally, the outputs of hidde the n layer are presented to the output neurons of the network, leading to output yin the output layer where the network weights are all stationary. In the backward process, the weights are fine-tuned on the basis of the error between the true and the in demand outputs. The prefatory step in training ANNs is to initialize the weight vector ~ w. During the computation of the forward process, the weight vector is adjusted until it reaches the desired behavior. The output y is assessed to measure the network’s performance; if the output is not desirable, the weights have to be iteratively adapted in terms of input patterns. In supervised learning, the goal is to generate an output approximation with coveted patterns of input-output samples as described in Eq. 1. ~ Tk ¼



xk 2 R N ; d k 2 R M



; k ¼ 1; 2;    ; p

ð1Þ

where ~ T k are the training samples, x is an input pattern, d is the desired response, N and M are the number of samples in the input and output patterns. The requirement is to design and analyze the parameters of the network model so that the actual output yk due to xk is statistically close to the required degree d k for all k. The MLP weights can be updated using the BP algorithm. The use of BP algorithm to adjust the ANN’s weights may stick in a local minimum, and further, it might not be able to solve non continuous problems. Thence, it may be better to pay attention to other learning algorithms that can address non nonlinear problems, which are critical to achieving high level of performance in solving complex problems. The vector ~ w is updated through the learning process of the MLP-type ANN until an error criterion asthe one defined in Eq. 2 is converged to an appropriate value. e¼

M 1 X ðyi ðxk ; ~ wÞ  dik Þ2 M  p i¼1

ð2Þ

Where: yi is the ith output overall p pattern samples, and dik is the desired result. The MLP-ANN can be represented mathematically as stated in Eq. 3 [9]: " ^yi ðtÞ ¼ gi ½u; h ¼ Fi

nh X j¼1

Wi;j fj

nu X

! xj;l ul þ xj;0 þ Wi;0

# ð3Þ

l¼1

where ^yi ðtÞ is the output signal at time sample t, gi is the function recognized by the ANN model and h identifies the weight parameter vector, which includes all the tunable parameters of the network (weights xj;l and biases Wi;j ). Here, MLP is trained using the BP algorithm so that the output ^y corresponds to the input u when an objective criterion as introduced later is met.

Utilizing Faults and Time to Finish

617

3 Genetic Programming GP is an evolutionary-based algorithm with a global search potential proposed by Koza in 1992 [10]. GP is among the most well-known algorithms under evolutionary algorithms or nature-inspired algorithms. It is inspired based on the biological evolutionary ideas of natural selection which is capable of finding solutions for a broad variety of real problems through automatically evolving randomly generated models. The relations are arrived at out of an evolutionary search process, with an assortment of potentialities for the real algorithm concerned, for instance, a plan, an expression, a formula, a decision tree, a control strategy or problem-based learning model [11]. The evolutionary process of GP in this context commences by generating a population of individuals at random, each representing a computer program. This is followed by an evaluation of an evaluation metric measure (i.e., the fitness function) of the program with regard to its capacity to reach a solution. The fitness addresses how an individual fits an environment; it is a criterion for selecting individuals who are interested in generating a new population. Programs which are especially fit are chosen for recombination based on the fitness value to create a new population using genetic operators, including selection, crossover besides the mutation operator. The evolutionary process is repeated until a passible solution is reached, or the number of predefined runs is exceeded. The fitness function is recalculated inside each iteration loop until convergence. The programs can be perceived as a syntax tree, where a branch node is a component from a function group, which in turn may accommodate arithmetic functions, trigonometry, and logic operators with at least one argument [10]. The following steps are involved iteratively in the GP evolutionary process until the convergence process is achieved: 1. Selection step: some individuals (computer programs) are picked for reproduction using a defined selection procedure. Selection mechanisms may take, for example, one of the following forms: • Roulette wheel, where the likelihood that an individual is picked relies on its normalized fitness value [11]; • Ranking, Which depends on the order of fitness values of individuals [12]; • Tournament, where individuals are sampled from the population, where the individual with the highest fitness is picked out provided that there is more than one individual [11]; • Elitism copies the best individuals in the following generation, where it can enhance performance by eschewing the loss of fit individuals [12]; 2. Creation step: new individuals are produced through the use of reproduction operators, which typically involve the crossover and mutation operators. These operators accomplish random soft changes to create individuals. Crossover is a process that produces two new individuals through a probabilistic swap of genetic information between two randomly selected parents, facilitating a global search for the best individuals in operation. Mutation is an operator that prompts a slight probabilistic change in the genetic structure, resulting in a local search. It selects one individual that commences by choosing a point at random within the tree, and


then replaces the function or terminal at that point with an element of the same kind. Old individuals are replaced by the new individuals created by the reproduction operators in order to build a new generation.

3. Evaluation step: the process continues to iterate until an optimal solution, based on the fitness metric, is achieved or the number of generations is exceeded.

There is an extensive literature on GP for solving a broad range of complex real problems and, more recently, a number of studies have reported on the success of GP in crucial areas such as software engineering, image processing, manufacturing and statistical modeling [13, 14].

4 Test/Debug Data

To evaluate the accuracy of the developed MLP-ANN and GP models, extensive experiments were conducted on test datasets from two different projects, project A and project B. The collected data consist of 200 modules, each with one kilo-line of FORTRAN code for a real-time control application [15]. The test/debug dataset for project A consists of 111 measurements of test instances (D), real detected faults (F) and number of test workers (TW), as given in Table 1. The test/debug dataset for project B contains 46 measurements, as shown in Table 2 [15]. The available measurements are limited in this case, which represents a challenge for traditional modeling methods.

5 Proposed Models for Test Workers Estimation

Two types of models were explored for projects A and B, as aforementioned. The first model was developed using MLP-ANN and the second was formed using GP. Here, a new model structure that can help estimate the number of test workers during the software testing process was proposed. The available dataset includes the date of test d, the observed number of faults F and the number of test workers y. The proposed model structure is given in Eq. 4:

y = f\left(\frac{\partial F}{\partial t},\; d(n - t)\right)    (4)

where \partial F/\partial t is the rate of change of the faults as a function of time t, with t = 1, 2, \dots, n and n the expected day to finish the software testing; we rename this attribute x1. d(n − t) is the countdown to the end day of testing; we rename this attribute x2.
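As a rough illustration of how these two inputs could be derived from a daily fault log, the following Python sketch builds x1 through a finite-difference approximation of \partial F/\partial t and x2 as the countdown. The function name, the use of np.gradient as the discretization, and the sample values are illustrative assumptions, not the authors' implementation.

import numpy as np

def build_features(faults, n):
    # faults : detected faults F(t) for t = 1..len(faults)
    # n      : expected day to finish testing
    F = np.asarray(faults, dtype=float)
    t = np.arange(1, len(F) + 1)
    x1 = np.gradient(F, t)   # discrete approximation of dF/dt
    x2 = n - t               # countdown d(n - t)
    return x1, x2

# Example with the first ten days of a hypothetical fault log
x1, x2 = build_features([5, 5, 5, 5, 6, 8, 2, 7, 4, 2], n=111)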

6 Model Evaluation Criteria

In order to verify the performance of the developed MLP-ANN and GP models, we have explored a number of performance assessment functions, including:


• Correlation coefficient (R²):

R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}    (5)

• Mean absolute error (MAE):

MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|    (6)

Table 1. Test/debug dataset for project A.

D   F  TW   D   F  TW   D   F  TW   D    F  TW
1   5  4    29  2  6    57  2  4    85   0  2
2   5  4    30  5  6    58  3  4    86   0  2
3   5  4    31  4  6    59  2  4    87   2  2
4   5  4    32  1  6    60  7  4    88   0  2
5   6  4    33  4  6    61  3  4    89   0  2
6   8  5    34  3  6    62  0  4    90   0  2
7   2  5    35  6  6    63  1  4    91   0  2
8   7  5    36  13 6    64  0  4    92   0  2
9   4  5    37  19 8    65  1  4    93   0  2
10  2  5    38  15 8    66  0  3    94   0  2
11  31 5    39  7  8    67  0  3    95   0  2
12  4  5    40  15 8    68  1  3    96   1  2
13  24 5    41  21 8    69  1  3    97   0  2
14  49 5    42  8  8    70  0  3    98   0  2
15  14 5    43  6  8    71  0  3    99   0  2
16  12 5    44  20 8    72  1  3    100  1  2
17  8  5    45  10 8    73  1  4    101  0  1
18  9  5    46  3  8    74  0  4    102  0  1
19  4  5    47  3  8    75  0  4    103  1  1
20  7  5    48  8  4    76  0  4    104  0  1
21  6  5    49  5  4    77  1  4    105  0  1
22  9  5    50  1  4    78  2  2    106  1  1
23  4  5    51  2  4    79  0  2    107  0  1
24  4  5    52  2  4    80  1  2    108  0  1
25  2  5    53  2  4    81  0  2    109  1  1
26  4  5    54  7  4    82  0  2    110  0  1
27  3  5    55  2  4    83  0  2    111  1  1
28  9  6    56  0  4    84  0  2


Table 2. Test/debug dataset for project B.

D   F  TW    D   F  TW   D   F  TW   D   F  TW
1   2  75    13  3  78   25  1  15   37  3  5
2   0  31    14  4  48   26  7  31   38  0  27
3   30 63    15  4  75   27  0  1    39  0  6
4   13 128   16  0  14   28  22 57   40  0  6
5   13 122   17  0  4    29  2  27   41  0  4
6   3  27    18  0  14   30  5  35   42  5  1
7   17 136   19  0  22   31  12 26   43  2  6
8   2  49    20  0  5    32  14 36   44  3  5
9   2  26    21  0  9    33  5  28   45  0  8
10  20 102   22  30 33   34  2  22   46  0  2
11  13 53    23  15 18   35  0  4
12  3  26    24  2  8    36  7  8

• Root mean square error (RMSE):

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}    (7)

• Relative absolute error (RAE):

RAE = \frac{\sum_{i=1}^{n} |y_i - \hat{y}_i|}{\sum_{i=1}^{n} |y_i - \bar{y}|}    (8)

• Root relative squared error (RRSE):

RRSE = \sqrt{\frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}}    (9)

where y is the actual number of test workers, \hat{y} is the estimated value and \bar{y} is the mean of the signal y over the n measurements.
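A minimal sketch of these five criteria in Python is given below; the function name and the example values are illustrative, not taken from the paper's experiments.

import numpy as np

def evaluate(y, y_hat):
    # Compute the criteria of Eqs. 5-9 for actual y and estimates y_hat
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return {
        "R2":   1.0 - ss_res / ss_tot,
        "MAE":  np.mean(np.abs(y - y_hat)),
        "RMSE": np.sqrt(np.mean((y - y_hat) ** 2)),
        "RAE":  np.sum(np.abs(y - y_hat)) / np.sum(np.abs(y - y.mean())),
        "RRSE": np.sqrt(ss_res / ss_tot),
    }

print(evaluate([4, 4, 5, 5, 6], [4.2, 3.9, 5.1, 4.8, 6.3]))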

7 Experiments and Results

We generated the required attributes x1 and x2 from the available datasets of projects A and B to train an ANN and develop a GP mathematical model for test worker estimation. For MLP-ANN, one hidden layer was used, while the rest of the parameters were tuned as shown in Table 3. The tuning parameters of GP are set as shown in Table 4.

Table 3. MLP parameters.
Parameter         Value
Architecture      MLP
Training method   Back-propagation
Hidden layers     1
Hidden neurons    30
Learning rate     0.9
Maximum epochs    300

Table 4. GP regression parameters.
Parameter               Value
Population size         500
Maximum tree depth      4
Selection mechanism     Tournament
Tournament size         10
Mutation probability    0.14
Crossover probability   0.84
Function operators      [+, −, ×]
Cross-validation        10-fold

The observed and estimated number of workers calculated for the test instances and the real detected faults for Project A based on ANN is shown in Fig. 3 and its convergence is shown in Fig. 4. For Project B, the observed and estimated number of workers and the convergence of ANN are shown in Figs. 5 and 6, respectively.

Fig. 3. Project A: observed and estimated number of test workers using MLP.

Fig. 5. Project B: observed and estimated number of test workers using MLP.

Fig. 4. Project A: convergence of the MLP model

Fig. 6. Project B: convergence of the MLP model.


The GP models for projects A and B are shown in Eqs. 10 and 11, respectively:

y_A = 0.00528\,x_1 (x_1 + x_2)^2 - 0.0895\,x_1 x_2 - 3.56\times10^{-4}\,x_1^2 x_2 - 0.0886\,x_1^2 + 8.95\times10^{-4}\,x_2^3 - 5.03\times10^{-6}\,x_1^2 x_2 (x_1 + x_2)^2 + 9.33    (10)

y_B = 0.0614\,x_1^2 - 6.72\times10^{-4}\,x_1^2 x_2 + 0.00202\,x_2^2 - 1.63\times10^{-5}\,x_2^3 - 6.38\times10^{-7}\,x_1 x_2 (x_1 + 5.0)(x_1 - x_2) + 1.03    (11)
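For illustration, the two evolved expressions can be written directly as Python functions. The coefficients and term signs follow the reconstruction of Eqs. 10 and 11 above, which was recovered from a partially garbled source, so this sketch should be read as an interpretation rather than the authors' exact models.

def y_A(x1, x2):
    # Evolved GP model for project A (Eq. 10, as reconstructed above)
    return (0.00528 * x1 * (x1 + x2) ** 2 - 0.0895 * x1 * x2
            - 3.56e-4 * x1 ** 2 * x2 - 0.0886 * x1 ** 2
            + 8.95e-4 * x2 ** 3
            - 5.03e-6 * x1 ** 2 * x2 * (x1 + x2) ** 2 + 9.33)

def y_B(x1, x2):
    # Evolved GP model for project B (Eq. 11, as reconstructed above)
    return (0.0614 * x1 ** 2 - 6.72e-4 * x1 ** 2 * x2 + 0.00202 * x2 ** 2
            - 1.63e-5 * x2 ** 3
            - 6.38e-7 * x1 * x2 * (x1 + 5.0) * (x1 - x2) + 1.03)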

The observed and predicted numbers of test workers for projects A and B based on GP are shown in Figs. 7 and 8, respectively. The evaluation results of the MLP and GP models are shown in Table 5. The results of these models are very competitive, with a slight superiority for MLP in one case and for GP in the other. GP still has the merit of producing compact mathematical models, which are easier to interpret than MLP models.

Fig. 7. Project A: observed and estimated number of test workers using GP.

Fig. 8. Project B: observed and estimated number of test workers using GP.

Table 5. Evaluation results of the MLP-ANN and GP models.

Criterion                     Project A MLP   Project A GP   Project B MLP   Project B GP
Correlation coefficient       0.99531         0.90161        0.84726         0.84228
Mean absolute error           0.036364        0.036364       13.044          14.778
Root mean squared error       0.19069         0.8528         19.589          18.598
Relative absolute error       2.3918%         34.681%        50.355%         57.046%
Root relative squared error   9.7244%         43.489%        56.775%         53.905%
Total number of instances     111             111            46              46

8 Conclusions

The proposed work has demonstrated the use of two computational techniques, namely the Multilayer Perceptron Artificial Neural Network and Genetic Programming, to model the relationship between the number of test workers and the measured faults in software and thus build two prediction models. In this context, the developed models utilized


the expected day to finish testing and the rate of change of faults as inputs. Two case studies were presented, and several evaluation criteria were used to validate the performance of the proposed models. All evaluation measures reported a high level of performance, on the basis of the satisfactory prediction estimates obtained, suggesting that the presented MLP and GP models are highly accurate and successfully learned the dynamic relationship between the inputs and the output. The use of GP and ANN to estimate the number of test workers from software faults is an exciting direction for future research. Further work is needed to assess the suitability of the proposed models to other test instances.

References
1. Yoon, I.C., Sussman, A., Memon, A., Porter, A.: Effective and scalable software compatibility testing. In: The 2008 ISSTA, pp. 63–74. ACM (2008)
2. Sheta, A.F.: Estimation of the COCOMO model parameters using genetic algorithms for NASA software projects. J. Comput. Sci. 2(2), 118–123 (2006)
3. Sheta, A.F., Ayesh, A., Rine, D.: Evaluating software cost estimation models using particle swarm optimization and fuzzy logic for NASA projects: a comparative study. IJBIC 2(6), 365–373 (2010)
4. Sheta, A.F., Kassaymeh, S., Rine, D.: Estimating the number of test workers necessary for a software testing process using artificial neural networks. IJACSA 5(7), 186–192 (2014)
5. Sheta, A.F., Abdel-Raouf, A.: Estimating the parameters of software reliability growth models using the Grey Wolf optimization algorithm. Int. J. Adv. Comput. Sci. Appl. 1(7), 499–505 (2016)
6. Braik, M., Sheta, A., Arieqat, A.: A comparison between GAs and PSO in training ANN to model the TE chemical process reactor. In: The AISB 2008 Symposium on Swarm Intelligence Algorithms and Applications, vol. 11, pp. 24–30 (2008)
7. Sheta, A.F., Braik, M., Al-Hiary, H.: Identification and model predictive controller design of the Tennessee Eastman chemical process using ANN. In: The International Conference on Artificial Intelligence (ICAI 2009), vol. 1, pp. 25–31 (2009)
8. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. In: Anderson, J.A., Rosenfeld, E. (eds.) Neurocomputing: Foundations of Research, pp. 696–699. MIT Press, Cambridge (1988)
9. Al-Hiary, H., Braik, M., Sheta, A., Ayesh, A.: Identification of a chemical process reactor using soft computing techniques. In: The 2008 International Conference on Fuzzy Systems within the 2008 IEEE World Congress on Computational Intelligence (WCCI 2008), pp. 845–853 (2008)
10. Koza, J.R.: Genetic Programming II: Automatic Discovery of Reusable Subprograms. MIT Press, Cambridge (1992)
11. Dimitriu, R.C., Bhadeshia, H., Fillon, C., Poloni, C.: Strength of ferritic steels: neural networks and genetic programming. Mater. Manuf. Process. 24(1), 10–15 (2008)
12. Naudts, B., Kallel, L.: A comparison of predictive measures of problem difficulty in evolutionary algorithms. IEEE Trans. Evol. Comput. 4(1), 1–15 (2000)
13. Faris, H., Sheta, A.: Identification of the Tennessee Eastman chemical process reactor using genetic programming. Int. J. Adv. Sci. Tech. 50, 121–140 (2013)


14. Sheta, A.F., Faris, H., Öznergiz, E.: Improving production quality of a hot-rolling industrial process via genetic programming model. Int. J. Comput. Appl. Technol. 49(3/4), 239–250 (2014)
15. Tohma, Y., Tokunaga, K., Nagase, S., Murata, Y.: Structural approach to the estimation of the number of residual software faults based on the hyper-geometric distribution. IEEE Trans. Softw. Eng. 15(3), 345–355 (1989)

A Comparative Analysis of Control Strategies for Stabilizing a Quadrotor Moussa Labbadi1(B) , Mohamed Cherkaoui1 , Yassine El Houm1 , and M’hammed Guisser2 1

2

Mohammadia School of Engineers (EMI), Engineering For Smart and Sustainable Systems Research Center, Mohammed V University, Rabat, Morocco [email protected],[email protected] Laboratory of Signal Analysis and Information Processing, Faculty of Science and Technology, Settat, Morocco

Abstract. Isn't it wonderful to see a floating body in the sky, challenging multiple physics laws with remarkable control and stability? These are today's flying vehicles, and this paper examines the secret of this technique from a different angle. First, the system is modeled using the Newton-Euler method, which balances the forces and moments acting on the different parts of the vehicle while taking the disturbing factors acting on the aerial vehicle into account. It is therefore essential to design a (position and attitude) controller to ensure the stability of the floating body while it follows a desired set-point in the form of a 3D trajectory; to this end, a robust nonlinear controller based on Lyapunov theory is designed. The latter is able to compensate the uncertain disturbing factors acting on the quadrotor. A comparative study of proportional-derivative (PD) and backstepping controllers is then carried out, comparing their performances using Matlab/Simulink as the workspace environment to implement the plant model and the different controllers. Keywords: Quadrotor drone · PD and backstepping control · Newton-Euler modeling · Lyapunov stability

1

Introduction

In the course of the last decade, the field of robotics has experienced rapid growth thanks to scientific and technological development. Among the fruits of this development are flying robots. These unmanned aerial vehicles (UAVs) have a complex mechatronic design. Drones can be used in many applications of daily life; for instance, they can be applied to environmental protection, urban monitoring, agricultural spreading and the maintenance of power lines in railways [1]. Quadrotor drones have a simple design compared to standard helicopters. These vehicles have better maneuverability and a reduced gyroscopic effect [2]. However, quadrotors have an unstable and strongly coupled nonlinear dynamic model, which calls for a robust controller that maintains the system in


the stable zone and can follow the predefined flight path; for this reason, several control strategies have been developed. In [3], a proportional-integral-derivative (PID) controller and a linear-quadratic (LQ) controller based on optimal control theory are proposed; these controllers allow the quadrotor to perform a hover. A robust backstepping method is presented in [4]; this technique makes it possible to control the orientation and the position of the vehicle without difficulty in the presence of disturbances. In [5,6], two nonlinear models were studied using the classical Newton-Euler and Lagrange-Euler approaches, and the stabilization of the drone is carried out using a controller based on Lyapunov analysis and the nested saturation technique. Similarly, [7] proposes a comparison between a linear LQR technique and integral sliding mode control. In the same way, the work developed in [8] designed a nonlinear H∞ controller allowing the desired trajectory to be followed. The rest of this document is organized as follows: the dynamic model of the quadrotor and the controllers are described in Sects. 2 and 3, respectively. The results of the simulation are discussed in Sect. 4. The last section is reserved for the conclusion.

2

Mathematical Modeling of a Quadrotor

This section is dedicated to the quadrotor dynamic modeling problem. To this end, a ground-fixed inertial frame RI and a mobile frame attached to the body of the vehicle are defined, as illustrated in Fig. 1.

Fig. 1. Schematic view of a Quadrotor.

2.1

Kinematics of a quadrotor

The quadrotor position is defined by the vector ξ(x, y, z) in the fixed frame and its angular velocity by the vector Ω(p, q, r). The orientation of the drone is given by the rotation matrix R, which is expressed in terms of the Euler angles roll (φ), pitch (θ) and yaw (ψ). These angles are bounded by (−π < φ < π), (−π/2 < θ < π/2) and (−π < ψ < π). The rotation matrix is:

R = \begin{pmatrix} C_\theta C_\psi & S_\phi S_\theta C_\psi - C_\phi S_\psi & C_\phi S_\theta C_\psi + S_\phi S_\psi \\ C_\theta S_\psi & S_\phi S_\theta S_\psi + C_\phi C_\psi & C_\phi S_\theta S_\psi - S_\phi C_\psi \\ -S_\theta & S_\phi C_\theta & C_\phi C_\theta \end{pmatrix}    (1)

where C(∗) and S(∗) denote cos(∗) and sin(∗), respectively.


2.2


Dynamics of a quadrotor

The Newton-Euler formalism, based on the fundamental principle of dynamics, is used to derive the equations of motion of the drone [9,10]:

m\ddot{\xi} = F_{th} + F_d + F_g, \qquad J\dot{\Omega} = M - M_{gp} - M_{gb} - M_a    (2)

where F_g is the force of gravity, F_{th} the total thrust force and F_d the air-friction force during vehicle movements, which is assumed to be proportional to the linear velocities of the drone. M = (τ_φ, τ_θ, τ_ψ) denotes the rolling, pitching and yawing torques developed by the quadrotor. M_{gb} and M_{gp} represent the gyroscopic torques of the body and of the propellers, M_a is the torque due to aerodynamic effects and J is the inertia matrix, assumed symmetric. The complete dynamics of the quadrotor is therefore [4,9]:

I_x\ddot{\phi} = \dot{\theta}\dot{\psi}(I_y - I_z) - I_r\Omega_r\dot{\theta} - k_1\dot{\phi}^2 + \tau_\phi
I_y\ddot{\theta} = \dot{\phi}\dot{\psi}(I_z - I_x) + I_r\Omega_r\dot{\phi} - k_2\dot{\theta}^2 + \tau_\theta
I_z\ddot{\psi} = \dot{\phi}\dot{\theta}(I_x - I_y) - k_3\dot{\psi}^2 + \tau_\psi
m\ddot{x} = -k_4\dot{x} + (\cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi)F
m\ddot{y} = -k_5\dot{y} + (\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi)F
m\ddot{z} = -k_6\dot{z} - mg + (\cos\phi\cos\theta)F    (3)

Therefore, the total force and the torques can be expressed in terms of the rotor speeds generated by the quadrotor as follows [11]:

\begin{pmatrix} F \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{pmatrix} = \begin{pmatrix} b & b & b & b \\ 0 & -lb & 0 & lb \\ -lb & 0 & lb & 0 \\ -k & k & -k & k \end{pmatrix} \begin{pmatrix} \omega_1^2 \\ \omega_2^2 \\ \omega_3^2 \\ \omega_4^2 \end{pmatrix}    (4)

where k and b are positive constants depending on the blade geometry and the air density.
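A minimal numeric sketch of this mixing matrix (Eq. 4) is given below; the arm length l is not listed in Table 1, so the value used here is an illustrative assumption, as are the rotor speeds.

import numpy as np

def forces_from_rotor_speeds(omega, b, k, l):
    # Map squared rotor speeds to total thrust and body torques (Eq. 4)
    w2 = np.asarray(omega, float) ** 2
    M = np.array([[ b,    b,    b,    b  ],
                  [ 0., -l*b,   0.,  l*b ],
                  [-l*b,  0.,  l*b,   0. ],
                  [-k,    k,   -k,    k  ]])
    F, tau_phi, tau_theta, tau_psi = M @ w2
    return F, tau_phi, tau_theta, tau_psi

# Example with the b and k values of Table 1 and an assumed arm length of 0.25 m
print(forces_from_rotor_speeds([300, 300, 300, 300],
                               b=2.9842e-3, k=3.2320e-2, l=0.25))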

3

The quadrotor control

The general control architecture of the quadrotor system is shown in Fig. 2. The outer loop controls the quadrotor position (variables x and y); this loop generates the total thrust and the desired roll and pitch angles. The inner loop is responsible for the orientation of the quadrotor; it takes the desired Euler angles as input and produces the torques (roll, pitch and yaw torques) as output. The system outputs (thrust and torques) are then used to determine the speed of each rotor.


Fig. 2. General control of a quadrotor.

3.1

Proportional derivative controller of the quadrotor

Most industrial processes use the PID controller; it offers acceptable performance in terms of stability, disturbance rejection, static error cancellation and response speed. These controllers are easy to tune and simple to implement compared to other controllers. Applying the PD controller to the quadrotor therefore allows its altitude and its position (state variables x, y) to be stabilized. The commands generated by the PD controller are as follows:

u_2 = k_{p\phi} e_1 + k_{d\phi}\dot{e}_1
u_3 = k_{p\theta} e_3 + k_{d\theta}\dot{e}_3
u_4 = k_{p\psi} e_5 + k_{d\psi}\dot{e}_5
u_x = k_{px} e_7 + k_{dx}\dot{e}_7
u_y = k_{py} e_9 + k_{dy}\dot{e}_9
u_1 = k_{pz} e_{11} + k_{dz}\dot{e}_{11}    (5)

where the e_i are the tracking errors of the state variables, i.e. the angles φ, θ, ψ and the positions x, y, z, and the k_{pi} and k_{di} are positive constants.

3.2

Backstepping controller of the quadrotor

Most authors use the complete model of the vehicle to design controllers based on the backstepping technique [4,12,13]. The backstepping control approach is applied to lower-triangular systems and relies on Lyapunov's principle [14]. Using the backstepping procedure and Lyapunov's principle, the position and rotation controllers of the quadrotor are obtained as:

u_2 = \frac{I_x}{l}\left(-e_1 - b_2 e_2 - \dot{\theta}\dot{\psi}(I_y - I_z) + I_r\Omega_r + k_1\dot{\phi}^2 + \dot{\alpha}_1\right)
u_3 = \frac{I_y}{l}\left(-e_3 - b_4 e_4 - \dot{\phi}\dot{\psi}(I_z - I_x) - I_r\Omega_r + k_2\dot{\theta}^2 + \dot{\alpha}_2\right)
u_4 = I_z\left(-e_5 - b_6 e_6 - \dot{\phi}\dot{\theta}(I_x - I_y) + k_3\dot{\psi}^2 + \dot{\alpha}_3\right)
u_x = \frac{m}{u_1}\left(-e_7 - b_8 e_8 + k_4\dot{x} + \dot{\alpha}_4\right)
u_y = \frac{m}{u_1}\left(-e_9 - b_{10} e_{10} + k_5\dot{y} + \dot{\alpha}_5\right)
u_1 = \frac{m}{\cos\phi\cos\theta}\left(-e_{11} - b_{12} e_{12} + k_6\dot{z} + g + \dot{\alpha}_6\right)
\Omega_r = \omega_1 - \omega_2 + \omega_3 - \omega_4    (6)


where the α_i are the virtual commands and the b_i are positive parameters.
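To make the simpler of the two control laws concrete, the following sketch applies the PD law of Eq. 5 to the altitude channel alone. The gain values and the assumption of a constant reference are illustrative; they are not the settings used by the authors.

def pd_altitude(z_ref, z, z_dot, kp=8.0, kd=4.0):
    # PD command u1 = kp*e + kd*de/dt for the altitude channel (Eq. 5)
    e = z_ref - z
    e_dot = -z_dot          # for a constant reference, d(e)/dt = -z_dot
    return kp * e + kd * e_dot

u1 = pd_altitude(z_ref=2.0, z=1.5, z_dot=0.1)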

4

Simulation and discussion

To show the performance of the proposed control algorithms, simulation tests have been performed in the Matlab/Simulink environment. The backstepping controller is applied to the complete quadrotor model, introducing the aerodynamic forces and moments acting on the drone. On the other side, the PD command is applied to the system in the absence of these aerodynamic effects. The physical parameters of the quadrotor model used in the simulation are based on [9] (Table 1).

Table 1. Quadrotor parameters.
Parameter      Value        Parameter      Value
g (m/s²)       9.81         k2 (N/m/s)     5.5670e−4
m (kg)         0.486        k3 (N/m/s)     5.5670e−4
Ix (kg·m²)     3.827e−3     k4 (N/m/s)     5.5670e−4
Iy (kg·m²)     3.827e−3     k5 (N/m/s)     5.5670e−4
Iz (kg·m²)     7.6566e−3    k6 (N/m/s)     5.5670e−4
Ir (kg·m²)     2.8385e−5    b (N·s²)       2.9842e−3
k1 (N/m/s)     5.5670e−4    k (N·m·s²)     3.2320e−2

Through these simulation results (see Fig. 3), it is noted that the backstepping command allows robust tracking of the reference trajectory and guarantees the drone's stability in the presence of disturbances. For the PD command, the variables (x, y, z) follow their references with a tolerated overshoot. It is clear that the backstepping controller outperforms the PD controller.

Fig. 3. The results of the quadrotor positions (x, y, z) and angles (φ, θ, ψ).


5


Conclusion

Nonlinear control algorithms are the solution for the stabilization of helicopters with four rotors. In this document, the Newton-Euler approach is used to obtain the dynamic model of the rotorcraft. Two controllers are proposed: a robust controller based on Lyapunov analysis and a traditional PD controller. The first showed the ability to track the desired flight path with precision and rapidity, while the second controller had difficulties in performance (overshoot and precision) despite the elimination of the aerodynamic effects. The results of the backstepping control showed the efficiency and good performance of our control system.

References 1. Hassanalian, M., Abdelke, A.: Classifications, applications, and design challenges of drones: a review, Progress in Aerospace Sciences, April 2017 2. Hoffmann, G.M., Tomlin, C.J.: Quadrotor Helicopter Flight Dynamics and Control : Theory and Experiment, pp. 1–20 (2007) 3. Bouabdallah, S., Noth, A., Siegwart, R.: PID vs LQ control techniques applied to an indoor micro quadrotor. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Sendai, Japan, vol. 3, pp. 2451–2456 (2004) 4. Li, C., Zhang, Y., Li, P.: Full control of a quadrotor using parameter-scheduled backstepping method: implementation and experimental tests. Nonlinear Dyn. 89(2), 1259–1278 (2017) 5. Castillo, P., Lozano, R., Dzul, A.: Stabilization of a mini rotorcraft with four rotors. IEEE Control Syst. Mag. 25, 45–55 (2005a) 6. Castillo, P., Lozano, R., Dzul, A.E.: Modeling and Control of Mini-Flying Machines. Springer-Verlag, London (2005b) 7. Hoffmann, G., Jang, J.S., Tomlin, C.J.: Multi-agent X4-flyer testbed control design: integral sliding mode vs. reinforcement learning. In: International Conference on Intelligent Robots and Systems, pp. 468–473 (2005) 8. Raffo, G.V., Ortega, M.G., Rubio, F.R.: An integral predictive/nonlinear H∞ control structure for a quadrotor helicopter. Automatica 46(1), 29–39 (2010) 9. Mofid, O., Mobayen, S.: Adaptive sliding mode control for finite-time stability of quad-rotor UAVs with parametric uncertainties. ISA Trans. 72, 1–14 (2017) 10. Vaidyanathan, S., Lien, C.-H. (eds.) Applications of sliding mode control in science and engineering. In: Chang-Hua Conference 2017. LNCS, vol. 709. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-55598-0 11. Shastry, A.K., Kothari, M.: Development of flight dynamics model of quadrotor. In: AIAA Guidance, Navigation, and Control Conference (2018) 12. Bouabdallah, S., Siegwart, R.: Backstepping and Sliding-mode Techniques Applied to an Indoor Micro Quadrotor, pp. 2259–2264, April 2005 13. Madani, T., Benallegue, A.: Backstepping control for a quadrotor helicopter. In: 2006 IEEE/RSJ International Conference on Intelligent Robotic Systems, pp. 3255–3260 (2006) 14. Benaskeur, A.R.: Aspects de lapplication du backstepping adaptatif la commande dcentralise. Universit Laval Qubc (2000)

A New Approach Based on Bat Algorithm for Inducing Optimal Decision Trees Classifiers Ikram Bida(B) and Saliha Aouat Laboratory for Research in Artificial Intelligence (LRIA), Department of Computer Science, University of Science and Technology Houari Boumediene, Bab-Ezzouar, Algiers, Algeria {ibida,saouat}@usthb.dz

Abstract. In data mining, inducing optimal decision tree models for the supervised classification task is well known to be an NP-hard problem. In this context, heuristic-based methods were introduced; such algorithms are local in nature and do not guarantee generating the globally optimal decision tree that covers most of the dataset. Consequently, it is necessary to develop new approaches that navigate the decision-tree search space efficiently in order to find an optimal or near-optimal tree. This research paper presents a novel learning algorithm for constructing optimal decision trees; it is based on a powerful swarm intelligence meta-heuristic, the Bat Algorithm, together with the gain ratio measure. It was tested on 29 datasets from the UCI ML archive. The experimental results achieved were very satisfying and provided better classification accuracy compared to other decision tree inducers. Keywords: Data mining · Supervised classification · Decision trees · Decision trees induction · Meta-heuristics · Swarm intelligence · Bat algorithm

1

Introduction

Supervised classification is an important data mining task beside clustering and association rule mining; its purpose is to build a classifier based on training observations that are already labeled (classified). The constructed model has the ability to predict the class of an unseen instance [1,2]. In order to deal with supervised classification, various standard approaches have been proposed, such as K-Nearest Neighbors, decision trees, Naive Bayes, artificial neural networks, support vector machines and others. In this work, we are interested in decision tree classifiers. Decision tree classifiers are well-known machine learning techniques that describe data as a comprehensible diagram; they are formed as a set of internal nodes (attributes) interconnected through different tests and ending with leaf nodes (classes), and they can easily classify an unseen instance by traversing the constructed tree [3].


In the literature, researchers proved that inducing a fundamental decision tree using brute-force methods is an NP-hard problem [4]; hence heuristic-based algorithms appeared to tackle this problem, with top-down approaches being the most widely used, for instance ID3, C4.5, CART, CHAID, QUEST, and others [1]. As already mentioned, these inducers are local, since the best-attribute selection is optimized locally. Such a limitation naturally drives us to search the tree space globally, seeking optimal trees. Meta-heuristics such as swarm intelligence algorithms have been adapted to formulate the supervised classification task as a combinatorial problem, where the goal is to examine the huge search space effectively without listing all solutions, returning only the best ones — in our case the best decision trees. Swarm intelligence algorithms are nature-inspired meta-heuristics largely employed for hard optimization problems and computational intelligence. They examine and then mimic the social behavior of animals and insects such as ants, bats, fireflies, termites and fish. In nature, swarm individuals are not complete agents and have finite capacities on their own, yet they are self-organized [5], and by interacting and communicating together they exhibit intelligent collective behavior on many hard issues. Over time, different swarm algorithms have been proposed based on specific analogies, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Bee Swarm Optimization (BSO), the Firefly Algorithm (FA) and the Bat Algorithm (BA), etc. This contribution focuses on the Bat Algorithm meta-heuristic. The novel approach we propose modifies and adapts the bat algorithm so that it can handle the problem of inducing optimal decision trees. The remainder of the paper comprises five sections: Sect. 2 lists the related works; Sect. 3 gives a detailed overview of the classic Bat Algorithm basics and principles; Sect. 4 introduces our new Bat Algorithm based approach for tree construction; Sect. 5 validates our approach by presenting the experimental results; and the last section closes the paper with a conclusion and some perspectives.

2

Related Works

Through the literature, heretofore two swarm meta-heuristic have been successfully used to deal with constructing decision trees problem: PSO and ACO. Veenhuis and his colleagues [6] presented Tree Swarm Optimization (TSO) to build optimal binary decision trees for continuous candidates. Generally, it follows PSO algorithm structure to optimize generic trees. Years later, TSO was modified to use multi-objective function (MOPSO) [7]. Dynamic decision trees [8] were introduced as a PSO based approach to induce decision trees, the process of induction follows an improved version of classic ID3 technic, so instead of using normal information gain employed in ID3, they suggested a new measure called “Actual information gain” that gave a better classification accuracy.


Otero introduced Ant-Tree-Miner, an ACO-based approach for inducing N-ary decision trees; slight modifications were applied to the classic ACO, and the C4.5 gain ratio measure was used as the heuristic [9]. An extension of Ant-Tree-Miner was proposed in Ant-Tree-MinerM [10] for learning multi-tree classification models. ACDT is another example of an ACO-based approach dedicated to learning binary trees [11]; it only dealt with discrete attributes. ACDT uses ACO's probability rule to select attributes and CART's twoing criterion as a heuristic. Later on, ACDT was improved by co-learning [12]. In order to also handle continuous attributes, cACDT was introduced as an enhanced version of ACDT that additionally performs a direct inequality test on the real values of continuous attributes [13]. The majority of these approaches have been tested on the UCI machine learning repository, and the conducted experiments mostly showed better classification accuracy than classical methods. To the best of our knowledge, using the Bat Algorithm to construct optimal decision trees is an unexplored research area, although since its proposal and implementation it has been successfully applied to different optimization problems in many fields [14,15].

3

Classic Bat Algorithm

The bat algorithm is a bio-inspired population meta-heuristic proposed by Yang in 2010. It relies upon the echolocation system of microbats, a sort of sonar they use to detect prey, move to a food source already found by the swarm, and locate their roosts [16]. For a simple implementation of the bat algorithm, Yang adopted some approximating rules. For instance, bat individuals fly stochastically with a velocity (speed) v_i at a position x_i with a fixed minimum frequency f_min, varying wavelength λ and loudness A_0 to chase prey. Depending on the closeness of their target, the bats can naturally adjust the frequency of the emitted pulses and their rate r_i ∈ [0, 1]. The loudness varies from a large positive A_0 to a minimum constant A_min, and the bat frequencies f lie in a range [f_min, f_max]. Initially, the bat algorithm defines an objective function and then generates a population of k bats. For each bat, a set of parameters is defined and initialized: velocity v_i, frequency f_i, pulse rate r_i and loudness A_i. Afterwards, the bats of each generation move in the search space by updating their velocities v_i and positions x_i using the following equations:

f_i = f_{min} + (f_{max} - f_{min})\,\beta    (1)

v_i^t = v_i^{t-1} + (x_i^{t-1} - x_*)\,f_i    (2)


x_i^t = x_i^{t-1} + v_i^t    (3)

where β is a random parameter in [0, 1] and x_* is the current global best location; v_i^t and x_i^t represent the bat's velocity and position at time step t. To generate a local solution around the current best, bats apply a random walk according to the following equation:

x_{new} = x_{old} + \varepsilon A^t    (4)

where ε is a stochastic value in [−1, 1] and A^t is the average loudness of all bats at time t. The loudness of the bats is decreased and the pulse emission rate is increased according to the following equations:

A_i^{t+1} = \alpha A_i^t    (5)

r_i^{t+1} = r_i^0\,[1 - \exp(-\gamma t)]

(6)

Where α, γ are constants, α ∈ [0, 1] and γ > 0.
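The following short sketch shows one movement step of these update rules on a continuous search space (the setting of the original bat algorithm, before the tree-specific adaptation introduced below). The function and parameter values are illustrative.

import numpy as np

def bat_step(x, v, x_best, f_min, f_max, rng):
    # One movement step of the classic bat algorithm (Eqs. 1-3)
    beta = rng.random(len(x))                   # beta in [0, 1]
    f = f_min + (f_max - f_min) * beta          # Eq. 1
    v = v + (x - x_best) * f                    # Eq. 2
    return x + v, v                             # Eq. 3

rng = np.random.default_rng(0)
x, v, x_best = rng.normal(size=5), np.zeros(5), np.zeros(5)
x, v = bat_step(x, v, x_best, f_min=0.0, f_max=2.0, rng=rng)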

4

The Proposed Bat Algorithm Based Approach for Trees Constructing

In this section, we present "Bat-Tree-Constructor", our novel approach designed for constructing optimal decision trees.

4.1

Bat-Tree-Constructor Mechanism and Basics

The proposed Bat-Tree-Constructor generally follows a similar structure to the classical bat algorithm, although basic changes were introduced to address the decision-tree induction problem. Our algorithm constructs N-ary trees and can deal with both nominal and continuous attributes. On continuous attributes, we perform an inequality discretization procedure, as follows:

D_{att_i}(x) = \begin{cases} 1 & \text{if } att_{ij}(x) \le c \\ 0 & \text{otherwise} \end{cases}    (7)

where D is a dataset, att_i is the i-th attribute of the dataset D, att_{ij}(x) is the value of instance x for the j-th condition of the i-th attribute, and c is the cut-off (threshold) value. Furthermore, we have chosen accuracy as the fitness function to evaluate the constructed solutions (decision trees), as described below:

Accuracy = \frac{R - e}{R}    (8)

where R is the number of training records and e is the number of misclassified records. The roles of the r_i and A_i parameters remain exactly as in the classic bat algorithm. In addition, we propose new equations for the bats' velocities v_i and positions x_i, aimed at decision tree construction.
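Both of these definitions are simple enough to express directly; the sketch below is an illustration with made-up values, not part of the authors' implementation.

def accuracy_fitness(n_train_records, n_misclassified):
    # Fitness of a candidate tree (Eq. 8)
    return (n_train_records - n_misclassified) / n_train_records

def discretize(value, cutoff):
    # Inequality discretization of a continuous attribute value (Eq. 7)
    return 1 if value <= cutoff else 0

print(accuracy_fitness(200, 14), discretize(3.7, cutoff=5.0))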


The Proposed Velocity. We relate v_i to a similarity distance between bat i's constructed tree x_i and the swarm's global-best tree x_*; in other words, how close the bat's induced tree is to the global-best tree. We represent this distance by subtracting the Hamming distance from the maximum number of nodes at a given level l of the two trees:

v_i^t = \max\{Nb\_Nodes(x_i)_l,\; Nb\_Nodes(x_*)_l\} - HammingDist_l(x_i, x_*)    (9)

We define the Hamming distance between two bats at a level l as the number of non-corresponding elements between the two decision trees (x_i, x_*); a computational example is demonstrated in Sect. 4.3. Normalizing the v_i^t values is an essential step that balances the previous formula and avoids letting a single criterion drive the attribute selection. To this end, we divide the obtained velocity by the maximum number of nodes, as follows:

v_i^t\,Norm = \frac{v_i^t}{\max\{Nb\_Nodes(x_i)_l,\; Nb\_Nodes(x_*)_l\}}    (10)

The Proposed Position. The next movement x_i^t of bat i has been adapted so that it suits the problem being solved. It decides which attribute is the next appropriate one to select and add to the tree being induced; it is calculated for all remaining attributes at a time step t, after which bat i chooses the attribute that maximizes the expression. The proposed x_i^t metric combines the C4.5 gain ratio heuristic and the normalized velocity (Eq. 10):

x_i^t = \arg\max\{(GainRatio(Attributes))^{\mu} + (v_i^t\,Norm)^{\nu}\}    (11)

where μ and ν are random numbers in [0, 1], introduced to give both measures (gain ratio and distance similarity) a chance to drive the selection. Algorithm 1 outlines the proposed Bat-Tree-Constructor, where each bat of the swarm constructs its solution tree by following a divide-and-conquer approach, with the difference that attributes are chosen based on heuristic information (gain ratio) and similarity distance. The movement equation (Eq. 11) is applied to decide the next visited attribute vertex. We point out that, when comparing built trees, a tree is judged to be better than another only when its accuracy is greater and its number of nodes is less than or equal to that of the other. This condition guarantees a balance between tree accuracy and size.

4.2

Solution Encoding

For our implementation, we represented decision trees xi by a vector of quadruples, each entry in the vector has the following structure: (Edgeij , Level, Attribute i, Leaf node). The four elements of the quadruplet entry are described and explained in Table 1.


A root node entry is given as: (Null, 0, Attribute i, Null).

Input: Train & Test instances, Attributes list, Classes set.
Output: Best discovered tree.
Fitness function F: Accuracy measure (Eq. 8)
foreach Bat i in the k population do
    Initialize loudness A0 ∈ [0, 0.7] and pulse rate ri ∈ [0, 0.4]
end
GBest_Tree = Null
while Max iteration not reached do
    foreach Bat in the k population do
        xi = Construct a decision tree using Eqs. 10, 11
        if rand > ri then
            Select a tree x among the current iteration's best trees
            Prune the selected tree x
        end
        xi_rand = Generate a new tree by flying randomly
        if (rand < Ai) and (F(x*) < F(xi_rand)) then
            Accept the new random tree solution xi_rand
            Increase ri and decrease Ai using Eqs. 5 and 6
        end
    end
end
Rank the bats' trees and return the best solution x* (GBest_Tree)

Algorithm 1: The Proposed Bat-Tree-Constructor Algorithm

Table 1. An entry structure.

Element        Signification
Edge_ij        The edge between a node attribute i and one of its conditions j
Level          A tree level
Attribute i    The attribute i that will be attached to the branch's edge
Leaf node      A leaf node that will be joined to the edge; this information helps keep track of whether a leaf node has been reached

4.3

As mentioned earlier, a bat’s hamming distance is the number of noncorresponding elements between its decision tree and the global best at a time t. The following example (Fig. 1) will clear any ambiguities. Taking into account two decision trees x1 and x∗ , we calculate the hamming distance of the tree x1 compared to the best tree x∗ at different levels of comparison: 0, 1 and 2.

Optimal Decision Trees Classifiers Induction

Att1 test1 Level 1

Att1 test2 Att2

Leaf1 Att2 test1

Level 2

x∗ Att1

x1 Att1

Level 0

Leaf2

637

Att1 test1

Att1 test2 Att3

Leaf1 Att2 test2

Att3 test

Leaf3

Leaf2

Fig. 1. Two different constructed decision trees: x1 and x∗

– At Level 0: HammingDist0 (x1 , x∗ ) = 0. There is no difference in elements. – At Level 1: HammingDist1 (x1 , x∗ ) = 0 + 1 = 1. We take into account the previous level, there is only one different element, encoded as: (Att1: Att1 test2, 1, Att2, Null). – At Level 2: HammingDist2 (x1 , x∗ ) = 1 + 2 = 3. There are two different elements at level 2, plus the one found at level 3. Encoded as: (Att1: Att1 test2, 1, Att2, Null), (Att2: Att2 test1, 2, Null, Leaf2), (Att2: Att2 test2, 2, Null, Leaf3).

5

Experimentations and Results

A set of experiments were performed on the proposed learning algorithm to validate its efficiency. Bat-Tree-Constructor was tested on 29 datasets from the UCI learning archive. The algorithm was compared firstly against Ant-Tree-Miner swarmbased algorithm and then with other well-known inducers: C4.5, CART, and Random-Tree. In trials, we used Weka-3.6.9 implementation for C4.5, Cart, and RandomTree. Furthermore for Ant-Tree-Miner experiments, we compared against the results obtained in papers [9,10]. The datasets were divided into 10-fold stratified cross-validation partitions to ensure diversity of instances in each generation. For each fold, we conducted different test architectures, where we varied: the swarm size in {20, 80, 150, 200} and iterations number in {100, 200, 300}. After tests, we fixed Bat-tree-Constructor parameters at best architecture, as follows: Swarm size at 80, Iterations at 200, α and γ both at 0.9. We present in Table 2 the experimental results obtained for the best architecture in term of Average Tree Accuracies and Nodes Number.

638

I. Bida and S. Aouat

Table 2. A comparative study between Bat-Tree-Constructor and other inducers in term of average accuracy (Acc) and average nodes number DataFile

Swarm-based Technics Bat-TreeConstructor

haberman

Classical Technics

Ant-Tree-Miner C4.5

CART

Acc

Size

Acc

Acc

81.15 %

12.5



72.87 %

25

73.52 %

15 66.01 %

213

7.5



80.30 %

21

84.84 %

17 80.30 %

45

hayes-roth-test 83.33 %

Size Acc

Random-Tree Size Acc

Size

iris

99.02 %

7

94.9 %

96 %

9

94 %

5 92 %

17

labor

96.45 %

4.5



73.68 %

5

78.94 %

9 66.66 %

30

balance-scale

89.60 %

100.33 74.5 %

76.64 %

103

77.28 %

59 50.23 %

349

heart-statlog

93.72 %

65



76.66 %

35

76.66 %

11 76.29 %

331

zoo

100 %

15

100 %

92.07 %

17

40.59 %

2 62.37 %

101

ecoli

92.06 %

66.33

83.9 %

84.22 %

43

81.54 %

21 78.27 %

137

glass

92.22 %

57.2

71.0 %

66.82 %

59

66.35 %

19 70.09 %

97

breast-w

99.28 %

60.33

94.0 %

94.56 %

27

93.84 %

17 94.56 %

91

6 42.78 %

195

flags

77.50 %

29.33



59.27 %

69

35.56 %

lymph

96.66 %

51.66

77.8 %

77.02 %

34

72.29 %

17 75 %

134

breast-cancer

85.71 %

28.33

94.56 %

75.52 %

6

70.62 %

32 66.78 %

444

autos

92.67 %

66.66



81.95 %

69

63.41 %

47 76.58 %

168

heart-h

96.62 %

102

67.45 %

80.95 %

10

77.55 %

12 76.87 %

172

dermatology

98.64 %

30.33

95.35 %

93.98 %

40

91.53 %

29 87.43 %

106

cmc

55.73 %

210



52.13 %

263

52.81 %

139 46.63 %

1663

credit-a

90.75 %

40.33

86.64 %

86.08 %

42

85.65 %

27 80.57 %

425

heart-c

95.15 %

36

54.46 %

77.55 %

51

76.56 %

16 74.25 %

175

diabetes

81.81 %

62.33



73.82 %

39

75.26 %

49 68.09 %

271

audiology

91.23 %

50.66

80 %

77.87 %

54

72.56 %

28 65.48 %

365

car

90.01 %

143

93.8 %

92.36 %

182

87.67 %

149 83.15 %

586

Colic

92.12 %

6.66



85.32 %

6

84.78 %

12 79.07 %

405

ionosphere

98.14 %

18.33

90.21 %

91.45 %

35

89.45 %

9 87.74 %

61

annealing

84.99 %

60.5

95.2 %

90.98 %

71

90.53 %

72 93.20 %

812

cylinder-bands

81.47 %

50.5

73.92 %

57.77 %

1

59.07 %

credit-g

91.66 %

49.33

71.3 %

70.5 %

arrhythmia

70.5 %

107.5



64.38 %

kr-vs-kp

93.95 %

107.5



99.43 %

430 63.51 %

4360

71.8 %

96 67.1 %

1073

99

69.46 %

29 51.10 %

311

59

98.99 %

67 96.27 %

699

140

As shown in the graph (Fig. 2), Bat-Tree-Constructor demonstrated clear improvements in average accuracy on most datasets (23 of the 29) compared to the outcomes obtained with the Ant-Tree-Miner, C4.5, CART and Random-Tree algorithms. We also notice varying average decision tree sizes for Bat-Tree-Constructor, explained by the fact that we introduced the pruning phase as a random walk.



Fig. 2. Average accuracies results of the comparative study

6

Conclusion

The idea introduced in this paper is, we believe, both innovative and previously unexplored: we have presented the first approach based on the Bat Algorithm for the optimal decision tree construction problem. Evaluating and testing the proposed approach on real nominal and continuous datasets confirmed its effectiveness. For future research, it would be interesting to develop a parallel adaptation of Bat-Tree-Constructor for a faster version. It would likewise be worthwhile to investigate strategies for comparing decision trees other than the Hamming distance we suggested. Performing a true random walk around the constructed trees may also lead the swarm to produce better solutions. Overall, the conducted experimentation has confirmed that the Bat Algorithm is indeed a powerful swarm intelligence meta-heuristic that can be adjusted and adapted to many complex problems without losing its strength. Acknowledgments. We would like to thank Professor Habiba Drias of the University of Science and Technology Houari Boumediene for sharing her expertise and for the beneficial explanations she provided during the doctoral courses on swarm intelligence.


References 1. Cheng, S., Liu, B., Ting, T.O., Qin, Q., Shi, Y., Huang, K.: Survey on data science with population-based algorithms. Big Data Anal. 1(1), 3 (2016) 2. Dhaenens, C., Jourdan, L.: Metaheuristics for Big Data (Computer Engineering Series: Metaheuristics Set). Wiley-ISTE (2016) 3. Batra, M., Agrawal, R.: Comparative analysis of decision tree algorithms. In: Panigrahi, B.K., Hoda, M.N., Sharma, V., Goel, S. (eds.) Nature Inspired Computing, pp. 31–36. Springer, Singapore (2018) 4. Hancock, T., Jiang, T., Li, M., Tromp, J.: Lower bounds on learning decision lists and trees. Inf. Comput. 126(2), 114–122 (1996) 5. Gandomi, A., Yang, X.-S., Talatahari, S., Alavi, A.: Metaheuristic algorithms in modeling and optimization. In: Metaheuristic Applications in Structures and Infrastructures, pp. 1–24 (December 2013) 6. Veenhuis, C., Koppen, M., Kruger, J., Nickolay, B.: Tree swarm optimization: an approach to pso-based tree discovery. In: The 2005 IEEE Congress on Evolutionary Computation, vol. 2, pp. 1238–1245. IEEE (2005) 7. Fieldsend, J.: Optimizing decision trees using multi-objective particle swarm optimization. In: Swarm Intelligence for Multi-objective Problems in Data Mining, pp. 93–114 (2009) 8. Li, X.H., Li, L., Fu, X.: The application of improved dynamic decision tree based on particle swarm optimization during transportation process. In: Advanced Materials Research, vol. 936, pp. 2247–2253. Trans Tech Publ. (2014) 9. Otero, F.E.B., Freitas, A.A., Johnson, C.G.: Inducing decision trees with an ant colony optimization algorithm. Appl. Soft Comput. 12(11), 3615–3626 (2012) 10. Salama, K.M., Otero, F.E.B.: Learning multi-tree classification models with ant colony optimization (2014) 11. Boryczka, U., Kozak, J.: Ant colony decision trees–a new method for constructing decision trees based on ant colony optimization. In: International Conference on Computational Collective Intelligence, pp. 373–382. Springer (2010) 12. Boryczka, U., Kozak, J.: Enhancing the effectiveness of ant colony decision tree algorithms by co-learning. Appl. Soft Comput. 30, 166–178 (2015) 13. Boryczka, U., Kozak, J.: An adaptive discretization in the acdt algorithm for continuous attributes. In: Computational Collective Intelligence. Technologies and Applications, pp. 475–484 (2011) 14. Parpinelli, R.S., Lopes, H.S.: New inspirations in swarm intelligence: a survey. Int. J. Bio-Inspired Comput. 3(1), 1 (2011) 15. Xin She Yang and Xingshi He: Bat algorithm: literature review and applications. Int. J. Bio-Inspired Comput. 5(3), 141 (2013) 16. Yang, X.-S.: A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, Heidelberg (2010)

A New Parallel Method for Medical Image Segmentation Using Watershed Algorithm and an Improved Gradient Vector Flow Hayat Meddeber(&) and Belabbas Yagoubi(&) Department of Computer Science, University of Oran1 Ahmed Ben Bella, Oran, Algeria [email protected], [email protected]

Abstract. Image segmentation is a fundamental task in image analysis, responsible for partitioning an image into multiple sub-regions based on a desired feature. This paper presents a parallel approach for fast and robust object detection in medical images. First, the proposed approach decomposes the image into multiple resolutions with a Gaussian pyramid algorithm. Object detection in the higher pyramid levels is then performed in parallel by a hybrid model combining the watershed algorithm with the GGVF (Generic Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models, where the initial contour is subdivided into sub-contours that are independent from each other; each sub-contour converges independently, in parallel. The last step of our approach projects the sub-contours detected in the low-resolution image onto the high-resolution image. The experimental results were obtained using a number of synthetic and medical images, and the method's speed is demonstrated by a runtime comparison with a conventional method. Keywords: Object detection · Medical image · Parallel execution · Multi-resolution · Watershed algorithm · Gradient vector flow

1 Introduction
The principal objective of image segmentation is to locate and delimit the entities present in the image. Many segmentation methods have been proposed, among which are the active contours or "snakes" introduced by Kass, Witkin and Terzopoulos [1]. Since their publication, these deformable models have received tremendous attention in the research community [2–4]. However, classical snakes have two main disadvantages: first, they cannot easily handle boundary concavities and are very sensitive to noise; second, their convergence to the object boundaries takes a long time. To solve the first problem, a number of methods have been proposed. Xu and Prince [5] proposed a snake using gradient vector flow (GVF) as a new external force, where a vector diffusion equation is introduced in order to diffuse the gradient of the edge map extracted from the images. Xu and co-workers [6] introduced two weighting coefficients that can vary over the image domain in the iteration equation of the GVF external force field; in this way, they obtained a new external force called the generic


gradient vector flow (GGVF). Ning et al. [7] proposed a normal gradient vector flow (NGVF) external force field. The NGVF abandons the tangent diffusion, making it difficult for the NGVF snake model to protect the weak edges of images. In this context, Wang et al. [8] proposed a normally biased gradient vector flow (NBGVF) external force field. NBGVF completely retains the tangent diffusion and is capable of adapting the normal diffusion to the image structure. In [9], a novel generalized gradient vector flow snake model is proposed, combining the GGVF and NBGVF models; the authors adopt a new setting of the coefficients in the form of a convex function to improve the ability to protect weak edges while smoothing noise. In [10], a novel improved scheme was proposed based on the GVF snake; the central idea is to introduce a dynamic balloon force and a tangential force to strengthen the static GVF force. To reduce the computational complexity and to improve the insensitivity to noise, [11] combines the watershed algorithm with the GVF snake model. To solve the second problem (time of convergence), [12] explores the use of a B-spline model of the feature to reduce the state space of the problem; the authors demonstrate a real-time, parallel implementation of B-spline snakes on a network of transputers. [13] developed a new parallel and scalable algorithm for active contour extraction; their algorithm contains only communications between neighboring processors to achieve high-performance parallel execution. [14] presents a new approach for integrating an approximate parallelism constraint in deformable models; the proposed parallel double snakes evolve two contours simultaneously, in order to minimize an energy functional which attracts these contours towards high image gradients and enforces the approximate parallelism between them. In [15], a multi-agent system based on the NetLogo platform is proposed for implementing the parametric active contour model, or snake. In this paper, a parallel object detection approach for medical images is proposed. The first step of our approach uses a multi-resolution analysis to decrease the noise sensitivity of the active contours and to achieve fast convergence of the snake. Then the localization of the contours is done in parallel using a hybrid model combining the watershed algorithm with the GGVF and NBGVF models. The rest of this paper is organized as follows: Sect. 2 presents the technique of our multi-resolution representation. The watershed algorithm is summarized in Sect. 3. In Sect. 4, we describe the GVF snake model. We give the principles and foundations of the proposed approach in Sect. 5. Some experimental results are presented in Sect. 6. Finally, Sect. 7 concludes the paper and proposes future work.

2 Gaussian Pyramid
The Gaussian pyramid (GP) is a simple and effective multi-resolution structure for describing images. In GP, all the layers can be obtained iteratively by the following equations [17]:

G_0(x, y) = I(x, y)
G_l(x, y) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, G_{l-1}(2x + m,\, 2y + n)    (1)

where w is a 5 × 5 Gaussian kernel. The original image G_0 is taken as level 0 of the decomposition, that is, the bottom of the pyramid. For the remaining levels, the data within a level l are obtained by (low-pass) filtering the data within the previous level l − 1. The image can therefore be decimated, after filtering, by a factor of 2 in each spatial dimension [18].
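A minimal sketch of this construction is shown below. The 5 × 5 binomial kernel is a common choice for w(m, n), assumed here since the excerpt does not give the exact weights; the function name and image sizes are also illustrative.

import numpy as np
from scipy.ndimage import convolve

def gaussian_pyramid(image, levels=2):
    # Eq. 1: 5x5 smoothing followed by decimation by 2 in each direction
    w1 = np.array([1., 4., 6., 4., 1.]) / 16.0
    w = np.outer(w1, w1)                       # separable 5x5 kernel
    pyramid = [np.asarray(image, float)]
    for _ in range(levels):
        smoothed = convolve(pyramid[-1], w, mode="reflect")
        pyramid.append(smoothed[::2, ::2])     # keep every second row/column
    return pyramid

levels = gaussian_pyramid(np.random.rand(256, 256), levels=2)
print([lvl.shape for lvl in levels])           # [(256, 256), (128, 128), (64, 64)]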

3 Watershed Transform
The watershed transform [19] is a popular segmentation method coming from the field of mathematical morphology. The intuitive description of this transform is quite simple: if we consider the image as a topographic relief, where the height of each point is directly related to its gray level, and consider rain gradually falling on the terrain, then the watersheds are the lines that separate the "lakes" that form. Generally, the watershed transform is computed on the gradient of the original image, so that the catchment basin boundaries are located at high-gradient points. The watershed transform presents some advantages:
1. The watershed lines always correspond to the most significant edges between the markers, so this technique is not affected by lower-contrast edges, due to noise, that could produce local minima and thus erroneous results in energy-minimization methods.
2. Even if there are no strong edges between the markers, the watershed transform always detects a contour in the area; this contour will be located on the pixels with the highest contrast [20].
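For reference, a marker-controlled watershed of the kind used later in the proposed approach can be sketched with scikit-image as below. The intensity thresholds used to build the background and foreground markers are illustrative assumptions, not the authors' values.

import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def initial_contours(image, low=30, high=150):
    # Watershed on the gradient image, seeded by simple intensity markers
    gradient = sobel(image.astype(float))
    markers = np.zeros(image.shape, dtype=int)
    markers[image < low] = 1        # background marker
    markers[image > high] = 2       # foreground (object) marker
    return watershed(gradient, markers)   # label image; boundaries seed the snakes

labels = initial_contours(np.random.randint(0, 255, (128, 128)))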

4 GVF-Snake
GVF [5] is an external force field (u(x, y), v(x, y)) constructed by diffusing the edge-map gradient vectors (edge forces) (f_x, f_y) away from the edges into the homogeneous regions, while keeping the constructed field as close as possible to the edge force near the edges. This is achieved by minimizing the following energy functional:

E(U, V) = \frac{1}{2} \iint \left[ \mu\,(U_x^2 + U_y^2 + V_x^2 + V_y^2) + (f_x^2 + f_y^2)\left((U - f_x)^2 + (V - f_y)^2\right) \right] dx\, dy    (2)

where µ is a non-negative parameter expressing the degree of smoothness.


4.1


GGVF Model

In [6], Xu and co-workers introduced a new external force called the generic gradient vector flow (GGVF). This model addresses the problem that GVF snakes can hardly converge towards long, narrow concave regions and are not very robust to noise. The evolution equation of this external force field is:

V_t^{ggvf}(x, y, t) = g(|\nabla f|)\,\nabla^2 V(x, y, t) - h(|\nabla f|)\,[V(x, y, t) - \nabla f]

ð3Þ

g(|\nabla f|) = e^{-|\nabla f|/K}

ð4Þ

h(|\nabla f|) = 1 - e^{-|\nabla f|/K}

ð5Þ

where the parameter K determines the weighting between the smoothness term and the data term.

4.2

NBGVF Model

In [8] Wang et al. proposed a normally biased gradient vector flow (NBGVF) external force field. NBGVF completely retains the tangent diffusion and is capable of adapting normal diffusion to image structure. NBGVF provides a solution to the weak edge protection problem. In [9] the authors combine GGVF and NBGVF to propose a novel external force model. The improved version of the external force is defined as a vector field, and it can be obtained by using the following energy functional: ZZ E ðV Þ ¼

gðx; yÞðgsðx; yÞVNN þ hsðx; yÞVTT Þdxdy þ hðx; yÞðV  rf Þdxdy

hsð f Þ ¼

8 <

3 f : 8s3

1 5f þ 8s þ 0

1 2

ðjej  sÞ ð0\jej\sÞ ðjej ¼ 0Þ

gsð f Þ ¼ 1  hsð f Þ

ð6Þ

ð7Þ ð8Þ

where V_NN and V_TT denote the second-order derivatives along the normal and tangent directions, and g(|∇f|) and h(|∇f|) denote the coefficients of the smoothness and data terms in Eq. (6).
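To give a sense of how such a diffused external force field is computed in practice, the sketch below iterates the gradient-descent form of the plain GVF functional of Eq. 2 (u_t = μ∇²u − (u − f_x)(f_x² + f_y²), and likewise for v). It is a minimal sketch of the basic GVF field only, not of the full GGVF/NBGVF variants combined by the authors; the step size, number of iterations and the random edge map are illustrative assumptions.

import numpy as np

def gvf(fx, fy, mu=0.2, iters=200, dt=0.25):
    # Gradient-descent iteration of the GVF Euler-Lagrange equations
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(iters):
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += dt * (mu * lap_u - (u - fx) * mag2)
        v += dt * (mu * lap_v - (v - fy) * mag2)
    return u, v

fy_, fx_ = np.gradient(np.random.rand(64, 64))   # edge-map gradients as a stand-in
u, v = gvf(fx_, fy_)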

5 Proposed Approach

There are four major steps in the proposed method: (1) multi-resolution decomposition, (2) initial segmentation in the low-resolution image by the watershed method, (3) parallel object segmentation by an improved GVF snake "IGVF", and (4) projection of the snake points from the low-resolution image to the high-resolution image.

5.1 Multi-resolution Decomposition

The multi-resolution decomposition is performed by a Gaussian pyramid (see Sect. 2). In this work, we use a pyramid of at most two levels.

5.2 Watershed Method

In the proposed method, the initial segmentation is done in the low-resolution image by the watershed algorithm. Marker-controlled watershed segmentation follows this procedure (a sketch is given below):
1. First the gradient of the image is computed, and then this is given as input to the marker-controlled watershed segmentation.
2. Compute foreground markers. These are connected blobs of pixels within each of the objects.
3. Compute background markers. These are pixels that are not part of any object.
4. Compute the watershed transform.
The output contours of the marker-controlled watershed segmentation will be the initial contours for the IGVF active contours.
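A minimal sketch of this marker-controlled pipeline, assuming scikit-image is available; the intensity thresholds used to derive the foreground and background markers are illustrative assumptions, not the rules used by the authors.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_controlled_watershed(image, low=30, high=150):
    """Steps 1-4 above: gradient, background/foreground markers, watershed."""
    gradient = sobel(image)                       # 1. gradient of the image
    markers = np.zeros_like(image, dtype=np.int32)
    markers[image < low] = 1                      # 3. background markers (assumed rule)
    markers[image > high] = 2                     # 2. foreground markers (assumed rule)
    labels = watershed(gradient, markers)         # 4. watershed transform
    return labels                                 # label boundaries initialize the IGVF snake
```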

5.3 Parallel Object Segmentation

A. Division of the Initial Contour
Once the initial contour is obtained by the watershed method, we divide it into two sub-contours. It is possible to extend the number of sub-contours to four or eight. First, four points A, B, C and D are created to define a rectangle surrounding the initial contour. Then we split the rectangle into two, four or eight sub-rectangles by connecting the points (A–C), (B–D), (e–g) and (f–h), as shown in Fig. 1.

Fig. 1. Four sub-contours.

We then add sampled points above and below the drawn line at equal distances. Each sub-contour is assigned to one thread, which then runs in parallel to converge towards the object boundary.
B. IGVF Evolution
After dividing the initial contour, the sub-contours begin to converge in parallel by the IGVF-Snake method [9], combining the GGVF (Generalized Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models. The convergence of the


active contour is fast at low resolution due to the elimination of details. The energy function used is given by the following formula:

E(C) = \sum_{i=1}^{N} \left( \alpha\, E_{continuity}(p_i) + \beta\, E_{curvature}(p_i) + \gamma\, E_{IGVF}(p_i) \right)    (9)

where p_i, i = 1..N, are the snake points and α, β, γ are the coefficients attached to each energy. The internal energy is calculated from two forces called continuity and curvature [15]. The continuity force causes the snake points to become more equidistant. The curvature force prevents the active contour from containing isolated points. The external force field can be obtained by minimizing Eq. (6). The Euler–Lagrange equation of this energy functional can be written as:

g \cdot (gs \cdot V_{NN} + hs \cdot V_{TT}) + h \cdot (V - \nabla f) = 0    (10)

The evolution equation of this external force can be written as:

V_t(x, y, t) = g(|\nabla f|) \left( gs(f)\, V_{NN}(x, y, t) + hs(f)\, V_{TT}(x, y, t) \right) - h(|\nabla f|)\, [V(x, y, t) - \nabla f]    (11)

The minimization of the energy functional E(C) is performed by iterating the Fast Greedy algorithm [21]. For each point of the snake, and for a 3 × 3 neighborhood, the Fast Greedy algorithm can be summarized by the following steps (a sketch is given after this list):
1. Calculate the energy E(C) for the snake point and its four cardinal neighbors.
2. Normalize the energies calculated in step 1.
3. If one of the four cardinal neighbors has an energy less than or equal to the energy of the snake point, it is not necessary to examine the four diagonal neighbors; go to step 5.
4. Calculate the energy E(C) of the diagonal neighbors.
5. Move the snake point to the neighbor that minimizes the energy.
When the sub-contours finish their evolution, a projection step is performed in parallel, which moves the IGVF-Snake points from the low-resolution images to the high-resolution images.
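The sketch below illustrates the parallel evolution principle of Eq. (9) with the cardinal-first test of the Fast Greedy steps above. It is an illustrative simplification, not the authors' code: the energy terms, weights and helper names are assumed only for the example.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

CARDINALS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
DIAGONALS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def point_energy(p, prev, nxt, ext, alpha=1.0, beta=0.15, gamma=1.0):
    # E = alpha*E_continuity + beta*E_curvature + gamma*E_external, as in Eq. (9)
    continuity = np.linalg.norm(np.subtract(p, prev))
    curvature = np.linalg.norm(np.subtract(prev, np.multiply(2, p)) + nxt)
    return alpha * continuity + beta * curvature + gamma * ext[p[0], p[1]]

def fast_greedy_step(contour, ext):
    # One pass: test the 4 cardinal neighbours first, and only examine the
    # diagonal neighbours if none of the cardinals improves the energy (step 3).
    new = list(contour)
    for i, p in enumerate(contour):
        prev, nxt = contour[i - 1], contour[(i + 1) % len(contour)]
        best, e_best = p, point_energy(p, prev, nxt, ext)
        for offsets in (CARDINALS, DIAGONALS):
            for dy, dx in offsets:
                q = (p[0] + dy, p[1] + dx)
                if not (0 <= q[0] < ext.shape[0] and 0 <= q[1] < ext.shape[1]):
                    continue
                e = point_energy(q, prev, nxt, ext)
                if e < e_best:
                    best, e_best = q, e
            if best != p:
                break            # a cardinal neighbour already wins: skip diagonals
        new[i] = best
    return new

def evolve_in_parallel(sub_contours, ext, iters=50):
    # Each sub-contour is assigned to one thread and evolved independently.
    def run(contour):
        for _ in range(iters):
            contour = fast_greedy_step(contour, ext)
        return contour
    with ThreadPoolExecutor(max_workers=len(sub_contours)) as pool:
        return list(pool.map(run, sub_contours))
```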

5.4 Projection of IGVF-Snakes Points

To project each IGVF-Snake point of the image at level "i" onto the image at level "i − 1", we followed the algorithm below:
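The following minimal sketch only illustrates one plausible projection step, assuming that a point at level i maps to level i − 1 by doubling its coordinates (the factor-2 decimation of Sect. 2); the exact algorithm used by the authors is not detailed here.

```python
def project_points(points, src_level, dst_level):
    # Map IGVF-Snake points from pyramid level src_level down to dst_level
    # (dst_level < src_level), assuming a factor-2 decimation per level.
    scale = 2 ** (src_level - dst_level)
    return [(y * scale, x * scale) for (y, x) in points]
```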


6 Experimental Results

To study the performance of the proposed approach, we used several types of images: synthetic images, echocardiographic images and biological images. In the Gaussian pyramid decomposition step we used 5 × 5 filters and we fixed the value of the Gaussian kernel parameter at 0.68. For active contour object segmentation in the higher levels of the pyramids, we used a 3 × 3 neighborhood, and our stopping criterion is the stability of 80% of the IGVF-snake points. Figure 2 presents an object with a concave shape. The original image resolution was 337 × 225 pixels and the resolution of the images at level 2 is 84 × 57. The initial contour is divided into two sub-contours which converge in parallel by the IGVF-Snake. We notice that the object contour is well detected; this is due to the simplicity of the image, which contains no noise. The final segmentation combining the watershed and the IGVF-snake shows that IGVF has a large capture range and is able to move into boundary concavities.

Fig. 2. Result for the synthetic image «Concave shape». IGVF-snake parameters: α = 0.8, β = 0.18, γ = 0.9, IGVF {k, s} = {1, 0.5}


Figure 3 shows the pyramid of the biological image "microbe". The decomposition is done on an image of resolution 512 × 512, level 1 contains an image of resolution 256 × 256, and the resolution of the highest level (level 2) is 128 × 128. The object contour is well detected at all levels of the biological image although these images contain noise. This is due to the multi-resolution decomposition, which is based on the application of low-pass filters, and to the application of the watershed transform.

Fig. 3. Result for the biological image «Microbe». IGVF-snake parameters: α = 1, β = 0.13, γ = 1, IGVF {k, s} = {1, 0.2}

Figure 4 presents the pyramid of an echocardiographic image. The original image resolution is 320 × 240 pixels and the resolution of the reduced image is 160 × 120. The echocardiographic images are much noisier than the biological ones. Despite this, we can see that the detection obtained is good for this type of image.

Fig. 4. Result for the echocardiographic image. IGVF-snake parameters: α = 0.6, β = 0.14, γ = 1, IGVF {k, s} = {1, 0.5}

The proposed method is compared with the traditional GVF-snake using a manual initialization (circle) instead of the watershed transform. The experimental results show that the proposed approach is more robust to noise (see Fig. 5), and that it can converge to an object boundary with a large concave shape (see Fig. 6). A temporal comparison was made between the developed approach and a sequential IGVF-Snake that does not use the principle of dividing the initial contour into sub-contours. This comparison is summarized in Table 1. Through these results, we note that the average improvement obtained is 38%. This is due to the parallel execution of the sub-contours and to the use of the «Fast Greedy algorithm», which optimizes the neighborhood window compared to the Greedy algorithm [22].


Fig. 5. Comparative results “G0”. (a) Original image, (b) Traditional GVF-snake using a manual initialization (circle) and (c) Proposed method.

Fig. 6. Comparative results "G0". (a) Original image, (b) Traditional GVF-snake and (c) Proposed method.

Table 1. Improvements of the detection time (in seconds)

Image                            Sequential approach   Proposed approach (Greedy)   Proposed approach (Fast Greedy)
Synthetic image «Concave shape»  04.039                03.632                       03.780
Biological image «Microbe»       07.820                05.873                       05.315
Echocardiographic image          08.550                07.250                       06.822

7 Conclusion and Perspectives

The active contour model is a very efficient image segmentation technique but, as with any other technique, it also has some drawbacks. This article proposes a fast and robust object detection method based on the principle of dividing the initial contour into several sub-contours that converge simultaneously. We developed a hybrid model combining the watershed transform and an improved gradient vector flow for medical images. Our results show that this model can converge into concavities and has a low sensitivity to noise. In the future we want to integrate a multi-agent system that can accelerate the implementation of the proposed approach (by cooperation) and improve its results (by a


competition). We will also define other performance metrics to evaluate our approach and compare it with other existing ones.

References
1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 55, 321–331 (1988)
2. Ashraf, A., Safaai, B., Nazar, Z.: A novel image segmentation enhancement technique based on active contour and topological alignments. Adv. Comput. Int. J. (ACIJ) 2(3), 1–7 (2011)
3. Inderpreet, K., Amandeep, K.: Modified active contour snake model for image segmentation using anisotropic filtering. IRJET 3(5) (2016)
4. Jaiswal, R.S., Sarode, M.V.: A review on role of active contour model in image segmentation applications. IJARCCE 6(5) (2017)
5. Xu, C., Prince, J.: Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 7(3), 359–369 (1998)
6. Xu, C., Prince, J.: Generalized gradient vector flow external forces for active contours. Signal Process. 71, 131–139 (1998)
7. Ning, J., Wu, C., Liu, S., Yang, S.: NGVF: an improved external force field for active contour model. Pattern Recognit. Lett. 28, 58–93 (2007)
8. Wang, Y., Liu, L., Zhang, H., Cao, Z., Lu, S.: Image segmentation using active contours with normally biased GVF external force. IEEE Signal Process. Lett. 17, 875–878 (2010)
9. Zhang, R., Shiping, Z., Zhou, Q.: A novel gradient vector flow snake model based on convex function for infrared image segmentation. Sensors 16, 1756 (2016). https://doi.org/10.3390/s16101756
10. Mengmeng, Z., Qianqian, L., Lei, L., Peirui, B.: An improved algorithm based on the GVF-snake for effective concavity edge detection. J. Softw. Eng. Appl. 6, 174–178 (2013)
11. Jayadevappa, D., Srinivas Kumar, S., Murty, D.: A hybrid segmentation model based on watershed and gradient vector flow for the detection of brain tumor. Int. J. Signal Process. Image Process. Pattern Recognit. 2(3), 29–42 (2009)
12. Curwen, R.M., Blake, A., Cipolla, R.: Parallel implementation of Lagrangian dynamics for real-time snakes. In: Proceedings of the British Machine Vision Conference, pp. 29–35 (1991)
13. Wakatani, A.: A scalable parallel algorithm for the extraction of active contour. In: Proceedings of PARELEC, Trois-Rivières, Quebec, Canada, 27–30 August, pp. 94–98 (2000)
14. Rossant, F., et al.: Parallel double snakes. Application to the segmentation of retinal layers in 2D-OCT for pathological subjects. J. Med. Imaging Health Inform. 48, 3857 (2015)
15. Fekir, A., Benamrane, N.: Segmentation of medical image sequence by parallel active contour. Adv. Exp. Med. Biol. 696, 515–522 (2011)
16. Mostafa, M.G., Tolba, M.F., Gharib, F.F., A-Megeed, M.: A Gaussian multiresolution algorithm for medical image segmentation. In: IEEE 7th International Conference on Intelligent Engineering Systems, INES, Assiut-Luxor, Egypt (2003)
17. Kim, J.B., Kim, H.J.: Multiresolution based watersheds for efficient image segmentation. Pattern Recognit. Lett. 24(1–3), 473–488 (2003)
18. Zhang, T., Mu, D.-j., Ren, S.: Information hiding (IH) algorithm based on Gaussian pyramid and GHM multi-wavelet transformation. Int. J. Digit. Content Technol. Its Appl. 5(3), 210 (2011)


19. Beucher, S., Meyer, F.: The morphological approach to segmentation: the watershed transform. In: Dougherty, E.R. (ed.) Mathematical Morphology in Image Processing, vol. 12, pp. 433–481. Marcel Dekker, New York (1993)
20. Roerdink, J.B.T.M., Meijster, A.: The watershed transform: definitions, algorithms and parallelization strategies. Fundamenta Informaticae 41, 187–228 (2001)
21. Lam, K., Yan, H.: Fast greedy algorithm for active contours. Electron. Lett. 30(1), 21–23 (1994)
22. Williams, D., Shah, M.: A fast algorithm for active contour and curvature estimation. Comput. Vis. Graph. Image Process. Image Underst. 55(1), 14–26 (1992)

Object Detecting on Light Field Imaging: An Edge Detection Approach

Yessaadi Sabrina1,2(&) and Mohamed Tayeb Laskri2

1 The University Center Abdelhafid Boussouf, Mila, Algeria
[email protected]
2 Badji Mokhtar University, UBM, Annaba, Algeria
[email protected]

Abstract. In this paper, we propose a new sensing strategy for object detection and tracking based, first, on the extraction of visual information from a frame image as a discrete object, specifically edges. At this level, we focus on the description of light field images based on the detection and localization of significant intensity variations, i.e. edges. This is performed using the first and the second derivative, well known as the Gradient and the Laplacian. The edge image is then used to detect and localize the object in the image scene by performing a correlation process.

Keywords: Plenoptic imaging · Computational photography · Light fields · Object detecting · Laplacian · Gradient · Edge detection · Object tracking · Correlation

1 Introduction

Object detection has often been used for object segmentation and tracking in computer vision systems. This approach has gained increasing interest due to its social and security potential and plays an important role in visual surveillance applications, pedestrian detection and tracking, image retrieval, and face recognition [14]. Earlier measurement methods suffer from drawbacks due to complex scene properties (geometry, illumination, etc.). To overcome these limits, new optical methods for describing objects have been widely and efficiently adopted in computer vision and machine learning [3, 9]. Quite recently, considerable attention has been paid to plenoptic imaging; this approach captures information on the three-dimensional light field of the scene and records, simultaneously, the location information and the propagation direction of the object light plane [1, 2]. These computed measures are used as efficient descriptors to detect and track moving objects [17]. On the other hand, edge detection is a long-standing problem in computer vision. It is an important phase for several computer vision and image processing techniques, such as pattern recognition, image segmentation, image matching, object detection, and tracking. This method aims to locate pixels whose intensity varies strongly from that of their neighboring pixels [8, 15]. In this work, we propose a novel approach to detect objects in light field images, based on the edge detection approach. The edge detection task is performed by the


computation of the first and the second derivative, also known as the Gradient and the Laplacian operators, respectively. These two operators furnish the edge image that will be used to detect an object based on correlation and matching. For the matching algorithm, we used the correlation technique. Correlation, or what we call in image processing Digital Image Correlation (DIC), has been used as an effective and efficient measurement tool for template matching in image and object deformation. DIC is based on numerical computation performed on digital images to measure image deformation and object displacement. This measurement is evaluated by various correlation criteria known as the coefficients of correlation [10].

2 Related Works

In the graphics community, successful approaches have been proposed to perform the object detection process based on edge detection, under a variety of scene conditions and camera constraints. However, existing solutions rely on many underlying assumptions, due to complex scene properties (geometry, illumination, etc.). Recently, the computer vision community has converged on the use of an optical technique, light field imaging, as a new representation of the image scene. The light field, or the Lumigraph as Gortler et al. called it [1, 7], records the 3D information of the scene in the image plane; this information is represented by the location of the individual light rays, defined by the position coordinates and the propagation direction of the incoming light, given by the incidence angles [1, 7]. As a new technique, the light field has been largely used in several computer vision and image processing technologies. However, few works have addressed object detection using light field images. In their work, Dansereau et al. proposed a framework to detect changes from mobile light field cameras [4, 5]. Shimada, Nagahara and Taniguchi worked on light field imaging for object detection; they generate an arbitrary in-focus plane and an out-of-focus one that fills the background region. Their object detection is also used for video surveillance and is performed by computing the spatio-temporal light field consistency of viewpoints when processing the light rays [12]. In a second work, Shimada et al. presented a new change detection strategy using light rays; they determine an in-focus and an out-of-focus area by generating an active surveillance field. The evaluation of the focuses defines the light ray source, and the temporal changes are captured by updating the light ray background [13]. Boyle [2] extracts the depth information from the 4D plenoptic function for background/foreground separation of light field images. For our contribution, we use light field images for object detection based on edge representation and detection. The detection and tracking of the object are performed with a correlation process. Our model was implemented and tested using all the correlation criteria proposed by Pan [10]. In his survey, Pan reviews the most used correlation criteria and demonstrates that there are three robust and efficient ones: the zero-mean normalized cross-correlation (ZNCC) criterion, the ZNSSD, zero-mean normalized sum of squared


difference criterion, and a parametric zero-mean normalized sum of squared difference (PSSDab) criterion with two unknown parameters a and b [10]. However, we choose the ZNCC as an example for this paper. Our model was tested using the Stanford 3D scanning repository – Synthetic Light Field Archive. This database is a part of the Stanford light field database. It contains 19 subsets, each describing one object. Based on the two-plane light field parameterization, each subset contains image views spaced in either 5 × 5 or 7 × 7 views of different 3D scenes. All light fields are rendered as Portable Network Graphics images (.png).

3 The Light Field Imaging

The light field is a vector function that describes the amount of light flowing in every direction through every point in space. The direction of each ray is given by the 5D plenoptic function, and the magnitude of each ray is given by the radiance. A light field image is described as a field of rays, according to their orientation and their position [1, 7].

3.1 The Plenoptic Function

The radiance along unchanging light rays arranged in 3D space describes a 5D function that we call the plenoptic function. This function describes the 3D coordinates (x, y, z) together with the angles θ and φ. Assuming that the radiance is constant along empty space, we get the 4D plenoptic function, or the 4D light field [1, 16].

3.2 The 4D Light Field

The 4D light field is defined using two distant planes, UV and ST, intersecting the optical axis. The amount of light arriving at the plane UV through the plane ST is defined by the 4-dimensional plenoptic function LF(u, v, s, t) [1, 7] (Fig. 1).

Fig. 1. Light field two-plane parametrization

Where: (u, v) and (s, t) denote, respectively, the distance between the two planes UV and ST from the optical axis [16].


Fig. 2. The original Light field image

4 Edge Detection and Derivative Functions

Differentiation is an appropriate way to determine the edge position. It describes the discontinuities in an image, i.e. the variations of the intensity values in the image [8, 15]. It can be expressed by the calculation of the first derivative (the gradient) or the second derivative (the Laplacian) of the discrete function representing the image intensity [8]. This representation is efficient, as it is an optimal representation of the image based only on its significant descriptors: the edges.

4.1 The Gradient of an Intensity Image

The gradient of an image has a very important property: it always points in the direction of the greatest rate of change of the intensity at the location (x, y) [8, 15].

Mathematical Definition
For a function of intensity values I(x, y), the gradient at position (x, y) is defined as the two-dimensional column vector ∇I, where:

\nabla I \equiv Grad(I) \equiv \left[ \dfrac{\partial I(x, y)}{\partial x}, \dfrac{\partial I(x, y)}{\partial y} \right]    (1)

where:

G_x \equiv \dfrac{\partial I(x, y)}{\partial x}    (2)

G_y \equiv \dfrac{\partial I(x, y)}{\partial y}    (3)

Geometrical Properties of the Gradient
The Gradient Magnitude. The gradient magnitude denotes the rate of change in the direction of the gradient vector at a given location. This magnitude computes the length of the gradient vector [8]:


Mag(x, y) = Mag(\nabla I) = \sqrt{G_x^2 + G_y^2}    (4)

For a more suitable computation, the magnitude in Eq. (4) can be approximated by the absolute values of G_x and G_y, as follows:

Mag(x, y) \approx |G_x| + |G_y|    (5)

The Gradient Orientation. Computing the following angle gives the gradient vector orientation:

\theta = \tan^{-1}\left( \dfrac{G_x}{G_y} \right)    (6)

If G_y is equal to zero at the (x, y) position, the angle θ is equal to π/2 if G_x is positive, and −π/2 otherwise.

Gradient and First Order Derivatives
The approximation of the first derivative can be computed using different kinds of masks. In our work, we have used the Sobel mask, known for its high immunity to noise relative to other ones [8]. Image convolution with this mask is obtained using the following formulas:

\dfrac{\partial I(x, y)}{\partial x} \approx I(x, y) * \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}    (7)

\dfrac{\partial I(x, y)}{\partial y} \approx I(x, y) * \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}    (8)

As a result, we get:

G_x(x, y) = \left( I(x+1, y-1) + 2 I(x+1, y) + I(x+1, y+1) \right) - \left( I(x-1, y-1) + 2 I(x-1, y) + I(x-1, y+1) \right)    (9)

G_y(x, y) = \left( I(x-1, y+1) + 2 I(x, y+1) + I(x+1, y+1) \right) - \left( I(x-1, y-1) + 2 I(x, y-1) + I(x+1, y-1) \right)    (10)

Gradient Vector Properties
Substituting Eqs. (7) and (8) into Eqs. (5) and (6), we get the image magnitude (a) and the image orientation (b) illustrated in Fig. 3. Figure 2 represents the original image. Figure 4 shows the obtained edge image, after a non-maxima suppression process applied to the obtained gradient image.
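As a small illustration of Eqs. (5), (6), (9) and (10), a direct NumPy sketch (an assumed implementation, not taken from the paper) is:

```python
import numpy as np

def sobel_gradient(I):
    """Gx, Gy as in Eqs. (9) and (10), magnitude as in Eq. (5),
    orientation as in Eq. (6); image borders are ignored for brevity."""
    I = I.astype(float)
    Gx = np.zeros_like(I)
    Gy = np.zeros_like(I)
    Gx[1:-1, 1:-1] = (I[2:, :-2] + 2 * I[2:, 1:-1] + I[2:, 2:]) \
                   - (I[:-2, :-2] + 2 * I[:-2, 1:-1] + I[:-2, 2:])
    Gy[1:-1, 1:-1] = (I[:-2, 2:] + 2 * I[1:-1, 2:] + I[2:, 2:]) \
                   - (I[:-2, :-2] + 2 * I[1:-1, :-2] + I[2:, :-2])
    magnitude = np.abs(Gx) + np.abs(Gy)          # Eq. (5)
    orientation = np.arctan2(Gx, Gy)             # Eq. (6): theta = atan(Gx / Gy)
    return Gx, Gy, magnitude, orientation
```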


Fig. 3. The magnitude image and the orientation of gradient image

Fig. 4. The edge image using gradient operator

4.2 Second Derivative for Image Representation

In what follows, we will use the representation of the image by the Laplacian operator.

Laplacian and the Second Derivative of an Image
The Laplacian is an isotropic derivative operator (invariant to the rotation of the image) and a linear one (calculated from derivation, which is a linear operation) [8]. The Laplacian of a function f is calculated as follows:

\nabla^2 f = \dfrac{\partial^2 f}{\partial x^2} + \dfrac{\partial^2 f}{\partial y^2}    (11)

The definitions of the second, partial and directional derivatives according to x and y are expressed by the following equations:

\dfrac{\partial^2 f}{\partial x^2} = f(x+1, y) + f(x-1, y) - 2\, f(x, y)    (12)

\dfrac{\partial^2 f}{\partial y^2} = f(x, y+1) + f(x, y-1) - 2\, f(x, y)    (13)


And so, the Laplacian is obtained by substituting Eqs. (12) and (13) into Eq. (11), as follows:

\nabla^2 f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4\, f(x, y)    (14)

5 Experimental Results for Edge Detection

Computed using Eq. (14), Fig. 5(a) and (b) show, respectively, the obtained Laplacian image before and after thresholding.

Fig. 5. The edge image using Laplacian operator

6 Object Detection and Digital Image Correlation

Detection and tracking of objects in images is an important phase in the object recognition process. One of the techniques allowing the realization of this task is the correlation technique.

6.1 Definition of the Correlation

Proposed in mathematics by Karl Pearson in 1896, the correlation is defined as a similarity measure to be calculated. Mathematically, the correlation is defined from the product of two functions representing two signals [6, 11]. The following mathematical formula defines this calculation:

Cor(x, y) = \dfrac{cov(x, y)}{\sigma_x \cdot \sigma_y}    (15)

where Cor(x, y) represents the degree of correspondence between the variables x and y (it is also called the linear correlation coefficient of Pearson), cov(x, y) is the covariance of the variables x and y, and σ_x and σ_y are the standard deviations of the two variables.

6.2 Zero Mean Normalized Cross-Correlation Criterion (ZNCC)

To perform the correlation phase we used, as mentioned before, the Zero Mean Normalized Cross-Correlation criterion (ZNCC). It is the most used in research works, characterized by its robustness and efficiency. The ZNCC criterion is invariant to scale and offset changes of the image [10]. The ZNCC is defined according to the following formula:

Cor_{I,g}(x, y) = \dfrac{1}{L_g \cdot C_g \cdot \sigma_I(x, y) \cdot \sigma_g} \sum_{l=-L_g/2}^{L_g/2} \sum_{k=-C_g/2}^{C_g/2} \left( I(x+l, y+k) - \mu_I(x, y) \right) \cdot \left( g(l, k) - \mu_g \right)    (16)

where μ_g and σ_g are the mean and the standard deviation of the sub-image, and μ_I(x, y) and σ_I(x, y) are the mean and the standard deviation of the image window, calculated at each position while browsing the image I with the sub-image g [6, 10, 11].
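A minimal sketch of the ZNCC of Eq. (16) at a single position (x, y), assuming the template g and the image window have the same size; this is illustrative code, not the authors' implementation:

```python
import numpy as np

def zncc_at(I, g, x, y):
    """ZNCC of Eq. (16) between the template g and the window of I
    centred at (x, y); returns a value in [-1, 1]."""
    Lg, Cg = g.shape
    window = I[x - Lg // 2: x - Lg // 2 + Lg,
               y - Cg // 2: y - Cg // 2 + Cg].astype(float)
    g = g.astype(float)
    num = np.sum((window - window.mean()) * (g - g.mean()))
    den = Lg * Cg * window.std() * g.std()
    return num / den if den != 0 else 0.0
```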

7 Examples and Results

As mentioned above, the images used in the tests come from the Stanford 3D scanning repository – Synthetic Light Field Archive, specifically the subset named DragonAndBunnies. Only perspective views of a light field are used in this light field set.

Fig. 6. Image and sub-image used in the test

In our tests, we used 500 different images, and the results mentioned in the reports are those of the following image and sub-image, illustrated in Fig. 6. Regardless of the different definitions and representations of the used images, the correlation result is the same in terms of the position of the maximum correlation value. Whatever the image representation, the pixel with the maximum value of the ZNCC criterion is unique; this pixel has a gray level equal to 255 (white pixel). This value is unique and corresponds to the maximum value of the ZNCC coefficient (equal to 1).

7.1 Identical Maximum Correlation Pixel

If the object belongs to the image, the pixel with the maximum correlation value is detected in the same position. In the case of our example, the pixel with the maximum intensity value is unique and is detected at the position x = 244 and y = 720. See Fig. 7.

Fig. 7. The correlation image

The corresponding wireframe mesh representations of the three image representations are illustrated in Fig. 8. The color is proportional to the surface height, i.e. the pixel intensity value.

(a). Simple image

(b). Gradient image

(c). Laplacian image

Fig. 8. The wireframe mesh representation of image correlation

8 Conclusion

Object detection based on edge detection in the plenoptic light field is an interesting challenge of recent research in computational imaging and computer vision. In the plenoptic light field, we can get multiple views of the same image from different directions. In addition, the position and orientation information that the light field image contains can make the edge detection task more effective. Through multiple experiments, we demonstrate that a simple edge detection technique based on the first and second derivatives, combined with the classical DIC technique, can be efficient for detecting and tracking objects.


In conclusion, the study has shown that light field images deal efficiently with the drawbacks of the object detection and tracking process. The preprocessing phase can be neglected, as we can get high-resolution images with special sophisticated cameras. This specificity resolves the problem of the sensitivity of differentiation to noise.

References
1. Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision (1991)
2. Boyle, K.C.: Occluded Edge Detection in Light Field Images for Background Removal
3. Cristóbal, G., Peter, S., Hugo, T. (eds.): Optical and Digital Image Processing: Fundamentals and Applications. Wiley, New York (2013)
4. Dansereau, D.G.: Plenoptic signal processing for robust vision in field robotics (2013)
5. Dansereau, D.G., Stefan, B.W., Peter, I.C.: Simple change detection from mobile light field cameras. Comput. Vis. Image Underst. 145, 160–171 (2016)
6. Kumar, B.V., Mahalanobis, A., Juday, R.D.: Correlation Pattern Recognition. Cambridge University Press, New York (2005)
7. Levoy, M.: Light fields and computational imaging. Computer 39(8), 46–55 (2006)
8. McAndrew, A.: A Computational Introduction to Digital Image Processing. Chapman and Hall/CRC, London (2015)
9. Murphy, K.P.: Machine learning: a probabilistic perspective (2012)
10. Pan, B.: Recent progress in digital image correlation. Exp. Mech. 51(7), 1223–1235 (2011)
11. Rakotomalala, R.: Analyse de corrélation. Cours statistique à l'université de lumière Lyon 2, 89 (2015)
12. Shimada, A., Nagahara, H., Taniguchi, R.I.: Object detection based on spatio-temporal light field sensing. Inf. Media Technol. 8(4), 1115–1119 (2013)
13. Shimada, A., Nagahara, H., Taniguchi, R.I.: Change detection on light field for active video surveillance. In: 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. IEEE, August 2015
14. Murat, T.A.: Digital Video Processing. Prentice Hall Press, Upper Saddle River (2015)
15. Umbaugh, S.E.: Digital Image Processing and Analysis with MATLAB and CVIPtools: Applications with MATLAB® and CVIPtools. CRC Press, Boca Raton (2017)
16. Xu, Y., Lam, M.L.: Light field vision for artificial intelligence. In: Artificial Intelligence and Computer Vision, pp. 189–209. Springer, Cham (2017)
17. Zobel, M., Fritz, M., Scholz, I.: Object tracking and pose estimation using light-field object models. In: VMV, pp. 371–378, November 2002

Arab Handwriting Character Recognition by Curvature

Aissa Kerkour Elmiad(&)

Computer Science Research Laboratory, Faculty of Sciences Oujda, University Mohammed 1er, Oujda University, Oujda, Morocco
[email protected]

Abstract. The shapes of handwritten Arabic characters vary greatly in form and size (Fig. 1). Curvature can be an efficient representation for learning the writer's style features. In this paper, we improve the effectiveness of our system for the recognition of Arabic handwritten characters based on Bézier curves [6]. A new character descriptor based on the hybridization of two methods, Bézier approximation and curvature, increases recognition.

Keywords: Curvature · Bézier curves · Shape descriptor · Arabic character · Pattern recognition

1 Related Work

Curvature is widely used in the recognition of textures and for the orientation of the lines of Arabic and Latin handwritten script. In the work of Guillaume Joutel, we find a detailed and exhaustive study of this approach [1]. Handwriting specialists [2] have extensively exploited the hypothesis that writing can be described by these orientations and curvatures, but unfortunately their use as primitives for recognition was naive. Guillaume Joutel noticed the presence of stable directions, line breaks and many curves that are very specific to a style of writing, and used this feature to correct the orientation of medieval texts. On the other hand, the authors of [3] use the dominant orientations to compute the density of the curves. Work on curvature detection goes back to the early research of Attneave [4], who showed that the points of a curve with locally maximal curvature values contain a lot of useful information about that curve. These data are then sufficient to characterize the polygon, giving a form of approximation of this curve. He showed that this method of detecting dominant points can lead to a good representation of a planar curve. In addition, a shape representation based on the detection of dominant points has several advantages. First, there is a reduction of data when saving this representation instead of the original curve. Another advantage is that this representation focuses on the main characteristics of the shape of the curve; it is therefore useful in pattern recognition, feature extraction or the decomposition of a curve into significant parts. Therefore, these points play a crucial role in curve approximation, pattern recognition, and image indexing. They also have applications in other fields of computer vision.


Fig. 1. The letter “ ” written by different writers.

2 Computation of Curvature Based on the Osculating Circle

The radius of curvature of a stroke, noted R, indicates its level of curvature: the higher the radius of curvature, the closer the drawing is to a straight line, and vice versa. Mathematically, the radius of curvature is the absolute value of the radius of the circle tangent to the curve at the desired point, a circle which "fits this curve as well as possible". This circle is called the osculating circle of the curve at this point. The curvature is the inverse of the radius of curvature: K = 1/R. In this study, we use the Bézier curve to adapt to the curve of handwritten Arabic letters. We use a different approach to find the best approximation of the curve so that it matches the shape of the curve of Arabic letters. We calculate the curvature value at the control points by analytical differentiation of the Bézier curve, and we then calculate the osculating radius. In the literature, existing methods are based on different definitions of curvature in discrete space. These definitions are deduced from their counterparts in the continuous case. In continuous space they are equivalent, but this is not the case in discrete space. Therefore, we refer the reader to the works on the equivalent definitions in continuous space, which are the starting points for constructing the definitions of discrete curvature [1] (Fig. 2). This description is invariant in rotation and in translation.

Fig. 2. Estimator of curvature of Coeurjolly et al. based on the left and right half-tangents.


In this work we chose the method of Coeurjolly et al. [1] to estimate the radius of curvature. They proposed a solution to estimate the discrete curvature. This estimator is based on the normalized curvature of order m. The idea is to construct, at each point p_i of the curve, the end points P_G and P_D of the longest discrete straight segments on the left and on the right. The estimate of the curvature at the point p_i of the curve is given by the inverse of the radius of the circle circumscribed to the triangle P_D p_i P_G. We note the lengths of the sides of this triangle as a, b, and c. Then, the radius of the circumscribed circle is determined by:

R_C = \dfrac{abc}{\sqrt{(a + b + c)(a + b - c)(a - b + c)(b + c - a)}}    (1)

Then, the curvature at this point is estimated as follows:

K = \dfrac{1}{R_C}    (2)

3 Application of Curvature to Handwritten Arabic Characters

Having first presented the functioning of the discrete version of the curvature that we have used, we explain in detail how we were able to extract the differential characteristics of the writing, namely the curvature and the control points of the shape. We then show how these two characteristics can be grouped in one and the same matrix, which we use as part of a recognition system. We present the implementation of these two application frameworks, their specificities and their performance in comparison with state-of-the-art methods.

4 Analysis of the Curvature

By analyzing the handwritten Arabic characters, one seeks to describe their shapes. We basically focus on the control points of the trace [6]. At these control points, it is observed that the trace carries several deformations. It is therefore possible to define the forms of writing as functions having a high geometric regularity [5]. So the trace of the character has several characteristics: curvatures, slopes, etc. This kind of attribute is widely used in the recognition of Arabic characters.

4.1 Thinning and Bézier Curve

Before extracting the features, character images are converted to thin curve segments having a single-pixel thickness. Traditional thinning strategies lead to deformation of character shapes, especially where the strokes branch out; after a thinning step (skeletonization), we obtain thin characters, see [6]. The cubic Bézier curve can only


have 4 control points, so we divide the shape into different parts. The shape of the character is approximated by Bézier curves [6, 7].

4.2 Curvature at the Control Points

The idea is to compute, at each control point Q_i of the curve previously obtained by the Bézier approximation (see Fig. 4), the curvature estimator based on the left and right half-tangents:

In Fig. 3, the center of the circle C is located along the normal line at a distance R from the point of contact. We have presented in this part the algorithm of approximation of the curvature for curves in dimension two. This method exploits the advantages of using the Bézier [6] control points and is more efficient than existing

Fig. 3. Extraction of the curvature values at the control points and the osculating circles at these points.


methods; the author of [8] simply uses integer segment recognition methods. This approximation allows us to extract local properties of curves in dimension two. We use this method to identify and classify Arabic handwritten character shapes.

4.3 Application

After the skeletonization step and the modelling of the character shape by the Bézier model (see [6]), we give the algorithm for extracting the curvature values at the control points.

We present the characteristics in the form of a matrix M_C which contains the following features; the features of the character C written by a writer h are:

M_C = \begin{pmatrix} Q_1 & Q_2 & \cdots & Q_r \\ u_1 & u_2 & \cdots & u_r \\ v_1 & v_2 & \cdots & v_r \\ e_1 & e_2 & \cdots & e_r \\ r_1 & r_2 & \cdots & r_r \end{pmatrix}

Lines 1 and 2: the coordinates x_i and y_i of the points Q_i.
Lines 3 and 4: the derivatives on the right and on the left of the points Q_i.
Line 5: the character type (0 for simple and 1 for loop).
Line 6: the curvature values r_i.

5 Training Phase

This is the most essential step in developing a character recognition system. The knowledge base, or training set, is developed to store the features of characters of various sizes and styles, and this is a major problem, because the feature matrix M_H does not have a standardized dimension: the characteristic matrix of each character has a different number of columns, even for the same character written by the same writer. Therefore, the development of a wider, more inclusive knowledge base is required for


achieving accurate results during the recognition of characters. We have grouped the 28 Arabic characters into 16 classes. This classification is based on the elimination of the diacritical points. Then we classify these classes into two categories: the simple class S: {ﺍ ﻝ ﺩ ﺭ ٮ ﮞ ﻯ ﮎ ﺡ ﻉ ﺱ} and the loop class B: {ﻩ ﻡ ﻭ ڡ ٯ ﺹ}.

5.1 Features of the Character

We performed several tests of radius of curvature extraction. These tests on the 168 Arabic characters make it possible to characterize the different characters by this radius of curvature. This relevant information gave us intervals for the different behaviors of the curve according to its value (see Fig. 4 and Table 1).

Fig. 4. Number of powerful curvature values and bend radius intervals.

Table 1. Example of the bend radius intervals for the simple class.

Character  Number of powerful curvature values  Bend radius interval
ﺩ          1                                    I2
ﺡ          2                                    I4
ﻉ          3                                    I6, I7

Curvature value vectors: [0 0.0088 0.0031 0.0032 0.5000]; [0 0.0080 0.0014 0.0285 −0.1768]; [0 0.0290 0.1280 −0.2347 0.0907 −0.0491 −0.0215 0.0444 0.0078 0.0072 0].
Intervals: I2 = [−0.0043, −0.0107]; I3 = [−0.0177, 0.0883]; I4 = [0.0176, 0.1314]; I5 = [−0.0017, −0.0171]; I6 = [−0.0020, −0.0254]; I7 = [0.0015, 0.0222].


The table shows, for example, that the character 7aa (ﺡ) is characterized by two curvature values. Several tests were performed to characterize these intervals, which contain the minimum and maximum values that the different curvature values can reach. Similarly, the shape of the Ain character (ﻉ) is characterized by four values.

5.2 Dimensionality Reduction

Our learning strategy consists in approximating the curvature values of each character using the tpaps function of Matlab R17. The tpaps function returns the structure of a surface passing as close as possible to the known points by a polynomial approximation of degree 3. This structure then allows us to trace the curves of the 16 characters. This basis of curves is stored as a learning base (see figure). These characters were written by 10 people with different writing styles.

5.3 Correlation Between Test Characters

We calculated the Spearman correlation coefficient. This coefficient allows us to indicate the existence of a possibly non-linear relation. It should be noted that the calculation of the correlation does not give a percentage of error but rather a kind of degree of resemblance (Fig. 5 and Table 2).
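For illustration, such a degree of resemblance between two curvature curves can be computed with SciPy's spearmanr; this is only a sketch, since the authors' exact procedure is not detailed:

```python
from scipy.stats import spearmanr

def resemblance(curve_a, curve_b):
    # curve_a and curve_b are the sampled curvature curves (tpaps output) of
    # two characters; rho near +1 or -1 indicates a strong monotonic relation.
    rho, _p_value = spearmanr(curve_a, curve_b)
    return rho
```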

Fig. 5. Comparison of the tpaps curves of all characters.


Table 2. Table of multiple correlations.

      ﺍ        ﻝ        ﺩ        ﺭ        ٮ        ﮞ        ﻯ        ﮎ        ﺡ        ﻉ        ﺱ
ﺍ    1.0000
ﻝ    0.4744   1.0000
ﺩ    0.9134   0.3005   1.0000
ﺭ   −0.9990  −0.4806  −0.9276   1.0000
ٮ   −0.9424  −0.6927  −0.7503   0.9358   1.0000
ﮞ    0.9988   0.5090   0.9103  −0.9985  −0.9504   1.0000
ﻯ   −0.9807  −0.3707  −0.9727   0.9859   0.8649  −0.9764   1.0000
ﮎ    0.0193   0.6470  −0.3209  −0.0000  −0.3313   0.0401   0.1599   1.0000
ﺡ    0.1775  −0.2722   0.5548  −0.2117   0.1061   0.1681  −0.3632  −0.8679   1.0000
ﻉ   −0.5269  −0.4427  −0.1540   0.4951   0.6982  −0.5298   0.3623  −0.7073   0.7236   1.0000
ﺱ   −0.9769  −0.3129  −0.9331   0.9760   0.8659  −0.9687   0.9843   0.1500   0.2903  −0.4297   1.0000

6 Recognition

For the classification, we first choose a strategy. Our strategy is to approximate the curvature values of the unknown character using the tpaps function. Then, we compute the correlation between the resulting curve and the different curves of the learning base. The decision is considered acceptable for the best correlation result. Moreover, in some cases, we build a table retaining the three characters that are closest in terms of correlation coefficient for the decision.

7 Test and Results

We carried out this test on a student cohort from our university. This cohort contains 120 students, and each student wrote the 16 characters, so our test database contains a total of 16 × 160 test images. We have 120 tables to analyze. Below is an example of a test table: the first line presents the test characters and the first column contains the learning characters (Table 3).

Table 3. Example of a multiple correlation table for a simple character test.

      ﺍ        ﻝ        ﺩ        ﺭ        ٮ        ﮞ        ﻯ        ﮎ        ﺡ        ﻉ        ﺱ
ﺍ    1.0000
ﻝ    0.9134   0.9955
ﺩ    0.9988   0.7505   0.9772
ﺭ    0.8775  −0.9413   0.9819   0.9982
ٮ   −0.7424  −0.9540   0.8706   0.7548   0.9608
ﮞ   −0.4744   0.8315   0.8738   0.8995   0.8472   0.9890
ﻯ   −0.5807  −0.8739  −0.8921   0.6743  −0.7183  −0.9966   0.9092
ﮎ    0.0193   0.2957   0.1250  −0.0573   0.5468  −0.0977   0.5407   0.8937
ﺡ   −0.6990  −0.1109  −0.3706  −0.1601  −0.3637   0.2891  −0.1418   0.5367   0.8996
ﻉ   −0.6269  −0.7190   0.3432  −0.5345  −0.8557  −0.4338  −0.4837   0.4043  −0.5794   0.8483
ﺱ   −0.5769  −0.6859   0.7622   0.6650  −0.7344  −0.3877  −0.8705  −0.9643   0.0515   0.2412   0.9960


By analyzing the multiple correlation tables, the following results can be deduced:
(a) The highest rates lie on the diagonal; this justifies the reliability of our approach based on the radius of curvature.
(b) The tables show that some characters form a class (see the values that are very close within a column). Three sets of data are increasingly correlated: R1 = 0.91245, r2 = 0.12354, r3 = −0.09.
(c) It is indeed possible to consider the problem of handwriting analysis in terms of the dependence of the characterization on the writing style or on the writer, by controlling the highly informative elements of a style (which describe each style individually).
(d) We observe that when ﺭ is strong, the values of the columns can be associated with highly resembling character shapes in the writing, and vice versa. A low value predicts that the unknown character does not resemble the characters in the learning base (it gives no information about it).
(e) Obviously, there are always vicious cases where the two coefficients give the same result and where the relationship is biased. In these cases, the writer did not write the character correctly.
The system gives a classification rate of about 96.62%, which is quite good. Our approach makes 73 errors out of the 2,160 test images, which represents an error percentage of 3.38%. The top misclassifications are between the characters ﺩ and ﺭ. In general, the majority of errors in this class arise between these characters because their written forms are too close, which explains why their degrees of curvature are close. The confusion between ﺩ and ﺭ is hard for any classifier, even humans. The same holds for the class containing the loops, where we find confusion between the characters ﻭ and ﺹ, and ﻭ and ﻡ, and also between the characters ﻑ and ﻕ.

8 Conclusion and Future Directions

Overall, the system provides operational levels of accuracy by using curvature. However, there are various ways to extend and improve our approach. For one, OCR can be improved by considering the context of a character, by incorporating into the recognition the control points and their tangents associated with the curvature values simultaneously. With this information we would improve the classification error for similar characters like ﺩ and ﺭ; while such pairs are difficult to distinguish out of context, if we know the character is surrounded by a number of diacritical points, we can provide a much more accurate recognition. In this work, we have made a general study of the curvature applied to handwritten Arabic characters. This study allowed us to give a good characterization of the Arabic shapes by the curvature and, moreover, a better recognition rate compared to the state of the art.


References
1. Smith, T.F., Waterman, M.S.: Identification of common molecular subsequences. J. Mol. Biol. 147, 195–197 (1981)
2. May, P., Ehrlich, H.C., Steinke, T.: ZIB structure prediction pipeline: composing a complex biological workflow through web services. In: Nagel, W.E., Walter, W.V., Lehner, W. (eds.) Euro-Par 2006. LNCS, vol. 4128, pp. 1148–1158. Springer, Heidelberg (2006)
3. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999)
4. Czajkowski, K., Fitzgerald, S., Foster, I., Kesselman, C.: Grid information services for distributed resource sharing. In: 10th IEEE International Symposium on High Performance Distributed Computing, pp. 181–184. IEEE Press, New York (2001)
5. Foster, I., Kesselman, C., Nick, J., Tuecke, S.: The Physiology of the Grid: an Open Grid Services Architecture for Distributed Systems Integration. Technical report, Global Grid Forum (2002)
6. National Center for Biotechnology Information. http://www.ncbi.nlm.nih.gov
7. Coeurjolly, J.L., Lachaud, J.L., Levallois, J.: Integral based curvature estimators in digital geometry. In: Discrete Geometry for Computer Imagery. LNCS, vol. 7749, pp. 215–227. Springer (2013)
8. Niels, R., Vuurpijl, L., Schomaker, L.: Automatic allograph matching in forensic writer identification. Int. J. Pattern Recognit. Artif. Intell. (IJPRAI) 21(1), 61–81 (2007)

Language Identification for User Generated Content in Social Media

Randa Zarnoufi1(&), Hamid Jaafar2, and Mounia Abik3

1 IPSS Research Team, FSR, Mohammed V University, Rabat, Morocco
[email protected]
2 Polydisciplinary Faculty of Safi, Caddi Ayyad University, Safi, Morocco
[email protected]
3 IPSS Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
[email protected]

Abstract. In this paper we address the problem of Language Identification (LID) of user-generated content in Social Media Communication (SMC). Existing LID solutions are very accurate on standard languages and normal texts; however, for non-standard ones (i.e. SMC) this is still out of reach. To help resolve this problem, we present a language-independent LID solution for the non-standard use of language, where we combine linguistic tools (morphology analyzers) and statistical models (language models) in a hybrid approach to identify the standard and non-standard languages included in these SMC texts. Our solution also treats the Code Switching phenomenon between standard languages and dialect, as well as the normalization of SMC special expressions and dialect, and finally the spelling correction of OOV words.

Keywords: Language identification · Code switching · Standard language · Social media · Noisy text · Dialect

1 Introduction

Today, people are more and more connected through Social Media Communication (SMC) platforms, sharing messages, photos, videos, etc. Most of them use free expression to show their points of view, affections and knowledge to others. To perform the analysis of SMC user-generated text, for instance sentiment analysis and opinion mining, we have to tackle the noisy nature of these texts, such as the use of symbols, abbreviations and misspelled words. Moreover, in multilingual communities, people alternate, while writing, words from two or more languages in the same sentence; this phenomenon is called Code Switching (CS). Code Switching can be structurally divided into intra-sentential CS (see example 1), which occurs inside the sentence, and inter-sentential CS (see example 2), occurring outside the sentence boundary.
Example 1: Tu vas nous créer des (French) big problems (English) daba tchouf (Moroccan Arabic Dialect).
Example 2: L3az khouti (Moroccan Arabic dialect). Nous irons tous au match (French). Let's go now! (English).



Existing text mining techniques are not able to process this type of text because of its complexity, and most of these techniques are built for standard languages and uses. LID is a Natural Language Processing task that aims to automatically detect the language of a given text. LID is widely used in NLP tasks; it is not an end in itself, but is considered a front end for other processing (e.g. opinion mining). For monolingual standard text documents, LID is considered a solved problem, confirmed by an accuracy approaching 100% [1] (using common word1 frequencies, n-gram2 models, supervised classification techniques, …). However, for SMC, language identification is far from a solved task [2]. This is related to the short document length and the special vocabulary used, in addition to CS sentences. Our main contribution in this paper is a LID solution dedicated to SMC text. This solution allows the identification of the standard and non-standard languages (dialects) included in CS, as well as of the SMC special vocabulary. Since dialects are low-resourced languages, we adopt a hybrid approach combining knowledge-based and corpus-based techniques. In this work, we focus on intra-sentential CS, which presents a high complexity for LID. As a use case, we apply our solution to detect the languages involved in CS text containing English, French and Moroccan Arabic dialect. In this paper, we do not evaluate our LID system; we reserve the evaluation for a future publication. The paper is organized as follows: we first cite some related works and then we introduce our LID approach. Finally, we conclude with some examples in addition to some future directions.

2 Related Works

The majority of existing LID approaches and systems work at the sentence or document level. But, due to the nature of SMC text containing CS sentences (multilingual words), we have to find the language at the token3 level. The most used approaches are corpus-based or hybrid, where knowledge resources and corpora are combined. In the corpus-based approach, for unsupervised LID, both [3, 4] used Latent Dirichlet Allocation (LDA). In [3] they considered languages as topics and exploited LDA to cluster un-annotated tweets into these languages. [4] used LID to filter or identify the primary language from the other languages present in a corpus. For supervised techniques, in [5] the authors made use of character- and word-level representations with an RNN4 model and word embeddings. And in a recent work [6], a character-based sequence-to-sequence RNN model was used to detect dialects and language varieties in CS. Regarding the hybrid approach, [7] used a morphological analyzer for Modern Standard Arabic (MSA) and a dictionary for dialect identification. In addition, they used Sound Change Rules to explore the possible phonological variants and finally n-gram

1 Common words: conjunctions, prepositions, determiners, ….
2 N-gram: sequence of characters or words.
3 Token: lexical unit.
4 RNN: Recurrent Neural Network.


language models. Also, [8] used a mixture of techniques including dictionary lookup, a language model, a logistic regression model and CRF with and without context. In general, the results obtained by these approaches are promising, but more improvements are being sought, as confirmed by [9]. For further details on works related to LID until 2017, the survey [10] is very informative. The methods presented above are particularly interested in LID for the standard use of standard languages. Dialects were rarely addressed and the SMC writing style (abbreviations, short texts, …) was not largely considered, but this is understandable given the lack of appropriate corpora for this complex task. In this work we are interested in the LID of standard and dialect languages in CS texts used in SMC. Given that there are no readily-available corpora of manually labeled CS texts with word-level language annotations, notably for low-resourced languages like dialects, we adopt for LID a hybrid approach where we employ morphology analyzers and spelling correctors followed by language models. In the next section we develop the detailed methodology of our LID solution.

3 Languages Identification In this paper, we present our LID system dedicated for CS and noisy text. The aim of this processing step is to define the language of each word (token) within a CS sentence. This is done by using Morphological Analyzers (MAs), Spelling Correctors and language models. This solution is language independent; so we can add or remove the used modules as required (according to the languages embedded in the CS). Moreover, this technique allows the detection of Out Of Vocabulary (OOV) words. The LID adopted approach is hybridization between knowledge based and statistical techniques. This solution is influenced by the work of [7]. The first stage is input text pre-processing. The second one include morphology analysis and language model. And the finale stage is spelling correction for detected OOV words (non recognized words by the second stage), combined with spelling normalization for probably Dialect words. At the end of this analysis, we will be able to identify the language of each token in the text and for non recognized tokens; they will be tagged as ‘Other’. The detailed methodology is explained hereafter (Fig. 1). 3.1

3.1 Pre-processing

This stage performs segmentation to separate sentences, tokenization to separate words within each sentence, and normalization to handle speech effects by suppressing word characters repeated more than twice (e.g. gooooood becomes good and greaaaaaat becomes greaat) and by transforming text characters to lower case. To deal with abbreviations, we use the normalisation lexicon dictionary5 developed by [11] for English micro-blog normalisation. For the other studied languages, we use a special SMC lexicon collected from the web. These steps prepare the text for MA processing. In our study,

5 http://people.eng.unimelb.edu.au/tbaldwin/etc./emnlp2012-lexnorm.tgz.


Fig. 1. LID system architecture

we did not remove punctuation and emoticons, as they remain significant for several tasks like sentiment analysis and opinion mining. Moreover, they can easily be replaced by their meaning in the considered language. Also, Named Entities are not treated, as they do not alter the LID result.
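As a rough illustration of this stage, a minimal sketch of the character-repetition suppression, lower-casing and lexicon-based abbreviation expansion could look as follows; the abbrev_lexicon dictionary is a made-up placeholder standing in for the normalisation lexicon of [11].

import re

# Hypothetical stand-in for the normalisation lexicon of [11];
# the real resource maps SMC abbreviations to their standard forms.
abbrev_lexicon = {"u": "you", "gr8": "great", "pls": "please"}

def normalize_token(token):
    """Lower-case, collapse characters repeated more than twice,
    and expand known abbreviations."""
    token = token.lower()
    # keep at most two consecutive occurrences of the same character
    token = re.sub(r"(.)\1{2,}", r"\1\1", token)
    return abbrev_lexicon.get(token, token)

def preprocess(sentence):
    """Very simple whitespace tokenization followed by normalization."""
    return [normalize_token(tok) for tok in sentence.split()]

print(preprocess("gooooood morning pls"))  # ['good', 'morning', 'please']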

3.2 Morphological Analysis

In order to find the language of each token, we use a set of Morphological Analyzers (MAs) for the standard languages and dictionaries for the non-standard ones or dialects. At the end of pre-processing and before the MA stage, we check whether the tokens contain digits. Such tokens often belong to dialect. Thus, we first check if they exist in the dialect dictionary; otherwise, they are misspelled words that must be checked by the spelling correctors, like the other words rejected by the MAs. The LID operates as follows: we pass each token through the set of MAs for standard languages (one MA for each involved language). Once a token is recognized by one of them, we assign to it the language of that analyzer. In the case where a token is recognized by more than one MA, a set of unigram language models (for all involved languages) resolves this conflict by selecting the language where this word is most frequently used. For dialect, whose transliteration is done with phonetic typing using the Romanized alphabet, we perform a dictionary lookup for each probable dialect word (rejected by the MAs). We note that we consider loan words or borrowings used in dialect as vocabulary belonging to this dialect. For instance, tconnecta in Moroccan Arabic (connected in English) is borrowed from French (connecté) and adapted using the phonology and morphology rules of Moroccan Arabic. These words are considered as Moroccan Arabic and detected using special lexicons for SMC.
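The decision logic of this stage can be sketched as follows; the analyzers, dialect dictionary and unigram probabilities shown here are toy placeholders for the real GATE analyzers, Moroccan Arabic dictionary and language models, and 'ary' is used here as a tag for Moroccan Arabic.

# Hypothetical resources standing in for the real analyzers and models.
analyzers = {
    "en": lambda w: w in {"connected", "good", "the"},
    "fr": lambda w: w in {"connecté", "bonjour", "le"},
}
dialect_dict = {"tconnecta", "wach", "bzaf"}            # Moroccan Arabic lexicon
unigram_lm = {"en": {"the": 0.05}, "fr": {"le": 0.04}}  # toy unigram probabilities

def identify_language(token):
    """Assign a language tag to a token following the MA-stage logic."""
    if any(ch.isdigit() for ch in token):
        # tokens containing digits are first looked up in the dialect dictionary
        return "ary" if token in dialect_dict else "OOV"
    candidates = [lang for lang, accepts in analyzers.items() if accepts(token)]
    if len(candidates) == 1:
        return candidates[0]
    if len(candidates) > 1:
        # conflict: pick the language where the word is most frequently used
        return max(candidates, key=lambda l: unigram_lm[l].get(token, 0.0))
    # rejected by all MAs: try the dialect dictionary, otherwise OOV
    return "ary" if token in dialect_dict else "OOV"

print([identify_language(t) for t in ["the", "connecté", "tconnecta", "xyz123"]])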

3.3 Spelling Correction

OOV words are the tokens recognized neither by the MAs nor by the dictionaries. Those tokens are very likely misspelled words or variants of dialect words, so they must first be treated by the set of spelling checkers-correctors. This is done by multiple correctors, one for each used language. The goal is to look for a corrected form of the input word and therefore to find its language.


Each OOV word is checked by all correctors, which propose a correction for it. Once the word is recognized by a corrector, it is tagged with that corrector's language. In the worst case, we keep it unchanged and assign to it 'Other' as language tag. We also use unigram language models to deal with multi-recognized words, as done in the MA stage. The spelling correctors used for standard languages are based on Norvig's6 statistical spelling corrector. This corrector, known for its simplicity, is based on a statistical language model and suggests a single output as the most probable word correction. There is no available corrector for dialect; moreover, dialect has no standard orthography. While writing, users improvise spelling, and the most noticeable differences concern the use of vowels (e.g. the sound [y] can be expressed by "ai" or "ay"). Thereby, we judge that a phonetic corrector is the suitable one in our case. So, we build our spelling normalization and correction on an adapted phonetic algorithm, Soundex7, enhanced by other techniques. It offers the possibility to deal with multiple transcriptions of each word while ignoring the different vowel variants, thus allowing both normalization and correction. Besides this algorithm, we use a dialect dictionary to select the most appropriate form of the misspelled dialect word according to the shortest edit distance [12]. The detailed method of our spelling corrector will be the subject of another work.
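A crude sketch of the dialect normalization-correction idea, under the assumptions stated in the comments: words are matched through a vowel-insensitive key (a simplified stand-in for the adapted Soundex variant, whose details are left to future work) and the closest dictionary entry by edit distance [12] is retained.

def phonetic_key(word):
    """Crude vowel-insensitive key: drop vowels after the first letter and
    collapse repeated consonants (stand-in for the adapted Soundex variant)."""
    word = word.lower()
    key = word[0]
    for ch in word[1:]:
        if ch in "aeiouy":
            continue
        if not key.endswith(ch):
            key += ch
    return key

def edit_distance(a, b):
    """Plain Levenshtein distance (Damerau [12] additionally handles transpositions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# hypothetical fragment of the Moroccan Arabic dictionary
dialect_dict = ["bzaf", "wach", "tconnecta", "khouya"]

def correct_dialect(word):
    """Keep only entries sharing the phonetic key, then pick the closest one."""
    candidates = [w for w in dialect_dict if phonetic_key(w) == phonetic_key(word)]
    if not candidates:
        return None  # the word stays tagged as 'Other'
    return min(candidates, key=lambda w: edit_distance(w, word))

print(correct_dialect("bzzaaf"))  # -> 'bzaf'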

4 Resources and Examples We choose to apply our solution to CS sentences containing words from French, English and Moroccan Arabic (MA) dialect. For MA, we are interested in the transcription using Romanized script (some works use the Arabic script for MA [13]). The pre-processing is done with our own developed tools. For LID of standard languages, we use as morphological analyzer the GATE platform, known for its performance and coverage of several languages. As mentioned earlier, we use dictionaries for SMC abbreviations (41181 tokens for English). Concerning dialect, we use the Moroccan Arabic dictionary elaborated by [14], which comprises 48 000 words. We also use special SMC lexicons for the MA dialect. To enlarge the coverage of each language, we use a combination of formal and social media corpora for the language models in the morphological analysis and spelling correction stages: the English and French texts of Europarl-v7 [15] (formal text), which contain 60 million words per language, and Twitter posts [16] (informal text) composed of 559 sentences per language. We note that the full validation will be done after finishing data gathering. Some examples of LID system output for text extracted from a public SMC conversation are listed below (Table 1).

6 https://norvig.com/spell-correct.html.
7 https://en.wikipedia.org/wiki/Soundex#cite_note-8.


Table 1. First test and result examples for LID system

5 Conclusion We presented in this paper our LID system dedicated to SMC user-generated content, which can deal with the problems posed by free text, including code switching, abbreviations and misspelled words, using a language-independent approach. We also focused on the identification of standard languages and of dialects as under-resourced languages. We adopted a hybrid solution mixing knowledge-based techniques with corpus-based ones, where we used morphological analyzers along with language models to detect standard languages, and dictionaries for dialect. We also used spelling correctors to handle OOVs. This work is still a first attempt at LID of SMC text, and further improvements are required, such as dealing with OOV words that can be new incoming vocabulary, especially for dialect, and detecting the sequence of language tags over a sequence of words as done in [17], which can be useful for some applications like machine translation.


References 1. McNamee, P.: Language identification: a solved problem suitable for undergraduate instruction. J. Comput. Sci. Coll. 20, 94–101 (2005) 2. Baldwin, T.: Language identification in the Wild (2017) 3. Voss, C., Tratz, S., Laoudi, J., Briesch, D.: Finding Romanized Arabic dialect in code-mixed tweets. In: Proceedings of the 9th International Conference on Language Resources and Evaluation, pp. 188–199 (2014) 4. Zhang, W., Clark, R.A.J., Wang, Y.: Unsupervised language filtering using the latent Dirichlet allocation. Comput. Speech Lang. 39, 47–66 (2016) 5. Samih, Y., Maharjan, S., Attia, M., Kallmeyer, L., Solorio, T.: Multilingual code-switching identification via LSTM recurrent neural networks. In: Proceedings of the Second Workshop on Computational Approaches to Code Switching, pp. 50–59 (2016) 6. Jurgens, D., Tsvetkov, Y., Jurafsky, D.: Incorporating Dialectal Variability for Socially Equitable Language Identification. In: ACL, pp. 51–57 (2017) 7. Elfardy, H., Diab, M.: Token level identification of linguistic code switching. In: Proceedings of COLING 2012: Posters, pp. 287–296 (2012) 8. Nguyen, D., Do, A.S.: Word level language identification in online multilingual communication. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, 18–21 October 2013, pp. 857–862 (2013) 9. Barman, U., Das, A., Wagner, J., Foster, J.: Code mixing: a challenge for language identification in the language of social media. In: First Workshop on Computational Approaches to Code Switching, pp. 21–31 (2014) 10. Jauhiainen, T., Lui, M., Zampieri, M., Baldwin, T., Lindén, K.: Automatic language identification in texts: a survey. J. Artif. Intell. Res. 1–97 (2018) 11. Han, B., Cook, P., Baldwin, T.: Automatically constructing a normalisation dictionary for microblogs. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea, 12–14 July 2012, pp. 421–432 (2012) 12. Damerau, F.J.: A technique for computer detection and correction of spelling errors. Commun. ACM 7, 171–176 (1964) 13. Samih,Y.: Detecting code-switching in Moroccan Arabic social media. In: SocialNLP workshop at IJCAI 2016 (2016) 14. Jaafar, H.: Le Nom et l’Adjectif dans l’Arabe Marocain: Etude Lexicologique, Ph.D. Thesis (2012) 15. Koehn, P.: Europarl : a parallel corpus for statistical machine translation. In: MT Summit, pp. 79–86 (2005) 16. Ling, W., Xiang, G., Dyer, C., Black, A., Trancoso, I.: Microblogs as parallel corpora. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, pp. 176–186 (2013) 17. Adouane, W., Dobnik, S.: Identification of languages in Algerian Arabic multilingual documents. In: Proceedings of The Third Arabic Natural Language Processing Workshop (WANLP), Valencia, Spain, 3 April 2017, pp. 1–8 (2017)

Vision-Based Distance Estimation Method Using Single Camera: Application to the Assistance of Visually Impaired People

Wafa Saad Al-shehri1,2, Salma Kammoun Jarraya1,3, and Manar Salama Ali1

1 Department of Computer Science, King Abdul-Aziz University, Jeddah, Saudi Arabia
[email protected], {smohamad1,mali}@kau.edu.sa
2 Department of Computer Science, Taif University, Taif, Saudi Arabia
3 MIRACL-Laboratory, Sfax, Tunisia

Abstract. In this paper, we test, implement and evaluate different distance calculation methods to determine the best method for computing the distance from an obstacle to the visually impaired user. This work is important for visually impaired people: it provides significant information about obstacles, such as the type and distance of the obstacle in unknown environments, from partial visual information based on computer vision techniques. In order to determine the distance with low complexity and high accuracy, among the existing distance calculation methods we adopt three methods to calculate the distance of an obstacle using a single camera. Also, to increase awareness of the explored environment, we provide experimental results concerned with several aspects of the distance calculation method. The experimental results show that the relative error between the detected viewing distance and the actual viewing distance of the used method is within −0.36 to −1.81%.

Keywords: Visually impaired · Computer vision · Image processing · Distance calculation · Smartphone application

1 Introduction Visually impaired people face many challenges in navigating safely in their daily life. Autonomous navigation in indoor and outdoor environments can be the main challenge that hinders a visually impaired (VI) person from performing daily tasks. Indeed, VI people do not have sufficient information about their surrounding environments, so they face difficulties in navigation. Computer vision offers rich information about surrounding environments that can be used for developing real-time applications. Several computer vision based systems and methods have been proposed for obstacle detection, classification and distance calculation. However, except for ENVISION [3, 4], most computer vision-based systems/methods require special equipment such as Microsoft Kinect sensors (e.g. [5, 6]) and wearable cameras (e.g. [1, 7]). In addition, these systems impose on the user to carry a specific type of camera which must be


mounted in a fixed position. In contrast, ENVISION is a navigational assistance system that runs on a smartphone. It does not impose any constraint on the user to wear a special garment. However, ENVISION is only capable of detecting obstacles, with no additional information about them such as the type and distance of obstacles. Such important information for VI people requires obstacle classification and distance calculation methods in order to provide them with better recognition of the surrounding environment. In this work, we propose to extend the ENVISION system to ENVISION V2, enhancing ENVISION by adding obstacle classification and distance calculation methods. These methods will be integrated into the ENVISION system to offer a smartphone-based mobility aid that is capable of both detecting obstacles and providing information about their type and distance. The main contribution of this work is the experimental evaluation of different distance computation methods applied to compute the distance of an obstacle using a single camera, while handling the high level of noise and bad resolution of frames captured by the phone camera. The remainder of this paper is structured as follows. Section 2 reviews the main works on obstacle detection, classification and distance calculation methods. Section 3 describes the details of ENVISION V2. To put the contribution of this work into perspective, Sect. 4 presents three methods for calculating the distance of the obstacle, and provides details about the off-line work to develop these three methods. Section 5 provides an experimental evaluation that illustrates the efficiency and accuracy of the used distance calculation method. Finally, a summary of the findings of this work and future works are presented in Sect. 6.

2 Related Works Obstacle detection, classification and distance calculation are the most important steps in navigation assistance systems based on computer vision. In the literature, several methods for obstacle detection, classification and distance calculation have been proposed to assist visually impaired people. In [2], the authors introduce a real-time obstacle recognition system to assist visually impaired people in safe navigation. Based on the relative distance to the camera, the obstacle is marked as urgent or normal; the distance of the obstacle to the user is calculated by mathematical calculations to determine if the obstacle is situated in the proximity of the visually impaired person. The authors also extend this work by employing bone conduction headphones to hear the audio warnings [1]. In [5], the authors propose a system to help visually impaired people. This system calculates the depth map of the image to classify and estimate the distance and velocity of the obstacles. In [6], the system uses a Microsoft Kinect sensor to generate depth information; it gives visually impaired people a vocal message about the distance to the obstacle and the obstacle class by calculating the depth map of the image. In the depth map, the depth represents the distance between the obstacle and the sensor. An effective and wearable mobility aid for visually impaired people is proposed in [7]; it uses an RGBD sensor. The information concerning the distance to the closest obstacle is provided by audio messages without exploiting depth modulation. In [8], an RGBD camera is used to observe the environment. This system extends the previous work [9] by sending the parallel operations to


the GPU to achieve a real-time system. Although distance estimation provides significant information for obstacle classification methods to avoid collisions, this system does not take distance into consideration. In [10], the system uses a mobile Kinect which is mounted on the user's body. It sends information about the obstacle, such as its type and distance, to the visually impaired user. The Kinect sensor provides depth data which is used to calculate the distance from the user to the obstacle. A smart garment prototype based on ultrasonics is proposed in [11]; it is a real-time adaptive obstacle classification system. The ultrasonic sensors can locate obstacles accurately with little processing and provide accurate distance information, but they do not provide information about the type of the detected object. In [12], the authors propose an object recognition method to help visually impaired people know the type and distance of an object in an environment. The distance of the object is calculated from the depth data obtained by a Kinect sensor. All the above systems require special or expensive equipment except [1, 2]. Moreover, all systems use 3D images to calculate the distance except [1, 2], which use only 2D images. The ENVISION V2 system will be similar to the work of [1], but it will not require the user to wear a chest harness to carry the smartphone (cf. Fig. 1(b)); with such a harness the user looks abnormal and feels uncomfortable, although the smartphone camera captures good-quality images in which the entire obstacle appears (cf. Fig. 3). In contrast, by giving the user the freedom to carry the smartphone at a 45° angle in landscape mode, ENVISION V2 makes the user feel more comfortable (cf. Fig. 1(a)). However, this incurs several challenges for our method: the images have noise and bad resolution, and the obstacle is not complete, as only part of it may appear in the image (cf. Fig. 2). In addition, some systems impose that the camera be attached to the walking stick [5, 12]; this may make the user seem unnatural (cf. Fig. 1(c), (d)). Also, the system in [7] imposes on the user to wear lenses and holders attached to an RGBD camera; this may make the user feel uncomfortable (cf. Fig. 1(e)). In addition, the system in [11] needs five ultrasonic sensors, which may be an expensive device, and it imposes on the user to wear a sonar-embedded garment (cf. Fig. 1(f)). In this work, minimum restrictions are imposed on the user, and significant information is provided about the obstacles, namely the type and distance of the obstacle. In the following section, we present details about ENVISION V2.

3 Envision V2 ENVISION V2 is an extended version of ENVISION obtained by adding obstacle classification and distance calculation methods. ENVISION is a real-time assistance system that helps visually impaired people avoid obstacles; it detects both static and dynamic obstacle regions based on prediction models. The architecture of ENVISION is shown in Fig. 4. It operates in four steps: (1) speech recognition, (2) path finding, (3) obstacle detection, and (4) merging phase. In step (1), it recognizes the requested destination using the Google Voice API and passes it to step (2) to find a valid path to the destination. ENVISION implements step (2) using GPS technology (Google Maps API and Google Maps Directions API). Step (3) produces the region(s) of obstacles, which serve as input to the obstacle classification step. Finally, in the merging phase (step 5), the ENVISION system generates an intelligent decision representing an


Fig. 1. Prototype of ENVISION system and other existing systems

Fig. 2. Samples of the images taken by the ENVISION system [3, 4]

Fig. 3. Samples of the images taken by the ALICE device [1]

appropriate voice message and an alert to the user when an obstacle is detected. The obstacle classification and distance calculation methods will be integrated as step (4) in the ENVISION system [ENVISION V2], as illustrated in Fig. 4. ENVISION V2 operates in the above steps, in addition to two main steps which are the obstacle classification and distance calculation methods. It applies machine learning techniques for the obstacle classification step and a distance calculation method to determine the distance of the obstacle to the visually impaired user. These methods allow providing further information about the type and distance of the detected obstacles. The detected obstacles are marked as normal or danger based on their class and distance. In our proposed classification


method, we consider three classes: person, vehicle and other. The method provides a voice message to inform the user about the obstacle which exists in his/her path. In order to develop the proposed solution, we conduct our work in two stages: an off-line stage, and an exploitation and evaluation stage. In the following section, we provide details about developing and evaluating the used distance calculation method.


Fig. 4. Architecture of ENVISION [3, 4]

4 The Used Distance Calculation Method In order to select the appropriate distance calculation method, we implement and test three methods. These methods are described in detail in the following sections.

4.1 Distance Calculation Method Using Triangle Similarity

In this method, the triangle similarity is utilized to determine the distance from the smartphone camera to a known object. Let's say we have an object with a known width W. We place this object at some distance D from the camera, take a picture of the object using the camera and then measure its apparent width in pixels P. This allows us to derive the perceived focal length F of the camera:

F = (P × D) / W.   (1)

Then, as the camera moves closer to or farther away from the object, the triangle similarity is applied to determine the distance D' of the object to the camera:

D' = (W × F) / P.   (2)
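A minimal sketch of this calibrate-then-estimate procedure is shown below; the numeric values are illustrative only.

def focal_length_px(known_width, known_distance, measured_width_px):
    """Calibration step: perceived focal length F = (P * D) / W  (Eq. 1)."""
    return (measured_width_px * known_distance) / known_width

def distance_to_camera(known_width, focal_px, measured_width_px):
    """Estimation step: D' = (W * F) / P  (Eq. 2)."""
    return (known_width * focal_px) / measured_width_px

# illustrative calibration: an object 11 inches wide, placed 24 inches away,
# appears 248 pixels wide in the reference image
F = focal_length_px(known_width=11, known_distance=24, measured_width_px=248)

# later, the same object appears 124 pixels wide -> roughly twice as far
print(round(distance_to_camera(11, F, 124), 1))  # ~48.0 inches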


4.2 Distance Calculation Method Using Sensor Height

In this method, we calculate the distance using the ratio of the size of the object in the picture to the height of the object in real life, which is the same as the ratio between the focal length and the distance between object and camera. So, the simple formula for getting the distance of the object from the camera is as follows:

Distance (mm) = (focal length (mm) × real height of the object (mm) × camera frame height in device (pixels)) / (image height (pixels) × sensor height (mm)).   (3)
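The same relation can be written as a small helper, assuming that "image height" in Eq. 3 denotes the object's height in the image in pixels; the numeric values are illustrative only.

def distance_mm(focal_mm, real_height_mm, frame_height_px, object_height_px, sensor_height_mm):
    """Eq. 3: distance = (f * H_real * frame_height_px) / (object_height_px * sensor_height_mm)."""
    return (focal_mm * real_height_mm * frame_height_px) / (object_height_px * sensor_height_mm)

# illustrative values: 4.2 mm lens, a 1700 mm tall person occupying 600 px
# of a 720 px high frame, 3.5 mm sensor height
print(round(distance_mm(4.2, 1700, 720, 600, 3.5)))  # ~2448 mm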

4.3 Distance Calculation Method Using Law of Sines

In this method, we use the law of sines to find the distance. The law of sines for a triangle ABC with sides a, b, and c opposite to the angles A, B and C, respectively, is:

sin A / a = sin B / b = sin C / c   and   a / sin A = b / sin B = c / sin C.   (4)

So, the law of sines says that in a single triangle, the ratio of each side to its corresponding opposite angle is equal to the ratio of any other side to its corresponding angle.

5 Experimental Results In this section, we provide experimental results concerned with several aspects of the obstacle distance calculation method. We use only 2D images that are captured by a smartphone camera at a resolution of 1280 × 720 pixels. In order to evaluate the performance of the used distance calculation method, we use the relative error to measure the error between the measured distance and the actual distance. We use a set of images which have been captured by a smartphone camera. We apply the different distance calculation methods on these images. Then, we evaluate the experimental results of the distance calculation methods with the help of the relative error. Suppose the viewing distance measured by the used method is f and the actual viewing distance is g. The relative error is calculated by:

δ = (f − g) / f × 100%.   (5)

A set of experimental results for the viewer at different viewing distances, for the three methods, is shown in Tables 1, 2 and 3. The viewer moves along the optical axis of the camera from far to near. As can be seen from the experimental results of the three methods, the relative error of the first method can be controlled within −0.36 to −1.81%, while the relative error of the second method is from −11.17 to −13.48% and the relative error of the third method reaches 23.07%. The error fluctuation of the first method is small, which illustrates the high accuracy of the calculated viewing distance. Since the


first method is the closest to the real distance, we select this method to calculate the distance of the obstacle and integrate it into ENVISION V2.

Table 1. The experimental results and error of the first method.

Experiment number | Actual distance (inch) | Detected viewing distance (inch) | Relative error (%)
1 | 20 | 19.643 | −1.8174
2 | 35 | 34.162 | −2.453
3 | 60 | 59.783 | −0.36298

Table 2. The experimental results and error of the second method.

Experiment number | Actual distance (inch) | Detected viewing distance (inch) | Relative error (%)
1 | 20 | 17.733 | −12.784
2 | 35 | 30.840 | −13.488
3 | 60 | 53.968 | −11.177

Table 3. The experimental results and error of the third method.

Experiment number | Actual distance (inch) | Detected viewing distance (inch) | Relative error (%)
1 | 40 | 52.349 | 23.077
2 | 46 | 57.107 | 19.298
3 | 54 | 63.77 | 14.286

6 Conclusion In this paper, we use existing distance calculation methods to determine the best method for computing the distance, in order to help visually impaired people navigate safely in an unknown environment. Thus, we present, test and evaluate three distance calculation methods with the use of a single camera and 2D images. The triangle similarity method is selected, since the experimental results show that the relative error of this method can be controlled within [−0.36 to −1.81%]. Future works will focus on applying and evaluating the proposed distance calculation method in real time, and on integrating it into the ENVISION V2 system.


References 1. Tapu, R., Mocanu, B., Zaharia, T.: A computer vision-based perception system for visually impaired. Multimed. Tools Appl. 76, 1–37 (2016) 2. Tapu, R., Mocanu, B., Bursuc, A., Zaharia, T.: A smartphone-based obstacle detection and classification system for assisting visually impaired people, pp. 444–451. IEEE (2013) 3. Alshehri, M.A., Jarraya, S.K., and Ben-Abdallah, H.: A mobile-based obstacle detection method: application to the assistance of visually impaired people, pp. 555–564. Springer (2016) 4. Khenkar, S., Alsulaiman, H., Ismail, S., Fairaq, A., Jarraya, S.K., Ben-Abdallah, H.: ENVISION: assisted navigation of visually impaired smartphone users. Procedia Comput. Sci. 100, 128–135 (2016) 5. Lakshmanan, R., Senthilnathan, R.: Depth map based reactive planning to aid in navigation for visually challenged, pp. 1229–1234. IEEE (2016) 6. Huang, H.-C., Hsieh, C.-T., Yeh, C.-H.: An indoor obstacle detection system using depth information and region growth. Sensors 15, 27116–27141 (2015) 7. Poggi, M., Mattoccia, S.: A wearable mobility aid for the visually impaired based on embedded 3D vision and deep learning, pp. 208–213. IEEE (2016) 8. Vlaminck, M., Hiep, Q.L., Vu, H., Veelaert, P., Philips, W.: Indoor assistance for visually impaired people using a RGB-D camera, pp. 161–164. IEEE (2016) 9. Vlaminck, M., Jovanov, L., Van Hese, P., Goossens, B., Philips,W., Pizurica, A.: Obstacle detection for pedestrians with a visual impairment based on 3D imaging, pp. 1–7. IEEE (2013) 10. Hoang, V.-N., Nguyen, T.-H., Le, T.-L., Tran, T.-H., Vuong, T.-P., Vuillerme, N.: Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect. Vietnam J. Comput. Sci. 2, 1–13 (2016) 11. Sampath, D., Wimalarathne, G.: Obstacle classification through acoustic echolocation, pp. 1–7. IEEE (2015) 12. Takizawa, H., Yamaguchi, S., Aoyagi, M., Ezaki, N., Mizuno, S.: Kinect cane: object recognition aids for the visually impaired, pp. 473–478. IEEE (2013)

1D Signals Descriptors for 3D Shape Recognition

Kaoutar Baibai1, Mohamed Emharraf1, Wafae Mrabti2, Youssef Ech-choudani3, Khalid Hachami1, and Benaissa Bellach1

1 LSE2I Laboratory, School of Applied Sciences Engineering, Mohamed I University, Oujda, Morocco
{k.baibai,m.emharraf}@ump.ac.ma
2 IIAN Laboratory, Faculty of Sciences Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, Fez, Morocco
[email protected]
3 Laboratory of Engineering and Materials Science, University of Reims Champagne-Ardenne, Reims, France
[email protected]

Abstract. In this paper, we propose a new 3D shape recognition approach. This approach recognizes the shape of a 3D object based on the processing of 1D signals, in order to reduce calculation complexity. The recognition is based on the calculation of shape descriptors from 1D signals. The first step of our approach is to convert the 3D shape into 1D signals using a multi-line projection. These signals represent information on the third dimension of the object (Z). The next step consists in calculating the 1D descriptors of these signals. These descriptors are used as input data of a classifier based on Euclidean distance to recognize the 3D object. The results of testing the proposed approach give an accuracy of 99.1%. This approach offers a simple, fast and efficient 3D shape recognition methodology, which makes our approach competitive for real-time applications.

Keywords: Recognition · 3D shape · 1D signal · Descriptors · Classification

1 Introduction The basic concept of feature-based object recognition strategies is as follows [12, 13] and [14]: each input data is searched for a specific entity type; this feature is then compared to a database containing objects to verify if there are recognized objects [1, 2]. Feature-based techniques can be grouped into two approaches: 2D and 3D approaches. 2D approaches are based on the extraction of characteristics from 2D images. 3D approaches require passing through 3D models. Approaches based on 2D descriptors have been the subject of extensive research in the field of computer vision [3]. Among the most used descriptors are the SURF and SIFT descriptors. The SIFT descriptors are based on the calculation of the gradient at each point at the scale of the region surrounding the keypoint [3]. The SURF descriptor describes a


keypoint by a set of main orientations of the rectangular window surrounding this point [7]. The approaches based on 3D descriptors are essentially based on the analysis of the 3D model to recognize. A standard descriptor used in the literature is the spin image [8], where the set of points in the neighborhood of the point of interest is mapped to a grid similar to an image. This idea is enlarged by the signature of histograms of orientations (SHOT) [9], which constructs a histogram of gradients in a support region around a keypoint instead of distances of points. The two existing approaches have some limitations, especially for real-time applications. For 2D approaches, the analysis of 3D data is done only from the view-based aspect. This is not sufficient for understanding 3D shapes, because when converting 3D shapes into 2D images, 3D spatial geometry information is inevitably lost. For 3D approaches, the complexity of 3D data processing lies in the reconstruction of the 3D model followed by the analysis of this model in order to recognize it. To remedy these problems, we propose a new approach that consists in the shape recognition of 3D objects based on descriptors of 1D signals. The methodology followed for pattern recognition consists firstly in extracting the 1D signals through these steps:
• Step 1: Projection of a multi-line pattern on the surface of the 3D object [11],
• Step 2: Acquisition of the 2D image,
• Step 3: Extraction of the lines from the 2D image and conversion of each line to a 1D signal.
After determining the 1D signals, the next step consists in detecting the characteristic points, followed by calculating the descriptors of each characteristic point. These descriptors are the elements of the characteristic vector of the 3D object to be recognized, which will subsequently be the input data of the classifier. To test the validity of our approach we used a kNN classifier based on Euclidean distance.

2 1D Signal Descriptors In order to determine the useful information of 1D signals to know the 3D objects, our approach consists in determining the characteristic vector of the object. This vector is constituted by the descriptors of 3D object. The basic concept of object recognition strategies feature-based is the following: For each object this technique consists in the detection of the keypoint then defines for each point the corresponding descriptors to build a vector which characterizes the object [4]. These descriptors may be local or global depending on how the signal is processed. 2.1

2.1 Keypoints Extraction

A keypoint is an instance of the 1D signal characterizing the shape of the 3D object. A signal can contain several keypoints, and the number of these points increases if the object has a great shape variation. Figure 1 shows an example of keypoints. Among the techniques for detecting keypoints of 1D signals are the methods based on the calculation of the local extremums of a signal. For the 1D signals used here, these extremums give relevant information on the shape of the 3D object. Local extremums are calculated by defining a fixed-size window that traverses the signal elements. For each instance the local maximum is calculated, and the same procedure is followed to determine the minimums.

Fig. 1. Example of 1D signal keypoints detection.
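A minimal sketch of this sliding-window extremum detection is given below; the window size is a free parameter chosen here for illustration.

import numpy as np

def keypoints(signal, window=5):
    """Return indices of local maxima and minima of a 1D signal,
    using a fixed-size window centred on each sample."""
    z = np.asarray(signal, dtype=float)
    half = window // 2
    maxima, minima = [], []
    for i in range(half, len(z) - half):
        neighborhood = z[i - half:i + half + 1]
        if z[i] == neighborhood.max() and z[i] > neighborhood.min():
            maxima.append(i)
        if z[i] == neighborhood.min() and z[i] < neighborhood.max():
            minima.append(i)
    return maxima, minima

z = np.sin(np.linspace(0, 4 * np.pi, 100))
ma, mi = keypoints(z)
print(len(ma), len(mi))  # 2 maxima and 2 minima over two periods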

2.2 Keypoints Descriptor

After determining the keypoints, an important step is to calculate the keypoint descriptors. The global descriptive vector Vdesc of an object (see Eqs. 1 and 2) is composed of the set of characteristic vectors vi of each 1D signal, where vi is the combination of two types of descriptors, local and global. Here Vdesc is the descriptive vector of the object, vD1G and vD1L are the global and local descriptors, and i is the signal index, i ∈ {1, 2, 3, …, 19}. For each object, the descriptive vector Vdesc is constituted by the set of descriptors of the 19 signals, vi.

Vdesc = {v1, v2, v3, …, v19}   (1)

vi = [vD1G, vD1L]   (2)

2.2.1 Global Descriptors To describe the general shape of the 3D object, we use global descriptors. As global descriptors of a 1D signal we use two quantities: the mean of the signal and the total number of maximums [10].
• The mean is the arithmetic mean. It can be represented by Eq. 3:

mean = (1/n) × Σ xi.   (3)

• The total number of maximums represents the number of local maximums of the 1D signal, see Eq. 4:

nbreM = size(ma).   (4)


2.2.2 Local Descriptors For our study, we use as local descriptors the extremums that reflect the maximum of information on the Z axis (Fig. 2).

Fig. 2. Example of extremums extraction (maximums and minimums).

• The extremums are the local maximums and minimums. Let Z be a function defined on a set D, let I be an element of D (see Fig. 1), and let x be a real of I.
• ma is a maximum of Z on I if, for every x of I, Z(x) ≤ ma:

ma = max(Z(x)), ∀ I ∈ D.   (5)

• mi is a minimum of Z on I if, for every x of I, Z(x) ≥ mi:

mi = min(Z(x)), ∀ I ∈ D.   (6)

• Curvature is calculated by determining the gradient of the signal. It is a vector quantity indicating how a physical quantity varies in space. In our case study this quantity is represented by the third dimension Z, so the gradient gives relevant information on the variation of the shape of the 3D object. As descriptors we take the gradient at the local maxima ma, where c is the difference between the elements of the vector Z and cm is the gradient at the maximums:

c = ∂Z/∂x → cm = c(ma).   (7)
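Under the definitions above, the per-signal characteristic vector can be assembled as in the sketch below (reusing a keypoint detector such as the one sketched for Sect. 2.1); the fixed-length padding of the local part is an assumption made here so that vectors of different signals remain comparable.

import numpy as np

def signal_descriptor(z, maxima, minima, n_local=5):
    """Global part: mean and number of maxima (Eqs. 3-4).
    Local part: extrema values and gradient at the maxima (Eqs. 5-7),
    truncated/padded to a fixed length so vectors stay comparable."""
    z = np.asarray(z, dtype=float)
    grad = np.gradient(z)                      # c = dZ/dx
    global_part = [z.mean(), float(len(maxima))]
    local_part = list(z[maxima]) + list(z[minima]) + list(grad[maxima])
    local_part = (local_part + [0.0] * (3 * n_local))[:3 * n_local]
    return np.array(global_part + local_part)

# the object descriptor Vdesc concatenates the vectors of the 19 projected lines:
# vdesc = np.concatenate([signal_descriptor(z, *keypoints(z)) for z in lines])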

3 Objects Classification Classification is the step of recognizing an unknown object from its description by assigning it a class; in our case, the 3D object under study. The 1D signal classification technique used is kNN based on Euclidean distance. The kNN algorithm is one of the simplest classification algorithms and one of the most used learning algorithms. kNN has already been used in statistical estimation and pattern recognition since the early 1970s. To determine which of the K instances of the learning data set are closest to a new input, a distance


measurement is used. For real-valued input variables, the most popular distance measure is the Euclidean distance; other distance measures also exist [5, 6]. As mentioned previously, each 3D object is characterized by a vector formed by a set of descriptors used as input of the classifier; it is also used as parameters to form the kNN classifier distance matrix. The process followed to classify a 3D object is as follows:
• Step 1 – Learning: it consists in making the classifier learn a set of data (vectors) to build a general model. These vectors are calculated from the 1D signals of each object according to different view angles. The learning base consists of 6 classes, i.e. 6 3D objects.
• Step 2 – Validation: it consists in validating the proposed decision model. Its main purpose is to obtain a good performance of the classification model, and as an evaluation method we use k-fold cross-validation with k = 10. It consists in subdividing the data into k subsets. Then, one subset is selected as a validation set and the k − 1 others as learning data. This operation is repeated k times. The accuracy of the prediction is calculated over the k folds (see Eq. 8), where mi is the number of correctly predicted data at iteration i, i is the iteration number from 0 to 10, and n is the total number of data:

Precision = (1/n) × Σ mi.   (8)
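A compact sketch of this classification and validation protocol using scikit-learn is shown below; the random data is only there so the snippet runs end to end, standing in for the real descriptor vectors of the 6 objects.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# X: one descriptive vector Vdesc per view of each of the 6 objects, y: object labels.
# Random data is used here only so the snippet runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = np.repeat(np.arange(6), 10)

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
scores = cross_val_score(knn, X, y, cv=10)   # 10-fold cross-validation
print("mean accuracy:", scores.mean())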

4 Results and Conclusion In this section, we present our test work, which proves the effectiveness and feasibility of the proposed approach. The test work is performed on a 3D shape set composed of different shapes; Fig. 3 shows examples of the studied shapes. The first step consists in extracting the signals. To do this, a multi-line projection is applied on the 3D model, then the 1D signals are extracted; Fig. 4 shows an example. The descriptors of these signals are calculated by first determining the keypoints. For each keypoint, the corresponding descriptors are calculated. The characteristic vector of the object is constituted by all these descriptors. Then, for each object, we define a set of vectors calculated from several view angles that constitute a learning base for the kNN classifier. To evaluate the effectiveness of the decision model, we use k-fold cross-validation with k = 10. By applying our proposed approach on six 3D models of different shapes, we obtain an accuracy of 99.1%. The test database is constituted by 10 vectors corresponding to 10 other view angles.

Fig. 3. Example of 3D model.



Fig. 4. Example of 3D model (a), 1D signals extraction of 3D model (b).

This article presented a new approach for 3D shape recognition, based on the descriptors of 1D signals. The test work shows the feasibility of our approach. It offers many advantages, including the following:
• Rapidity compared to other 2D/3D approaches, since the processing is done directly on 1D data.
• Robustness: the test results of the approach gave an accuracy of 99.1%.
• Efficiency: given the simplicity of the calculations carried out.
As future work, we are looking to improve prediction accuracy by adding and improving descriptors.

References 1. Andreopoulos, A., Tsotsos, J.K.: 50 years of object recognition: directions forward. Comput. Vis. Image Underst. 117, 827 (2013) 2. Matas, J., Obdrzalek, S.: Object recognition methods based on transformation covariant features. In: 12th European Signal Processing Conference (2004) 3. Lowe, D.G.: Distinctive image features from scale invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004) 4. Feature-Based Recognition of Objects (1993) 5. Fix, E., Hodges, J.: Discriminatory analysis. Nonparametric discrimination: Consistency properties. Technical Report 4, US AF School of Aviation Medicine, Randolph Field, Texas (1951) 6. Bugres, C.J.C.: A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 2, 121–167 (1998) 7. Bay, H., Tuytelaars, T., Surf, G.L.V.: Speeded up robust features. In: European Conference on Computer Vision (ECCV 2006), pp. 404–417 (2006) 8. Johnson, A.E., Hebert, M.: Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans. Pattern Anal. Mach. Intell. 21, 433–449 (1999) 9. Tombari, F., Salti, S., Di Stefano, L.: Unique signatures of histograms for local surface description. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence on Lecture Notes in Bioinformatics), vol. 6313, pp. 356–369 (2010)


10. Susto, G.A., Cenedese, A., Terzi, M.: Time-series classification methods: review and applications to power systems data. In: Big Data Application in Power Systems, pp. 179– 220. Elsevier (2018) 11. Baibai, K., Elfakhouri, N., Bellach, B.: 3D acquisition system for 3D forms recognition. In: 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), pp. 1–6. IEEE (2017) 12. Shah, S.A.A., Bennamoun, M., Boussaid, F.: Keypoints-based surface representation for 3D modeling and 3D object recognition. Pattern Recogn. 64, 29–38 (2017) 13. Chen, J., Fang, Y., Cho, Y.K.: Performance evaluation of 3D descriptors for object recognition in construction applications. Autom. Constr. 86, 44–52 (2018) 14. Zhi, S., Liu, Y., Li, X., Guo, Y.: Toward real-time 3D object recognition: a lightweight volumetric CNN framework using multitask learning. Comput. Graph. 71, 199–207 (2018)

Dynamic Textures Segmentation and Tracking Using Optical Flow and Active Contours

Ikram Bida and Saliha Aouat

Laboratory for Research in Artificial Intelligence (LRIA), Department of Computer Science, University of Science and Technology Houari Boumediene, Bab-Ezzouar, Algiers, Algeria
{ibida,saouat}@usthb.dz

Abstract. A large number of scenes composing our visual world are perceived as dynamic textures, displaying motion patterns with a certain spatial and temporal regularity such as swaying trees, smoke, fire, human movements, flowing water and others. In real scenes, encountering dynamic texture superimposition is quite frequent, in which, we are challenged to separate each region aside in order to improve their analysis. This research paper presents a novel approach for segmenting then tracking dynamic textures in video sequences, using optical flow and static manual active contours, which we adapt to be dynamic and fully automatic. Experiments were conducted on DynTex and YUP++ datasets, where the achieved results demonstrated a success of the proposed approach to segment and track dynamic textures effectively. Keywords: Dynamic textures · Static textures · Segmentation Active contours · Motion estimation · Optical flow

1

Introduction

Dynamic textures analysis is an attractive and current computer vision study area that is receiving serious attention nowadays due to its crucial role as a powerful component in video content interpretation. It is applied in vast areas of applications like indexing and video retrieval, video surveillance and environmental monitoring, spatio-temporal segmentation, dynamic background subtraction, dynamic texture tracking and synthesis. Despite the variety and efficiency of these applications, finding a universal definition for dynamic texture appears to be a real issue for the computer vision community. While browsing the literature, we came across diverse definitions, although Dubois's [1] seems to be the most profound and complete one: they defined dynamic texture as a textured structure that can be rigid or deformable.


Characterized by repetitive spatial and temporal phenomena, this structure has a deterministic or anarchic movement induced by a force generated internally, received externally or produced by the camera movement. Static segmentation, or single-image segmentation, is a low-level computer vision problem that operates on the digital image at the pixel level; it consists of partitioning the image into a set of homogeneous regions of related pixels with common properties, for the purpose of identifying each region. In this context, various image segmentation methods were introduced, among which we cite two main families: the contour-based family, like deformable methods (also called active contours), and the region-based family, such as clustering, classification, region growing, watershed and others [2]. Segmenting static textures figuring in single images is a very intriguing task given the type of images, type of applications, and the different contexts. However, expanding this notion temporally, i.e. segmenting the progression of textures over time (named dynamic texture segmentation), is a more challenging task, and it encounters major difficulties, as dynamic textures present random motions and changes in shape that are very complex to segment, or moreover to track, particularly when the video background is cluttered and also textured. This work addresses the problem of segmenting dynamic textures present in video sequences, where we propose an original approach that segments on spatial and temporal levels. We offer a complete system that adapts and enhances one of the very efficient static segmentation techniques, static active contours, to qualify it as automatic and dynamic. The rest of the paper is structured as follows. Section 2 reviews globally the related works. Section 3 is dedicated to the static active contours background, basics and the different implementation methods; in addition, the Mukherjee static technique is detailed, to be used and adapted in the suggested system. The proposed approach is presented in Sect. 4 and experimentations are exposed in Sect. 5. We end this work with a conclusion and some open perspectives.

2

Related Works

In the literature, a great variety of methods have been suggested to segment dynamic textures. Mainly, we categorize them into four families, then we introduce a new fifth family to which our proposed method belongs: mathematical model-based techniques, motion-based techniques, multi-scale transform approaches and feature-based techniques. Mathematical model-based techniques model the dynamic textures and then use the model's parameters as descriptors for segmenting the different regions. Szummer and Picard [3] represented a sequence of dynamic texture images using a space-time autoregressive (STAR) model, where the luminance of each temporal pixel was estimated using a linear combination of the spatio-temporal neighborhood. Ghanem and Ahuja [4] utilized Hidden Markov Models to model dynamic textures. Markov Random Fields were also utilized in [5,6] for dynamic texture segmentation. Doretto et al. [7] introduced another model named Linear Dynamical System (LDS), which was applied for dynamic texture segmentation.


Motion-based approaches are widely employed [8–10]; they relate dynamic textures to motion data rather than scanning static images. Feature-based techniques extract image features to characterize dynamic textures: Zhao and his colleagues [11] extended the static Local Binary Pattern (LBP) method to the temporal scale to be used for spatio-temporal segmentation, known as Volume Local Binary Patterns. Rahman and Murshed [12] utilized features based on spatial and temporal co-occurrence matrices for dynamic texture segmentation. Multi-scale transform approaches are often used for signal processing; they analyze the different frequencies composing the textures. In [13], the 3-D discrete Fourier transform was applied to segment dynamic textures. Dubois et al. [14] segmented the different spatio-temporal directions of the dynamic texture according to the orientation and scale of the 3-D curvelet coefficients. Multi-wavelet transforms [15] were also utilized to segment the dynamic texture into disjoint regions. Our work introduces a new family of dynamic texture segmentation approaches which we believe has never been explored before: the geometric family, where we suppose that dynamic textures can be represented by moving contours whose motion can be tracked in time.

3

Static Active Contour Segmentation: Background, Basics and Models

Active contours have been very successful, particularly in the field of image processing, since the publication of Kass's team [16], in which they introduced the first model, named snakes.

3.1 Principle

The idea of static active contours is to evolve an initial 2D curve, as shown in Fig. 1; it is positioned manually on the image's area of interest. The curve moves and slowly follows the region contours according to a partial differential equation, generally deduced from an energy functional to be minimized/maximized, which is based on the concept of internal and external energies [17]. Utilizing active contours for a given application requires defining an energy functional to be minimized and then deducing an evolution equation that optimizes this energy functional using an appropriate method.

Fig. 1. Evolution process of an active contour towards the region of interest


3.2 Active Contour Energy Minimization/Maximization

The energy functional attached to any active contour Γ is as follows:

E(Γ) = Einternal(Γ) + Eexternal(Γ)   (1)

where Einternal is the internal energy and Eexternal is the external energy.
Internal Energy. Called the regularization energy, it defines the curve's rigidity and the cohesion of its points. It thus prevents the individual points of the active contour from getting too far from their neighbors [18].
External Energy. It guides the active contour's line towards the actual contours present in the image, considering either information present on the contour (defined by a high value of the gradient) or information on the regions delimited by the contour [18].

3.3 Active Contours Implementation Models

A multitude of active contour models have been proposed, belonging to two main classes of representation:
– Parametric (explicit) models, such as snakes and the Gradient Vector Flow snake.
– Geometric (implicit) models, such as the geodesic and Mumford–Shah models.
In this work, we are interested in the second class of models, particularly in Mumford–Shah inspired implementations like Mukherjee's [19], formulated in a level set framework. More details about the level set formulation can be found in [20].
Mukherjee Static Active Contour. This is one of the latest static active contour methods, proposed in 2015 [19]. It models the image as a set of constant-lighting regions, and performs a two-class segmentation using smooth functions C1m(x) and C2m(x) referring to the intensities of two regions separated by a contour. They are represented as combinations of basic Legendre linear functions as follows:

C1m(x) = Σk αk Pk(x),   C2m(x) = Σk βk Pk(x)   (2)

where Pk is a multidimensional Legendre polynomial of degree k. A full explanation of the energy functional and contour evolution of Mukherjee's approach is found in [19].

4

The Proposed Automatic and Dynamic Active Contours for Dynamic Textures Segmentation

This section is dedicated to portraying and pointing out the detailed lines of the proposed approach, named Automatic and Dynamic Active Contours (ADAC).


Fig. 2. The proposed ADAC main steps

As already mentioned, traditional active contours are static since they act on single images, and manual because they require a user intervention for the initialization phase (i.e. positioning an initial contour in the vicinity of the region of interest). Therefore, we have made fundamental modifications and improvements to these models by qualifying them to be dynamic and fully automatic, for the purpose of segmenting and tracking dynamic textures. ADAC follows three principal steps, as schematized in Fig. 2: motion estimation, dynamic texture segmentation and finally tracking. In what follows, the basics of the proposed framework are presented, and each step is detailed separately.

4.1 ADAC Algorithm and Mechanism

The general algorithm of the ADAC system is as follows:

Input: Video frames (containing dynamic texture).
Output: Segmented frames.

frame_i = 2;
while ¬(Existing motion) & ¬(Existing segmented area) do
    Read current frame (frame_i);
    Convert it to grayscale;
    Estimate its motion between two frames: (frame_i − 1, frame_i);
    if (Existing motion) then
        Compute bounding box of motion fields;
        Calculate gravity center;
        Position an active contour on the gravity center;
        Deform it over the area;
        if (Existing segmented area) then
            Save segmented area's contour;
            frame_i++;
        end
    end
    frame_i++;
end
foreach frame_i in remaining frames do
    Position the saved active contour on top of frame_i;
    Deform it over the area;
    Save segmented area's contour;
    frame_i++;
end

Algorithm 1. The Proposed ADAC Algorithm


Considering a sequence of images, the ADAC system first estimates the motion fields using the optical flow method; it is applied in a loop on the first frames until a motion is detected. This step automates the initialization process: instead of choosing the region of interest and positioning the initial curve manually, we seek to spot the fields that contain moving textures automatically. Subsequently, this positioned active contour is deformed over the identified motion-field area in order to segment the dynamic texture. Finally, the segmented area is followed in the remaining frames. For this goal, we suggest using the segmented area's position (represented by a contour) in the previous frame as the new initial position of the active contour in the current frame, and thus deforming it on the new area of interest. A simplified explanation of the algorithm is presented in the diagram of Fig. 3.

4.2 ADAC Detailed Steps

Motion Estimation. Manual initialization of active contours cannot be retained within fully automatic frameworks. For this reason, we propose automatic active contours that are self-initialized on estimated motion fields. Several existing methods calculate optical flow, aiming to estimate displacement over a sequence of images. They integrate information on the spatial and spatio-temporal neighborhood. The flow is calculated between two frames taken at t and t + Δt, for each pixel at a location (x, y, t) of intensity I(x, y, t). Finding the optical flow resides in solving the following constraint equation: Ix U + Iy V = −It

(3)

where $U$ and $V$ are the horizontal and vertical motion vectors, respectively. In the ADAC system, we exploit the Horn and Schunck method [21] to estimate these motion fields. It computes the spatio-temporal image brightness derivatives $(I_x, I_y, I_t)$, then minimizes Eq. 3 to find $U$ and $V$ at each pixel of the image, via the following iterative update equations:

$$U^{k+1} = \bar{U}^{k} - \frac{I_x\,(I_x \bar{U}^{k} + I_y \bar{V}^{k} + I_t)}{\alpha^2 + I_x^2 + I_y^2}, \qquad V^{k+1} = \bar{V}^{k} - \frac{I_y\,(I_x \bar{U}^{k} + I_y \bar{V}^{k} + I_t)}{\alpha^2 + I_x^2 + I_y^2} \qquad (4)$$

where $k$ denotes the previous iteration, $k+1$ the current one to be computed, $\bar{U}^{k}$ and $\bar{V}^{k}$ are neighborhood averages around pixel $(x, y)$, and $\alpha$ is a smoothing weight (Fig. 4).
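For concreteness, the iterative scheme of Eq. 4 can be written compactly with NumPy/SciPy as in the sketch below. This is an illustrative reimplementation under standard Horn and Schunck kernel choices, not the authors' code; the derivative kernels and the default values of alpha and n_iter are assumptions.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    I1, I2 = I1.astype(float), I2.astype(float)
    # spatio-temporal brightness derivatives (Ix, Iy, It)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)
    # kernel producing the neighborhood averages U_bar, V_bar
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0.0, 1/6], [1/12, 1/6, 1/12]])
    U = np.zeros_like(I1)
    V = np.zeros_like(I1)
    for _ in range(n_iter):
        U_bar, V_bar = convolve(U, avg), convolve(V, avg)
        # update of Eq. 4, with the shared term factored out
        common = (Ix * U_bar + Iy * V_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        U = U_bar - Ix * common
        V = V_bar - Iy * common
    return U, V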


Fig. 3. Diagram schematizing ADAC algorithm

Dynamic Texture Segmentation. Once the motion fields are estimated, we proceed to the segmentation phase. A bounding box is computed over the moving area, and the active contour's initial curve is positioned at its gravity center, as follows (Fig. 5). Hereafter, we compute the evolution equation according to Mukherjee's method [19] in order to update and evolve the initial curve. In our implementation, the contour's points are given 7 attempts to converge when no changes are noticed. Figure 6 demonstrates the curve evolution procedure over a Waterfall scene.
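One way to realize this convergence check is sketched below, assuming scikit-image's morphological Chan-Vese contour as a stand-in for Mukherjee's model [19]: the evolution is run in short bursts and declared converged once the segmented mask stops changing for 7 consecutive attempts. The function name and the step/max_iter values are illustrative.

import numpy as np
from skimage.segmentation import morphological_chan_vese

def evolve_with_patience(gray, init_level_set, patience=7, step=10, max_iter=500):
    # Evolve the contour in bursts of `step` iterations and stop once the
    # segmented mask has not changed for `patience` consecutive attempts.
    level_set, unchanged = init_level_set, 0
    for _ in range(0, max_iter, step):
        new_level_set = morphological_chan_vese(gray, step, init_level_set=level_set)
        if np.array_equal(new_level_set, level_set):
            unchanged += 1
            if unchanged >= patience:  # no changes noticed for 7 attempts
                break
        else:
            unchanged = 0
        level_set = new_level_set
    return level_set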

Fig. 4. Samples of motion estimation by the Horn and Schunck method: sequence 648ea10.avi of DynTex and sequence Street static cam 29 01.mp4 of YUP++


Fig. 5. Initialization process of dynamic texture segmentation: (a) Estimated motion fields, (b) Gravity center of bounding box, (c) Curve positioning


Fig. 6. Active contour’s evolution: (a1) Curve at iteration 30, (a2) Iteration 90, (a3) Iteration 150, (a4) Iteration 250 (convergence), (b) Segmented texture foreground, (c) Segmented texture mask

Dynamic Texture Tracking. The implemented active contours qualify as dynamic through their ability to track moving textures: the tracking strategy reuses the segmented contour from the previous frame as the new initial contour in the current frame, after applying an erosion to diminish its size. We point out that the displacement amplitude of the tracked dynamic textures will certainly affect the segmentation results; consequently, large displacements between successive images can cause the active contour to diverge, since the initialization in the current image is made with respect to the previous one.
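A minimal sketch of this tracking step is given below, again with OpenCV and a morphological Chan-Vese contour standing in for the paper's active contour model; the erosion kernel size and iteration count are assumptions.

import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

def track_next_frame(prev_mask, curr_gray):
    # Erode the previous frame's segmented mask to diminish its size, so the
    # initial contour starts inside the moving texture ...
    kernel = np.ones((5, 5), np.uint8)
    init_ls = cv2.erode(prev_mask.astype(np.uint8), kernel, iterations=1)
    # ... then deform it over the new area of interest in the current frame.
    return morphological_chan_vese(curr_gray, 100, init_level_set=init_ls)

# Usage over the remaining frames: mask = track_next_frame(mask, gray)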

5 Experimentation and Results

In this section, we illustrate some of the obtained results (Figs. 7 and 8). A variety of scenes was selected, containing different dynamic textures: indoor and outdoor, natural and artificial, random and repetitive.


Fig. 7. Dynamic textures segmentation samples of DynTex dataset (Swaying tree, Candle’s fire): (a) Segmented contours, (b) Segmented masks


Fig. 8. Dynamic textures segmentation samples of YUP++ dataset (Rushing river, Forest's fire, Sky clouds and Fountain): (a) Segmented contours, (b) Segmented masks


Unfortunately, we did not find an already-segmented version of the DynTex and YUP++ datasets against which we could quantitatively compare our results. Therefore, in this work a visual evaluation was applied to the outcomes in order to judge whether the segmentation is satisfactory.

6 Conclusion

In this contribution, a new approach for dynamic texture segmentation and tracking has been introduced. The proposed complete system, ADAC, performs a spatial segmentation followed by a temporal tracking. It adapts and enhances static active contours to make them fully automatic, using optical flow to initialize the active contour curve and then dynamically tracking the segmented dynamic texture. The presented segmentation and tracking results appear visually satisfying. In future work, we will construct manually segmented ground-truth versions of the DynTex and YUP++ datasets, which will serve as references for quantitative evaluations. Using block-matching motion estimation techniques instead of optical flow would also be interesting for improving the speed of the approach.

References
1. Dubois, S., Péteri, R., Menard, M.: Decomposition of dynamic textures using morphological component analysis. IEEE Trans. Circuits Syst. Video Technol. 22(2), 188–201 (2012)
2. Zaitoun, N.M., Aqel, M.J.: Survey on image segmentation techniques. Procedia Comput. Sci. 65, 797–806 (2015). International Conference on Communications, Management, and Information Technology (ICCMIT 2015)
3. Szummer, M., Picard, R.W.: Temporal texture modeling. In: Proceedings of 3rd IEEE International Conference on Image Processing, pp. 823–826, September 1996
4. Ghanem, B., Ahuja, N.: Extracting a fluid dynamic texture and the background from video. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, June 2008
5. Frantc, V.A., Makov, S.V., Voronin, V.V., Marchuk, V.I., Stradanchenko, S.G., Egiazarian, K.O.: Video segmentation in presence of static and dynamic textures. Electron. Imaging 15, 1–6 (2016)
6. Chen, L., Qiao, Y.: Markov random field based dynamic texture segmentation using inter-scale context. In: 2016 IEEE International Conference on Information and Automation (ICIA), pp. 1924–1927 (2016)
7. Doretto, G., Chiuso, A., Wu, Y.N., Soatto, S.: Dynamic textures. Int. J. Comput. Vis. 51(2), 91–109 (2003)
8. Chen, J., Zhao, G., Salo, M., Rahtu, E., Pietikäinen, M.: Automatic dynamic texture segmentation using local descriptors and optical flow. IEEE Trans. Image Process. 22(1), 326–339 (2013)
9. Sasidharan, R., Menaka, D.: Dynamic texture segmentation of video using texture descriptors and optical flow of pixels for automating monitoring in different environments. In: 2013 International Conference on Communication and Signal Processing, pp. 841–846 (2013)
10. Soygaonkar, P., Paygude, S., Vyas, V.: Dynamic texture segmentation using texture descriptors and optical flow techniques, vol. 328, pp. 281–288 (2015)
11. Zhao, G., Pietikäinen, M.: Dynamic texture recognition using volume local binary patterns. In: Vidal, R., Heyden, A., Ma, Y. (eds.) Dynamical Vision, pp. 165–177. Springer, Heidelberg (2007)
12. Rahman, A., Murshed, M.: Segmentation of dynamic textures. In: International Conference on Computer and Information Technology, pp. 1–6, December 2007
13. Li, J., Chen, L., Cai, Y.: Dynamic texture segmentation using 3-D Fourier transform. In: 2009 Fifth International Conference on Image and Graphics, pp. 293–298, September 2009
14. Dubois, S., Péteri, R., Menard, M.: Segmentation de textures dynamiques: une méthode basée sur la transformée en curvelet 3D et une structure d'octree. In: Colloque GRETSI, Dijon, France, page Id 630, September 2009
15. Kamarasan, M., Savitha, V.: Content based image retrieval using wavelet transforms with dynamic texture (DT). Int. J. Adv. Comput. Eng. Netw. (IJACEN) (2017)
16. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4), 321–331 (1988)
17. Fares, W.A.: Détection et suivi d'objets par vision fondés sur segmentation par contour actif basé région. Ph.D. thesis, Université de Toulouse (2013)
18. Allier, B.: Contribution à la numérisation des collections: apports des contours actifs. Thèse en Informatique, Institut National des Sciences Appliquées de Lyon, Lyon (2003)
19. Mukherjee, S., Acton, S.T.: Region based segmentation in presence of intensity inhomogeneity using Legendre polynomials. IEEE Signal Process. Lett. 22(3), 298–302 (2015)
20. Cremers, D., Rousson, M., Deriche, R.: A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape. Int. J. Comput. Vis. 72(2), 195–215 (2007)
21. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Technical report, Cambridge, MA, USA (1980)

Author Index

A Abada, Driss, 205 Abbasi, Maryam, 512 Abik, Mounia, 672 Achahod, Samir, 77 Adam, Michel, 106 Aguirre-Munizaga, Maritza, 181, 301 Aidara, Chérif Ahmed Tidiane, 188, 194, 233 Aidemark, Jan, 400 Ait El Mouden, Zakariyaa, 594 Ajarroud, Ouafa, 312 Ajhoun, Rachida, 21 Al-Aboody, Nadia, 256 Albuquerque, Daniel, 512 Alharbi, Ayidh, 342 Ali Khoudja, Meriem, 542 Ali, Manar Salama, 679 Aljahdali, Sultan, 613 Allaoui, Rabha, 603 Alnabhan, Najla, 256 Al-Rawishidy, Hamed, 256 Al-shehri, Wafa Saad, 679 Anass, Rabii, 352 Aouat, Saliha, 631, 694 Arias T, Susana A., 1 Askenäs, Linda, 400 Assami, Sara, 21 Ayachi, Yassine, 442 Azamat, Dzhonov, 225 Azizi, Abdelmalek, 384, 393 Azizi, Mostafa, 331, 361, 588 Azizi, Yassine, 331 B Bahaj, Mohamed, 565 Baibai, Kaoutar, 687

Baidada, Mohammed, 39 Barros, Teodoro Alvarado, 572 Bellach, Benaissa, 687 Belouadah, Hocine, 245 Benyamina, Abou El Hasan, 290 Berramla, Karima, 290 Berrich, Jamal, 442 Biaye, Bala Moussa, 188, 194, 233 Bida, Ikram, 631, 694 Bouarfa, Hafida, 542 Boudaa, Abdelghani, 245 Boulouz, Abdellah, 205 Boumezbeur, Insaf, 412 Braik, Malik, 613 C Caldeira, Filipe, 512 Carrillo, Freddy Giancarlo Salazar, 572 Chafiq, Nadia, 118, 126 Chaoui, Allaoua, 290 Cherkaoui, Mohamed, 625 Chiadmi, Dalila, 492, 502 Coulibaly, Amadou, 188, 194, 233 Crisóstomo, João, 154 Cruz-Duarte, Sebastián, 269 Cubillos-Calvachi, Juan, 433 D Dahar, Hind, 451 Dahbi, Kawtar Younsi, 502 Daoud, Moncef, 106 Daoudi, Najima, 21 Delgado, Carlota, 301 Diagne, Serigne, 188, 194, 233 Dimov, Dimo, 373 Dmitry, Alexandrov, 225

© Springer Nature Switzerland AG 2019 Á. Rocha and M. Serrhini (Eds.): EMENA-ISTL 2018, SIST 111, pp. 705–707, 2019. https://doi.org/10.1007/978-3-030-03577-8

706 E Ech-choudani, Youssef, 687 El Beqqal, Mohamed, 361 El Fazazi, Hanaa, 82 El Houm, Yassine, 625 El Mariouli, Majda, 471 Elboukhari, Mohamed, 331 Elgarej, Mouhcine, 82 Elkoubaiti, Houda, 100 Elmaizi, Asma, 521 Elmekki, Hanae, 492 Emharraf, Mohamed, 687 Ettifouri, El Hassane, 442 Ezzouak, Siham, 393 F Fareh, Messaouda, 542 Ferreira, Maria Eduarda, 154 Fournier, Helene, 164 Frison, Patrice, 106 Furtado, Pedro, 512 G Gabli, Mohammed, 282, 588 Gaona-García, Paulo, 433 Gaona-García, Elvis, 269 Gaona-García, Paulo Alonso, 552 Gaona-García, Paulo, 269 Gaye, Khalifa, 188, 194, 233 Gomez A, Hector F., 1, 572 Goosen, Leila, 90, 134, 144 Gouws, Patricia, 144 Gueraoui, Kamal, 367 Guisser, M’hammed, 625 Gutiérrez-Ardila, Carlos, 433 H Hachami, Khalid, 687 Hadi, Saleh, 225 Hajar, Moha, 594 Hajji, Tarik, 384, 460 Hamdane, Mohamed Elkamel, 290 Hammouch, Ahmed, 521 Hendel, Fatiha, 321 Hendel, Mounia, 321 Hernández-Rosas, José, 181 Herrera-Cubides, Jhon Francined, 552 Hosny, Manar, 10 Housni, Mohamed, 118, 126 Hurtado, Jorge Alonso Benitez, 572 I Idri, Ali, 312

Author Index J Jaafar, Hamid, 672 Jaara, El Miloud, 282 Jakimi, Abdeslam, 594 Jamil, Ouazzani Mohammed, 460 Jarraya, Salma Kammoun, 679 K Kechadi, Tahar M., 342 Kerkour Elmiad, Aissa, 662 Khadija, Ouazzani Touhami, 352 Khourdifi, Youness, 565 Kop, Rita, 164 L Laassiri, Jalal, 471 Labbadi, Moussa, 625 Lagos-Ortiz, Katty, 181 Lahmer, Mohamed, 216 Lamharhar, Hind, 492, 502 Lamiri, Abdenaby, 367 Lanet, Jean Louis, 361 Lanet, Jean-Louis, 384 Laskri, Mohamed Tayeb, 652 Llerena, Luis Antonio, 572 Lozada T., Edwin Fabricio, 572

M Makhlouf, Sid Ahmed, 482 Mansouri, Khalifa, 29, 39, 77, 82 Marques, Gonçalo, 424 Martínez V, Miguel A., 1 Martinez, Carlos E., 1 Martins, Pedro, 512 Massaq, Abdellah, 205 Mbise, Esther Rosinner, 56 Mbise, Esther-Rosinner, 171 Meddeber, Hayat, 641 Mermri, El Bekkaye, 282 Mokhtari, Anas, 588 Molyneaux, Heather, 164 Montenegro-Marin, Carlos, 433 Montenegro-Marín, Carlos, 552 Montero, Calkin Suero, 171 Moussetad, Mohamed, 118 Mrabet, Radouane, 100 Mrabti, Wafae, 687 Mwandosya, Godfrey, 56, 171 N Naranjo-Santamaria, Joselito, 572 Nassereddine, Bouchaib, 337

Author Index Ngugi, James, 90 Nhaila, Hasna, 521 O Ordoñez, Richard Eduardo Ruiz, 572 Ouaissa, Mariyam, 216 Ouerdi, Noura, 384 Ounsa, Roudiès, 352 P Palisse, Aurelien, 384 Piedrahita-Gonzalez, Juan, 433 Pitarma, Rui, 154, 424 Poirier, Franck, 29, 39, 77 Q Qbadou, Mohamed, 82 R Rahhou, Adnane, 44, 67 Ramirez-Yela, Joel, 301 Real-Avilés, Karina, 181 Rhattoy, Abdallah, 216 Rossi, Giovanni, 531 Roudies, Ounsa, 451 S Sá, Filipe, 512 Sabrina, Yessaadi, 652 Safsouf, Yassine, 29 Saliha, Assoul, 352 Samadi, Abderrazzak, 82 Sanchez, Natalia Bustamante, 1 Sanchez-Cevallos, Estefania, 1

707 Sarhrouni, Elkebir, 521 Sarr, Edouard Ngor, 233 Sastoque-Mahecha, Marco, 269 Sheta, Alaa, 613 Sinche-Guzmán, Andrea, 181 Suero Montero, Calkin, 56 T Taibi, Mostafa, 603 Talbi, Mohammed, 44, 67 Tomé, Paulo, 512 Touahni, Raja, 603 Toumi, Bouchentouf, 442 Toussaint, Godfried T., 580 Tsonev, Yuliyan, 373 V Varón-Capera, Álvaro, 552 Vásquez-Bermudez, Mitchell, 181 Vera Lucio, Néstor, 301 Vergara-Lozano, Vanessa, 181, 301 W Wanzeller, Cristina, 512 Y Yagoubi, Belabbas, 482, 641 Z Zarnoufi, Randa, 672 Zarour, Karim, 412 Zaydi, Mounia, 337 Zeggwagh, Gamal, 367 Zellou, Ahmed, 312
