Harmony Search and Nature Inspired Optimization Algorithms

The book covers different aspects of real-world applications of optimization algorithms. It provides insights from the Fourth International Conference on Harmony Search, Soft Computing and Applications held at BML Munjal University, Gurgaon, India on February 7–9, 2018. It consists of research articles on novel and newly proposed optimization algorithms; the theoretical study of nature-inspired optimization algorithms; numerically established results of nature-inspired optimization algorithms; and real-world applications of optimization algorithms and synthetic benchmarking of optimization algorithms.




Advances in Intelligent Systems and Computing 741

Neha Yadav · Anupam Yadav · Jagdish Chand Bansal · Kusum Deep · Joong Hoon Kim
Editors

Harmony Search and Nature Inspired Optimization Algorithms Theory and Applications, ICHSA 2018

Advances in Intelligent Systems and Computing Volume 741

Series editor Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland e-mail: [email protected]

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Advisory Board

Chairman
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
e-mail: [email protected]

Members
Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba
e-mail: [email protected]
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
e-mail: [email protected]
Hani Hagras, University of Essex, Colchester, UK
e-mail: [email protected]
László T. Kóczy, Széchenyi István University, Győr, Hungary
e-mail: [email protected]
Vladik Kreinovich, University of Texas at El Paso, El Paso, USA
e-mail: [email protected]
Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan
e-mail: [email protected]
Jie Lu, University of Technology, Sydney, Australia
e-mail: [email protected]
Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico
e-mail: [email protected]
Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil
e-mail: [email protected]
Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland
e-mail: [email protected]
Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong
e-mail: [email protected]

More information about this series at http://www.springer.com/series/11156

Neha Yadav · Anupam Yadav · Jagdish Chand Bansal · Kusum Deep · Joong Hoon Kim



Editors

Harmony Search and Nature Inspired Optimization Algorithms Theory and Applications, ICHSA 2018


Editors

Neha Yadav
School of Engineering and Technology, BML Munjal University, Gurgaon, Haryana, India

Anupam Yadav
Department of Sciences and Humanities, National Institute of Technology, Srinagar, Uttarakhand, India

Kusum Deep
Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India

Joong Hoon Kim
School of Civil, Environmental and Architectural Engineering, Korea University, Seoul, Korea (Republic of)

Jagdish Chand Bansal
Department of Mathematics, South Asian University, New Delhi, India

ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-981-13-0760-7 ISBN 978-981-13-0761-4 (eBook) https://doi.org/10.1007/978-981-13-0761-4 Library of Congress Control Number: 2018943721 © Springer Nature Singapore Pte Ltd. 2019 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

It is a matter of pride that the 4th International Conference on Harmony Search, Soft Computing and Applications (ICHSA 2018) was organized in India for the very first time; earlier editions of this conference were held in South Korea and Spain. This annual event is a joint effort of several reputed institutes: BML Munjal University, Gurugram; National Institute of Technology Uttarakhand; and Korea University. The first and second conferences of the series were held at Korea University, Seoul, Republic of Korea, where Professor Joong Hoon Kim successfully organized both editions at his parent university. The third conference of the series was organized at Tecnalia, Bilbao, Spain. Keeping the legacy of the conference alive, it was a proud moment to organize it in India at BML Munjal University, in collaboration with NIT Uttarakhand, Korea University, and the Soft Computing Research Society, during 7–9 February 2018. The focus of ICHSA 2018 was to provide a common platform for all researchers working in the area of harmony search and other soft computing techniques and their applications to diverse areas such as control systems, data mining, game theory, supply chain management, signal processing, pattern recognition, big data applications, cloud computing, defence, disaster modelling, renewable energy, robotics, water and waste management, structural engineering, etc. ICHSA 2018 attracted a wide spectrum of thought-provoking articles. A total of 117 high-quality research articles were selected to appear in this proceedings. We strongly hope that the papers published in this proceedings will be helpful for improving the understanding of various soft computing methods and will inspire many upcoming researchers in this field as a torchbearer. The real-life applications presented in this proceedings show the contemporary significance and future scope of soft computing methods. The editors express their sincere gratitude to the ICHSA 2018 Chief Patron, Patron, Keynote Speakers, Chairs of the conference, reviewers, and the local organizing committee; without their support, it would have been impossible to maintain the quality and standards of this conference series. We pay our sincere thanks to Springer and its team for their invaluable support in the


preparation and publication of this conference proceedings. Over and above, we express our deepest sense of gratitude to BML Munjal University for facilitating the hosting of the conference.

Neha Yadav, Gurgaon, India
Anupam Yadav, Srinagar (Garhwal), India
Jagdish Chand Bansal, New Delhi, India
Kusum Deep, Roorkee, India
Joong Hoon Kim, Seoul, Korea (Republic of)

Organizing Committee

Chief Patron
Mr. Akshay Munjal, President, BMU

Patrons
Prof. (Dr.) B. S. Satyanarayana, Vice Chancellor, BMU
Prof. (Dr.) M. B. Srinivas, Dean SOET, BMU

Honorary Chair
Prof. Joong Hoon Kim, Korea University, Seoul, South Korea

General Chairs
Prof. Kusum Deep, Professor, Mathematics, IIT Roorkee
Dr. Jagdish Chand Bansal, South Asian University, New Delhi
Dr. Kedar Nath Das, NIT Silchar

Conveners & Organizing Chairs
Dr. Neha Yadav, Assistant Professor, Mathematics, BMU
Dr. Anupam Yadav, Assistant Professor, NIT Uttarakhand


Local Organizing Committee
Dr. Ziya Uddin, BMU
Dr. Rishi Asthana, BMU
Dr. Ranjib Banerjee, BMU
Dr. Akhlaq Husain, BMU
Dr. Kalluri Vinayak, BMU
Dr. Maheshwar Dwivedi, BMU
Dr. Rakesh Prasad Badoni, BMU
Dr. Mukesh Mann, BMU
Dr. Pradeep Arya, BMU
Dr. Sumit Roy, BMU
Prof. Goldie Gabrani, BMU
Dr. Swati Jha, BMU
Dr. Deepti Sharma, BMU
Dr. Vaishali Sharma, BMU
Mr. Nilaish, BMU
Dr. Ashok Kumar Suhag, BMU
Dr. Sanmitra Barman, BMU
Dr. Nandita Choudhary
Dr. Sanjay Kashyap, BMU
Mr. Jai Prakash Bhardwaj, BMU
Ms. Neera Sood, BMU

Publicity Chairs
Mr. Nilaish, BMU
Dr. Shwetank Avikal, Graphic Era University


Advisory Committee

National Advisory Committee
• Prof. Kusum Deep, IIT Roorkee
• Prof. Swagatam Das, ISI Kolkata
• Prof. Laxmidhar Behera, IIT Kanpur
• Prof. Ajit Kumar Verma, IIT Bombay
• Prof. Mohan K. Kadalbajoo, LNMIIT, Jaipur
• Dr. Manoj Kumar, MNNIT Allahabad
• Dr. J. C. Bansal, South Asian University, New Delhi
• Dr. Kedar Nath Das, NIT Silchar
• Dr. Manoj Thakur, IIT Mandi
• Dr. Krishna Pratap Singh, IIIT Allahabad
• Dr. Harish, RTU Kota
• Dr. Amreek Singh, DRDO Chandigarh
• Prof. Sangeeta Sabharwal, NSIT Delhi
• Prof. U. C. Gupta, IIT Kharagpur
• Dr. Nagendra Pratap Singh, NCBS

International Advisory Committee
• Prof. J. H. Kim, Korea University, South Korea
• Prof. Z. W. Geem, Gachon University, South Korea
• Prof. Javier Del Ser, Tecnalia Research and Innovation, Spain
• Dr. Lipo Wang, Nanyang Technological University, Singapore
• Dr. Patrick Siarry, Université de Paris 12, France
• Prof. Xin-She Yang, Middlesex University, UK
• Prof. Chung-Li Tseng, University of New South Wales, Australia
• Prof. I. Kougias, European Commission, Joint Research Centre
• Prof. K. S. McFall, Kennesaw State University, USA
• Dr. D. G. Yoo, Korea University, South Korea
• Dr. Ali Sadollah, Iran
• Dr. Donghwi Jung, Korea University, South Korea
• Prof. A. K. Nagar, Liverpool Hope University, UK
• Prof. Andres Iglesias, University of Cantabria, Spain

Contents

Privacy Preserving Data Mining: A Review of the State of the Art . . . 1
Shivani Sharma and Sachin Ahuja

An MCDM-Based Approach for Selecting the Best State for Tourism in India . . . 17
Rashmi Rashmi, Rohit Singh, Mukesh Chand and Shwetank Avikal

Gravitational Search Algorithm: A State-of-the-Art Review . . . 27
Indu Bala and Anupam Yadav

Investigating the Role of Gate Operation in Real-Time Flood Control of Urban Drainage Systems . . . 39
Fatemeh Jafari, S. Jamshid Mousavi, Jafar Yazdi and Joong Hoon Kim

Molecular Dynamics Simulations of a Protein in Water and in Vacuum to Study the Solvent Effect . . . 49
Nitin Sharma and Madhvi Shakya

An Exploiting Neighboring Relationship and Utilizing an Overhearing Concept for Improvement Routing Protocol in Wireless Mesh Network . . . 57
Mohammad Meftah Alrayes, Neeraj Tyagi, Rajeev Tripathi and Arun Kumar Misra

A Comparative Study of Machine Learning Algorithms for Prior Prediction of UFC Fights . . . 67
Hitkul, Karmanya Aggarwal, Neha Yadav and Maheshwar Dwivedy

Detection of a Real Sinusoid in Noise using Differential Evolution Algorithm . . . 77
Gayathri Narayanan and Dhanesh G. Kurup


Inherited Competitive Swarm Optimizer for Large-Scale Optimization Problems . . . 85
Prabhujit Mohapatra, Kedar Nath Das and Santanu Roy

Performance Comparison of Metaheuristic Optimization Algorithms Using Water Distribution System Design Benchmarks . . . 97
Ho Min Lee, Donghwi Jung, Ali Sadollah, Eui Hoon Lee and Joong Hoon Kim

Comparison of Parameter-Setting-Free and Self-adaptive Harmony Search . . . 105
Young Hwan Choi, Sajjad Eghdami, Thi Thuy Ngo, Sachchida Nand Chaurasia and Joong Hoon Kim

Copycat Harmony Search: Considering Poor Music Player's Followship Toward Good Player . . . 113
Sang Hoon Jun, Young Hwan Choi, Donghwi Jung and Joong Hoon Kim

Fused Image Separation with Scatter Graphical Method . . . 119
Mayank Satya Prakash Sharma, Ranjeet Singh Tomar, Nikhil Paliwal and Prashant Shrivastava

Ascending and Descending Order of Random Projections: Comparative Analysis of High-Dimensional Data Clustering . . . 133
Raghunadh Pasunuri, Vadlamudi China Venkaiah and Bhaskar Dhariyal

Speed Control of the Sensorless BLDC Motor Drive Through Different Controllers . . . 143
Vikas Verma, Nidhi Singh Pal and Bhavnesh Kumar

Urban Drainage System Design Minimizing System Cost Constrained to Failure Depth and Duration Under Flooding Events . . . 153
Soon Ho Kwon, Donghwi Jung and Joong Hoon Kim

Analysis of Energy Storage for Hybrid System Using FLC . . . 159
Ayush Kumar Singh, Aakash Kumar and Nidhi Singh Pal

Impact of Emission Trading on Optimal Bidding of Price Takers in a Competitive Energy Market . . . 171
Somendra P. S. Mathur, Anoop Arya and Manisha Dubey

Impact of NOVEL HVDC Superconducting Circuit Breaker on HVDC Faults . . . 181
Tarun Shrivastava, A. M. Shandilya and S. C. Gupta

Palmprint Matching based on Normalized Correlation Coefficient and Mean Structural Similarity Index Measure . . . 193
Deval Verma, Himanshu Agarwal and A. K. Aggarwal


A Comparative Study on Feature Selection Techniques for Multi-cluster Text Data . . . 203
Ananya Gupta and Shahin Ara Begum

Fuzzy Decision Tree with Fuzzy Particle Swarm Optimization Clustering for Locating Users in an Indoor Environment Using Wireless Signal Strength . . . 217
Swathi Jamjala Narayanan, Boominathan Perumal, Cyril Joe Baby and Rajen B. Bhatt

Optimization Approach for Bounds Involving Generalized Normalized δ-Casorati Curvatures . . . 227
Pooja Bansal and Mohammad Hasan Shahid

Particle Swarm Optimization with Probabilistic Inertia Weight . . . 239
Ankit Agrawal and Sarsij Tripathi

An Evolutionary Algorithm Based Hyper-heuristic for the Job-Shop Scheduling Problem with No-Wait Constraint . . . 249
Sachchida Nand Chaurasia, Shyam Sundar, Donghwi Jung, Ho Min Lee and Joong Hoon Kim

An Evolutionary Algorithm Based Hyper-heuristic for the Set Packing Problem . . . 259
Sachchida Nand Chaurasia, Donghwi Jung, Ho Min Lee and Joong Hoon Kim

Developing a Decision-Making Model Using Interval-Valued Intuitionistic Fuzzy Number . . . 269
Syed Abou Iltaf Hussain, Uttam Kumar Mandal and Sankar Prasad Mondal

A Multi-start Iterated Local Search Algorithm with Variable Degree of Perturbation for the Covering Salesman Problem . . . 279
Pandiri Venkatesh, Gaurav Srivastava and Alok Singh

A New Approach to Soft Hyperideals in LA-Semihypergroups . . . 293
Sabahat Ali Khan, M. Y. Abbasi and Aakif Fairooze Talee

Adjusted Artificial Bee Colony Algorithm for the Minimum Weight Triangulation . . . 305
Adis Alihodzic, Haris Smajlovic, Eva Tuba, Romana Capor Hrosik and Milan Tuba

Decision-Making Proposition of Fuzzy Information Measure with Collective Restrictions . . . 319
Anjali Munde

Exact Algorithm for L(2, 1) Labeling of Cartesian Product Between Complete Bipartite Graph and Cycle . . . 325
Sumonta Ghosh, Prosanta Sarkar and Anita Pal


The Forgotten Topological Index of Graphs Based on New Operations Related to the Join of Graphs . . . 335
Prosanta Sarkar, Nilanjan De and Anita Pal

Clustering and Auction in Sequence: A Two Fold Mechanism for Participatory Sensing . . . 347
Jaya Mukhopadhyay, Vikash Kumar Singh, Sajal Mukhopadhyay and Anita Pal

High-Order Compact Finite Difference Scheme for Euler–Bernoulli Beam Equation . . . 357
Maheshwar Pathak and Pratibha Joshi

Test Case Optimization and Prioritization Based on Multi-objective Genetic Algorithm . . . 371
Deepti Bala Mishra, Rajashree Mishra, Arup Abhinna Acharya and Kedar Nath Das

PSO-SVM Approach in the Prediction of Scour Depth Around Different Shapes of Bridge Pier in Live Bed Scour Condition . . . 383
B. M. Sreedhara, Geetha Kuntoji, Manu and S. Mandal

Replenishment Policy for Deteriorating Items Under Price Discount . . . 393
Anubhav Namdeo and Uttam Kumar Khedlekar

Performance Emission Characterization of a LPG-Diesel Dual Fuel Operation: A Gene Expression Programming Approach . . . 405
Amitav Chakraborty, Sumit Roy and Rahul Banerjee

Comprehensive Survey of OLAP Models . . . 415
Harkiran Kaur and Gursimran Kaur

Energy Efficiency in Load Balancing of Nodes Using Soft Computing Approach in WBAN . . . 423
Rakhee and M. B. Srinivas

Single Image Defogging Based on Local Extrema and Relativity of Gaussian . . . 431
R. Vignesh and Philomina Simon

Improved Edge-Preserving Decomposition Based on Single Image Dehazing . . . 441
S. K. Anusuman and Philomina Simon

Global and Local Neighborhood Based Particle Swarm Optimization . . . 449
Shakti Chourasia, Harish Sharma, Manoj Singh and Jagdish Chand Bansal


Rough Set Theoretic and Logical Study of Some Approximation Pairs Due to Pomykala . . . 461
Pulak Samanta

The Benefits of Carrier Collaboration for Capacity Shortage Under Incomplete Advance Demand Information . . . 471
Arindam Debroy and S. P. Sarmah

Allocation of Bins in Urban Solid Waste Logistics System . . . 485
P. Rathore and S. P. Sarmah

Image Segmentation Through Fuzzy Clustering: A Survey . . . 497
Rashi Jain and Rama Shankar Sharma

Study of Various Technologies in Solar Power Generation . . . 509
Siddharth Gupta, Pratibha Tiwari and Komal Singh

Reduction of Test Data Volume Using DTESFF-Based Partial Enhanced Scan Method . . . 521
Ashok Kumar Suhag

Performance Analysis and Optimization of Vapour Absorption Refrigeration System Using Different Working Fluid Pairs . . . 527
Paras Kalura, Susheem Kashyap, Vishal Sharma, Geetanjali Raghav and Jasmeet Kalra

Vehicle Routing Problem with Time Windows Using Meta-Heuristic Algorithms: A Survey . . . 539
Aditya Dixit, Apoorva Mishra and Anupam Shukla

Design and Aerodynamic Enhancement of Wing for BMW 5 Series Model . . . 547
A. Agrawal, A. Juneja, A. Gupta, R. Mathur and G. Raghav

Semi-distributed Modelling of Stormwater Drains Using Integrated Hydrodynamic EPA-SWM Model . . . 557
M. K. Sinha, K. Baier, R. Azzam, T. Baghel and M. K. Verma

A MCDM-Based Approach for Selection of a Sedan Car from Indian Car Market . . . 569
Rohit Singh, Rashmi and Shwetank Avikal

Design and Simulation of Photovoltaic Cell Using Simscape MATLAB . . . 579
Sucheta Singh, Shubhra Aakanksha, Manisha Rajoriya and Mohit Sahni

A Regulated Computer Cooling Method: An Eco-Friendly Approach . . . 589
Kumar Gourab Mallik and Sutirtha Kumar Guha


Robust Control Techniques for Master–Slave Surgical Robot Manipulator . . . 599
Mohd Salim Qureshi, Gopi Nath Kaki, Pankaj Swarnkar and Sushma Gupta

OLAP Approach to Visualizations and Digital ATLAS for NRIs Directory Database . . . 611
Harkiran Kaur, Kawaljeet Singh and Tejinder Kaur

Problems Associated with Hydraulic Turbines . . . 621
Aman Kumar, Kunal Govil, Gaurav Dwivedi and Mayank Chhabra

A Sine-Cosine Optimizer-Based Gamma Corrected Adaptive Fractional Differential Masking for Satellite Image Enhancement . . . 633
Himanshu Singh, Anil Kumar and L. K. Balyan

Electrical Conductivity Sensing for Precision Agriculture: A Review . . . 647
Sonia Gupta, Mohit Kumar and Rashmi Priyadarshini

Spam Detection Using Ensemble Learning . . . 661
Vashu Gupta, Aman Mehta, Akshay Goel, Utkarsh Dixit and Avinash Chandra Pandey

A Coupled Approach for Solving a Class of Singular Initial Value Problems of Lane–Emden Type Arising in Astrophysics . . . 669
Pratibha Joshi and Maheshwar Pathak

Identification of Hindi Plain Text Using Artificial Neural Network . . . 679
Siddheshwar Mukhede, Amol Prakash and Maiya Din

A Variable Dimension Optimization Approach for Text Summarization . . . 687
Pradeepika Verma and Hari Om

Minimizing Unbalance of Flexible Manufacturing System by Genetic Algorithm . . . 697
Kritika Gaur, Indu and Vivek Chawla

“Big” Data Management in Cloud Computing Environment . . . 707
Mohit Agarwal and Gur Mauj Saran Srivastava

Automatic Optimization of Test Path Using Firefly Algorithm . . . 717
Nisha Rathee, Rajendra Singh Chillar, Sakshi Vij and Sakshi Kukreja

Image Denoising Techniques: A Brief Survey . . . 731
Lokesh Singh and Rekhram Janghel


Applying PSO Based Technique for Analysis of Geffe Generator Cryptosystem . . . 741
Maiya Din, Saibal K. Pal and S. K. Muttoo

An Agent-Based Simulation Modeling Approach for Dynamic Job-Shop Manufacturing System . . . 751
Om Ji Shukla, Gunjan Soni, Rajesh Kumar, A. Sujil and Surya Prakash

Risk Analysis of Water Treatment Plant Using Fuzzy-Integrated Approach . . . 761
Priyank Srivastava, Mohit Agrawal, G. Aditya Narayanan, Manik Tandon, Mridul Narayan Tulsian and Dinesh Khanduja

Keyframes and Shot Boundaries: The Attributes of Scene Segmentation and Classification . . . 771
N. Kumar and N. Sukavanam

Toward Human-Powered Lower Limb Exoskeletons: A Review . . . 783
Ashish Singla, Saurav Dhand, Ashwin Dhawad and Gurvinder S. Virk

An Efficient Bi-Level Discrete PSO Variant for Multiple Sequence Alignment . . . 797
Soniya Lalwani, Harish Sharma, M. Krishna Mohan and Kusum Deep

System Identification of an Inverted Pendulum Using Adaptive Neural Fuzzy Inference System . . . 809
Ishan Chawla and Ashish Singla

Dynamic Modeling of Flexible Robotic Manipulators . . . 819
Ashish Singla and Amardeep Singh

Academic Performance Prediction Using Data Mining Techniques: Identification of Influential Factors Effecting the Academic Performance in Undergrad Professional Course . . . 835
Preet Kamal and Sachin Ahuja

An Area IF-Defuzzification Technique and Intuitionistic Fuzzy Reliability Assessment of Nuclear Basic Events of Fault Tree Analysis . . . 845
Mohit Kumar

Spotted Hyena Optimizer for Solving Complex and Non-linear Constrained Engineering Problems . . . 857
Gaurav Dhiman and Vijay Kumar

Reconfiguration of PTZ Camera Network with Minimum Resolution . . . 869
Sanoj Kumar, Claudio Piciarelli and Harendra Pal Singh

Performance Evaluation of Optimization Techniques with Vector Quantization Used for Image Compression . . . 879
Rausheen Bal, Aditya Bakshi and Sunanda Gupta

Single Multiplicative Neuron Model in Reinforcement Learning . . . 889
Shobhit Nigam

Analysis of Educational Data Mining . . . 897
Ravinder Ahuja, Animesh Jha, Rahul Maurya and Rishabh Srivastava

A Review on Search-Based Tools and Techniques to Identify Bad Code Smells in Object-Oriented Systems . . . 909
Amandeep Kaur and Gaurav Dhiman

Feature Selection Using Metaheuristic Algorithms on Medical Datasets . . . 923
Shivam Mahendru and Shashank Agarwal

Improved Mutation-Based Particle Swarm Optimization for Load Balancing in Cloud Data Centers . . . 939
Neha Sethi, Surjit Singh and Gurvinder Singh

Computational Intelligence Tools for Protein Modeling . . . 949
Rajesh Kondabala and Vijay Kumar

Performance Analysis of Space Time Trellis Codes in Rayleigh Fading Channel . . . 957
Shakti Raj Chopra, Akhil Gupta and Himanshu Monga

Neural Network Based Analysis of Lightweight Block Cipher PRESENT . . . 969
Girish Mishra, S. V. S. S. N. V. G. Krishna Murthy and S. K. Pal

User Profile Matching and Identification Using TLBO and Clustering Approach Over Social Networks . . . 979
Shruti Garg, Sandeep K. Raghuwanshi and Param Deep Singh

Hybrid Metaheuristic Based Scheduling with Job Duplication for Cloud Data Centers . . . 989
Rachhpal Singh

Total Fuzzy Agility Evaluation Using Fuzzy Methodology: A Case Study . . . 999
Priyank Srivastava, Dinesh Khanduja, Vishnu P. Agrawal and Neeraj Saini

Black-Hole Gbest Differential Evolution Algorithm for Solving Robot Path Planning Problem . . . 1009
Prashant Sharma, Harish Sharma, Sandeep Kumar and Kavita Sharma


Fibonacci Series-Inspired Local Search in Artificial Bee Colony Algorithm . . . 1023
Nirmala Sharma, Harish Sharma, Ajay Sharma and Jagdish Chand Bansal

Analysis of Lightweight Block Cipher FeW on the Basis of Neural Network . . . 1041
Aayush Jain and Girish Mishra

Analysis of RC4 Crypts Using PSO Based Swarm Technique . . . 1049
Maiya Din, Saibal K. Pal and S. K. Muttoo

Pipe Size Design Optimization of Water Distribution Networks Using Water Cycle Algorithm . . . 1057
P. Praneeth, A. Vasan and K. Srinivasa Raju

An Improved Authentication and Data Security Approach Over Cloud Environment . . . 1069
Ramraj Dangi and Satish Pawar

Second Derivative-Free Two-Step Extrapolated Newton's Method . . . 1077
V. B. Kumar Vatti, Ramadevi Sri and M. S. Kumar Mylapalli

Review of Deep Learning Techniques for Gender Classification in Images . . . 1089
Neelam Dwivedi and Dushyant Kumar Singh

A Teaching–Learning-Based Optimization Algorithm for the Resource-Constrained Project Scheduling Problem . . . 1101
Dheeraj Joshi, M. L. Mittal and Manish Kumar

A Tabu Search Algorithm for Simultaneous Selection and Scheduling of Projects . . . 1111
Manish Kumar, M. L. Mittal, Gunjan Soni and Dheeraj Joshi

A Survey: Image Segmentation Techniques . . . 1123
Gurbakash Phonsa and K. Manu

Analysis and Simulation of the Continuous Stirred Tank Reactor System Using Genetic Algorithm . . . 1141
Harsh Goud and Pankaj Swarnkar

Fuzzy Logic Controlled Variable Frequency Drives . . . 1153
Kartik Sharma, Anubhav Agrawal and Shuvabrata Bandopadhaya

Butterfly-Fat-Tree Topology-Based Fault-Tolerant Network-on-Chip Design Using Particle Swarm Optimization . . . 1165
P. Veda Bhanu, Pranav Venkatesh Kulkarni, U. Anil Kumar and J. Soumya


Big Data Classification Using Scale-Free Binary Particle Swarm Optimization . . . 1177
Sonu Lal Gupta, Anurag Singh Baghel and Asif Iqbal

Face Recognition: Novel Comparison of Various Feature Extraction Techniques . . . 1189
Yashoda Makhija and Rama Shankar Sharma

Performance Analysis of Hidden Terminal Problem in VANET for Safe Transportation System . . . 1199
Ranjeet Singh Tomar, Mayank Satya Prakash Sharma, Sudhanshu Jha and Brijesh Kumar Chaurasia

Effect of Various Distance Classifiers on the Performance of Bat and CS-Based Face Recognition System . . . 1209
Preeti and Dinesh Kumar

An Improved TLBO Leveraging Group and Experience Learning Concepts for Global Functions . . . 1221
Jatinder Kaur, Surjeet Singh Chauhan and Pavitdeep Singh

Author Index . . . 1235

About the Editors

Dr. Neha Yadav is an assistant professor in the School of Engineering & Technology, BML Munjal University, Gurugram. She worked as a research professor in the School of Civil, Environmental and Architectural Engineering at Korea University, South Korea. She received her Ph.D. from Motilal Nehru National Institute of Technology, Allahabad, India; her M.Sc. in Mathematical Sciences from Banasthali University, Jaipur; and her B.Sc. in Mathematics from Dr. R.M.L. Avadh University, Faizabad, in 2013, 2009 and 2007, respectively. Her research interests are real-time flood forecasting, mathematical modelling, numerical analysis, soft computing techniques, differential equations, boundary value problems, and optimization. She has several journal papers and one book to her credit. Dr. Anupam Yadav is an assistant professor of Mathematics at National Institute of Technology Uttarakhand. His research areas are numerical optimization, high-order graph matching and operations research. He received his Ph.D. from Indian Institute of Technology Roorkee and his M.Sc. in Mathematics from Banaras Hindu University, Varanasi, India. He has one book, one book chapter, a few invited talks, and several journal and conference papers to his credit. Dr. Jagdish Chand Bansal is an assistant professor at South Asian University, New Delhi, India. Holding an excellent academic record, he is an outstanding researcher in the field of swarm intelligence at the national and international levels, with several research papers in journals of national and international repute. Prof. Kusum Deep is a full-time professor in the Department of Mathematics at Indian Institute of Technology Roorkee, Roorkee, India. Over the last 25 years, her research has become increasingly well cited, making her a central international figure in the areas of nature-inspired optimization techniques, genetic algorithms and particle swarm optimization.


Prof. Joong Hoon Kim is associated with the School of Civil, Environmental and Architectural Engineering, Korea University, South Korea. His major areas of interest include optimal design and management of water distribution systems, application of optimization techniques to various engineering problems, and development and application of evolutionary algorithms. He has 216 journal publications, 262 conference proceedings, and several books/chapters to his credit. His publications include "A New Heuristic Optimization Algorithm: Harmony Search", Simulation, February 2001, Vol. 76, pp. 60–68, which has been cited over 2,500 times by journals of diverse research areas.

Privacy Preserving Data Mining: A Review of the State of the Art

Shivani Sharma and Sachin Ahuja

Abstract Preservation of privacy in data mining has emerged as an absolute prerequisite for exchanging confidential information in terms of data analysis, validation, and publishing. Ever-rising web phishing poses a serious threat to the widespread propagation of sensitive information over the web. On the other hand, the reluctance and distrust of various information providers towards the reliable protection of data from disclosure often results in outright rejection of data sharing, or in inaccurate information sharing. This article gives a holistic overview, with a new perspective and a systematic interpretation, of a list of published literature through its meticulous organization into subcategories. The fundamental concepts of the existing privacy preserving data mining techniques, together with their merits and shortcomings, are presented. The current privacy preserving data mining methods are classified on the basis of distortion, association rules, hiding association rules, taxonomy, clustering, associative classification, outsourced data mining, distributed data mining, and k-anonymity, and their notable advantages and disadvantages are emphasized. This careful scrutiny reveals the past developments, present research challenges, future trends, and the gaps and weaknesses. Further significant enhancements for more robust privacy protection and preservation are affirmed to be mandatory. Keywords Association · Classification · Clustering · Data mining · Distortion · K-anonymity · Outsourcing · Privacy preserving

S. Sharma (B) · S. Ahuja
Chitkara University, Chandigarh, Punjab, India
e-mail: [email protected]
S. Ahuja
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2019
N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_1


1 Introduction

Paramount web security against web spoofing has become a necessity. The threats imposed by ever-increasing scam attacks with advanced deceit have become a new challenge in terms of mitigation. Recently, web spoofing has caused significant security and economic concerns for users and enterprises worldwide. Variegated communication channels through web services, for example e-commerce, web banking, research, and online trading, have exploited both human and software vulnerability and suffered enormous financial loss. So there is an increased need for privacy preserving data mining techniques for secured and reliable information exchange over the web. The growth in the storage of users' personal data has prompted improved data mining algorithms with a pointed impact on information sharing. Privacy protection must fully cover three mining aspects: association rules, classification, and clustering [47]. The challenging issues of data mining are deliberated in numerous communities [37]. Data sharing for collective benefit is now possible due to advancements in cloud computing technology. At present, various privacy preserving data mining methods are available, including association rule, classification, clustering, condensation, cryptographic, distributed privacy preservation, and k-anonymity based techniques [47]. Privacy preserving approaches in data mining protect the information by modifying it to mask or erase the original sensitive content to be hidden. Essentially, the techniques depend on the concepts of privacy failure, the degree to which the original data provided by the user can be determined from the transformed data, and the estimation of information loss and data accuracy [66]. The fundamental purpose of all the existing methods is to strike a trade-off between accuracy and privacy. Various approaches that make use of cryptographic procedures to safeguard personal information are very expensive [6]. In some cases, individuals are reluctant to share the entire dataset and may wish to hide the information using a variety of assertions. The fundamental reason for implementing such procedures is to maintain individuals' privacy while extracting aggregate results over the entire data [1]. It is critical to secure the data delivered to other providers. For privacy, customers' data should be de-identified before sharing with untrusted users who are not directly permitted to access the relevant data.

1.1 Privacy Preserving Data Mining (PPDM)

Raju et al. [46] charted the use of adding or multiplying protocol-based homomorphic encryption, along with the existing concept of the digital envelope technique, to achieve collaborative data mining while keeping the private data intact among the participating parties. The proposed strategy had a strong impact on various applications. Ashok and Mukkamala [34] examined a set of fuzzy-based mapping techniques with regard to privacy preserving characteristics and the ability to maintain the same relationship with other fields. Zong and Qi [43] outlined different existing techniques of data mining for privacy protection, depending upon data distribution, distortion, mining algorithms, and data or rule hiding. Regarding data distribution, few algorithms have recently been used for privacy preserving data mining on centralized and distributed data. Matwin [32] analyzed and compared the suitability of privacy preserving data mining techniques; the use of specific techniques revealed their ability to block the discriminatory use of data mining. Vatsalan et al. [58] analyzed the 'Privacy Preserving Record Linkage' (PPRL) technique, which enabled organizations to link their databases while preserving privacy. Sachan et al. [47] and Malina and Hajny [31] explored the current privacy preserving frameworks for cloud services, in which the solution is built on advanced cryptographic components. The scheme demonstrated anonymous access, unlinkability, and confidentiality of transmitted data. Finally, this solution was implemented, experimental results were obtained, and the performance was evaluated.

1.2 Data Distortion Dependent PPDM

Three new models were proposed by Kamakshi and Babu [18] that involved clients, data centers, and the databases of each site. Since the data center is completely passive, the roles of the clients and the site databases appear interchangeable. Brankovic and Islam [15] presented a framework comprising several novel techniques that affected all the attributes in the database. Experimental results showed that the designed framework is very effective in preserving the original patterns in a perturbed dataset. Kamakshi [17] designed an innovative concept to dynamically identify the sensitive attributes of PPDM. The detection of these attributes depends on the threshold limit of the sensitivity of each attribute. The data owner modified the values under the identified sensitive attributes using a swapping technique to protect data privacy. The data was modified in such a way that it retained the same underlying properties as the original data. Later, Zhang et al. (2012a) designed a new historical probability based noise generation strategy called HPNGS. The simulation results demonstrated that HPNGS can reduce the number of noise requirements over its random counterpart by up to 90%. The subsequent focus was on privacy protection together with noise obfuscation in cloud computing (Zhang et al. 2012b). As an outcome, a new association probability based noise generation strategy (APNGS) was developed. The analysis established that the proposed APNGS noticeably improved privacy protection on noise obfuscation involving association probabilities at a moderate extra cost compared with standard representative schemes.
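To make the distortion idea concrete, the following minimal Python sketch perturbs a numeric attribute with zero-mean additive noise, the basic mechanism underlying the noise-generation strategies surveyed above. The dataset, noise scale, and function names are illustrative assumptions, not taken from any cited work.

```python
# A minimal sketch of additive-noise data distortion for PPDM.
# The noise scale and sample data are illustrative only.
import numpy as np

def distort(values, noise_std=1.0, rng=None):
    """Mask each numeric value with zero-mean Gaussian noise.

    The miner sees only the perturbed values; aggregate statistics
    (e.g. the mean) remain estimable because the noise averages out.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, noise_std, size=len(values))
    return np.asarray(values, dtype=float) + noise

original = [23.0, 45.0, 31.0, 52.0, 40.0]   # a sensitive attribute, e.g. age
published = distort(original, noise_std=5.0)

# Individual records are masked, but the sample mean stays close to the truth.
print(published)
print(np.mean(original), np.mean(published))
```

The trade-off discussed in the surveyed papers is visible directly in the parameter: a larger noise_std strengthens privacy but degrades the accuracy of any statistics mined from the published data.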


1.3 Association Rule Based PPDM

Aggarwal and Yu [1] highlighted two important measures of association rule mining, namely support and confidence. For an association rule X → Y, the support is the percentage of transactions in the dataset that contain X ∪ Y. The confidence of an association rule X → Y is the ratio of the number of transactions containing X ∪ Y to the number of transactions containing X. Furthermore, Belwal et al. [4] reduced the exposure of support and confidence of sensitive rules without changing the given database. The suggested modification can be executed only through newly added parameters relating to database transactions and association rules; the modern concepts comprise modified support, modified confidence, and a hiding counter. The algorithm applied the definitions of support and confidence, and thus hid the required sensitive association rules without side effects. However, it can hide only the rules for a single sensitive item on the left-hand side (LHS). Li and Liu [26] proposed an association rule mining algorithm for privacy preservation known as DDIL. The presented technique relies on query restriction and data perturbation. The original data can be hidden by using the DDIL algorithm to improve privacy efficiently. This is a beneficial technique to generate frequent items from transformed data. Experimental results demonstrate that this technique is efficient for producing acceptable values of the privacy balance with a suitable choice of random parameters. Naeem et al. [35] designed an algorithm that extracted the restricted association rules with complete elimination of the known drawbacks, for instance the generation of undesirable, non-genuine association rules, while yielding no "hiding" failure. This technique used standard numerical measures in place of the conventional framework, especially measures based on central tendency. Vijayarani et al. [59] elucidated techniques from the statistical disclosure control community, the database community, and the cryptography community; lower data effectiveness entails a higher cost. An updated distortion technique for privacy preserving frequent itemset mining was designed by Srivastava et al. [51], covering the fp and nfp probability rules. Improved effectiveness is achieved in the presence of a negligible compression ratio by adjusting the two new parameters. Jain et al. [16] designed a new framework to reduce the support of the left-hand side (LHS) and right-hand side (RHS) rule items to hide or protect the association rules. The introduced strategy is found to be useful as it made fewer changes to the data entries to secure a set of rules, with less CPU time than the earlier work. It is, however, limited to association rules only.
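The two measures defined above translate directly into code. The sketch below computes support and confidence over a toy transaction database; the transactions themselves are invented for illustration.

```python
# A worked example of the support and confidence measures defined above.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(lhs, rhs, db):
    """support(lhs U rhs) / support(lhs) for the rule lhs -> rhs."""
    return support(lhs | rhs, db) / support(lhs, db)

# Rule {bread} -> {milk}: support = 2/4 = 0.5, confidence = 2/3.
print(support({"bread", "milk"}, transactions))
print(confidence({"bread"}, {"milk"}, transactions))
```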

1.4 Hide Association Rule Based PPDM

Weng et al. [63] presented the Fast Hiding Sensitive Association Rules (FHSAR) algorithm. It protected the sensitive association rules (SAR) with fewer side effects, where an approach is designed to avoid hiding failures. In addition, two heuristic methods were introduced to enhance the performance of the framework and solve the problems. The heuristic function is further applied to select the prior weight for each specific transaction, so that the order of modified transactions can be decided effectively. Dehkordi et al. [7] advanced a multi-objective method for protecting the sensitive association rules while improving the security of the database. The privacy and accuracy of the dataset improved in the proposed method, which relies on the genetic algorithm (GA) concept. Verykios et al. (2009) presented an exact border-based technique to achieve an optimal solution for hiding sensitive frequent itemsets with minimum extension of the original database. This technique applies an extension to the original database instead of modifying the existing database. Kasthuri and Meyyappan [20] introduced a new algorithm to identify the sensitive items by hiding the sensitive association rules. This technique found the frequent itemsets and generated the association rules; the representative association rules concept was used to find the sensitive items. Hiding the sensitive association rules using the selected sensitive items was found valuable. Quoc et al. [44] presented a heuristic algorithm based on the intersection lattice of frequent itemsets to protect the set of confidential association rules using a distortion technique. To lower the side effects, the heuristic for confidence and support reduction based on the intersection lattice (HCSRIL) algorithm is used.
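As an illustration of the distortion-based hiding idea common to these works, the sketch below lowers a rule's support below the mining threshold by deleting the consequent item from supporting transactions. The greedy victim selection is a deliberate simplification, not a rendering of any one published algorithm.

```python
# A minimal rule-hiding sketch: remove the rule's RHS item from supporting
# transactions until support(lhs U rhs) falls below the mining threshold.
# The victim-selection policy here is illustrative only.
def hide_rule(db, lhs, rhs, min_support):
    """Mutate `db` (a list of item sets) so support(lhs U rhs) < min_support."""
    target = lhs | rhs
    supporting = [t for t in db if target <= t]
    while supporting and len(supporting) / len(db) >= min_support:
        victim = supporting.pop()   # pick any still-supporting transaction
        victim -= rhs               # drop the consequent item(s) in place
    return db

db = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"c"}]
hide_rule(db, {"a"}, {"b"}, min_support=0.5)
# support({a, b}) is now below 0.5, so the rule a -> b is no longer mined.
print(db)
```

Real algorithms such as FHSAR differ precisely in how they pick the victim transaction and item so as to minimize side effects on non-sensitive rules.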

1.5 Classification Based PPDM

Xiong et al. [65] presented a nearest neighbor classification technique that relies upon Secure Multiparty Computation (SMC) techniques to settle the privacy concerns in fewer rounds, along with the selection of privacy preserving nearest neighbors and privacy preserving classification. This development is uniform with respect to efficiency, performance, and privacy protection; moreover, it is adaptable to various settings to achieve different optimization conditions. Singh et al. [52] introduced a novel classification technique for smooth and robust privacy protection of cloud data. The evaluation of the nearest neighbors for k-NN classification was based on the Jaccard similarity measure, and an equality test is employed to compute similarity between two encrypted records. This technique facilitated a secure nearest neighbor computation at every node in the cloud and classified the hidden records via a weighted k-NN classification scheme. It is essential to focus on enforcing the robustness of the designed algorithm so that it can be generalized to various data mining tasks where security and confidentiality are desired. Baotou [3] presented an effective technique based on a random perturbation matrix to preserve privacy in classification data mining. This technique was applied to discrete data of character type, Boolean type, classification type, and numeric type. The experiments revealed the highly refined features of this newly designed algorithm in terms of privacy protection and efficiency of the data mining algorithm, where the computation procedure is greatly reduced, although at greater cost. Vaidya et al. [57] presented a vertically partitioned data mining approach. This scheme was able to adapt and enhance different data mining applications such as decision trees. Further effective solutions are required to find a tight upper bound on the complexity. Sathiyapriya and Sadasivam [49] reviewed the classification of privacy preserving techniques and discussed the merits and limitations of the different methods.
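For concreteness, the following sketch shows the plaintext core of a scheme like that of Singh et al. discussed above, namely Jaccard-weighted k-NN over set-valued records. The encryption layer and secure equality tests of the cited work are omitted, and the records and labels are invented.

```python
# A plain (non-encrypted) sketch of Jaccard-similarity weighted k-NN,
# the classification step that the secure scheme performs over
# encrypted records. Data and labels are illustrative only.
from collections import Counter

def jaccard(a, b):
    """|a & b| / |a | b| for two feature sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def knn_classify(query, labeled, k=3):
    """Weighted k-NN: each neighbor votes with its Jaccard similarity."""
    neighbors = sorted(labeled, key=lambda rec: jaccard(query, rec[0]),
                       reverse=True)[:k]
    votes = Counter()
    for features, label in neighbors:
        votes[label] += jaccard(query, features)
    return votes.most_common(1)[0][0]

records = [({"fever", "cough"}, "flu"),
           ({"cough"}, "cold"),
           ({"fever", "rash"}, "measles")]
print(knn_classify({"fever", "cough", "fatigue"}, records, k=2))  # "flu"
```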

1.6 Clustering Based PPDM

Yi and Zhang [67] outlined several weaknesses of earlier solutions for ensuring the confidentiality of distributed k-means clustering, and delivered a rigorous solution for a fairly contributing multiparty protocol in which k-means clustering is applied to vertically partitioned data and every data site contributes to the clustering evenly. According to the basic conception, the data sites collaborate to encrypt k values with a common public key in each stage of clustering. They then securely compare the k values and output the index of the minimum without revealing the intermediate values. In some settings, this is more convenient and more efficient than the Vaidya–Clifton protocol [57].
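A building block that such distributed clustering protocols rely on can be illustrated with a simple secure-sum round, sketched below, in which each site masks its local cluster statistic so that only the global total is revealed. The ring-of-masks construction is a generic textbook device shown under assumed party values, not the exact protocol of [67].

```python
# A minimal secure-sum sketch of the kind used inside distributed k-means:
# each site contributes a masked share so that only the global total is
# revealed. Real protocols add secure comparison of distances on top.
import random

MOD = 2**61 - 1  # arithmetic modulo a large prime hides each share

def secure_sum(local_values, rng=random.Random(42)):
    """Return sum(local_values) without any party exposing its input."""
    masks = [rng.randrange(MOD) for _ in local_values]
    n = len(local_values)
    # Party i publishes value + own mask - next party's mask (a ring);
    # the masks telescope away when all the shares are added up.
    shares = [(v + masks[i] - masks[(i + 1) % n]) % MOD
              for i, v in enumerate(local_values)]
    return sum(shares) % MOD

sites = [120, 75, 230]    # e.g. per-site sums of points in one cluster
print(secure_sum(sites))  # 425, with no individual input revealed
```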

1.7 Associative Classification Based PPDM

Raghuram and Gyani [45] presented an associative classification model based upon vertically partitioned datasets. A scalar product based third-party privacy preserving model is adopted to maintain privacy for the data sharing process among multiple users. The veracity of the given technique was validated on UCI databases with promising results. Lin and Lo [27] designed a set of algorithms consisting of Equal Working Set (EWS), Small Size Working Set (SSWS), Request on Demand (ROD), and the Progressive Size Working Set (PSWS). Harnsamut and Natwichai [13] presented a novel heuristic algorithm that relies upon the Classification Correction Rate (CCR) of a particular database to secure the database. The designed technique was tested and the experimental results were validated; the heuristic algorithm was found to be extremely effective and efficient. Seisungsittisunti and Natwichai [50] outlined the issues related to data transformation to preserve privacy for the data mining technique of associative classification in an incremental data scenario. An incremental polynomial time algorithm is designed to transform the data to maintain a privacy standard called k-anonymity.


1.8 Privacy Preserving Outsourced Data Mining

Giannotti et al. [11] illustrated the issues related to outsourcing the association rule mining task within a corporate privacy preserving framework. An attack model is designed in light of the background knowledge for privacy preserving outsourced mining, relying upon one-to-one substitution ciphers of items, where fake transactions are added so that each cipher item shares the same frequency. Worku et al. [64] refined the performance of the above design by reducing computationally intensive operations such as bilinear mapping. The method declared the results to be more secure and efficient after careful examination of security performance; however, the data block insertion rendered the scheme non-dynamic. Therefore, the development of a complete, fundamental, and secure general auditing technique remains an open challenge for a cloud framework.
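A toy version of this outsourcing idea is sketched below: items are replaced through a one-to-one substitution cipher, and fake transactions are appended so that item frequencies no longer identify items. The mapping, padding strategy, and names are illustrative assumptions, not the construction of [11].

```python
# A toy item-substitution scheme for outsourced transaction mining:
# the owner keeps the mapping; the server mines only cipher labels,
# diluted with fake transactions. Illustrative only.
import random

def encrypt_db(db, rng=random.Random(7)):
    items = sorted({i for t in db for i in t})
    labels = [f"E{k}" for k in range(len(items))]
    rng.shuffle(labels)
    mapping = dict(zip(items, labels))           # one-to-one item cipher
    cipher_db = [{mapping[i] for i in t} for t in db]
    # Pad with fake transactions to flatten the frequency profile.
    fakes = [set(rng.sample(labels, 2)) for _ in range(len(db))]
    return cipher_db + fakes, mapping

db = [{"bread", "milk"}, {"bread"}, {"milk", "butter"}]
cipher_db, mapping = encrypt_db(db)
print(mapping)      # kept private by the data owner
print(cipher_db)    # what the outsourced miner actually sees
```

The owner later decrypts the mined cipher itemsets with the inverse mapping and discards patterns supported only by fake transactions.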

1.9 Distributed Method Based PPDM

Ying-hua et al. [68] made it clear that distributed PPDM depends upon particular underlying technologies. Existing approaches are grouped into three classes, named secure multiparty computation, perturbation, and restricted query. Li [25] compared the work of each class by designing and evaluating a symmetric-key based privacy preserving scheme to support mining counts. An incentive study accompanied the investigation of the secured computation by presenting a misbehavior reputation framework in a wireless network; the designed framework provided an incentive for misbehaving nodes to behave properly. Experimental results revealed the framework's efficiency in finding the misbehaving nodes, and the throughput of the whole network improved consistently. Moreover, Dev et al. [9] identified the privacy risks associated with data mining on cloud frameworks and designed a distributed framework to remove such risks. Tassa [56] designed a new scheme for secure mining of association rules in horizontally distributed databases. The designed scheme showed benefits over earlier schemes with regard to performance and security, and it encompassed two sets of protocols. Chan and Keng [5] proposed approaches that rely upon field-level and row-level distribution of transactional data. The authors designed a distributed framework for secure outsourcing of association rule mining and investigated the feasibility of its deployment. The designed framework for distributing transactions to servers relies on the types of privacy notions relevant to a user. Xu and Yi [66] inspected privacy preserving distributed data mining, which has passed through distinct developmental phases, and proposed a taxonomy to assess the consistency and evaluate the effectiveness of the protocols. Inan and Saygin [14] designed a technique to build dissimilarity matrices for horizontally distributed data mining. Nanavati and Jinwala [36] illustrated different methodologies of a cooperative setup for the protection of the parties' global and partial cycles. The interleaved technique is extended and modified to determine the global phase in cyclic association rules privately. Wang et al. [61] presented an upgraded algorithm called Privacy Preserving Frequent Data Mining (PPFDM), in reference to Frequent Data Mining (FDM), to preserve privacy. Om Kumar et al. [42] utilized WEKA to inspect patterns in a particular cloud, and used a cloud data distributor with a secured distributed approach to provide an efficient solution that prevented such mining attacks on the cloud. Nix et al. [41] implemented two different protocols for the scalar (dot) product of two vectors, used as sub-protocols in larger data mining tasks. Keshavamurthy et al. [22] showed that the GA approach has two potential advantages, whereas the traditional frequent pattern mining algorithm has only one: in frequent pattern mining the population is formed only once, while in the GA approach the population is formed for every generation, which enlarges the sample set. However, the major drawback of the GA approach is the duplication across its successive generations. For privacy preserving data mining over a distributed dataset, the key objective is to allow computation of aggregate statistics over the complete database while guaranteeing the privacy of the private data of the contributing databases.

1.10 K-Anonymity Based PPDM Samarati [48] introduced the concept of k-anonymity. A database is k-anonymous with respect to its quasi-identifier attributes (a set of attributes that can be used with certain external information to identify a specific individual) if there exist at least k records in the database sharing the same values for the quasi-identifier attributes. Wang et al. [62] studied data mining as an approach to data masking, called data mining based privacy protection; after data masking, standard data mining techniques can be applied without modification, engaging the two key elements, quality and scalability. Loukides and Gkoulalas-Divanis [28] proposed a novel system to anonymize data that satisfies data publishers' usage requirements while incurring low information loss. Friedman et al. (2008) extended the definitions of k-anonymity to show that a data mining model does not violate the k-anonymity of the individuals represented in the training examples. To protect the respondents' identity, the combination of k-anonymity with data mining was also proposed by Ciriani et al. [6]; they highlighted the potential threats to k-anonymity raised by the use of mining on collected data and examined two main techniques for combining k-anonymity with data mining. Soodejani et al. [53] used a version of the chase known as the standard chase, which places some restrictions on the dependencies and constraints, namely being positive and conjunctive; a promising area for future study is examining the applicability of other versions of the chase in the method. The anonymity principle of their technique reveals some similarities to the l-diversity privacy model; examining other privacy models, for example t-closeness, may yield a more refined privacy model for the proposed technique with extreme value. Loukides et al. [29] presented a choice-based privacy model that allows data publishers to impose fine-grained protection requirements for identity and sensitive-information disclosure; they developed two anonymization algorithms. Karim et al. [19] proposed a numerical method to mine maximal frequent patterns with privacy-preserving capability. The method demonstrated an efficient data transformation technique, a novel encoded and compressed lattice structure, and the MFPM algorithm, which reduced both the search space and the search time. Vijayarani et al. [60] considered k-anonymity an interesting approach for protecting microdata related to public or semi-public sectors from linking attacks; the possible threats to the k-anonymity approach are described in detail, particularly the issues related to data, and approaches for combining k-anonymity with data mining are identified. Nergiz et al. [39] enhanced and extended the definitions of k-anonymity to definitions of k-anonymity for multiple relations; it is demonstrated that previously developed techniques either fail to protect privacy or, overall, reduce data utility and data protection in a multiple-relations setting. Tai et al. [55] addressed the problem of the secure outsourcing of frequent itemset mining in multi-cloud environments. In view of the challenges in big data analysis, they proposed partitioning the data into several parts and outsourcing each part independently to a different cloud based on a pseudo-taxonomy anonymization technique known as KAT. Based on suppression, Deivanai et al. proposed another k-anonymity method called "kactus" [8]. Kactus performs multidimensional suppression: values are suppressed for a specific record in view of other attributes, without using the domain hierarchy trees. A new definition of the k-anonymity model for effective privacy protection of personal sequential data is also presented [33]. Nergiz and Gök [38] and Nergiz et al. [40] not only performed the generalizations but also included a mechanism for data relocation: in the relocation process, the position of specific cells is changed to some populated, indistinguishable data cells. The results revealed that a few relocations can improve utility when compared with the heuristic metrics and query-answering accuracy. A hybrid generalization mechanism for relocating the data is presented in [38]; in the data relocation process, data cells are moved to certain populated small groups of tuples that remain indistinguishable from each other. Zhang et al. (2013a, 2014a) investigated the scalability issues of sub-tree anonymization for massive data storage over the cloud. They developed a hybrid approach combining Specialization and Generalization procedures, where specialization works top-down and generalization works bottom-up; given these contributions, it is worth investigating the next step of scalable privacy-preservation-aware analysis and scheduling on large-scale datasets. Afterward, Zhang et al. (2014b) presented a two-stage TDS method based on MapReduce on the cloud. In the first stage, the datasets are anonymized and partitioned in parallel to produce the intermediate results. In the second stage, these intermediate results are aggregated for further anonymization to provide consistent k-anonymous datasets. They also exhibited an efficient quasi-identifier index-based technique to preserve privacy over incremental datasets on the cloud. In the proposed system, QI-groups (QI: quasi-identifier) are indexed using domain values at the current generalization level, which allows access to only a small portion of the records in a database instead of access to the whole database (Zhang et al. 2013b, c). Moreover, Ding et al. [10] presented a distributed anonymization protocol for privacy-preserving data publishing from multiple data providers in a cloud system.
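As a concrete illustration of the k-anonymity requirement defined above (ours, not taken from [48]), the following minimal Python sketch checks whether a table satisfies k-anonymity for a given set of quasi-identifier attributes; the table, attribute names, and value of k are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check that every combination of quasi-identifier values
    appears in at least k records of the table."""
    groups = Counter(
        tuple(row[attr] for attr in quasi_identifiers) for row in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical toy table: ZIP code and age act as quasi-identifiers.
table = [
    {"zip": "122017", "age": 30, "disease": "flu"},
    {"zip": "122017", "age": 30, "disease": "cold"},
    {"zip": "110001", "age": 45, "disease": "flu"},
]
print(is_k_anonymous(table, ["zip", "age"], 2))  # False: the (110001, 45) group has size 1
```

Generalization or suppression would be applied to the quasi-identifier values until such a check passes for the desired k.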

2 Shortcomings of PPDM Methods At present, few data mining systems are available that protect privacy. Broadly, privacy-preserving techniques are classified by data distribution, data distortion, data mining algorithms, anonymization, data or rule hiding, and privacy protection. Table 1 summarizes the different techniques applied to protect privacy in data mining. Concentrated research findings over the decades reveal that existing privacy-preserving data mining approaches still suffer from major inadequacies, including the distribution of customers' data to multiple semi-honest providers, the overhead of computing global mining results, incremental data privacy issues in cloud computing, the integrity of mining results, the utility of the data, scalability, and performance overhead. Undoubtedly, k-anonymity is a powerful technique for privacy protection in data mining. However, several studies showed that data processed by this technique often fail to withstand certain attacks and are vulnerable to web phishing. Consequently, future privacy-preserving data mining based on k-anonymity needs an advanced data infrastructure to support the integration of present data functionality; this would satisfy the requirements of different types of customers and groups.

Table 1 Explanation of PPDM methods

PPDM technique | Explanation
Data distribution | May involve vertically or horizontally partitioned data
Data distortion | Includes blocking, aggregation or merging, swapping, and sampling
Data mining algorithms | Encompasses classification mining, association rule mining, clustering, Bayesian networks, and so on
Data or rules hiding | Refers to hiding raw data or rules of the original data
K-anonymity, L-diversity | Accomplish anonymization; keep the minimum group size at K and maintain the diversity of sensitive attributes
Taxonomy tree | Assigns the generalization to limit the data leakage
Randomization | A crude but useful method to hide individual data in PPDM
Privacy protection | To ensure privacy, the data should be adjusted carefully to achieve optimal data utility

3 Conclusion A comprehensive outline of PPDM strategies based on distortion, associative classification, randomization, distribution, and k-anonymization has been presented. It is established that PPDM has become progressively vital because of the easy sharing of privacy-sensitive data for analysis. The notable advantages and evident disadvantages of current studies are emphasized. Nowadays, Big Data are frequently shared across domains, for example, health, Business-to-Business, and Government-to-Government; therefore, preservation of privacy against disclosure is essentially needed. Several major organizations and governments worldwide, being totally dependent on data communicated by means of the web, have expressed grave concerns over privacy issues; thus, the rapid development of new technologies has faced many difficulties. Data mining's ability to extract and mine a huge sea of interesting patterns or knowledge from an enormous amount of data requires absolute privacy. The fundamental idea of PPDM is to combine conventional data mining techniques with transformations that mask sensitive data; the main challenge is to transform the data effectively and to recover the mining results from the transformed data. Moreover, the inadequacy of past reviews compelled us to undertake a broad investigation of the issues of distributed and outsourced data. Subsequently, the overhead of global mining computation, the preservation of privacy for growing data, the integrity of mining results, the utility of the data, and scalability and performance overhead in the context of PPDM are analyzed. There is an urgent need to develop solid, effective, and adaptable techniques to eliminate these issues. The gaps and flaws of the existing literature have been identified, and the issues requiring critical upgrades and robust privacy protection have been investigated. This thorough and instructive review article is hoped to serve as a taxonomy for exploring and understanding the research advancements towards PPDM. As none of the current PPDM algorithms can outperform all the others with respect to every criterion, we discussed the significance of particular metrics for each specific kind of PPDM algorithm, and also pointed out the goal of a good metric. There are several future research directions along the way of measuring PPDM algorithms and their techniques. There is a need to develop a new technique that can assess different PPDM algorithms. It is additionally vital to design good metrics that can better reflect the properties of a PPDM algorithm, and to develop benchmark databases for testing all kinds of PPDM algorithms.


References 1. Aggarwal, C.C., Yu, P.S.: A general survey of privacy-preserving data mining models and algorithms. In: Privacy Preserving Data Mining (Chap. 2), pp. 11–52. Springer, New York (2008) 2. Arunadevi, M., Anuradha, R.: Privacy preserving outsourcing for frequent item set mining. Int. J. Innov. Res. Comp. Commun. Eng. 2(1), 3867–3873 (2014) 3. Baotou, T.: Research on privacy preserving classification data mining based on random perturbation. In: X. Zhang, H. Bi pp. 1–6 (2010) 4. Belwal, R., Varshney, J., Khan, S.: Hiding sensitive association rules efficiently by introducing new variable hiding counter. In: IEEE International Conference on Service Operations and Logistics, and Informatics, vol. 1, pp. 130–134, IEEE/SOLI 2008 (2013) 5. Chan, J., Keng, J.: Privacy protection in outsourced association rule mining using distributed servers and its privacy notions, pp. 1–5 (2013) 6. Ciriani, V., Vimercati, S.D.C., Foresti, S., Samarati, P.: k-anonymous data mining: a survey. In: Privacy-Preserving Data Mining, pp. 105–136. Springer, New York (2008) 7. Dehkordi, M.N.M., Badie, K., Zadeh, A.K.A.: A novel method for privacy preserving in association rule mining based on genetic algorithms. J. Softw. 4(6), 555–562 (2009) 8. Deivanai, P., Nayahi, J., Kavitha, V.: A hybrid data anonymization integrated with suppression for preserving privacy in mining multi party data. In: IEEE International Conference on Recent Trends in Information Technology (ICRTIT) (2011) 9. Dev, H., Sen, T., Basak, M., Ali, M.E.: An approach to protect the privacy of cloud data from data mining based attacks. In: IEEE 2012 SC Companion High Performance Computing, Networking, Storage and Analysis (SCC) (2012) 10. Ding, X., Yu, Q., Li, J., Liu, J., Jin, H.: Distributed anonymization for multiple data providers in a cloud system. In: Database Systems for Advanced Applications. Springer, Berlin (2013) 11. Giannotti, F., Lakshmanan, L.V.S., Monreale, A., Pedreschi, D., Wang, H.: Privacy-preserving mining of association rules from outsourced transaction databases. IEEE Syst. J. 7(3), 385–395 (2013) 12. Gkoulalas-Divanis, A., Verykios, V.S.: Exact knowledge hiding through database extension. IEEE Trans. Knowl. Data Eng. 21(5), 699–713 (2009) 13. Harnsamut, N., Natwichai, J.: A novel heuristic algorithm for privacy preserving of associative classification. In: PRICAI 2008: Trends in Artificial Intelligence, pp. 273–283. Springer, Berlin (2008) 14. Inan, A., Saygin, Y.: Privacy preserving spatio-temporal clustering on horizontally partitioned data. In: Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6202 LNAI, pp. 187–198 (2010) 15. Islam, M.Z., Brankovic, L.: Privacy preserving data mining: a noise addition framework using a novel clustering technique. Knowl Based Syst. 24(8), 1214–1223 (2011) 16. Jain, Y.Y.K., Yadav, V.K.V.V.K., Panday, G.G.S.: An efficient association rule hiding algorithm for privacy preserving data mining. Int. J. Comp. Sci Eng. 3(7), 2792–2798 (2011) 17. Kamakshi, P.: Automatic detection of sensitive attribute in PPDM. In: IEEE International Conference on Computational Intelligence & Computing Research (ICCIC) (2012) 18. Kamakshi, P., Babu, A.V.: Preserving privacy and sharing the data in distributed environment using cryptographic technique on perturbed data 2(4) (2010) 19. Karim, R., Rashid, M., Jeong, B., Choi, H.: Transactional databases, pp. 303–319 (2012) 20. 
Kasthuri, S., Meyyappan, T.: Detection of sensitive items in market basket database using association rule mining for privacy preserving. In: IEEE International Conference on Pattern Recognition, Informatics and Mobile Engineering (PRIME) (2013) 21. Kerschbaum, F., Julien, V.: Privacy-preserving data analytics as an outsourced service. In: Proceedings of the 2008 ACM Workshop on Secure Web Services. ACM (2008) 22. Keshavamurthy, B.N., Khan, A.M., Toshniwal, D.: Privacy preserving association rule mining over distributed databases using genetic algorithm. Neural Comput. Appl. pp. 351–364 (2013)


23. Kumbhar, M.N., Kharat, R.: Privacy preserving mining of association rules on horizontally and vertically partitioned data: a review paper. In: 12th IEEE International Conference on Hybrid Intelligent Systems (HIS), pp. 231–235 (2012) 24. Lai, J., Li, Y., Deng, R.H., Weng, J., Guan, C., Yan, Q.: Towards semantically secure outsourcing of association rule mining on categorical data. Inf. Sci. (NY) 267, 267–286 (2014) 25. Li, Y.: Privacy-Preserving and Reputation System in Distributed Computing with Untrusted Parties, no. July. Pro- Quest LLC (2013). Copyright in the Dissertation held by the Author. Microform Edition © ProQuest LLC. All rights reserved (2013) 26. Li, W., Liu, J.: Privacy preserving association rules mining based on data disturbance and inquiry limitation. In: 2009 Fourth International Conference on Internet Computer Science Engineering, pp. 24–29 (2009) 27. Lin, K.W., Lo, Y.-C.: Efficient algorithms for frequent pattern mining in many-task computing environments. Knowl Based Syst. 49, 10–21 (2013) 28. Loukides, G., Gkoulalas-divanis, A.: Expert systems with applications utility-preserving transaction data anonymization with low information loss. Expert Syst. Appl. 39(10), 9764–9777 (2012) 29. Loukides, G., Gkoulalas-Divanis, A., Shao, J.: Efficient and flexible anonymization of transaction data. Knowl. Inf. Syst. 36(1), 153–210 (2012) 30. Machanavajjhala, A., Kifer, D., Gehrke, J., Venkitasubramaniam, M.: l-diversity: privacy beyond k-anonymity. ACM Trans. Knowl. Discov. Data 1(1), 3 (2007) 31. Malina, L., Hajny, J.: Efficient security solution for privacy-preserving cloud services. In: 36th International Conference on Telecommunications and Signal Processing (TSP), pp. 23–27 (2013) 32. Matwin, S.: Privacy-preserving data mining techniques: survey and challenges. In: Discrimination & Privacy in the Information Society, pp. 209–221. Springer, Berlin (2013) 33. Monreale, A., Pedreschi, D., Pensa, R.G., Pinelli, F.: Anonymity preserving sequential pattern mining. Artif. Intell. Law 22(2), 141–173 (2014) 34. Mukkamala, R., Ashok, V.G.: Fuzzy-based methods for privacy-preserving data mining. In: IEEE Eighth International Conference on Information Technology: New Generations (ITNG) (2011) 35. Naeem, M., Asghar, S., Fong, S.: Hiding sensitive association rules using central tendency. In: 6th International Conference on Advanced Information Management and Service (IMS), pp. 478–484 (2010) 36. Nanavati, N., Jinwala, D.: Privacy preservation for global cyclic associations in distributed databases. Procedia Technol. 6, 962–969 (2012) 37. Nayak, G., Devi, S.: A survey on privacy preserving data mining: approaches and techniques. Int. J. Eng. Sci. Tech. 3(3), 2117–2133 (2011) 38. Nergiz, M.E., Gök, M.Z.: Hybrid k-anonymity. Comput. Secur. 44, 51–63 (2014) 39. Nergiz, M.E., Christopher, C., Ahmet, E.N.: Multirelational k-anonymity. IEEE Trans. Knowl. Data Eng. 21(8), 1104–1117 (2009) 40. Nergiz, M.E., Gök, M.Z., Özkanlı, U.: Preservation of utility through hybrid k-anonymization. In: Trust, Privacy, and Security in Digital Business, pp. 97–111. Springer, Berlin (2013) 41. Nix, R., Kantarcioglu, M., Han, K.J.: Approximate privacy-preserving data mining on vertically partitioned data. In: Data and Applications Security and Privacy XXVI, pp. 129–144. Springer, Berlin (2012) 42. Om Kumar, C.U., Tejaswi, K., Bhargavi, P.: A distributed cloud—prevents attacks and preserves user privacy. In: 15th International Conference on Advanced Computing Technologies, ICACT (2013) 43. 
Qi, X., Zong, M.: An overview of privacy preserving data mining. Procedia Environ. Sci. 12(Icese 2011), 1341–1347 (2012) 44. Quoc, H., Arch-int, S., Xuan, H., Arch-int, N.: Computers in industry association rule hiding in risk management for retail supply chain collaboration. Comput. Ind. 64(7), 776–784 (2013) 45. Raghuram, B., Gyani, J.: Privacy preserving associative classification on vertically partitioned databases. In: IEEE International Conference on Advanced Communication Control and Computing Technologies (ICACCCT), pp. 188–192 (2012)


46. Raju, R., Komalavalli, R., Kesavakumar, V.: Privacy maintenance collaborative data mining: a practical approach. In: 2nd International Conference on Emerging Trends in Engineering and Technology (ICETET), pp. 307–311 (2009) 47. Sachan, A., Roy, D., Arun, P.V.: An analysis of privacy preservation techniques in data mining. In: Advances in Computing and Information Technology, vol. 3, pp. 119–128. Springer, Berlin (2013) 48. Samarati, P.: Protecting respondents’ identities in microdata release. IEEE Trans. Knowl. Data Eng. (TKDE) 13(6), 1010–1027 (2001) 49. Sathiyapriya, K., Sadasivam, G.S.: A survey on privacy preserving association rule mining. Int. J. Data Min Knowl. Manag. Process 3(2), 119–131 (2013) 50. Seisungsittisunti, B., Natwichai, J.: Achieving k-anonymity for associative classification in incremental-data scenarios. In: Security-Enriched Urban Computing and Smart Grid, pp. 54–63. Springer, Berlin (2011) 51. Shrivastava, R., Awasthy, R., Solanki, B.: New improved algorithm for mining privacy—preserving frequent itemsets. Int. J. Comp. Sci. Inform. 1, 1–7 (2011) 52. Singh, M.D., Krishna, P.R., Saxena, A.: A cryptography based privacy preserving solution to mine cloud data. In: Proceedings of Third Annual ACM Bangalore Conference. ACM (2010) 53. Soodejani, A.T., Hadavi, M.A., Jalili, R.: k-anonymity-based horizontal fragmentation to preserve privacy in data outsourcing. In: Data and Applications Security and Privacy XXVI, pp. 263–273. Springer, Berlin (2012) 54. Sweeney, L.: Achieving k-anonymity privacy protection using generalization and suppression. Int. J. Uncertain Fuzziness Knowl. Based Syst. 10(5), 571–588 (2002) 55. Tai, C.-H., Huang, J.-W., Chung, M.-H.: Privacy preserving frequent pattern mining on multicloud environment. In: 2013 International Symposium on Biometrics and Security Technologies (ISBAST) (2013) 56. Tassa, T.: Secure mining of association rules in horizontally distributed databases. IEEE Trans. Knowl. Data Eng. 26(4), 970–983 (2014) 57. Vaidya, J., Clifton, C., Kantarcioglu, M., Patterson, A.S.: Privacy-preserving decision trees over vertically partitioned data. ACM Trans. Knowl. Discov. Data 2(3), 1–27 (2008) 58. Vatsalan, D., Christen, P., Verykios, V.S.: A taxonomy of privacy-preserving record linkage techniques. Inf. Syst. 38(6), 946–969 (2013) 59. Vijayarani, S., Tamilarasi, A., Seethalakshmi, R.: Privacy preserving data mining based on association rule: a survey. In: IEEE International Conference on Communication and Computational Intelligence (INCOCCI) (2010a) 60. Vijayarani, S., Tamilarasi, A., Sampoorna, M.: Analysis of privacy preserving k-anonymity methods and techniques. In: IEEE International Conference on Communication and Computational Intelligence (INCOCCI), pp. 540–545 (2010b) 61. Wang, H., Hu, C., Liu, J.: Distributed mining of association rules based on privacy- preserved method. In: International Symposium on Information Science & Engineering (ISISE), pp. 494–497 (2010) 62. Wang, K., Yu, P.S., Chakraborty, S.: Bottom-up generalization: a data mining solution to privacy protection. In: IEEE Fourth International Conference on Data Mining (ICDM’04) (2004) 63. Weng, C., Chen, S., Lo, H.: A novel algorithm for completely hiding sensitive association rules. In: Eighth International Conference on Intelligent Systems Design and Applications, ISDA’08, vol. 3, pp. 202–208 (2008) 64. Worku, S.G., Xu, C., Zhao, J., He, X.: Secure and efficient privacy-preserving public auditing scheme for cloud storage. Comput. Electr. Eng. 
40(5), 1703–1713 (2014) 65. Xiong, L., Chitti, S., Liu, L.: k nearest neighbor classification across. In: Proceedings of the 15th ACM International Conference on Information & Knowledge Management CIKM’06, pp. 840–841 (2006) 66. Xu, Z., Yi, X.: Classification of privacy-preserving distributed data mining protocols. In: Sixth International Conference on Digital Information Management, pp. 337–342 (2011) 67. Yi, X., Zhang, Y.: Equally contributory privacy-preserving k-means clustering over vertically partitioned data. Inf. Syst. 38(1), 97–107 (2013)


68. Ying-hua, L., Bing-ru, Y., Dan-yang, C., Nan, M.: State-of-the-art in distributed privacy preserving data mining. In: IEEE 3rd International Conference Communication Software and Networks, pp. 545–549 (2011)

An MCDM-Based Approach for Selecting the Best State for Tourism in India Rashmi Rashmi, Rohit Singh, Mukesh Chand and Shwetank Avikal

Abstract In today's era, tourism is one of the fastest growing industries in the world. Tourism plays a crucial role in the economic growth and overall development of a nation, and India is one of the most popular tourist destinations in Asia. Tourism is one of the major sources of foreign exchange and helps develop an international understanding of our culture and heritage; every year thousands of foreigners come to India, as a result of which the country earns a lot of foreign exchange. Selecting the best place for traveling is a decision-making problem based on a number of criteria that reflect the preferences of the traveler. In the presented work, a Fuzzy-AHP and TOPSIS approach is proposed to solve the above problem. The Fuzzy-AHP approach is used to evaluate the weights of the different criteria, and the TOPSIS method is used to identify the most favourable tourist places (states) across India and to rank each state accordingly. Keywords Tourism · MCDM · Fuzzy-AHP · TOPSIS

1 Introduction Tourism denotes people's temporary movement from their residence to a destination, with the tourism industry providing every facility or service affiliated with that destination to the tourists [1]. In 2005, the Indian Tourism Development Corporation (ITDC) began a campaign known as "Incredible India" to enhance the growth of tourism in India. The tourism industry provides jobs to a large number of individuals, whether skilled or not; it is very beneficial for the growth of hotels, travel agencies, and transport including airlines, and it encourages national and international understanding. A productive tourism industry can increase regional economic growth and develop a source of valuable foreign exchange income [2, 3]. Nowadays, there are many nations and regions where tourism can be considered one of the leading growth industries [4]. According to Dellaert et al. [5], decisions taken by tourists are complex, multi-faceted decisions in which the choices for the various components are interconnected. In the presented work, a Fuzzy-AHP and TOPSIS based approach has been proposed to solve the above-discussed MCDM problem: the best state for traveling in India is selected among 30 states of India, evaluated against five criteria. This paper is organized as follows: the second section presents the literature review; the third section presents the methodology; the fourth section presents the problem definition; the fifth section presents the calculations and results; and the final section presents the conclusion.

2 Literature Review Mohamad et al. [6] presented an evaluation of the critical factors affecting the destinations preferred by local tourists in Kedah and used the Fuzzy Hierarchical TOPSIS (FHTOPSIS) method to resolve the tourists' preferences for destinations with respect to these factors. Liu et al. [7] applied a hybrid MCDM method to analyze the interdependence among the different aspects and criteria of tourism policies and, finally, to propose the most favorable development plan for Taiwan's tourism policy. Hsu et al. [8] identified the factors that affect the destination chosen by tourists and determined the right choice of preferred destination. Cracolici et al. [4] evaluated the comparative attractiveness of competing tourist destinations on the basis of individual visitors' perceptions of their holiday destinations. Chen et al. [9] identified the factors affecting lake environments and determined a multi-criteria evaluation framework for tourists. Alptekin et al. [10] suggested an intelligent Web-based framework for travel agencies that provides quick and reliable response services to people in a short time; the suggested framework incorporates a case-based reasoning (CBR) system with a widely known multi-criteria decision-making (MCDM) method, the Analytic Hierarchy Process, to increase the accuracy and speed of tourist destination planning. Stauvermann et al. [11] suggested a model for tourism demand in the context of a fast-growing country; in the model, a tourist region is described as non-competitive, the primary factor of production is human capital, and the hotels have market power.


3 Methodology 3.1 AHP and Fuzzy AHP Approach Saaty [12, 13] proposed the Analytic Hierarchy Process (AHP), a method that has been used effectively in many areas such as evaluation, selection, ranking, and forecasting. AHP is an organized method to determine the final importance of attributes using pairwise comparisons among them [14]. Despite its beneficial features and popularity, it has also been criticized for its inability to effectively manage the inherent uncertainty and vagueness of the judgments involved. To handle this type of uncertainty, AHP is integrated with the fuzzy set theory proposed by Zadeh [15]. The Fuzzy-AHP method has been widely used by several researchers and has turned out to be one of the best methods for solving decision-making problems. In this study, the Fuzzy-AHP proposed by Avikal et al. [16] has been used as reference.

3.2 TOPSIS Method (Technique for Order Performance by Similarity to Ideal Solution) TOPSIS is a multi-criteria decision-making technique used to rank a finite set of alternatives. Under TOPSIS, the best alternative should have the shortest distance from the positive ideal solution (PIS) and the farthest distance from the negative ideal solution (NIS). The PIS is formed as a composite of the best performance values exhibited by any alternative for each criterion, and the NIS is the composite of the worst performance values [17]. The ranking of alternatives is based on the relative similarity to the ideal solution, which avoids the situation of an alternative having the same similarity to both PIS and NIS. In this study, the TOPSIS of Avikal et al. [16] has been used as reference.
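To make the PIS/NIS mechanics concrete, the following minimal Python sketch implements the TOPSIS steps described above. It is a simplified illustration under our own assumptions (the function name and toy matrix are ours, and all criteria are treated as benefit criteria), not the exact procedure of Avikal et al. [16].

```python
import numpy as np

def topsis(decision_matrix, weights):
    """Rank alternatives (rows) against criteria (columns).
    Simplification: every criterion is treated as a benefit criterion."""
    m = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each column, then apply the criterion weights.
    v = m / np.linalg.norm(m, axis=0) * weights
    pis = v.max(axis=0)                        # positive ideal solution
    nis = v.min(axis=0)                        # negative ideal solution
    d_pos = np.linalg.norm(v - pis, axis=1)    # distance K+ to PIS
    d_neg = np.linalg.norm(v - nis, axis=1)    # distance K- to NIS
    score = d_neg / (d_pos + d_neg)            # relative closeness
    return score, score.argsort()[::-1]        # higher score = better rank

# Toy example: 3 alternatives scored against the paper's 5 criterion weights.
weights = np.array([0.1566, 0.0485, 0.4479, 0.2545, 0.0923])
scores, order = topsis([[1.0, 0.5, 0.7, 0.8, 0.3],
                        [0.8, 0.2, 1.0, 0.4, 0.8],
                        [0.6, 0.8, 0.3, 0.7, 1.0]], weights)
print(scores, order)
```

In the study itself, a cost-type criterion such as environmental impact takes its ideal value at the column minimum rather than the maximum, as reflected in Table 5 below.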

4 Problem Definition The main objective of this work is to solve the tourists' problem of selecting the best tourist place in India according to their needs, helping them decide where to travel and where not to. During holidays, people often make plans for a trip and become confused about which place is best for them, failing to select a suitable tourist place. For this study, five criteria have been selected; they are discussed in Table 1. A survey based on these selected criteria was conducted among tourism experts. On the basis of this survey, the weight of each criterion was computed using Fuzzy-AHP, the computed weights were used for the subsequent calculations, and finally the TOPSIS method was applied to rank the states according to the selected criteria.


Table 1 Several criteria and their definition

No. | Criteria | Definition
C1 | Visual value | There are certain attractive things that have the power to attract tourists and appeal to them; some are natural, cultural, or historical
C2 | No. of attractions | Number of tourist attractions, for example, the number of natural and cultural attractions
C3 | Ease of access | Access to the tourist destination, i.e., whether the intended destination can be reached by car, taxi, train, or plane
C4 | Security | Security for women, night security, and crime rates at tourist places, as well as the presence of police forces to provide security
C5 | Environmental impact | Environmental aspects such as the waste disposal system, noise pollution, and environmental pollution

Table 2 Results obtained with fuzzy AHP

Criteria | Weights
C1 | 0.1566
C2 | 0.0485
C3 | 0.4479
C4 | 0.2545
C5 | 0.0923

λmax = 5.4176, CI = 0.1044, RI = 1.12, CR = 0.0932
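Assuming the standard AHP consistency formulas, the entries of Table 2 can be verified by direct substitution with n = 5 criteria:

$$CI = \frac{\lambda_{\max} - n}{n - 1} = \frac{5.4176 - 5}{5 - 1} = 0.1044, \qquad CR = \frac{CI}{RI} = \frac{0.1044}{1.12} \approx 0.0932 < 0.1$$

so the pairwise comparison matrix behind the weights is acceptably consistent.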

5 Calculation Singh et al. [18] presented the ratings of all 30 states. The weights calculated by Singh et al. using Fuzzy AHP have been used for the present study; they are listed in Table 2. Finally, each state has been ranked by means of the TOPSIS method. The steps of the TOPSIS method are presented in the following tables: Step 1 is solved in Table 3, Step 2 in Table 4, Step 3 in Table 5, Step 4 in Tables 6 and 7, and the final ranking is shown in Table 8.
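For reference, the SCORE column of Table 8 follows the standard TOPSIS relative-closeness coefficient; substituting, for example, Maharashtra's separation distances from Tables 6 and 7:

$$\mathrm{SCORE}_i = \frac{K_i^-}{K_i^+ + K_i^-}, \qquad \frac{0.653178}{0.090782 + 0.653178} \approx 0.8780$$

which matches the tabulated value.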

6 Conclusion In this work, an MCDM-based technique has been proposed to determine the most prestigious state for tourism in India. The Fuzzy-AHP method has been used to calculate the weight of each criterion, and the TOPSIS method has been used to rank all the states. The results show that Maharashtra is the most preferred state for tourism and Telangana is the least preferred state. Maharashtra is most preferred because of its prominent ratings across all criteria, and Telangana is least preferred because of its low ratings across all criteria.

Table 3 Data normalization for TOPSIS

State | C1 | C2 | C3 | C4 | C5
Uttarakhand | 1 | 0.5 | 0.666667 | 0.769231 | 0.25
Madhya Pradesh | 0.8 | 0.166667 | 1 | 0.384615 | 0.75
Maharashtra | 1 | 1 | 1.333333 | 0.769231 | 0.75
Kerala | 0.8 | 0.75 | 0.666667 | 0.576923 | 0.5
Jammu Kashmir | 1 | 0 | 0.333333 | 0.384615 | 0.25
Delhi | 0.8 | 0.416667 | 1 | 0.384615 | 0.5
Andhra Pradesh | 0.6 | 0.083333 | 0.666667 | 0.461538 | 0.8
Arunachal Pradesh | 0.8 | 0.25 | 1 | 0.769231 | 0.5
Assam | 0.4 | 0.166667 | 1 | 0.615385 | 0.5
Bihar | 0 | 0 | 0.333333 | 0.846154 | 1.5
Chhattisgarh | 0 | 0.166667 | 0 | 0.884615 | 0
Goa | 1 | 0.333333 | 1 | 0.576923 | 0.5
Haryana | 0.2 | 0 | 0.666667 | 0.692308 | 0.7
Himachal Pradesh | 1 | 0.5 | 0.666667 | 0.769231 | 0.8
Jharkhand | 0.4 | 0.416667 | 0.666667 | 0.576923 | 1
Karnataka | 0.8 | 0.333333 | 1 | 0.769231 | 1
Gujarat | 1 | 0.833333 | 1 | 0.384615 | 1
Manipur | 0.8 | 0.666667 | 0.333333 | 0.769231 | 0.25
Meghalaya | 0.8 | 0.583333 | 0 | 0.769231 | 0.5
Mizoram | 0.6 | 0.75 | 0.333333 | 0.692308 | 1.25
Nagaland | 1 | 0.75 | 0 | 0.846154 | 0.4
Odisha | 0.52 | 0.333333 | 0.666667 | 0.384615 | 1
Punjab | 0.8 | 0.333333 | 0.666667 | 0.307692 | 1
Rajasthan | 0.6 | 0.583333 | 0.666667 | 0.384615 | 0.75
Sikkim | 0.8 | 0.25 | 0.333333 | 0.461538 | 0.5
Tamil Nadu | 0.4 | 0.75 | 1 | 0.346154 | 0.5
Telangana | 0.2 | 0 | 0 | 0 | 1
Tripura | 0.4 | 0.166667 | 0 | 0.423077 | 1
Uttar Pradesh | 0.6 | 0.583333 | 1 | 1 | 0.25
West Bengal | 0.4 | 0.083333 | 1 | 0.692308 | 0.5


Table 4 Weight decision matrix

State | C1 | C2 | C3 | C4 | C5
Uttarakhand | 0.1566 | 0.02425 | 0.2986 | 0.195769 | 0.023075
Madhya Pradesh | 0.12528 | 0.008083 | 0.4479 | 0.097885 | 0.069225
Maharashtra | 0.1566 | 0.0485 | 0.5972 | 0.195769 | 0.069225
Kerala | 0.12528 | 0.036375 | 0.2986 | 0.146827 | 0.04615
Jammu Kashmir | 0.1566 | 0 | 0.1493 | 0.097885 | 0.023075
Delhi | 0.12528 | 0.020208 | 0.4479 | 0.097885 | 0.04615
Andhra Pradesh | 0.09396 | 0.004042 | 0.2986 | 0.117462 | 0.07384
Arunachal Pradesh | 0.12528 | 0.012125 | 0.4479 | 0.195769 | 0.04615
Assam | 0.06264 | 0.008083 | 0.4479 | 0.156615 | 0.04615
Bihar | 0 | 0 | 0.1493 | 0.215346 | 0.13845
Chhattisgarh | 0 | 0.008083 | 0 | 0.225135 | 0
Goa | 0.1566 | 0.016167 | 0.4479 | 0.146827 | 0.04615
Haryana | 0.03132 | 0 | 0.2986 | 0.176192 | 0.06461
Himachal Pradesh | 0.1566 | 0.02425 | 0.2986 | 0.195769 | 0.07384
Jharkhand | 0.06264 | 0.020208 | 0.2986 | 0.146827 | 0.0923
Karnataka | 0.12528 | 0.016167 | 0.4479 | 0.195769 | 0.0923
Gujarat | 0.1566 | 0.040417 | 0.4479 | 0.097885 | 0.0923
Manipur | 0.12528 | 0.032333 | 0.1493 | 0.195769 | 0.023075
Meghalaya | 0.12528 | 0.028292 | 0 | 0.195769 | 0.04615
Mizoram | 0.09396 | 0.036375 | 0.1493 | 0.176192 | 0.115375
Nagaland | 0.1566 | 0.036375 | 0 | 0.215346 | 0.03692
Odisha | 0.081432 | 0.016167 | 0.2986 | 0.097885 | 0.0923
Punjab | 0.12528 | 0.016167 | 0.2986 | 0.078308 | 0.0923
Rajasthan | 0.09396 | 0.028292 | 0.2986 | 0.097885 | 0.069225
Sikkim | 0.12528 | 0.012125 | 0.1493 | 0.117462 | 0.04615
Tamil Nadu | 0.06264 | 0.036375 | 0.4479 | 0.088096 | 0.04615
Telangana | 0.03132 | 0 | 0 | 0 | 0.0923
Tripura | 0.06264 | 0.008083 | 0 | 0.107673 | 0.0923
Uttar Pradesh | 0.09396 | 0.028292 | 0.4479 | 0.2545 | 0.023075
West Bengal | 0.06264 | 0.004042 | 0.4479 | 0.176192 | 0.04615
MAX | 0.1566 | 0.0485 | 0.5972 | 0.2545 | 0
MIN | 0 | 0 | 0 | 0 | 0.13845

Table 5 Positive ideal solution (PIS) and negative ideal solution (NIS)

Solution | C1 | C2 | C3 | C4 | C5
PIS | 0.1566 | 0.0485 | 0.5972 | 0.2545 | 0
NIS | 0 | 0 | 0 | 0 | 0.13845

Table 6 Separation distance of each alternative from the positive ideal solution (K+)

State | C1 | C2 | C3 | C4 | C5 | SUM | K+
Uttarakhand | 0.1566 | 0.02425 | 0.2986 | 0.195769 | 0.023075 | 0.093732 | 0.306156
Madhya Pradesh | 0.12528 | 0.008083 | 0.4479 | 0.097885 | 0.069225 | 0.054225 | 0.232864
Maharashtra | 0.1566 | 0.0485 | 0.5972 | 0.195769 | 0.069225 | 0.008241 | 0.090782
Kerala | 0.12528 | 0.036375 | 0.2986 | 0.146827 | 0.04615 | 0.104013 | 0.322511
Jammu Kashmir | 0.1566 | 0 | 0.1493 | 0.097885 | 0.023075 | 0.228027 | 0.477522
Delhi | 0.12528 | 0.020208 | 0.4479 | 0.097885 | 0.04615 | 0.05073 | 0.225233
Andhra Pradesh | 0.09396 | 0.004042 | 0.2986 | 0.117462 | 0.07384 | 0.119294 | 0.34539
Arunachal Pradesh | 0.12528 | 0.012125 | 0.4479 | 0.195769 | 0.04615 | 0.030174 | 0.173706
Assam | 0.06264 | 0.008083 | 0.4479 | 0.156615 | 0.04615 | 0.044464 | 0.210864
Bihar | 0 | 0 | 0.1493 | 0.215346 | 0.13845 | 0.248192 | 0.498188
Chhattisgarh | 0 | 0.008083 | 0 | 0.225135 | 0 | 0.383667 | 0.619409
Goa | 0.1566 | 0.016167 | 0.4479 | 0.146827 | 0.04615 | 0.037059 | 0.192508
Haryana | 0.03132 | 0 | 0.2986 | 0.176192 | 0.06461 | 0.117516 | 0.342806
Himachal Pradesh | 0.1566 | 0.02425 | 0.2986 | 0.195769 | 0.07384 | 0.098652 | 0.314089
Jharkhand | 0.06264 | 0.020208 | 0.2986 | 0.146827 | 0.0923 | 0.118904 | 0.344824
Karnataka | 0.12528 | 0.016167 | 0.4479 | 0.195769 | 0.0923 | 0.036285 | 0.190487
Gujarat | 0.1566 | 0.040417 | 0.4479 | 0.097885 | 0.0923 | 0.055403 | 0.235379
Manipur | 0.12528 | 0.032333 | 0.1493 | 0.195769 | 0.023075 | 0.205838 | 0.453694
Meghalaya | 0.12528 | 0.028292 | 0 | 0.195769 | 0.04615 | 0.363616 | 0.603006
Mizoram | 0.09396 | 0.036375 | 0.1493 | 0.176192 | 0.115375 | 0.224129 | 0.473422
Nagaland | 0.1566 | 0.036375 | 0 | 0.215346 | 0.03692 | 0.359691 | 0.599742
Odisha | 0.081432 | 0.016167 | 0.2986 | 0.097885 | 0.0923 | 0.128905 | 0.359034
Punjab | 0.12528 | 0.016167 | 0.2986 | 0.078308 | 0.0923 | 0.130751 | 0.361596
Rajasthan | 0.09396 | 0.028292 | 0.2986 | 0.097885 | 0.069225 | 0.122815 | 0.350449
Sikkim | 0.12528 | 0.012125 | 0.1493 | 0.117462 | 0.04615 | 0.223828 | 0.473104
Tamil Nadu | 0.06264 | 0.036375 | 0.4479 | 0.088096 | 0.04615 | 0.061086 | 0.247156
Telangana | 0.03132 | 0 | 0 | 0 | 0.0923 | 0.447985 | 0.669317
Tripura | 0.06264 | 0.008083 | 0 | 0.107673 | 0.0923 | 0.397187 | 0.630228
Uttar Pradesh | 0.09396 | 0.028292 | 0.4479 | 0.2545 | 0.023075 | 0.027155 | 0.164788
West Bengal | 0.06264 | 0.004042 | 0.4479 | 0.176192 | 0.04615 | 0.041357 | 0.203365

Table 7 Separation distance of each alternative from the negative ideal solution (K−)

State | C1 | C2 | C3 | C4 | C5 | SUM | K−
Uttarakhand | 0.024524 | 0.000588 | 0.089162 | 0.038326 | 0.013311 | 0.165911 | 0.407321
Madhya Pradesh | 0.015695 | 6.53E−05 | 0.200614 | 0.009581 | 0.004792 | 0.230748 | 0.480363
Maharashtra | 0.024524 | 0.002352 | 0.356648 | 0.038326 | 0.004792 | 0.426641 | 0.653178
Kerala | 0.015695 | 0.001323 | 0.089162 | 0.021558 | 0.008519 | 0.136258 | 0.369131
Jammu Kashmir | 0.024524 | 0 | 0.02229 | 0.009581 | 0.013311 | 0.069707 | 0.264021
Delhi | 0.015695 | 0.000408 | 0.200614 | 0.009581 | 0.008519 | 0.234819 | 0.484581
Andhra Pradesh | 0.008828 | 1.63E−05 | 0.089162 | 0.013797 | 0.004174 | 0.115978 | 0.340556
Arunachal Pradesh | 0.015695 | 0.000147 | 0.200614 | 0.038326 | 0.008519 | 0.263301 | 0.513129
Assam | 0.003924 | 6.53E−05 | 0.200614 | 0.024528 | 0.008519 | 0.237651 | 0.487495
Bihar | 0 | 0 | 0.02229 | 0.046374 | 0 | 0.068664 | 0.262039
Chhattisgarh | 0 | 6.53E−05 | 0 | 0.050686 | 0.019168 | 0.069919 | 0.264423
Goa | 0.024524 | 0.000261 | 0.200614 | 0.021558 | 0.008519 | 0.255477 | 0.505447
Haryana | 0.000981 | 0 | 0.089162 | 0.031044 | 0.005452 | 0.126639 | 0.355864
Himachal Pradesh | 0.024524 | 0.000588 | 0.089162 | 0.038326 | 0.004174 | 0.156774 | 0.395946
Jharkhand | 0.003924 | 0.000408 | 0.089162 | 0.021558 | 0.00213 | 0.117182 | 0.342319
Karnataka | 0.015695 | 0.000261 | 0.200614 | 0.038326 | 0.00213 | 0.257026 | 0.506978
Gujarat | 0.024524 | 0.001634 | 0.200614 | 0.009581 | 0.00213 | 0.238483 | 0.488347
Manipur | 0.015695 | 0.001045 | 0.02229 | 0.038326 | 0.013311 | 0.090668 | 0.301111
Meghalaya | 0.015695 | 0.0008 | 0 | 0.038326 | 0.008519 | 0.06334 | 0.251675
Mizoram | 0.008828 | 0.001323 | 0.02229 | 0.031044 | 0.000532 | 0.064018 | 0.253018
Nagaland | 0.024524 | 0.001323 | 0 | 0.046374 | 0.010308 | 0.082529 | 0.287279
Odisha | 0.006631 | 0.000261 | 0.089162 | 0.009581 | 0.00213 | 0.107766 | 0.328277
Punjab | 0.015695 | 0.000261 | 0.089162 | 0.006132 | 0.00213 | 0.11338 | 0.33672
Rajasthan | 0.008828 | 0.0008 | 0.089162 | 0.009581 | 0.004792 | 0.113164 | 0.336399
Sikkim | 0.015695 | 0.000147 | 0.02229 | 0.013797 | 0.008519 | 0.060449 | 0.245864
Tamil Nadu | 0.003924 | 0.001323 | 0.200614 | 0.007761 | 0.008519 | 0.222142 | 0.471319
Telangana | 0.000981 | 0 | 0 | 0 | 0.00213 | 0.003111 | 0.055774
Tripura | 0.003924 | 6.53E−05 | 0 | 0.011593 | 0.00213 | 0.017712 | 0.133088
Uttar Pradesh | 0.008828 | 0.0008 | 0.200614 | 0.06477 | 0.013311 | 0.288325 | 0.536959
West Bengal | 0.003924 | 1.63E−05 | 0.200614 | 0.031044 | 0.008519 | 0.244118 | 0.494083

Table 8 Final rank of states in India

State | K+ | K− | SCORE | RANK
Uttarakhand | 0.306156 | 0.407321 | 0.570896 | 12
Madhya Pradesh | 0.232864 | 0.480363 | 0.673507 | 10
Maharashtra | 0.090782 | 0.653178 | 0.877974 | 1
Kerala | 0.322511 | 0.369131 | 0.533702 | 14
Jammu Kashmir | 0.477522 | 0.264021 | 0.356042 | 22
Delhi | 0.225233 | 0.484581 | 0.682687 | 8
Andhra Pradesh | 0.34539 | 0.340556 | 0.496477 | 17
Arunachal Pradesh | 0.173706 | 0.513129 | 0.747092 | 3
Assam | 0.210864 | 0.487495 | 0.698058 | 7
Bihar | 0.498188 | 0.262039 | 0.344685 | 24
Chhattisgarh | 0.619409 | 0.264423 | 0.299178 | 27
Goa | 0.192508 | 0.505447 | 0.724183 | 5
Haryana | 0.342806 | 0.355864 | 0.509345 | 15
Himachal Pradesh | 0.314089 | 0.395946 | 0.557644 | 13
Jharkhand | 0.344824 | 0.342319 | 0.498177 | 16
Karnataka | 0.190487 | 0.506978 | 0.726886 | 4
Gujarat | 0.235379 | 0.488347 | 0.674767 | 9
Manipur | 0.453694 | 0.301111 | 0.398926 | 21
Meghalaya | 0.603006 | 0.251675 | 0.294467 | 28
Mizoram | 0.473422 | 0.253018 | 0.348299 | 23
Nagaland | 0.599742 | 0.287279 | 0.323869 | 25
Odisha | 0.359034 | 0.328277 | 0.477625 | 20
Punjab | 0.361596 | 0.33672 | 0.482189 | 19
Rajasthan | 0.350449 | 0.336399 | 0.489772 | 18
Sikkim | 0.473104 | 0.245864 | 0.341968 | 26
Tamil Nadu | 0.247156 | 0.471319 | 0.655999 | 11
Telangana | 0.669317 | 0.055774 | 0.07692 | 30
Tripura | 0.630228 | 0.133088 | 0.174355 | 29
Uttar Pradesh | 0.164788 | 0.536959 | 0.765175 | 2
West Bengal | 0.203365 | 0.494083 | 0.708415 | 6

SCORE denotes the relative closeness to the ideal solution for each competitive design alternative. RANK denotes the ranking of all the states according to relative closeness.


References 1. Matheison, A., Wall, G.: Tourism: Economic, Physical and Social Impacts. Longman, New York (1982) 2. Chang, J.R., Chang, B.: The development of a tourism attraction model by using fuzzy theory. Math. Prob. Eng. pp. 1–10 (2015) 3. Wu, W.-W.: Beyond travel & tourism competitiveness ranking using DEA, GST, ANN and Borda count. Expert Syst. Appl. 38(10), 12974–12982 (2011) 4. Cracolici, M.F., Nijkamp, P.: The attractiveness and competitiveness of tourist destinations: a study of Southern Italian regions. Tour. Manag. 30, 336–344 (2008) 5. Dellaert, B.G.C., Etterma, F., Lindh, C.: Multi-faceted tourist travel decisions: a constraintbased conceptual framework to describe tourist sequential choice of travel components. Tour. Manag. 19(4), 313–320 (1998) 6. Mohamad, D., Jamil, R.M.: A preference analysis model for selecting tourist destinations based on motivational factors: a case study in Kedah, Malaysia. Procedia—Social Behav. Sci. 65, 20–25 (2012) 7. Liu, C.H., Tzeng, G.H., Lee, M.H.: Improving tourism policy implementation—The use of hybrid MCDM models. Tour. Manag. 33, 413–426 (2012) 8. Hsu, T.K., Tsai, Y.F., Wu, H.H.: The preference analysis for tourist choice of destination: a case study of Taiwan. Tour. Manag. 30, 288–297 (2009) 9. Chen, C.L., Bau, Y.P.: Establishing a multi-criteria evaluation structure for tourist beaches in Taiwan: a foundation for sustainable beach tourism. Ocean Coast. Manag. 121, 88–96 (2016) 10. Alptekin, G.I., Buyukozkan, G.: An integrated case-based reasoning and MCDM system for web based tourism destination planning. Expert Syst. Appl. 38, 2125–2132 (2011) 11. Stauvermann, P.J., Kumar, R.R.: Productivity growth and income in the tourism sector: role of tourism demand and human capital investment. Tour. Manag. 61, 426–433 (2017) 12. Saaty, T.L.: A scaling method for priorities in hierarchical structures. J. Math. Psychol. 15(3), 234–281 (1977) 13. Saaty, T.L.: The Analytic Hierarchy Process. McGraw-Hill, New York (1980) 14. Ng, C.Y.: Evidential reasoning-based fuzzy AHP approach for the evaluation of design alternatives environmental performances. Appl. Soft Comput. 46, 381–397 (2016) 15. Zadeh, L.: Fuzzy sets. Inf. Control 8, 338–353 (1965) 16. Avikal, S., Jain, R., Mishra, P.K.: A Kano model, AHP and M- TOPSIS method-based technique for disassembly line balancing problems under fuzzy environment. Appl. Soft Comput. pp. 519–525 (2014) 17. Terol, A.B., Parra, M.A., Fernández, V.C., Ibias, J.A.: Using TOPSIS for assessing the sustainability of government bond funds. Int. J. Manag. Sci. (2014) 18. Singh, S., Mundepi, V., Hatwal, D., Raturi, V., Chand, M., Rashmi, Sharma, S., Avikal, S.: Selection of best state for tourism in India by fuzzy approach. Adv. Comput. Comput. Sci. pp. 549–557 (2017)

Gravitational Search Algorithm: A State-of-the-Art Review Indu Bala and Anupam Yadav

Abstract Gravitational search algorithm (GSA) is a recent algorithm introduced in 2009 by Rashedi et al. It is a heuristic optimization algorithm based on Newton's laws of motion and law of gravitation. A lot of changes have since been made to the original GSA to improve its speed of convergence and its quality of solution, and the algorithm is still being explored in many fields. Therefore, this article is intended to provide the current state of the algorithm, its modifications, advantages, disadvantages, and its future possibilities of research. Keywords Gravitational search algorithm (GSA) · Applications · Hybridization · Modification of GSA · Evolutionary optimization · Nature inspired computational search

1 Introduction Gravitational search algorithm (GSA) is a heuristic technique in the field of numerical optimization. It is a stochastic, swarm-based search for hard combinatorial problems. GSA is based on the law of gravity and the law of motion [1]. It comprises masses (agents) in which the heavier masses are considered prominent solutions of the problem. Due to the gravitational law of motion, each mass attracts every other mass, which causes a global movement; the lighter masses are attracted towards the heavier mass, which yields an optimal solution. Every heuristic algorithm follows exploration and exploitation criteria. Similarly in GSA, the algorithm first explores the search region and then, over the lapse of iterations, converges to a solution, which is called the exploitation step. GSA is a very fast growing algorithm that helps to find optimal or near-optimal solutions, and it has been used in many applications for several problems. It can be applied to continuous as well as binary search spaces [2]. Various versions of GSA have been developed that have helped to improve the efficiency of exploration and exploitation. A study of the development of GSA is necessary to know how far it has developed, its advantages and disadvantages, and how much it has been used to solve optimization problems. This article describes its advantages, disadvantages, and the modifications made so far, in order to make a conclusive remark on the ability of GSA. In Sect. 2, standard GSA is described; Sect. 3 covers the modifications of GSA to date; Sect. 4 describes the hybrid forms of heuristic algorithms with GSA; Sect. 5 discusses its advantages and disadvantages along with its criticism; and in Sect. 6 we wrap up our work and discuss the conclusion and future scope.

2 Standard GSA GSA was first introduced in 2009 by Rashedi et al. [1]. The aim of this algorithm is to solve hard combinatorial optimization problems at reasonable cost. GSA simulates a set of agents that act as point masses in an N-dimensional space, where $x_i$ represents the position and $m_i$ the mass of agent i. In GSA, positions are considered candidate solutions and masses are correlated with the quality of the candidate solutions: if the quality is high, then the mass is large. Due to the gravitational law, the force of attraction between masses i and j at time step t in dimension d is given as:

$$F_{ij}^d(t) = G(t) \cdot \frac{M_i(t) \times M_j(t)}{R_{ij}(t) + \varepsilon}\,\bigl(x_j^d(t) - x_i^d(t)\bigr) \qquad (1)$$

G(t) is the gravitational constant, which controls the process using the variable α and decreases with time as

$$G(t) = G_0 \times \exp(-\alpha \times \mathrm{iter}/\mathrm{max\,iter}) \qquad (2)$$

ε is a small constant, and $R_{ik}(t)$ is the Euclidean distance between agents i and k. Hence, the total force on mass i at time t is given as

$$F_i^d(t) = \sum_{k \in Kbest,\ k \neq i} \mathrm{rand}_k\, F_{ik}^d(t) \qquad (3)$$

where $\mathrm{rand}_k$ represents a random number in the interval [0, 1], and Kbest is the set of the first K agents with the best fitness values. All the agents attract each other, which causes a global movement of the objects towards the heavy masses; hence the position of a particle is influenced by its velocity ($vel_i$) and acceleration ($ac_i$) as:

$$vel_i^d(t+1) = \mathrm{rand}_i \times vel_i^d(t) + ac_i^d(t) \qquad (4)$$

$$x_i^d(t+1) = x_i^d(t) + vel_i^d(t+1) \qquad (5)$$

By Newton's law of motion, the acceleration of object i in the dth dimension is given as

$$ac_i^d(t) = \frac{F_i^d(t)}{M_i(t)} \qquad (6)$$

With the help of the velocity and position equations, we can update the positions of the agents; this moves the masses towards the heavier mass, which is considered a prominent solution. After running the prescribed iterations, or with the lapse of time, all masses converge to the heavier mass, which yields the optimal solution. The masses of the agents are updated as

$$m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)}, \qquad M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)} \qquad (7)$$

where $fit_i(t)$ represents the fitness value of object i, and best(t) and worst(t) are given for the maximization case as

$$best(t) = \max_{i \in \{1,2,\ldots,N\}} fit_i(t), \qquad worst(t) = \min_{i \in \{1,2,\ldots,N\}} fit_i(t)$$

3 Modification of GSA The modifications of GSA fall into three categories: modification and extension of parameters, extension of the search space, and hybridization with another technique. The modifications of GSA can improve its speed and performance.

1. Binary GSA (BGSA) [2] GSA solves optimization problems in real continuous space, while BGSA [2] can solve them in discrete space. In discrete binary space, every dimension can only take the value 0 or 1, or switch between the two. In this algorithm, force, acceleration, and velocity are calculated in the same way as in continuous GSA; only the position update $x_i^d$ differs, by switching a bit between 0 and 1. The position update is carried out in such a manner that the current bit value is changed with a probability $S(v_i^d)$, which is calculated from the mass velocity as

$$S\bigl(v_i^d(t)\bigr) = \bigl|\tanh\bigl(v_i^d(t)\bigr)\bigr|$$

Once the probability is calculated, the objects move as sketched below:

$$\text{if } rand < S\bigl(v_i^d(t+1)\bigr),\ \text{then } x_i^d(t+1) = \mathrm{complement}\bigl(x_i^d(t)\bigr);\ \text{else } x_i^d(t+1) = x_i^d(t) \qquad (8)$$
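A minimal sketch of the BGSA move of Eq. (8), assuming NumPy and an elementwise bit representation (the function name and test values are ours):

```python
import numpy as np

def bgsa_position_update(x_bits, vel):
    """BGSA move, Eq. (8): flip each bit with probability S(v) = |tanh(v)|."""
    s = np.abs(np.tanh(vel))
    flip = np.random.rand(*x_bits.shape) < s
    return np.where(flip, 1 - x_bits, x_bits)

x = np.array([0, 1, 1, 0])
v = np.array([0.1, -2.0, 0.0, 3.0])
print(bgsa_position_update(x, v))  # bits with large |v| are likely to flip
```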

30

I. Bala and A. Yadav

The probability of changing position must be near zero when the velocity is small, so that a good solution is retained: a small absolute value of the velocity must give a small probability of changing the position. In other words, a zero velocity indicates that the mass position is good and must not be changed.

2. Single Objective GSA [3] A large number of GSA variants can be found that locate single solutions. They were developed specifically to find a single solution to continuous unconstrained optimization problems, though most of these algorithms can also be applied to other types of problem.

3. Multi-objective GSA (MOGSA) [4] This algorithm helps to find multiple non-dominated solutions; it is also called a niching algorithm. In MOGSA [4], in every iteration a randomly selected object is considered the leader and the other objects follow it. For better exploration, a grid structure is created and stored in an "archive". The grid structure is created as follows: each dimension i of the objective space is divided into $2^{n_i}$ equal divisions, giving $\prod_{i=1}^{k} 2^{n_i}$ cells for k objectives, where i denotes the dimension index. As long as the archive is not full, new non-dominated solutions are added to it; hence, with the lapse of time, the algorithm converges to a solution. The gravitational constant must also decrease during this time, which implies a finer search for the optima in the last iterations.

4. Piecewise-based GSA (PFGSA) [5] PFGSA [5] improves the searching ability of GSA. It is more flexible in controlling the decreasing rate of the gravitational constant G. It divides the decrease of G into three stages: a coarse, a moderate, and a fine search stage. In the coarse stage, G decreases at a larger rate and reduces the search space quickly. In the moderate stage, the decreasing rate of G becomes slow and the search gradually comes close to the global optimum. In the fine stage, G is quite small due to its low decreasing rate, and the algorithm searches for the global optimum in a meticulous way.

5. Disruption operator with GSA, or Improved GSA (IGSA) [6] A nature-inspired operator named "disruption" was introduced to improve the performance of standard GSA. It helps to improve the abilities of exploration and exploitation in the search space. As all masses (solutions) converge towards an optimum solution, a new operator D [6] is introduced:

$$D = \begin{cases} R_{ij} \cdot U(-0.5,\, 0.5) & \text{if } R_{i,best} \geq 1 \\ 1 + \rho \cdot U(-0.5,\, 0.5) & \text{otherwise} \end{cases} \qquad (9)$$

U(−0.5, 0.5) is a uniformly distributed pseudo-random number in (−0.5, 0.5). The exploration and exploitation processes depend upon the operator D: if $R_{i,best} \geq 1$, D explores the search space, and if $R_{i,best} < 1$, D converges to the best solution. $R_{i,best}$ is the distance between mass i and the best solution found so far.
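The two branches of the disruption operator in Eq. (9) can be sketched as follows; the value of ρ is an assumption for illustration, since [6] treats it only as a small constant:

```python
import numpy as np

def disruption(r_ij, r_ibest, rho=1e-16):
    """Disruption operator D of Eq. (9): spread solutions when far from
    the best mass (exploration), perturb slightly when close (exploitation)."""
    u = np.random.uniform(-0.5, 0.5)
    if r_ibest >= 1:
        return r_ij * u          # exploration: scale by the neighbour distance
    return 1 + rho * u           # exploitation: tiny multiplicative nudge

print(disruption(r_ij=2.5, r_ibest=3.0))   # exploration branch
print(disruption(r_ij=2.5, r_ibest=0.2))   # exploitation branch
```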

Gravitational Search Algorithm: A State-of-the-Art Review

31

6. Quantum-Based GSA (QGSA) [7] QGSA [7] is based on quantum dynamics. In this algorithm, each object has quantum behavior, which means each object is expressed by a wave function. In standard GSA, the Kbest set contains all prominent solutions of the problem, whereas in QGSA each Kbest member is the center of an attractive potential field called the delta potential well, and each agent chooses a Kbest member by a probabilistic mechanism. This guarantees a bound on the quantum behavior of the object.

7. Adaptive GSA [8] In QGSA [7], the selection process of the Kbest member for the delta potential well was not properly defined, and hence the exploration process can be uncontrolled; adaptive GSA helps to overcome this problem. The algorithm reduces parametric sensitivity with the help of a fuzzy controller. It uses two depreciation laws for the gravitational constant G, and it also ties a parameter in the weighted sum of all forces exerted by the other agents to the iteration index. The algorithm controls the searching ability of GSA and gives a high convergence rate.

8. Fast Discrete GSA (FDGSA) [9] GSA was originally introduced for continuous-valued spaces. Many problems, however, are defined over discrete values, which led to binary GSA. The main difference between fast discrete GSA and binary GSA is that the position of a mass is updated by its direction and velocity: both the direction and velocity determine the candidate integer values for the position update of the masses, and the selection among them is then completed randomly. FDGSA [9] converges faster compared with BGSA.

9. Synchronous versus Asynchronous GSA (A-GSA) [11] In standard GSA, the velocity and position of the whole population are updated after evaluating the agents' performance and identifying the worst- and best-performing agents; this updating method is classified as synchronous update. In A-GSA [11], however, the agents' velocities and positions are updated in parallel with the evaluation of the agents' performance, without waiting for the entire population to be evaluated. The best and worst agents are therefore recognized using mixed information from the previous and current iterations. This updating method encourages more exploration of the search space by the agents.

10. Modified GSA (MGSA) [12] This algorithm contributes an effective modification of standard GSA: it modifies the maximum velocity constraint, which helps to control the exploration process of standard GSA. The MGSA search criteria are based on two factors, the minimum factor of safety and the minimum reliability index. It increases the convergence rate and helps to obtain a solution within a lower number of iterations.


11. Improved QGSA (IQGSA) [13] This is a new version of QGSA. The proposed algorithm improves the efficiency of QGSA by replacing the fitness function of QGSA with a new fitness function. It has given better results than the original GSA and QGSA.

12. Multi-agent-Based GSA [14] In the proposed algorithm, operations are implemented in parallel rather than sequentially. This algorithm has the ability to solve non-convex objective functions in optimization problems. It also reduces parametric sensitivity and performs very well.

13. Fuzzy GSA [10] In this algorithm, a fuzzy logic controller (FLC) is introduced, which improves the convergence rate and gives better results. The FLC controls GSA's parameters G and α, and also balances the exploration and exploitation search processes.

14. Grouping GSA (GGSA) [15] This algorithm was introduced for the data clustering problem, which refers to the process of grouping a set of data objects into clusters such that data within a cluster have great similarity and data of different clusters have high dissimilarity. The performance of GGSA was evaluated on many benchmark datasets from the well-known UCI machine learning repository, and a good convergence rate was found.

15. Adaptive Centric GSA (AC-GSA) [16] This algorithm introduces a velocity update formula and a weight function to improve the efficiency of standard GSA. Kbest is computed as follows (a short code sketch of this schedule is given after item 17):

$$ Kbest = finalper + \left(1 - \frac{iteration}{max\_itr}\right) \times (100 - finalper) $$

where finalper is the percentage of agents that still apply force to the others in the final generation.

16. Locally Informed GSA (LIGSA) [17] In LIGSA, each agent learns from a unique neighborhood formed by k local neighbors and from the gbest agent chosen from the kbest group. It avoids premature convergence and explores the search space quickly, while the gbest agent accelerates the convergence speed.

17. Fitness-Based GSA (FBGSA) [18] In FBGSA, a new velocity function is introduced, in which the new velocity depends upon the previous velocity and an acceleration based on the fitness of the solutions. High-fit solutions converge to promising search regions, while low-fit solutions explore the search space.
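For reference, here is a minimal sketch of the Kbest schedule given for AC-GSA above; the default finalper value and the function name are illustrative assumptions.

```python
def kbest_percentage(iteration, max_itr, finalper=2.0):
    """Linearly decreasing Kbest schedule: starts near 100% of the agents
    and decays to `finalper`, the percentage of agents that still apply
    force in the final generation (2% is assumed here as a common choice)."""
    return finalper + (1 - iteration / max_itr) * (100 - finalper)

# Number of attracting agents in a population of 50 at iteration 200 of 1000
n_agents = 50
k = max(1, round(kbest_percentage(200, 1000) / 100 * n_agents))
```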


4 Hybrid Versions of GSA

Hybridization makes an algorithm more effective and improves its abilities: the area explored by the algorithm can be enlarged, and more problems can be solved. Hybridizations with GSA are given below:

1. Hybrid Particle Swarm Optimization and GSA (PSOGSA) [19] This hybridization [19] combines PSO and GSA for function optimization. The PSO algorithm is based on the natural phenomenon of bird flocking. PSOGSA introduces the global best (gbest) concept of PSO into GSA, which provides the best current position among the agents, and for most functions it yields a faster convergence speed (a hedged code sketch of this combined velocity update is given after this list).

2. Modified PSO and GSA (MPSOGSA) [20] Standard PSO has the feature of saving the previous local optimum and global optimum solutions, which is referred to as the memory of PSO. In this hybridization, PSO contributes particle memory to GSA; the particle memory revises its own global and local optimum solutions in the updating process. MPSOGSA [20] gives better performance and high accuracy in the selection process.

3. Genetic Algorithm and GSA (GAGSA) [21] GA is based on the principle of survival of the fittest and uses three operators: natural selection, crossover, and mutation. In GAGSA [21], the mutation and crossover operators of GA help to find the global optimum solution in GSA and also improve GSA's speed displacement formula. This algorithm makes the convergence faster, and it is comparable to PSO and GSA.

4. Gravitational Particle Swarm (GPS) [22] This algorithm is a hybridization of PSO and GSA. In GPS [22], agents update their positions with PSO's velocity and GSA's acceleration. It was applied to 23 benchmark functions and better performance was obtained.

5. Artificial Bee Colony and GSA (ABCGSA) [23] The artificial bee colony algorithm is inspired by the foraging behavior of honey bees. It divides the search process into three steps: first, employed bees go to the food sources in their memory and evaluate their nectar amounts; then onlooker bees select the better ones among these food sources; and scouts discover new food sources to replace abandoned ones. ABCGSA [23] combines the search mechanism of these three steps of ABC with the moving method of GSA and obtains better results.

6. K-Means and GSA (GSA-KM) [25]


GSA-KM [25] gives another approach to generate the initial population and helps the K-means algorithm to escape from local optima. The K-means algorithm generates an appropriate initial population for GSA, which provides a solution in the least possible number of iterations. It improves the quality of the solution and the convergence speed.

7. Hybrid Neural Network and GSA (HNNGSA) [24] The GSA technique is applied to a multilayer artificial neural network. It is used to tune the adaptable parameters, and an approximate solution of Wessinger's equation is also obtained. The performance of HNNGSA [24] was compared with the Runge–Kutta, Euler, and improved Euler methods, and better results were obtained.

8. K-Harmonic Means and Improved GSA (IGSAKHM) [26] This hybrid form was introduced to solve clustering problems in data mining. The proposed algorithm [26] integrates an improved GSA into KHM. It provided better results than standard KHM and PSOKHM.

9. Differential Evolution and GSA (DE-GSA) [27] In this algorithm, two strategies are used to update the agents' search: the DE strategy and the GSA strategy. To avoid local minima on the boundary, the searching speed is first restricted, and if objects move outside the boundary, the algorithm scatters them into a feasible region away from the boundary instead of stopping them on the boundary. The performance of DE-GSA was evaluated on several benchmark functions with better results.
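As referenced in the PSOGSA item above, here is a hedged Python sketch of the combined velocity update reported for PSOGSA [19]; the coefficient values and function names are illustrative assumptions, not details of the original implementation.

```python
import numpy as np

def psogsa_step(x, v, acc_gsa, gbest, w=0.6, c1=0.5, c2=1.5):
    """One PSOGSA-style update combining GSA acceleration with PSO's gbest.

    x, v    : positions and velocities, arrays of shape (n_agents, dim)
    acc_gsa : accelerations from the GSA gravity rule, same shape as x
    gbest   : best position found so far, shape (dim,)
    w,c1,c2 : inertia and acceleration coefficients (illustrative values)
    """
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    # the GSA term drives local search; the gbest term adds PSO's memory
    v_new = w * v + c1 * r1 * acc_gsa + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```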

5 Advantages and Disadvantages of GSA

GSA is a recently developed algorithm that solves many complex nonlinear optimization problems. It has the ability to solve complex problems, although in some cases it takes more time to execute the iterations. Its advantages and disadvantages are as follows:

5.1 Advantages
• GSA can produce results with high accuracy [10].
• GSA has good local minima avoidance compared to other heuristic techniques such as PSO and DE [19].
• GSA generates better quality solutions and gives stable convergence.


Fig. 1 Year-wise publication of articles on GSA by leading international journals (bar chart of yearly publication counts, 2009–2017)

5.2 Disadvantages
• GSA uses complex operators and has a long computational time, and it suffers from a slow searching speed in the last few iterations [19].
• Selection of the gravitational constant parameter G is not straightforward; although G controls the search accuracy, it still does not guarantee a global solution at all times [8].
• It is not flexible: if premature convergence happens, there is no recovery for this algorithm. In other words, after becoming converged, the algorithm loses its ability to explore and becomes inactive [6].
• GSA is a memoryless algorithm; only the current position of the agents plays a role in the update procedure [1].

5.3 Criticism of GSA

Apart from the merits and applications of GSA, it has also faced criticism of its fundamental idea. Gauci et al. [28] claimed that GSA does not take the distance between solutions into account and therefore cannot be considered to be based on the law of gravity.

6 Conclusion and Future Scope

In this paper, the development of GSA has been presented. The year-wise publication of GSA papers in leading journals until June 2017 is shown in Fig. 1. Although GSA is a newly developed algorithm, it has been applied in many areas in a short time, which shows its promising future. It has been applied in areas such as clustering, image processing, neural network training, controller design, filter modeling, and so on. Still, there are many areas, such as finance, military, and


economics, that have not yet been penetrated, and more studies can be done in these areas. More development could be done on the structure of GSA, and many possible hybrid techniques could be explored, such as hybridization of GSA with ACO, Artificial Fish School, Artificial Immune System, etc. GSA is an open problem, and new GSA techniques with better performance are expected in the future. Acknowledgements This research is supported by National Institute of Technology Uttarakhand and North-cap University (NCU), Gurgaon.

References

1. Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
2. Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: BGSA: binary gravitational search algorithm. Nat. Comput. 9(3) (2009)
3. Amoozegar, M., Nezamabadi-pour, H.: Software performance optimization based on constrained GSA. In: The 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP), pp. 134–139 (2012)
4. Hassanzadeh, H.R., Rouhani, M.: MOGSA: multi objective gravitational search algorithm. In: 2nd International Conference on Computational Intelligence, Communication Systems and Networks (2010)
5. Li, C., Li, H., Kou, P.: Piecewise function based gravitational search algorithm and its application on parameter identification of AVR system. Neurocomputing 124, 139–148 (2014)
6. Sarafrazi, S., Nezamabadi-pour, H., Saryazdi, S.: Disruption: a new operator in gravitational search algorithm. Scientia Iranica 18(3), 539–548 (2011)
7. Soleimanpour-moghadam, M., Nezamabadi-pour, H., Farsangi, M.M.: A quantum behaved gravitational search algorithm. In: Proceedings of the International Conference on Computational Intelligence and Software Engineering, Wuhan, China (2011)
8. David, R.-C., Precup, R.-E., Petriu, E., Rdac, M.-B., Purcaru, C., Dragos, C.-A., Preitl, S.: Adaptive gravitational search algorithm for PI-fuzzy controller tuning. In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics, pp. 136–141 (2012)
9. Shamsudin, H.C., Irawan, A., Ibrahim, Z., Abidin, A.F.Z., Wahyudi, S., Rahim, M.A.A., Khalil, K.: A fast discrete gravitational search algorithm. In: 2012 Fourth International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM), pp. 24–28 (2012)
10. Precup, R.M., David, R.C., Petriu, E.M., Preitl, S., Paul, A.S.: Gravitational search algorithm-based tuning of fuzzy control systems with a reduced parametric sensitivity. Adv. Intell. Soft Comput. 96, 141–150 (2011)
11. Azlina, N., Ibrahim, Z., Nawawi, S.W.: Synchronous versus asynchronous gravitational search algorithm. In: First International Conference on Artificial Intelligence, Modelling & Simulation (2013)
12. Khajehzadeh, M., Taha, M.R., El-Shafie, A., Eslami, M.: A modified gravitational search algorithm for slope stability analysis. Eng. Appl. Artif. Intell. 25(8), 1589–1597 (2012)
13. Soleimanpour-moghadam, M., Nezamabadi-pour, H.: An improved quantum behaved gravitational search algorithm. In: Proceedings of the 20th Iranian Conference on Electrical Engineering (ICEE2012), pp. 711–715 (2012)
14. Nanji, H.R., Mina, S., Rashedi, E.: A high-speed, performance-optimization algorithm based on a gravitational approach. J. Comput. Sci. Eng. 14(5), 56–62 (2012)
15. Dowlatshahi, M.B., Nezamabadi-pour, H.: GGSA: a grouping gravitational search algorithm for data clustering. Eng. Appl. Artif. Intell. 36, 114–121 (2014)


16. Wu, Z., Hu, D., Tec, R.: An adaptive centric gravitational search algorithm for complex multimodal problems. Tec. Ing. Univ. 39, 123–134 (2016)
17. Sun, G., Zhang, A., Wang, Z., Yao, Y., Ma, J.: Locally informed gravitational search algorithm. Knowl. Based Syst. 104, 134–144 (2016)
18. Gupta, A., Sharma, N., Sharma, H.: Fitness based gravitational search algorithm. Comput. Commun. Autom. IEEE (2017)
19. Mirjalili, S., Hashim, S.Z., Sardroudi, H.M.: Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl. Math. Comput. 218(22), 11125–11137 (2012)
20. Jiang, S., Ji, Z., Shen, Y.: A novel hybrid particle swarm optimization and gravitational search algorithm for solving economic emission load dispatch problems with various practical constraints. Int. J. Electr. Power Energy Syst. 55, 628–644 (2014)
21. Sun, G., Zhang, A.: A hybrid genetic algorithm and gravitational search algorithm using multilevel thresholding. Pattern Recognit. Image Anal. 7887, 707–714 (2013)
22. Tsai, H.C., Tyan, Y.-Y., Wu, Y.-W., Lin, Y.-H.: Gravitational particle swarm. Appl. Math. Comput. 219(17), 9106–9117 (2013)
23. Guo, Z.: A hybrid optimization algorithm based on artificial bee colony and gravitational search algorithm. Int. J. Digital Content Technol. Appl. 6(17), 620–626 (2012)
24. Ghalambaz, M., Noghrehabadi, A.R., Behrang, M.A., Assareh, E., Ghanbarzadeh, A., Hedayat, N.: A hybrid neural network and gravitational search algorithm (HNNGSA) method to solve well known Wessinger's equation. World Acad. Sci. Eng. Technol., pp. 803–807 (2011)
25. Hatamlou, A., Abdullah, S., Nezamabadi-pour, H.: A combined approach for clustering based on K-means and gravitational search algorithms. Swarm Evol. Comput. 6, 47–55 (2012)
26. Yin, M., Hu, Y., Yang, F., Li, X., Gu, W.: A novel hybrid K-harmonic means and gravitational search algorithm approach for clustering. Expert Syst. Appl. 38(8), 9319–9324 (2011)
27. Xiangtao, L., Yin, M., Ma, Z.: Hybrid differential evolution and gravitation search algorithm for unconstrained optimization. Int. J. Phys. Sci. 6(25), 5961–5981 (2011)
28. Gauci, M., Dodd, T.J., Groß, R.: Why 'GSA: A Gravitational Search Algorithm' is not genuinely based on the law of gravity. Springer Science & Business Media, Berlin (2012)

Investigating the Role of Gate Operation in Real-Time Flood Control of Urban Drainage Systems Fatemeh Jafari, S. Jamshid Mousavi, Jafar Yazdi and Joong Hoon Kim

Abstract Flooding is a potential risk to human life and assets, and to the environment, in urban areas. To mitigate this phenomenon and the related damages, structural and nonstructural options can be considered. This study investigates the effect of gate operation on flood mitigation during intense rainfall events. A prototype network, consisting of a detention reservoir located in a portion of Tehran, the capital city of Iran, is considered. Different operational scenarios are examined using an optimal real-time operation model. An SWMM model of the system, simulating rainfall–runoff and hydraulic routing processes, is built and linked to the harmony search optimization algorithm, which evaluates the system's operational performance for different scenarios. Results demonstrate that there is still room to increase the potential flood regulation capacity of the studied system by equipping it with more controllable apparatus. Keywords Urban drainage system · Flood control · Real-time optimization · Detention reservoir

1 Introduction Climate change and exponential growth of impervious surfaces in urban regions due to excessive development of man-made structures, such as buildings, squares, and F. Jafari · S. J. Mousavi (B) Department of Civil and Environmental Engineering, Amirkabir University of Technology, Tehran, Iran e-mail: [email protected] F. Jafari e-mail: [email protected] J. Yazdi College of Engineering, Shahid Beheshti University, Tehran, Iran J. H. Kim School of Civil and Architectural Engineering, Korea University, Seoul, Korea © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_4


roads, have led to a remarkable rise in the rate and volume of surface runoff and flooding. During severe rainfall events, the urban drainage system (UDS) becomes overloaded, causing flood occurrence [1]. To prevent flooding in urban areas, offline storage installations are applied which temporarily store stormwater volume. This solution is often expensive due to the high costs of construction and maintenance [1]. On the contrary, nonstructural approaches which are utilized to manage flood with the existing facilities are avoiding large investments [2–4]. Real-time control (RTC) is among the nonstructural approaches that broadly used to manage UDSs. This method let the network to be real-time monitored and regulated so as to compatibly work in various situations and different rainfall events [5]. In RTC procedure, controllable elements, such as gates and pumps, are regulated using operation policies that come from an optimization strategy so as to obtain the desired UDS working behavior [1]. According to the literature, many research studies focus on RTC approach to manage UDSs. In the study of Pleau et al. [6], a global optimal control (GOC) system, consisting of a nonlinear hydrologic–hydraulic model and a nonlinear programming optimization algorithm, was applied to the Quebec westerly sewer network. The objectives of optimization problem were the minimization of set points variations in real time and minimization of the frequency and volume of sewer overflows discharged into the basin’s rivers. To adjust flows and inline storages in combined sewer systems, Darsono and Labadie [7] proposed a neural-optimal algorithm-based RTC approach to regulate flows and inline storages in combined sewer systems. Beeneken et al. [4] also applied a global RTC approach to the combined sewer system of Dresden city, Germany with the hydrodynamic pollution load calculations module to improve the efficiency of the sewer system. The performances of two real-time models for pump station operation, namely a historical adaptive-network-based fuzzy inference system (ANFIS–His) and an optimized ANFIS (ANFIS-Opt), were compared to find optimal operational policies for flood mitigation in urban areas [8]. Yazdi and Kim [9] suggested a harmony search algorithm-based predictive control model to obtain optimal operational policies. They considered the coordinated operation of drainage facilities in a river diversion and an urban drainage system. Using a gossip-based algorithm linked to the SWMM hydrodynamic simulation model, Garofalo et al. [1] developed a distributed RTC model. In this study, an online RTC model is applied to a portion of urban drainage system consisting of a detention reservoir with controllable and uncontrollable gates and openings. The way how the online real-time optimal operation model is applied to the studied system and practical suggestion to improve system’s performance is discussed in the following sections.

2 Methodology

As shown in Fig. 1, suppose a detention reservoir has an outflow gate located B meters above the surface. If the maximum depth of the reservoir is discretized into n levels, the operation model's aim is to reduce flood inundation downstream of


Fig. 1 Discrete water level in the detention reservoir

Fig. 2 Decision variable vector [G1, G2, G3, …, Gj, …, Gm]

the system. In this case, the outflow gate plays a significant role in flood control. Therefore, the optimization problem of the system's operational performance is solved by considering the decision variables as a policy on how to regulate the gate openings. In other words, the decision variable vector includes variables G_j, each of which represents the percentage of gate opening corresponding to water levels within the interval [d_j, d_{j+1}) (Fig. 2). The number of decision variables depends on the height B, since the gate starts working only when the water level exceeds the bottom edge elevation of the gate. Obviously, adding more gates to the system will increase the number of decision variables. The operation policy (percentage of gate openings) for evacuating water out of the system can be obtained via an optimal real-time operation model.

2.1 Optimal Real-Time Operation (RTOP) Model

In the RTOP model, the operation policies of the regulators are updated periodically: the time horizon D is divided into a number of decision time intervals T_i, and a particular control rule R_i is derived for each decision time. As a result, a finite sequence of operating policies R_1, R_2, …, R_i, …, R_H is determined over the time horizon D, where each R_i refers to a vector of optimal policies for gate operation to be applied during the interval T_i. The RTOP model formulation is presented below.

$$ \text{MIN}: \; \sum_{T = T_i}^{T_H} F_T \qquad (1) $$

Subject to:

$$ F_{T_i} = f\left(R,\, h_t,\, G_j,\, \ldots \right)_{T_i} \qquad (2) $$

$$ 0 \le h_t \le H_{MAX} \qquad (3) $$

$$ h_t = f\left(h_{t-1},\, Q_{in,t},\, G_j \right)_{T_i} \qquad (4) $$

$$ [G_j]_{T_i} = \begin{cases} 0 & \text{if } h_t \le B \\ \sum_{z=1}^{11} Z_z \times P_z & \text{otherwise} \end{cases} \qquad (5) $$

$$ \sum_{z=1}^{11} Z_z = 1 \qquad (6) $$

Note that the formulation represents a multi-period optimization model, as it considers the state of the system from the current decision time T_i to the end of the time horizon T_H in the evaluation of the objective function (Eq. 1). However, the optimal operation rule found is applied only for the decision time interval T_i. In the above formulation, F_{T_i} is the total flood volume in the period T_i, which is a function of a number of variables, such as the rainfall amount and characteristics R, the water level in the detention reservoir h_t, the gate operational policy (decision variables), and other parameters that are determined using the rainfall–runoff and flow routing simulation model. Equations (2)–(4) represent the SWMM simulation module of the model, which must be executed for each objective function evaluation. G_j is the gate opening percentage corresponding to water levels within an interval [d_j, d_{j+1}), computed via Eq. (5), in which P_z is an integer variable that takes a value among [0, 10, 20, …, 100%] and Z_z is a binary variable. h_t is the reservoir's water level at time t, which is a function of the inflow discharge to the detention reservoir at time t, Q_{in,t}, the water level at the previous time step, h_{t−1}, and G_j. The popular metaheuristic harmony search (HS) algorithm was used to solve the aforementioned optimization problem. Suitable values of the optimization algorithm parameters were determined after some trial runs of the HS algorithm for several flood scenarios, as summarized in Table 1. The optimization–simulation models were solved using an Intel Core i7 3.4 GHz system with 8 GB of random access memory (RAM).
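To make the gate-opening constraints of Eqs. (5)–(6) concrete, the following is a minimal Python sketch of the one-hot decoding; the function and variable names are illustrative, not taken from the authors' implementation.

```python
P = [10 * z for z in range(11)]     # admissible openings: 0, 10, ..., 100 (%)

def gate_opening(z_onehot, h_t, B):
    """Decode the binary vector Z of Eqs. (5)-(6) into an opening percentage."""
    assert sum(z_onehot) == 1       # Eq. (6): exactly one Z_z equals 1
    if h_t <= B:                    # water below the gate's bottom edge
        return 0                    # Eq. (5), first branch
    return sum(Zz * Pz for Zz, Pz in zip(z_onehot, P))  # Eq. (5), second branch

# Example: water level 3.2 m, gate edge at 1.0 m, the 40% opening selected
opening = gate_opening([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0], h_t=3.2, B=1.0)
```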

Table 1 Parameters used in HS algorithm

Parameter   Value
HM size     100
HMCR        0.98
PAR         0.1
FW          0.02 × variable ranges
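To illustrate how the parameters in Table 1 drive the search, the following is a minimal, self-contained harmony search loop; the objective function is a toy stand-in for the SWMM flood-volume simulation, and the bounds, dimensionality, and iteration count are illustrative assumptions.

```python
import random

HMS, HMCR, PAR = 100, 0.98, 0.1           # values from Table 1
LB, UB = 0.0, 100.0                       # gate opening range in percent (assumed)
FW = 0.02 * (UB - LB)                     # fret width from Table 1

def flood_volume(policy):                 # stand-in for the SWMM run that
    return sum((g - 50) ** 2 for g in policy)   # returns total flood F_T

n_vars = 38                               # decision variables in this study
hm = [[random.uniform(LB, UB) for _ in range(n_vars)] for _ in range(HMS)]
cost = [flood_volume(h) for h in hm]

for _ in range(2000):                     # improvisation loop
    new = []
    for j in range(n_vars):
        if random.random() < HMCR:        # pick from harmony memory ...
            x = hm[random.randrange(HMS)][j]
            if random.random() < PAR:     # ... and maybe pitch-adjust it
                x += random.uniform(-FW, FW)
        else:                             # or draw a fresh random value
            x = random.uniform(LB, UB)
        new.append(min(max(x, LB), UB))
    f = flood_volume(new)
    worst = max(range(HMS), key=cost.__getitem__)
    if f < cost[worst]:                   # replace the worst harmony
        hm[worst], cost[worst] = new, f

best = hm[min(range(HMS), key=cost.__getitem__)]
```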


3 Study Area

The studied system is located in the southern part of the main drainage system of Tehran, the capital of Iran. The network covers an area of 156 km² and includes 42 sub-catchments and 132 conduits. The considered drainage network consists of 116 km of underground tunnels, approximately 15.6 km of which do not have enough capacity to safely convey the stormwater runoff of a 50-year design rainfall. As shown in Fig. 3, because of this lack of hydraulic capacity, a detention reservoir, the characteristics of which are presented in Table 2, has been built to temporarily store excess storm runoff. The concrete outlet intake structure of the detention reservoir is equipped with three controllable steel sluice gates of 1.6 × 1.6 m² size. Additionally, eight rectangular openings of 0.6 × 0.9 m² size at the upper part and an octagonal opening, three meters in diameter, on the roof of the structure act as spillways when the water level rises. The physical characteristics of the system do not provide the ability to control these openings, and they work automatically once the water level exceeds their bottom elevation.

Fig. 3 Schematic representation of the studied network and outlet intake structure of the detention reservoir

Table 2 Detention reservoir characteristics

Maximum depth (m)   7.5
Area (m²)           EL 0: 85,000; EL 1: 160,000; EL 7.5: 160,000


Fig. 4 Precipitation hyetographs for the investigated events

Table 3 Historical storm events studied

Event        Duration (min)   Accumulated precipitation (mm)
29/12/1976   610              21.22
26/01/1980   235              15.4
07/12/1984   230              25.71
28/03/2002   190              9.14
04/05/2004   405              8.75
15/07/2012   270              28.7

Six severe historical rainfall events were utilized to examine model performance. The hyetographs of the events and their characteristics are presented in Fig. 4 and Table 3, respectively.

4 Results and Discussion

The simulation model of the system is developed in SWMM, as shown in Fig. 5, using the aforementioned system data, features, and characteristics collected by MG Consulting Engineers (MGCE) [10]. To reduce the execution runtime, the network is separated into two sub-models. In this way, for each decision time, the upstream sub-model is run only once and the downstream sub-model is called for


Fig. 5 Simulation sub-models: a upstream sub-model, b downstream sub-model (separated node marked)

each function evaluation. The model is separated into two sub-models based on the assumption that the inflow to the separated node is independent of the gate performance and that gravity flow forms in the upstream model. Figure 6 confirms the validity of this assumption by comparing the results obtained for the integrated (Fig. 3) and separated (Fig. 5a) networks for different events. According to Fig. 3, the reservoir contains three sluice gates and eight openings, the operation of which plays a significant role in flood reduction. The allowable maximum water depth in the detention reservoir was considered to be 7.5 m, divided into 15 discrete values with 0.5 m increments. The decision variables are the percentages of the openings corresponding to each discrete water level. Following the explanation of the number of decision variables in Sect. 2, a total of 38 variables is defined, with 15, 12, 8, and 3 variables for sluice gates 1, 2, and 3 and all openings together (4), respectively. To investigate the importance of each gate operation, three operational scenarios are defined as follows: Scenario 1: In this scenario, all gates and openings are considered to be fully open all the time, without any controlling rule. This is the procedure currently in practice.


Fig. 6 Flood hydrographs at the separated node validating the gravity flow assumption in the model of the upstream network

Table 4 Comparison of three scenarios in terms of flood volume

Event        Flooding (1000 m³)
             Scenario 1   Scenario 2   Scenario 3
29/12/1976   1059         204.24       34.55
26/01/1980   659.65       0            0
07/12/1984   1080         224.14       45
28/03/2002   339.23       0            0
04/05/2004   304.25       0            0
15/07/2012   1120         267.84       86.93

Scenario 2: In this scenario, the operation of sluice gates 1, 2, and 3 is controlled using the RTOP model, but the eight openings at an elevation of 4.5 m are considered to be fully open all the time, without any control rule. Scenario 3: In this scenario, all sluice gates and openings are regulated using the RTOP model; in other words, all gates and openings in the system are assumed to be controllable. Table 4 displays the flood volumes resulting from the three scenarios for six different rainfall events. It can be inferred that the real-time optimal control approach performs quite well, significantly reducing the negative consequences of flooding and the flood volume. Additionally, a comparison of the outcomes of scenarios 2 and 3, presented in Table 5, demonstrates that controlling the system partially, compared with full regulation of the system components, results in an increase in flood inundation of up to 17%. This shows the importance of gate operation in flood management. The ability to regulate all controllable elements in the system leads to an

Table 5 Percentage of flood reduction resulting from scenarios 2 and 3

Event        Percentage of flood reduction (%)   Variation of reduction
             Scenario 2   Scenario 3
29/12/1976   81           97                     16
26/01/1980   100          100                    0
07/12/1984   79           96                     17
28/03/2002   100          100                    0
04/05/2004   100          100                    0
15/07/2012   76           92                     16

Fig. 7 Comparison of different scenarios in terms of reservoir depth

efficient use of the system's regulation capacity. According to Fig. 7, applying scenario 3 leads to the optimal utilization of the reservoir capacity, where excess water is temporarily stored and used later for other purposes, such as irrigation of urban green landscapes.


5 Conclusion

Flood is one of the natural disasters that cause damage to human life, assets, and the environment. A flood control system, including the drainage network and detention reservoir, depends highly on the operation of controllable elements such as gates and pump stations. In this study, the importance of the operation of the outflow gates of a detention reservoir located in a portion of the urban drainage system of the capital city of Iran was investigated. Three different operational scenarios, representing zero to full utilization of the regulating capacity of the system's components, were defined and investigated using an optimal real-time operation model. In the real-time operation model, the SWMM simulation model was linked to the harmony search optimization algorithm to find a real-time optimal policy for gate operation. Comparing the results of the different scenarios showed that flood inundation can be significantly reduced using the presented real-time control approach. Moreover, the operation of each individual gate significantly impacts the operation of the whole system, as partial control of the system, compared with the fully controlled case, led to a 17% increase in flood inundation. Therefore, to utilize the system's regulation capacity during floods, the studied system may be equipped with more controllable apparatus, taking economic considerations into account.

References

1. Garofalo, G., Giordano, A., Piro, P., Spezzano, G., Vinci, A.: A distributed real-time approach for mitigating CSO and flooding in urban drainage systems. J. Netw. Comput. Appl. 78, 30–42 (2017)
2. Schütze, M., Campisano, A., Colas, H., Schilling, W., Vanrolleghem, P.A.: Real time control of urban wastewater systems—where do we stand today? J. Hydrol. 299(3), 335–348 (2004)
3. Bach, P.M., Rauch, W., Mikkelsen, P.S., McCarthy, D.T., Deletic, A.: A critical review of integrated urban water modelling—urban drainage and beyond. Environ. Model Softw. 54, 88–107 (2014)
4. Beeneken, T., Erbe, V., Messmer, A., Reder, C., Rohlfing, R., Scheer, M.: Real time control (RTC) of urban drainage systems—a discussion of the additional efforts compared to conventionally operated systems. Urban Water J. 10(5), 293–299 (2013)
5. Dirckx, G., Schütze, M., Kroll, S., Thoeye, C., De Gueldre, G., Van De Steene, B.: RTC versus static solutions to mitigate CSO's impact. In: 12th International Conference on Urban Drainage, Porto Alegre, Brazil (2011)
6. Pleau, M., Colas, H., Lavallée, P., Pelletier, G., Bonin, R.: Global optimal real-time control of the Quebec urban drainage system. Environ. Model Softw. 20(4), 401–413 (2005)
7. Darsono, S., Labadie, J.W.: Neural-optimal control algorithm for real-time regulation of in-line storage in combined sewer systems. Environ. Model Softw. 22(9), 1349–1361 (2007)
8. Hsu, N.S., Huang, C.L., Wei, C.C.: Intelligent real-time operation of a pumping station for an urban drainage system. J. Hydrol. 489, 85–97 (2013)
9. Yazdi, J., Kim, J.H.: Intelligent pump operation and river diversion systems for urban storm management. J. Hydrol. Eng. 20(11), 04015031 (2015)
10. MGCE: Tehran Stormwater Management Master Plan, Vol. 4: Existing Main Drainage Network, Part 2: Hydraulic Modeling and Capacity Assessment, December 2011, MG Consultant Engineers, Technical and Development Deputy of Tehran Municipality, Tehran, Iran (2011)

Molecular Dynamics Simulations of a Protein in Water and in Vacuum to Study the Solvent Effect Nitin Sharma and Madhvi Shakya

Abstract Molecular dynamics simulation shows the motions of individual molecules in models of liquids, solids, and gases. The motion of a molecule defines how its positions, velocities, and orientations change with time. In this study, an attempt has been made to study the solvent effect on the dynamics of the Major Prion protein. Keeping the focus mainly on the collective motions of the molecule, molecular dynamics simulations of the Major Prion protein in vacuum and in water are performed up to 100 ps. The results obtained from these two simulations are compared to study the solvent effect on the dynamics of the Major Prion protein. Energy minimization and molecular dynamics simulation have been done with GROMACS using the OPLS-AA force field. Keywords Molecular dynamics · OPLS-AA force field · RMSD · RMSF · MSD

1 Introduction

A protein is a molecule made up of amino acids that are needed for the body to function properly. The tertiary structure of a protein is its native and functional state [1]. Molecular dynamics (MD) simulations are extensively used to study protein structure and function. The outcome of a given simulation depends on a number of factors, such as the quality of the molecular force field, the treatment of the solvent, the time period of the simulation, and the sampling ability of the simulation procedure. There has been massive investment in the basic technology in each of these areas, and the range of application of molecular dynamics simulations has extended


significantly since the technique was first applied [2]. The initial interpretation of proteins as comparatively rigid structures has been replaced by a dynamic model in which the internal motions and the resulting conformational variations play an indispensable role in protein function. A solvent plays an essential part in the study of the structure and dynamics of a complex molecule like a protein. Numerous approaches have been recommended for the computational simulation of a protein, which can include the effect of a solvent directly or indirectly [3]. Molecular dynamics (MD), or Newtonian dynamics, that explicitly includes water molecules and other environmental elements such as ions, is conceptually straightforward and has been applied by numerous authors in an effort to replicate the solvent environment [4–7]. Analyses in these studies mainly focused on fluctuations in the atomic positions, on the behavior of the inter- and intramolecular hydrogen bonding, or on the conformational dynamics close to the active site, with comparisons to X-ray crystallography [8]. In principle, this procedure allows us to reproduce all dynamic and structural aspects of a complex protein molecule in solution. At least as vital are the collective motions in proteins, as the modes with low frequencies give vital contributions to the scale of the fluctuations of atoms: more than half of the magnitude of the root mean square fluctuations of atoms can be expressed by a few lowest frequency modes [1, 6]. Consequently, it is fascinating to understand how the solvent affects this type of low-frequency mode. In this work, an attempt has been made to study the solvent effects on the dynamics and structure of the complex Major Prion protein molecule, mainly by focusing on the collective modes. This can be achieved by projecting the molecular dynamics trajectory onto a set of orthogonal principal axes [8]. The projection method has been successfully applied to the study of the collective motions of the Major Prion protein in vacuum [9]. The effects of the solvent on both the static and the dynamic properties of the protein are embodied when the degrees of freedom of the solvent are projected onto those of the protein. The static effect is contained in the potential of mean force for the protein, which describes, among others, the screening of the hydrophobic and electrostatic interactions. This adjustment of the potential surface due to the solvent has been verified by Pettitt and Karplus [10] for alanine dipeptide in water, using a treatment based on the extended RISM theory. Whereas the potential surface in vacuum has two deep minima in the dihedral angle space, there are numerous minima in solvent, which are separated by considerably lower potential barriers. This modification of the potential surface must affect the fluctuation and conformation of the protein significantly.

2 Methodology

In the present study, an attempt has been made to study the solvent effect on the motion of the Major Prion protein. To see the solvent effects on its dynamics, we simulated the above-mentioned protein in water and in vacuum. As the initial structure for all parameter interpretations, the experimental structure of the Major Prion


protein, determined by Saviano, G. and Tancredi, T. and accessible from the Protein Data Bank under the code 2IV4, was used. All computations and simulations were done using the GROMACS simulation software and the OPLS-AA force field, with the SPC water model, for which we used the flexible variant as provided by the GROMACS constraint files. For the first molecular dynamics simulation, one Major Prion protein molecule was equilibrated together with 1650 water molecules in a cubic box with periodic boundary conditions, in an NpT ensemble at a temperature of 300 K and a reference pressure of 1 bar, and a simulation of 100 ps was performed for analysis. Newton's equations of motion were integrated using the leap-frog algorithm with a time step of 0.002 ps. For the second molecular dynamics simulation, the Major Prion protein was equilibrated in vacuum at a constant temperature of 300 K. The other simulation parameters are identical to those of the first simulation.

3 Results and Discussion

First, we simulated the Major Prion protein for up to 100 ps in solvent (SPC water model) with 1650 solvent particles; after that, we simulated it in vacuum for the same duration, i.e., up to 100 ps, with the same parameters used for the solvent simulation. We then determined the important quantities, namely the RMSF (root mean square fluctuation), the projections on the principal axes for both simulations, and the RMSD (root mean square deviation), and compared the results to see the effect of the solvent on the Major Prion protein during simulation; for validation, we calculated and plotted the MSD (mean square displacement) for both simulations. For the molecular dynamics simulation in water, it is established that the root mean square fluctuations (RMSF) of the Cα atoms and side chains are much smaller than those in the molecular dynamics simulation in vacuum. This agrees with the established fact that the potential surface of the protein is reshaped, relative to vacuum, by the presence of the water solvent [11].

3.1 Atom Fluctuations

In this part, the root mean square fluctuations of the atoms are discussed. It is observed that the RMSF is much smaller in water than in vacuum. The RMSF in the molecular dynamics simulation in water and in the molecular dynamics simulation in vacuum are shown for Cα atoms and side chains in Figs. 1 and 2, respectively, where the black curve shows the RMSF in solvent and the red curve the RMSF in vacuum. (1) First, we calculated and plotted the RMSF for Cα. (2) Then we calculated and plotted the RMSF for the side chains.
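For reference, a minimal numpy sketch of the RMSF computation discussed here is given below; it assumes the trajectory frames have already been superposed on a reference structure, and all names are illustrative rather than taken from the GROMACS tooling.

```python
import numpy as np

def rmsf(traj):
    """Root mean square fluctuation per atom.

    traj : array of shape (n_frames, n_atoms, 3) holding coordinates that
           have already been fitted (rotated/translated) onto a reference,
           e.g. the Calpha or side-chain atoms extracted from the 100 ps run.
    """
    mean_pos = traj.mean(axis=0)                  # average structure
    disp2 = ((traj - mean_pos) ** 2).sum(axis=2)  # squared displacement per frame
    return np.sqrt(disp2.mean(axis=0))            # time average, one value per atom

# toy example: 100 frames, 5 atoms
rmsf_values = rmsf(np.random.rand(100, 5, 3))
```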


Fig. 1 RMSF for Cα

Fig. 2 RMSF for side chain

3.2 Projection of the Molecular Dynamics Trajectories on the Principal Axes

We now plot the projection of the molecular dynamics trajectory onto the principal axes up to 100 ps. The projections of the molecular dynamics trajectories onto the first three principal axes in vacuum and in water are shown in Figs. 3 and 4, respectively. The projections in vacuum are smooth curves, while in water there is significant noise and the periodicities do not appear. (1) Projection of the molecular dynamics trajectories onto the three principal axes in vacuum. (2) Projection of the molecular dynamics trajectories onto the three principal axes in solvent.


Fig. 3 Projection on to the three principal axes in vacuum

3.3 Root Mean Square Deviation (RMSD)

The easiest approach to verify the accuracy of a simulation is to determine the extent to which the motion causes a collapse of the X-ray structure. In vacuum, it takes approximately 20 ps before the root mean square deviation from the initial structure reaches a steady value, while in solvent the structure becomes stable more rapidly, at around 5 ps, and remains closer to the X-ray structure (Fig. 5).

4 Validation

Mean square displacement: we calculated and plotted the mean square displacement against time in solvent and in vacuum, shown in Fig. 6a, b respectively. It is clear from Fig. 6a that the mean square displacement in solvent grows linearly with time, while Fig. 6b shows that in vacuum it does not grow with time.
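A minimal numpy sketch of this validation check is given below; it uses the simple single-time-origin form of the MSD (production analyses typically average over multiple time origins), and all names are illustrative.

```python
import numpy as np

def msd(traj):
    """Mean square displacement MSD(t) = <|r(t) - r(0)|^2>, averaged over atoms.

    traj : array of shape (n_frames, n_atoms, 3); for a solvated system the
           curve should grow roughly linearly with t (diffusive motion),
           while in vacuum it levels off, as in Fig. 6.
    """
    disp = traj - traj[0]                       # displacement from the first frame
    return (disp ** 2).sum(axis=2).mean(axis=1) # one value per frame

msd_curve = msd(np.random.rand(50, 10, 3))      # toy trajectory
```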

Fig. 4 Projection on to the three principal axes in solvent

Fig. 5 RMSD in vacuum (red) and RMSD in solvent (black)

Fig. 6 a Mean square displacement against time in solvent b Mean square displacement against time in vacuum

5 Conclusions

In the present work, an attempt has been made to study the solvent effect on the dynamics of the Major Prion protein, for which we simulated the protein in water and in vacuum. It is observed from Figs. 1 and 2 that the RMSF of the Cα atoms and side chains is smaller in water than in vacuum, which shows that the potential surface of the protein is transformed by the presence of the water solvent relative to vacuum [8]. It is observed from Figs. 3 and 4 that the projections in vacuum are smoother curves than those in solvent, which contain heavy noise due to the presence of water. We also showed (Fig. 5) that the molecular dynamics simulation of protein motion is more accurate when the solvent is incorporated, in that the structure remains nearer to the X-ray structure. Finally, we calculated and plotted the MSD (mean square displacement) against time up to 100 ps. It is an established fact that the MSD in a solvent simulation should increase linearly with time, which is shown in Fig. 6a, where the mean square displacement for solvent grows linearly with time [12]; it does not show the same behavior in the vacuum simulation (Fig. 6b), which validates the results.

References

1. Dill, K., et al.: The protein folding problem. Annu. Rev. Biophys. 37, 289–316 (2008)
2. Fan, H.: Comparative study of generalized Born models: protein dynamics. PNAS (2005)
3. McCammon, J.A., Harvey, S.C.: Dynamics of Proteins and Nucleic Acids, ch. 4. Cambridge University Press, Cambridge (1987)
4. Brooks III, C.L., Karplus, M.: Solvent effects on protein motion and protein effects on solvent motion. Dynamics of the active site region of lysozyme. J. Mol. Biol. 208, 159 (1989)
5. Go, N.: A theorem on amplitudes of thermal atomic fluctuations in large molecules assuming specific conformations calculated by normal mode analysis. Biophys. Chem. 35, 105 (1990)
6. Go, N., Noguti, T., Nishikawa, T.: Latent dynamics of a protein molecule observed in dihedral angle space. Proc. Natl. Acad. Sci. U.S.A. 80, 3696 (1983)


7. Jorgensen, W.L., Tirado-Rives, J.: Chem. Scripta A 29, 191 (1989)
8. Kitao, A., Hirata, F., Go, N.: The effects of solvent on the conformation and the collective motions of protein: normal mode analysis and molecular dynamics simulations of melittin in water and in vacuum. Chem. Phys. 158, 447–472 (1991)
9. Horiuchi, T., Go, N.: Projection of Monte Carlo and molecular dynamics trajectories onto the normal mode axes: human lysozyme. Proteins 10, 106–116 (1991)
10. Pettitt, B.M., Karplus, M.: Chem. Phys. Lett. 121, 194 (1985)
11. Levitt, M., Sharon, R.: Accurate simulation of protein dynamics in solution. Proc. Natl. Acad. Sci. U.S.A. 85, 7557–7561 (1988)
12. Leach, A.R.: Molecular Modelling: Principles and Applications, 2nd edn. Prentice Hall (2001)

An Exploiting Neighboring Relationship and Utilizing an Overhearing Concept for Improvement Routing Protocol in Wireless Mesh Network Mohammad Meftah Alrayes, Neeraj Tyagi, Rajeev Tripathi and Arun Kumar Misra Abstract Reduction in control packets and minimization of setting-up time of the route are two challenging issues in wireless mesh networks. Solutions to these two issues are expected to save channel bandwidth and decrease the time delay, which in turn will improve the quality of services. In this paper, a mechanism, based on exploitation of local connectivity and overhearing concept, has been proposed for route discovery and route repair in the well known AODV (i.e., Ad Hoc On-demand Distance Vector) routing protocol. In this proposed work, any neighboring mesh node of the destination mesh node can provide a route to the source mesh node, even if the neighboring node does not have route entry about that destination in routing table. The promiscuous mode (overhearing concept) has been applied to reduce the number of duplicate control packets sent by neighbors of same destination nodes. Simulation results demonstrate how the proposed work outperforms the AODV under routing overhead, end to end delay, throughput, and packet delivery ratio in wireless mesh networks. Keywords AODV · Wireless mesh networks · Promiscuously mode Neighboring table

M. M. Alrayes (B) Applied Research and Development Organization, Tripoli, Libya e-mail: [email protected] N. Tyagi · A. K. Misra Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, Allahabad 211004, India e-mail: [email protected] A. K. Misra e-mail: [email protected] R. Tripathi Department of Electronics and Communication Engineering, Motilal Nehru National Institute of Technology, Allahabad 211004, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_6


Fig. 1 Architecture of wireless mesh network (Internet, gateway mesh routers, backbone mesh routers, border mesh routers, mesh clients; wireless and wired links)

1 Introduction

Wireless mesh networks (WMNs) are one of the promising candidates for next generation wireless networks. They provide cost-effective connectivity solutions where other existing technologies fail to do so. Wireless mesh networks have adopted valuable characteristics of ad hoc networks and of traditional wired and wireless networks. This helps to increase the capacity and coverage area and provides high connectivity to end users in a pervasive manner [1]. Wireless mesh networks, as shown in Fig. 1, are composed of mesh routers and mesh clients, such that the mesh clients are mobile in nature and the mesh routers are static. Routing in WMNs is a challenging problem, and a good routing solution should adapt quickly to any change in topology as well as to changes in wireless link conditions. It should be decentralized, self-organizing, self-healing, scalable, and robust. The improvement of network layer mechanisms in WMNs is an important and challenging issue, as it offers a better quality of service to different types of traffic. Most of the traffic in a wireless mesh network travels from the gateway to the mesh clients and vice versa. The path between the gateway and a mesh client is very long, and the size of the network is large. The control packets take a long time and consume a lot of bandwidth during route discovery, route maintenance, and route repair, and the increase in packet overhead has a more significant impact in wireless mesh networks than in ad hoc networks, because the wireless mesh network supplies backhaul connectivity to different technologies. Thus, the routing control overhead should be reduced [2]. Numerous routing protocols have been developed to date, basically based on AODV and OLSR (i.e., Optimized


Link State Routing) protocols [3]. Some of the prior research works focus on design approaches for repairing route failures [6], on QoS (i.e., quality of service) [4], and on secure routing [5]. In this paper, the proposed method to improve existing on-demand routing in wireless mesh networks is presented with the following contributions:
• The proposed mechanism utilizes the local connectivity and overhearing concepts of the AODV protocol for route discovery and local route repair.
• The proposed M-AODV routing protocol based on AODV has been implemented in NS-2 [7] (i.e., the network simulator).
The rest of the paper is organized as follows: Sect. 2 presents the proposed work. The detailed analysis of results and discussions is given in Sect. 3, followed by the conclusion and references.

2 Proposed Work

In the present work, a mechanism for route discovery and route repair has been proposed by exploiting overheard packets; this concept has been applied to reduce the number of duplicated control packets while constructing alternative routes [6] and to adopt local route recovery [8]. An overhearing table has been constructed that helps to reduce the number of duplicate route reply packets (RREP) sent by neighbors of the same destination nodes. Further, the number of route request packets (RREQ) rebroadcast by neighbors of the same destination is also reduced. The overhearing table records the source and destination addresses of data packets and route reply packets (RREP), using the following fields in every route entry: 1. Source IP address: the source IP address of the RREP packet or data packet. 2. Destination IP address: the destination IP address of the RREP packet or data packet. 3. Sequence number: the destination sequence number of the RREP packet; in the case of data packets, the sequence number is nil. 4. Next hop: the mesh node that sent the RREP message, either an intermediate mesh node or the destination mesh node itself. An example of the overhearing table is given in Fig. 2, and a minimal code sketch of this table is given below. A mesh node purges a route entry, to keep only fresh information in the overhearing table, in the following cases:
• The overhearing mesh node has not heard data packets or any other packets for the same source/destination within the active route timeout.
• An overheard route error packet (i.e., RERR) has been sent for the same source and destination.
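As referenced above, here is a minimal Python sketch of the overhearing table and its purge rules; the class, field, and method names, and the timeout value, are illustrative assumptions, not taken from the authors' NS-2 implementation.

```python
import time

ACTIVE_ROUTE_TIMEOUT = 3.0  # seconds; illustrative value

class OverhearingTable:
    """Sketch of the overhearing table described above (names are assumed)."""

    def __init__(self):
        # (src_ip, dst_ip) -> {'seq': ..., 'next_hop': ..., 'heard': ...}
        self.entries = {}

    def record_rrep(self, src_ip, dst_ip, seq_no, next_hop):
        self.entries[(src_ip, dst_ip)] = {
            'seq': seq_no, 'next_hop': next_hop, 'heard': time.time()}

    def record_data(self, src_ip, dst_ip, next_hop):
        # data packets carry no destination sequence number (field is nil)
        self.entries[(src_ip, dst_ip)] = {
            'seq': None, 'next_hop': next_hop, 'heard': time.time()}

    def purge_stale(self):
        # drop entries not refreshed within the active route timeout
        now = time.time()
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v['heard'] <= ACTIVE_ROUTE_TIMEOUT}

    def purge_on_rerr(self, src_ip, dst_ip):
        # an overheard RERR for the same source/destination removes the entry
        self.entries.pop((src_ip, dst_ip), None)
```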


Fig. 2 Mesh node E overhears an RREP packet from mesh node F and then creates a route entry in the overhearing table (source IP: D's IP address, destination IP: S's IP address, sequence number: seq. no. of the RREP, next hop: F's IP address)

A neighboring table has also been constructed, which is used to track the neighboring mesh nodes and whether they still have a neighboring relationship, using the information in the fields of the hello message of the AODV routing protocol, as follows: 1. Source ID: source address of the hello message. 2. Sequence number: latest destination sequence number of the hello sender. 3. Life time: ALLOWED_HELLO_LOSS * HELLO_INTERVAL. The life time value is updated when a mesh node receives the next hello message from the same sender node. When a mesh node receives a first hello message from a neighbor, it checks the neighboring table; if there is no route entry, it creates one in the neighboring table with the source address of the hello sender, the last destination sequence number, and the life time; otherwise it updates the life time, which keeps the route entry valid. If a mesh node fails to receive any hello message within ALLOWED_HELLO_LOSS * HELLO_INTERVAL milliseconds, or gets an indication that the link with a neighbor has been broken, then the route entry for this neighbor is deleted from the neighboring table. The construction of the neighboring table is illustrated in Fig. 3. When an intermediate mesh node receives a new RREQ packet with a new sequence number, from the same source or different sources and to the same destination or different destinations, it checks the overhearing table for an entry whose destination IP address field matches the source IP address of the RREQ packet and whose source IP address field matches the destination IP address of the RREQ packet. If the intermediate node has such a route entry in the overhearing table, it does not rebroadcast the RREQ packet and drops it. This process helps to reduce overhead packets in the network and saves bandwidth, since a route entry found in the overhearing table means that a route is already established. As depicted in Fig. 4, a mesh node F receives a fresh RREQ packet late from source mesh node S for destination D, after the route has been established and data traffic has started to be exchanged. Mesh node F will not send a route reply packet; it will only record the information in the overhearing table. If an intermediate mesh node has no route entry in the overhearing table, then, through a lookup in the neighboring table, it checks whether the destination node is a neighbor or not. If the destination is not a neighbor of the intermediate mesh node, the intermediate mesh node rebroadcasts the RREQ packet, and if the intermediate


Fig. 3 Mesh node E receives a hello packet from mesh node F, then creates a route entry in the neighboring table (source IP: F's IP address; life time: ALLOWED_HELLO_LOSS * HELLO_INTERVAL; last sequence number of the mesh node)

Fig. 4 Mesh node F has received a fresh RREQ with mesh node S as source and mesh node D as destination, after data packets have started flowing from source to destination

Fig. 5 Mesh node C has sent a unicast RREP packet on behalf of mesh node D


mesh node is a neighbor of the destination, it sends an RREP packet, on behalf of the destination node, to the mesh node that broadcast the RREQ packet. The intermediate mesh node generates a new destination sequence number based on the previous destination sequence number of the destination node; the sequence number of the destination mesh node can be obtained from the destination sequence number available in the last hello message received from the destination itself, which is stored in the neighboring table. This is done to prevent routing loops and to ensure that a fresh route is generated. After this, the intermediate mesh node generates an RREP and sends it back to the neighbor from which it received the RREQ. In the present case, there is no need to send a gratuitous RREP to the destination node, because the destination will overhear the RREP packet. For example, mesh node C can send a route reply packet on behalf of mesh node D when it receives an RREQ packet from mesh node B: instead of rebroadcasting the RREQ packet, mesh node C sends a route reply packet to mesh node A via mesh node B. This scenario is shown in Fig. 5. To prevent more than one RREP packet being sent by all neighbors of the same destination that have received the same RREQ packet, only the first mesh node that receives the RREQ packet should send an RREP packet; this also avoids unnecessary traffic. Once the other neighboring nodes that have a neighboring relationship with the destination and have received the same RREQ overhear the route reply, they make an entry in the overhearing table and drop the RREQ packet. Further, they also do not send a route reply

Fig. 6 Mesh nodes E and F have overheard the route reply packet from mesh node C

Fig. 7 Mesh node E will send a route reply after receiving the RREQ packet; mesh node A will receive duplicated route replies

packet, as shown in Fig. 6. Mesh node A intends to get a path to mesh node D. Mesh nodes E and F are neighbors of both mesh nodes C and D; once mesh nodes E and F overhear the RREP packet sent by mesh node C, they will not send an RREP and will drop the RREQ packet if it is received from predecessor nodes. This helps to eliminate a number of control packets, and it can reduce the time delay during route creation; the amount of decrease is one hop away from the destination. The proposed method implies that an intermediate mesh node generating an RREP packet for a destination neighbor does not need to store the forward route in its route table, because the destination sequence number that keeps the route fresh is generated by this mesh node itself and the destination IP address is that of its neighbor. This in turn slightly reduces the size of the route table in comparison with the AODV routing protocol, which becomes apparent when more than one route is created; the lookup time in the route table when data packets are sent to the destination is also reduced. A typical situation can arise when a mesh node is unable to hear the RREP from a neighbor of the destination that is not its own neighbor, while, meanwhile, this mesh node has become a neighbor of the destination. In this situation, the mesh node will send an RREP packet, and the source mesh node of the route (i.e., the source of the RREQ packet) will receive multiple route reply packets and choose the best one based on the least hop count and the newest destination sequence number, as shown in Fig. 7. Mesh node E is a neighbor of mesh node F but not of mesh node D, and is unable to hear the route reply packet (i.e., RREP); this packet has been sent by mesh node D for setting up the route between mesh nodes A and F. Mesh node E has received the RREQ packet, with mesh node F as destination and mesh node D as source, from mesh node J. It assumes that no mesh node has sent a route reply packet, and so mesh node E sends a unicast route reply packet. Mesh node A will receive two route reply packets, from both mesh nodes D and E, and will choose the better route based on hop count and the fresher destination sequence number. Local route repair is also modified using the same idea of replies from destination neighbors: a destination neighboring mesh node that arrived and established the neighboring relationship after the route was created, and that belongs to another route, can send a route reply

Fig. 8 Proposed modified local route repair

Figure 8 illustrates the modified local route repair in our proposed method. When the route breaks between mesh node D and mesh node E, mesh node N, which is a neighbor of the destination mesh node E, receives the RREQ packet and establishes an alternative route by sending a route reply packet to mesh node D.
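The per-node decision logic described above can be condensed into a short sketch. The following Python fragment is illustrative only: the `overhearing_table` and `neighbor_table` layouts, field names, and return conventions are hypothetical stand-ins for the paper's NS-2 implementation, which is not shown here.

```python
# Illustrative sketch of the neighbor-based RREP decision; table layouts and
# field names are hypothetical, not the paper's NS-2 data structures.

def handle_rreq(node, rreq):
    """Return the action a mesh node takes for an incoming RREQ."""
    key = (rreq["source"], rreq["destination"], rreq["id"])
    # An already-overheard RREP for this request means: stay silent.
    if key in node["overhearing_table"]:
        return "drop"
    dst = rreq["destination"]
    # A neighbor of the destination replies on its behalf with a fresh
    # destination sequence number derived from the last hello message.
    if dst in node["neighbor_table"]:
        seq = node["neighbor_table"][dst]["last_hello_seq"] + 1
        return ("unicast_rrep", rreq["previous_hop"],
                {"destination": dst, "dst_seq": seq, "hop_count": 1})
    # Otherwise fall back to standard AODV behavior.
    return "rebroadcast"

# Example: node C, a neighbor of destination D, answers B's RREQ directly.
node_c = {"overhearing_table": set(),
          "neighbor_table": {"D": {"last_hello_seq": 41}}}
print(handle_rreq(node_c, {"source": "A", "destination": "D",
                           "id": 7, "previous_hop": "B"}))
```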

3 Simulation Results and Discussion

Our proposed work has been simulated using NS-2 version 2.33 for evaluating the performance, and we consider packet delivery fraction, end-to-end delay, average routing overhead, and average throughput [9].
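As a reference for how these four metrics are typically computed from simulation traces, here is a minimal sketch; the counter names are assumptions, NS-2 trace parsing is omitted, and normalized routing overhead is taken as routing packets per delivered data packet.

```python
# Minimal sketch of the four evaluation metrics from hypothetical trace counters.

def evaluate(data_sent, data_received, delays, routing_packets,
             bits_received, sim_duration):
    pdf = data_received / data_sent                 # packet delivery fraction
    avg_delay = sum(delays) / len(delays)           # average end-to-end delay (s)
    overhead = routing_packets / data_received      # normalized routing overhead
    throughput = bits_received / sim_duration       # average throughput (bit/s)
    return pdf, avg_delay, overhead, throughput
```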

3.1 Simulation Results and Analysis by Varying Number of Mesh Clients

In this scenario, the network density has been varied by changing the number of mesh clients from 5 to 65. The simulation results of this scenario are shown in Figs. 9, 10, 11 and 12. It can be observed from Fig. 9 that the proposed method has less delay than standard AODV over the wireless mesh network. The time latency for a new route, or for repairing a route break, is reduced by at least one hop in cases of both lower and higher node density. The proposed scheme reduces the average end-to-end delay over the varying number of mesh clients by 14.665% when compared with the AODV routing protocol. It can be seen from Fig. 10 that our proposed method has a better delivery fraction than standard AODV: with the help of the overhearing and neighboring tables, the reduction in flooding of RREQ packets increases the chance for other neighbors to exchange data packets. Standard AODV has more routing packet overhead than our proposed method in all our experiments, as can be seen from Fig. 11; our proposed method reduces the average overhead by 8.095%. From Fig. 12, it can be observed that our proposed method achieves a better throughput than standard AODV, with an improvement of 3.680%.


Fig. 9 Number of mesh clients versus end to end delay

Fig. 10 Number of mesh clients versus packet delivery fraction

Fig. 11 Number of mesh clients versus routing packet overhead

Fig. 12 Throughput versus number of mesh clients

4 Conclusion

The proposed method exploits the advantages of local connectivity (the neighboring relationship) and promiscuous mode (the overhearing concept). This has helped to enhance the routing protocol in both the route discovery phase and the route repair phase.


The simulation results under different numbers of mobile mesh clients show that a significant improvement in key performance metrics, in terms of delay, throughput, packet delivery fraction, and routing packet overhead, has been achieved as compared to standard AODV.

References

1. Akyildiz, I., Wang, X., Wang, W.: Wireless mesh networks: a survey. Comput. Netw. 47(4), 445–487 (2005). Elsevier
2. Campista, M.E.M., Costa, L.H.M.K., Duarte, O.C.: A routing protocol suitable for backhaul access in wireless mesh networks. Comput. Netw. 56(2), 703–718 (2012)
3. Alotaibi, E., Mukherjee, B.: A survey on routing algorithms for wireless Ad-Hoc and mesh networks. Comput. Netw. 56(2), 940–965 (2012). Elsevier
4. Paris, S., Nita-Rotaru, C., Martignon, F., Capone, A.: Cross-layer metrics for reliable routing in wireless mesh networks. IEEE/ACM Trans. Networking 21, 1003–101 (2013)
5. Khan, S., Loo, J.: Cross layer secure and resource-aware on-demand routing protocol for hybrid wireless mesh networks. Wireless Pers. Commun. 62(1), 201–214 (2012). Springer
6. Jeon, J., Lee, K., Kim, C.: Fast route recovery scheme for mobile ad hoc networks. In: IEEE International Conference on Information Networking (ICOIN), pp. 419–423 (2011)
7. The Network Simulator NS, https://www.isi.edu/nsnam/ns
8. Youn, J.-S., Lee, J.-H., Sung, D.-H., Kang, C.-H.: Quick local repair scheme using adaptive promiscuous mode in mobile ad hoc networks. J. Netw. 1, 1–11 (2006)
9. Alrayes, M.M., Tripathi, R., Tyagi, N., Misra, A.K.: Exploiting neighboring relationship for enhancement of AODV in hybrid wireless mesh network. In: 17th IEEE International Conference on Networks (ICON), pp. 71–76 (2011)

A Comparative Study of Machine Learning Algorithms for Prior Prediction of UFC Fights

Hitkul, Karmanya Aggarwal, Neha Yadav and Maheshwar Dwivedy

Abstract Mixed Martial Arts is a rapidly growing combat sport with a highly multi-dimensional nature. Due to the large number of possible strategies available to each fighter and the multitude of skills and techniques involved, the potential for an upset in any fight is very high; that is, a highly skilled, veteran athlete may well be defeated by an athlete with significantly less experience. This problem is further exacerbated by the lack of a well-defined, time series database of fighter profiles prior to every fight. In this paper, we attempt to develop an efficient model based on machine learning algorithms for the prior prediction of UFC fights. The efficacy of various machine learning models based on Perceptron, Random Forests, Decision Tree, Stochastic Gradient Descent (SGD), Support Vector Machine (SVM), and K-Nearest Neighbor (KNN) classifiers is tested on a time series set of each fighter's data before each fight.

Keywords Machine learning algorithms · Mixed martial arts · Classifiers

1 Introduction

Mixed Martial Arts (MMA) is currently one of the fastest growing sports in the world. The UFC or Ultimate Fighting Championship is currently the largest fight promotion in the mixed martial arts world.


Between 2013 and 2017, the promotion had presented over 1400 fights and counting, with an event held roughly bi-monthly and multiple fights per event. We attempted to evaluate the accuracy of multiple machine learning algorithms in order to determine which method is best suited to predict fight results given both competitors' records prior to the fight. Though several works have been published that seek to forecast the performance of an MMA fighter prior to the fight [1], we attempted to create a dataset that reflects each fighter's statistical record prior to each fight and to build a predictive model on top of it. Thus, we should ideally be able to predict a fighter's performance. Intuitively, an experienced fighter would most certainly have an advantage over a novice, provided the age difference is not large enough to affect athletic performance. We evaluated many different machine learning models and charted their performance over the dataset. It was found that Random Forests and SVM gave the best results in terms of prediction accuracy.

For a brief background on a UFC event: the UFC is a fighting promotion. MMA employs various techniques from an ensemble of different martial arts such as Jiu Jitsu, Boxing, Taekwondo, and Wrestling. This allows a wide variety of strikes and tactics to be employed by the fighters depending on their expertise in each art. A typical UFC event has multiple fights on a particular day, and these events take place roughly once every 2 weeks. Each fight typically lasts three rounds of 5 min each; however, major fights last five rounds. The two fighters are denoted the red and blue side, with the better known fighter being allocated the red side. There are multiple ways to win a fight: via Knockout/Technical Knockout, wherein a fighter overwhelms his opponent with strikes until he is unable to continue; via submission, wherein a fighter cedes victory; or finally by decision, when the fight reaches the end of the allotted time and the fighters are judged by a panel of three judges on factors such as damage inflicted, aggression, and ring control. Decision victories are the most common; however, these are the hardest to judge, as the judging process tends to be rather opaque [2, 3].

Today, statistical modeling and its applications in the UFC are in their infancy [4–6]. No thoroughly rigorous statistical models have been published to date to predict UFC fights. In this paper, we attempt to correct this imbalance. While there remains insufficient data available to build fighter-specific models (the UFC has published granular fight data only since 2013, and each fighter fights fewer than 10 times a year), we have attempted to build a model to predict which fighter is more likely to emerge victorious. In order to create the dataset, we retrieved each fighter's current statistics and subtracted their per-fight statistics in order to create a time-dependent dataset, reflecting what each fighter's statistics were prior to each fight in terms of strikes, takedowns, styles, etc. Ideally, this model could be used to create matchups where both fighters are equally likely to win, as such equity in winning chances will most likely correlate with more exciting fights, as well as equalizing betting odds for fighters prior to each fight.

The organization of the paper is as follows: a brief description of the models used is given in Sect. 2. Section 3 describes the data exploration and feature manipulation.
Statistical models along with the results are given in Sect. 4. Section 5 continues with results and discussion, and finally Sect. 6 concludes the study.


2 Models Used

2.1 Random Forests

Random Forests is an ensemble classification technique consisting of a collection of tree-structured classifiers, where random vectors are distributed independently and each tree casts a unit vote for the most popular class for a particular input [7].

2.2 Support Vector Machine (SVM)

SVMs are a set of related supervised learning methods used for classification and regression. The input vector is mapped to a higher dimensional space where a maximal separating hyperplane is constructed [8].

2.3 K-Nearest Neighbors (KNN)

KNN is a classification technique that assigns points in the input set to the dominant class amongst their nearest neighbors, as determined by some distance metric [9].

2.4 Decision Tree

Decision trees are sequential models which logically combine a sequence of simple tests. Each test compares a numeric attribute against a threshold value or a nominal attribute against a set of possible values [10].

2.5 Naive Bayes

A Naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem (from Bayesian statistics) with strong independence assumptions. An advantage of the Naive Bayes classifier is that it only requires a small amount of training data to estimate the parameters necessary for classification [11].


2.6 Perceptron

A Perceptron is composed of several layers of neurons: an input layer, possibly one or several hidden layers, and an output layer. Each neuron's input is connected with the output of the previous layer's neurons, whereas the neurons of the output layer determine the class of the input feature vector [9].

2.7 Stochastic Gradient Descent (SGD)

SGD, also known as incremental gradient descent, is a stochastic approximation of the gradient descent optimization method for minimizing an objective function that is written as a sum of differentiable functions [12].
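For concreteness, a comparison of these seven classifiers can be run in a few lines with scikit-learn. This is a minimal sketch under assumed library defaults, not the authors' exact experimental code; the feature matrix `X` and label vector `y` are whatever the preprocessing in Sect. 3 produces.

```python
# Sketch: fit each classifier from Sect. 2 and report held-out accuracy.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier, Perceptron
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

models = {
    "KNN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "SGD classifier": SGDClassifier(),
    "Random forests": RandomForestClassifier(),
    "SVM": SVC(kernel="rbf"),
    "Bayes": GaussianNB(),
    "Perceptron": Perceptron(),
}

def compare(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, model.score(X_te, y_te))  # prediction accuracy
```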

3 Data Exploration and Feature Manipulation

Granular fight data for UFC fighters is made available by FightMetric LLC. Highly granular data is only available post 2013; thus, an assumption has been made that all fighters from that period onward start at 0. By collecting and summing statistics per fight, we were able to assemble a tabulation of each fighter's statistics prior to each fight. From this set, we can see that we have a total of 895 columns and one dependent variable. The columns themselves comprise 13 integer types (Streaks, Previous Wins, etc.), 9 object types (Names, Winner, Winby, etc.) and 873 float types. The features of the data set are represented by Figs. 1, 2 and 3. Some quick observations from the raw dataset:

1. The red side seems to win slightly more than the blue (867/1477 = 58.7%).
2. There are more fighters fighting debut fights.
3. Most fights are won by decision, and 2015 had the most fights.
4. The features seek to accommodate different fighters' styles, including both attempted strikes/takedowns and significant or landed strikes/takedowns, in an effort to quantify strike/takedown volume as a meaningful statistic.

We then filled all the null values in our dataset with 0 and assigned numeric codes to all categorical values. As one can see from Fig. 1, the highest correlations are with the Round 4 and Round 5 features, since most fights do not have a Round 4 or Round 5. To deal with this sparsity, we summed the respective features of each round. Finally, we attempt to halve the number of features again by taking the ratio of the features of the red and blue side fighters.
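A compact pandas sketch of these steps follows. The column names (e.g., `red_R1_strikes`) are illustrative assumptions, since the actual schema of the 895-column dataset is not reproduced here.

```python
# Sketch of the feature manipulation: fill nulls, encode categoricals,
# sum the sparse per-round features, then take red/blue ratios.
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.fillna(0)
    for col in df.select_dtypes(include="object"):
        df[col] = df[col].astype("category").cat.codes  # numeric codes
    # Collapse Round 1..5 columns into one summed feature per statistic.
    for side in ("red", "blue"):
        for stat in ("strikes", "takedowns"):
            cols = [f"{side}_R{r}_{stat}" for r in range(1, 6)]
            df[f"{side}_{stat}"] = df[cols].sum(axis=1)
            df = df.drop(columns=cols)
    # Halve the feature count again via red/blue ratios (guard div-by-zero).
    for stat in ("strikes", "takedowns"):
        df[f"ratio_{stat}"] = df[f"red_{stat}"] / (df[f"blue_{stat}"] + 1e-9)
    return df
```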


Fig. 1 A heatmap of the highest 10 correlations with our target variable

4 Modeling

The performance of multiple machine learning models on this dataset is then evaluated for the variety of models described in Sect. 2. Table 1 describes the performance of our chosen models on the raw dataset, Table 2 the performance of the same models after we summed the respective round features, and Table 3 the performance of the models after taking the ratio of the red and blue side fighters' respective features (Figs. 4, 5 and 6).

5 Results and Discussion

From Fig. 7, it is evident that Random Forests and SVM showed the most consistent results on the dataset. Models like Naive Bayes and simple decision trees showed very poor results.

Fig. 2 A heatmap of linear correlations between our target variable, post feature reduction by summing rounds

Table 1 Prediction accuracy of our machine learning models on the data set before any feature manipulation

Model             Prediction accuracy
KNN               0.554054054054
Decision tree     0.533783783784
SGD classifier    0.530405405405
Random forests    0.581081081081
SVM               0.628378378378
Bayes             0.35472972973
Perceptron        0.537162162162

The dataset itself has much room for improvement: the assumption that all fighters start from 0 in 2013, coupled with the rise in debut fights for new fighters, means that our dataset is very sparse. However, from simply examining the dataset, one can easily see that factors such as fighter age are very relevant to the eventual winner of the fight. Moreover, the red side fighter tends to win more frequently.

Fig. 3 Correlation matrix heatmap post feature reduction by taking the ratio of features amongst red and blue fighters

Table 2 Prediction accuracy for each of our models upon the dataset with summed features

Model             Prediction accuracy
KNN               0.557432432432
Decision tree     0.516891891892
SGD classifier    0.550675675676
Random forests    0.584459459459
SVM               0.577702702703
Bayes             0.202702702703
Perceptron        0.557432432432

Depending on the model and features, we exhibit about a 3–6% increase in prediction accuracy over the zeroR policy. Our best predictive model is SVM by far: using hyperparameter optimization, we were able to get very consistent results with a predictive accuracy of 61% and a best observed accuracy of 62.8%.
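Here the zeroR policy is the trivial majority-class baseline: always predict a red-side win. Using the counts quoted in Sect. 3, a quick check of that baseline:

```python
# zeroR baseline: always predict the majority class (a red-side win).
red_wins, total = 867, 1477
zero_r_accuracy = red_wins / total   # ~0.587
print(zero_r_accuracy)               # any model must beat this to be useful
```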

Table 3 Prediction accuracy of each model on the data post ratio of features

Model             Prediction accuracy
KNN               0.543918918919
Decision tree     0.503378378378
SGD classifier    0.543918918919
Random forests    0.597972972973
SVM               0.611486486486
Bayes             0.212837837838
Perceptron        0.560810810811

Fig. 4 Confusion matrices for each model on the dataset

Fig. 5 Confusion matrix for each predictor post feature reduction by summing


Fig. 6 Confusion matrix for each predictor after all the feature manipulations

Fig. 7 A bar graph of prediction accuracy of each model over all three sets of data instances, the baseline, the summed rounds and the ratio of features

Moreover, the robustness of SVM can be validated by the relatively small drop in prediction accuracy as the features were reduced.


6 Conclusion

In conclusion, SVM proved to be the most resilient of the machine learning models for this type of dataset and problem domain. While we did perform a small amount of hyperparameter optimization and feature engineering, it is worth noting that SVM with the RBF kernel performed very well on the dataset straight out of the box. Thus, for sports where a lot of statistical data is not available, it may be a very valuable classifier. In the future, one could also employ some sort of feature selection mechanism to reduce the overfitting on the dataset.

References

1. Johnson, J.D.: Predicting outcomes of mixed martial arts fights with novel fight variables. Master Thesis, University of Georgia, Athens, Georgia (2012)
2. Gift, P.: Performance evaluation and favoritism: evidence from mixed martial arts. J. Sports Econ. (2014). https://doi.org/10.1177/1527002517702422
3. Collier, T., Johnson, A., Ruggiero, J.: Aggression in Mixed Martial Arts: An Analysis of the Likelihood of Winning a Decision. Violence and Aggression in Sporting Contests: Economics, History and Policy, pp. 97–109 (2012)
4. Betting on UFC Fights—A Statistical Data Analysis, https://partyondata.com/2011/09/21/betting-on-ufc-fights-a-statistical-data-analysis, last accessed 12 June 2017
5. Goel, E., Abhilasha, E.: Random forest: a review. Int. J. Adv. Res. Comput. Sci. Software Eng. 7(1), 251–257 (2017)
6. Bhavsar, H., Panchal, M.H.: A review on support vector machine for data classification. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 1(10), 185–189 (2012)
7. Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., Arnaldi, B.: A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 4(2), R1 (2007)
8. Kotsiantis, S.B.: Decision trees: a recent review. Artif. Intell. Rev. 39(4), 261–283 (2013)
9. Kaur, G., Oberai, N.: A review article on Naïve Bayes classifier with various smoothing techniques. Int. J. Comput. Sci. Mobile Comput. 3(10), 864–868 (2014)
10. Lessmann, S., Sung, M., Johnson, J.E.: Alternative methods of predicting competitive events: an application in horserace betting markets. Int. J. Forecast. 26(3), 518–536 (2010)
11. Lock, D., Nettleton, D.: Using random forests to estimate win probability before each play of an NFL game. J. Quant. Anal. Sports 10(2), 197–205 (2014)
12. Bottou, L.: Large scale machine learning with stochastic gradient descent. In: Proceedings of COMPSTAT'2010. Physica-Verlag HD, pp. 177–186 (2010)

Detection of a Real Sinusoid in Noise using Differential Evolution Algorithm

Gayathri Narayanan and Dhanesh G. Kurup

Abstract Detection of sinusoidal signals embedded in noise is a pertinent problem in applications such as radar and sonar, communication systems, and defense, to name a few. This paper describes the detection of a real sinusoid in additive white Gaussian noise (AWGN) using the Differential Evolution (DE) algorithm. The performance of DE is evaluated for different sampling rates as well as for different signal-to-noise ratios (SNR). The proposed DE, which combines two DE strategies, enhances the detection performance compared to the original DE algorithm. We show that the detection performance of the proposed algorithm is superior to previously reported methods, especially at low SNR.

Keywords Differential Evolution (DE) · Fast Fourier Transform (FFT) · Cramer-Rao Lower Bound (CRLB) · Detection

1 Introduction

Detection of sinusoidal signals in noise has numerous applications such as sonar, radar, communication systems, spectroscopy, image analysis, and instrumentation systems. Although the DFT-based method is simple and fundamental in this regard, the frequency resolution one can achieve using the Discrete Fourier Transform (DFT) is limited by the sampling frequency. One popular approach to overcome this problem is to use the three samples around the maximum absolute value, as described in [1]. Based on the method detailed in [2], wherein the author estimates the frequency with an arbitrary number of DFT coefficients, Candan [3] proposed an approach for frequency estimation with reduced complexity.


In [4], an estimator with improved bias performance using Fourier coefficient representations is presented; Candan's estimators in [4, 5] approach the theoretical Cramer-Rao Lower Bound (CRLB). In [6], a fine frequency estimator for a real sinusoid is presented, based on estimators developed for complex sinusoids and a filtering method. These methods of estimation are largely analytical and do not necessarily provide the optimum value of the estimated frequency.

Optimization algorithms inspired by natural evolution have been successfully applied in many engineering disciplines; some of these methods include the Genetic Algorithm and the Particle Swarm Optimization (PSO) algorithm [7, 8]. The primary advantage of these evolutionary algorithms is their ability to find the global minimum or maximum of an objective function with single or multiple parameters. Of all the algorithms under the category of genetic algorithms, Differential Evolution (DE) is among the most powerful and simple stochastic real-parameter optimization methods available today [9]. In this paper, we apply a variant of the Differential Evolution algorithm, which incorporates multiple strategies for the evolution of the population, to the problem of sinusoid detection in noise. This version of DE, which we hereafter refer to as the Modified Differential Evolution (MDE) algorithm, is described in [10], where it was applied to optimizing antenna arrays. We compare the results obtained using MDE with other frequency estimation methods as well as with the Cramer-Rao Lower Bound (CRLB).

2 Proposed Method

Figure 1 illustrates the steps involved in applying the Modified Differential Evolution Algorithm (MDE), as described in [10], to detecting a sinusoid embedded in noise. As can be seen in Fig. 1, the first step is to initialize a parent population $\bar{p}_i$, where $i \in [1, N_p]$, in the parameter space. In the case of sinusoid detection, the parameter space spans the frequencies around the frequency $f_{max}$ corresponding to the maximum FFT bin:

$$\bar{p}_i = \left[ f_{max} - \frac{F_s}{N} \, : \, f_{max} + \frac{F_s}{N} \right] \quad (1)$$

In the Modified Differential Evolution Algorithm [10], as can be seen from the following equations, each strategy can be expressed as a linear combination of differences of vectors drawn from a subset of the parent population, together with the parent entries $\bar{p}_i$ and $\bar{p}_b$. It should be noted that the size of this subset is significantly smaller than the size of the parent population. This implies that the number of parents taking part in the evolution process can be more than two, unlike in the GA, which typically makes use of only two entries from the parent population for the point and uniform crossovers.


Fig. 1 Modified differential evolution algorithm

For the MDE described in [10], applied to the sinusoid detection problem, the vector transformations of the parent population are as follows:

$$\bar{t}_1 = \bar{p}_b + F(\bar{p}_i - \bar{p}_r) \quad (2)$$

$$\bar{t}_2 = \bar{p}_r + F(\bar{p}_i - \bar{p}_s) \quad (3)$$

In the above equations, $F$ is a constant which controls the differential variations $(\bar{p}_i - \bar{p}_r)$ and $(\bar{p}_i - \bar{p}_s)$. The members $\bar{p}_r$ and $\bar{p}_s$ constitute the subset of the parent population; the indexes $r$ and $s$ ($r \neq s$) must be different from each other and from the running index $i$. As shown in Fig. 1, once the children corresponding to the next generation are obtained as described above, these children constitute the parent population for computing the set for the following generation [10].
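To make the evolution step concrete, the following Python sketch implements the two transformations of Eqs. (2) and (3) for a one-dimensional frequency population. The rule for combining the two strategies and the survivor selection of [10] are not spelled out above, so the greedy choice used here is an illustrative assumption, and `cost` stands for whatever detection objective is being minimized.

```python
# Sketch of one MDE generation over a 1-D frequency population.
import numpy as np

def mde_generation(pop, cost, F=0.8, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(pop)
    best = pop[np.argmin([cost(p) for p in pop])]   # p_b, current best parent
    children = np.empty_like(pop)
    for i in range(n):
        # r and s must differ from each other and from the running index i.
        r, s = rng.choice([j for j in range(n) if j != i], size=2,
                          replace=False)
        t1 = best + F * (pop[i] - pop[r])           # strategy of Eq. (2)
        t2 = pop[r] + F * (pop[i] - pop[s])         # strategy of Eq. (3)
        # Combination rule is an assumption: keep the lower-cost child.
        children[i] = t1 if cost(t1) < cost(t2) else t2
    return children
```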

The results obtained using MDE at different SNR levels have been compared with other existing frequency estimation methods as well as with the Cramer-Rao Lower Bound (CRLB). An approximation of the CRLB is given by the following expression [4]:

$$\sigma^2_{CRLB} = \frac{6}{2\pi^2 N (N^2 - 1)\,\mathrm{SNR}} \quad (4)$$

where $N$ denotes the number of samples and SNR is the signal-to-noise ratio.

3 Results

In the simulations, real sinusoidal signals are generated randomly according to $f^{(i)} \in [0.1F_s, 0.4F_s]$, where $F_s$ is the sampling frequency. Noise following a normal distribution, as per the AWGN assumption, is added to the signal for different SNRs, and the noisy data is applied to the algorithm. In order to assess the performance of the MDE algorithm, FFT-based estimation and other frequency estimation methods have been implemented [4]. The FFT-based estimation method locates the frequency corresponding to the peak absolute value of the FFT. The mean-squared error (MSE) is calculated for each method as follows:

$$\mathrm{MSE} = \frac{1}{N_e} \sum_{i=0}^{N_e - 1} \left| f^{(i)} - f^{(i)}_{est} \right|^2 \quad (5)$$

where $N_e$ is the number of Monte Carlo experiments and $f^{(i)}$, $f^{(i)}_{est}$ are the actual and the estimated frequency for the $i$th experiment. Simulations are performed for different resolutions corresponding to $N = [64, 128, 256, 512]$.

Figure 2 shows the mean-squared error (MSE) for the frequency resolution corresponding to $N = 64$. In order to compare MDE with other standard estimation techniques, the results using FFT and Candan's method [4] are plotted along with the CRLB. From Fig. 2, we can conclude that the performance of MDE is better than both FFT and Candan's method, especially for low SNR values. Figures 3, 4 and 5 show the MSE for the frequency resolutions corresponding to $N = [128, 256, 512]$, respectively; the results using FFT and Candan's method [4] are again included along with the CRLB. From these results, we can conclude that the performance of MDE is better than that of FFT as well as Candan's method, especially at low SNR values.
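As an illustration of this evaluation protocol, the sketch below generates noisy real sinusoids and measures the MSE of Eq. (5) for the coarse FFT-peak estimate; the MDE refinement stage would start from this coarse estimate. A unit-amplitude sinusoid and the corresponding noise scaling are assumptions.

```python
# Monte Carlo MSE of the FFT-peak estimator for a real sinusoid in AWGN.
import numpy as np

def mse_fft(N=64, snr_db=0.0, runs=500, fs=1.0):
    rng = np.random.default_rng()
    errors = []
    for _ in range(runs):
        f = rng.uniform(0.1 * fs, 0.4 * fs)          # true frequency
        t = np.arange(N) / fs
        # Unit-amplitude cosine has power 0.5; scale noise to the target SNR.
        sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
        x = np.cos(2 * np.pi * f * t) + sigma * rng.standard_normal(N)
        spec = np.abs(np.fft.rfft(x))
        k = np.argmax(spec[1:]) + 1                  # peak bin (skip DC)
        f_est = k * fs / N                           # bin index -> frequency
        errors.append((f - f_est) ** 2)
    return np.mean(errors)                           # Eq. (5)
```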

Fig. 2 Comparison of MSE (mean-squared error) for MDE, Candan [4] and FFT along with CRLB for frequency resolution corresponding to N = 64

Fig. 3 Comparison of MSE (mean-squared error) for MDE, Candan [4] and FFT along with CRLB for frequency resolution corresponding to N = 128

Fig. 4 Comparison of MSE (mean-squared error) for MDE, Candan [4] and FFT along with CRLB for frequency resolution corresponding to N = 256

Fig. 5 Comparison of MSE (mean-squared error) for MDE, Candan [4] and FFT along with CRLB for frequency resolution corresponding to N = 512


4 Conclusion

Through this work, we show that the Modified Differential Evolution Algorithm outperforms other detection strategies, especially at low SNR values. It is also seen that, at high SNR, the mean-squared error (MSE) closely approaches the Cramer-Rao Lower Bound (CRLB). The proposed method has the potential to be applied to real-world sinusoid detection applications.

References

1. Quinn, B.G.: Recent advances in rapid frequency estimation. Digital Signal Proc. 19, 942–948 (2009)
2. Jacobsen, E., Kootsookos, P.: Fast accurate frequency estimators [DSP Tips & Tricks]. IEEE Signal Proc. Mag. 24, 123–125 (2007)
3. Candan, C.: A method for fine resolution frequency estimation from three DFT samples. IEEE Signal Proc. Lett. 18, 351–354 (2011)
4. Candan, C.: Analysis and further improvement of fine resolution frequency estimation method from three DFT samples. IEEE Signal Proc. Lett. 20(9), 913–916 (2013)
5. Orguner, U., Candan, C.: A fine resolution frequency estimator using an arbitrary number of DFT coefficients. Signal Proc. 105, 17–21 (2014)
6. Djukanovic, S.: An accurate method for frequency estimation of a real sinusoid. IEEE Signal Proc. Lett. 23 (2016)
7. Man, K.F., Tang, K.S., Kwong, S.: Genetic algorithms: concepts and applications. IEEE Trans. Ind. Electron. 43 (1996)
8. Das, S., Konar, A., Chakraborty, U.K.: Improving particle swarm optimization with differentially perturbed velocity. In: Proceedings on Genetic and Evolutionary Computation Conference, pp. 177–184 (2005)
9. Price, K., Storn, R., Lampinen, J.: Differential Evolution: A Practical Approach to Global Optimization. Springer, Berlin, Germany (2005)
10. Dhanesh, D.G., Himdi, M., Rydberg, A.: Synthesis of uniform amplitude unequally spaced antenna arrays using the differential evolution algorithm. IEEE Trans. Antennas Propagation 51, 2210–2217 (2003)

Inherited Competitive Swarm Optimizer for Large-Scale Optimization Problems

Prabhujit Mohapatra, Kedar Nath Das and Santanu Roy

Abstract In this paper, a new Inherited Competitive Swarm Optimizer (ICSO) is proposed for solving large-scale global optimization (LSGO) problems. The algorithm is motivated both by human learning principles and by the mechanism of the competitive swarm optimizer (CSO). In human learning, characters pass on from parents to the offspring through the 'process of inheritance'. This concept of inheritance is integrated with CSO for faster convergence: the particles in the swarm undergo a tri-competitive mechanism based on their fitness differences and are thus divided into three groups, namely a winner group, a superior loser group, and an inferior loser group. In each instance, the particles in the loser groups are guided by the winner particles in a cascade manner. The performance of ICSO has been tested over the CEC2008 benchmark problems. The statistical analysis of the empirical results confirms the superiority of ICSO over many state-of-the-art algorithms, including the basic CSO.

Keywords Competitive swarm optimizer · Evolutionary algorithms · Large-scale global optimization · Particle swarm optimization · Swarm intelligence

1 Introduction

Particle swarm optimization (PSO), proposed by Eberhart and Kennedy [1], is a stochastic, population-based, self-adaptive global optimization technique inspired by the social and competitive behavior of bird flocking and fish schooling. PSO simulates swarm behavior to steer the particles in locating the global optimal solution. Particles tune their path in the search space dynamically using the personal best (pbest) position and the global best (gbest) position of the whole swarm. Due to its simplicity and ease of implementation, PSO has gained wide popularity over the past few decades.


However, while solving multimodal functions, PSO may get trapped in local optima, resulting in premature convergence [2, 3]. Over time, researchers have taken up the challenge of reforming PSO to overcome this weakness, and numerous PSO variants have been developed in the literature [4–8]. Since these variants use modified mechanisms with new operators, they mostly become computationally expensive. Moreover, the 'gbest' operator in PSO helps achieve faster convergence but often leads to premature convergence. Hence, Liang [9] suggested a new PSO variant deprived of the gbest term, whose update approach relies only on the pbest positions. Later, other alternative techniques were proposed in which neither the gbest nor the pbest concept is employed. In 2013, an effort was made with a multi-swarm structure built on a feedback mechanism [10], where particles are updated by a pairwise competition between particles of two unlike swarms. Similar approaches have been proposed by other researchers [11–13] too. This idea of a competitive mechanism has two main consequences. Firstly, as a convergence approach, the weak solutions get an opportunity to learn from the stronger ones of the other swarm. Secondly, as a mutation scheme, the stronger particles, self-inspired by their earlier experiences, yield improved results. These tactics collectively help retain a proper balance between exploration and exploitation. Using this concept, another algorithm, namely the competitive swarm optimizer (CSO) [14], has been suggested in the recent literature. In CSO, after each pairwise competition between particles, the loser particle learns from the winner, instead of from pbest or gbest. The working principle of the CSO algorithm is very simple, yet powerful enough to solve LSGO problems. Although the CSO mechanism has reached many success milestones in the recent evolutionary world, improved solution quality and a greater rate of convergence [15, 16] are yet to be addressed.

In this paper, a new CSO algorithm inspired by human learning principles is proposed. The proposed algorithm employs the process of inheritance, which allows the particles to improve their search capabilities by utilizing the experience of more efficient particles. The basic idea is to allow the average and below-average solutions to converge towards good solutions in a cascade manner. As a result, an improved rate of convergence is expected through a better rate of exploration of the search space.

The paper is structured as follows. Work related to large-scale optimization problems is reviewed in Sect. 2. The motivation behind the proposition and the proposed algorithm are outlined in Sect. 3. In Sect. 4, comparative studies of the experimental results are carried out. Lastly, the conclusion of the paper is drawn in Sect. 5.

2 Large-Scale Optimization Problems

The real-world problems arising around us are mostly complex in structure due to the presence of a large number of decision variables and hence take a huge amount of time to solve. Such problems are usually called Large-Scale Global Optimization (LSGO) problems.


In fact, proposing an efficient algorithm for solving LSGO problems is a great challenge among researchers. Hence, over time, quite a large number of metaheuristic algorithms have been proposed in the literature to solve LSGO. Based on the decomposition of the problem dimension, such algorithms can be categorized into two kinds. The first kind is 'decomposition-based algorithms', also known as Cooperative Coevolution (CC) [17–19] algorithms. In this kind, the high-dimensional problems are decomposed into subproblems of low dimension; each subproblem is solved by some traditional optimization algorithm for a fixed number of generations in a round-robin approach, and the solutions from the subproblems are then combined to form an n-dimensional solution. Yang et al. [20] integrated a DE-based CC method called DECC-G [21], which is inspired by the notion of random grouping of decision variables, to solve LSGO problems of 500 and 1000 dimensions. It was later extended to a multilevel CC algorithm (MLCC) [22] that uses a decomposer pool and works with dynamic group sizes of variables that rely on the past performance of the decomposers. Gradually, similar algorithms, namely CCPSO2 [23] and CC-CMA-ES [24], were proposed to solve LSGO problems. The second kind is 'non-decomposition-based algorithms'. Instead of the divide-and-conquer approach, these use different effective approaches to improve performance; they are mainly categorized as local search based [25, 26], evolutionary computation based [27, 28], and swarm intelligence based [29] methods. The present work proposes a modified CSO [14, 30], namely the Inherited CSO (ICSO), based on human learning principles; both CSO and ICSO belong to the swarm intelligence approach. The motivation behind proposing such an algorithm is as follows.

3 Motivation and Proposition

3.1 Motivation

Human beings have good social cognizance and are the most intelligent creatures in society. Probably for this reason, algorithms inspired by human thought are superior to those inspired by other creatures [31, 32]. In a family, the beliefs, ideas, customs, and cultures are usually inherited from one generation to the next. The most experienced person acts as a guide, and others attempt to learn from him directly or indirectly: a son learns from his father and the father from the grandfather, and sometimes the son learns from the grandfather too. This process is known as the 'method of inheritance'. This concept, presented in Fig. 1, is the major motivation for the proposed algorithm.


Fig. 1 Graphical illustration of the concept of inheritance

3.2 Proposition

In CSO, only half of the swarm gets the opportunity to improve its solutions, which results in high diversity but a slow rate of convergence. In order to balance exploration and exploitation, a new tri-competitive scenario along with the method of inheritance is introduced here. The tri-competitive scenario allows two-thirds of the swarm to participate in the updating process, whereas the remaining one-third passes directly to the next generation to retain the necessary swarm diversity [33]. Further, the learning abilities of the participants are strengthened through the method of inheritance, in which the offspring continuously learn from their parents. This healthy learning process effectively passes the good qualities of the elders to the younger ones. As a result, the self and social cognizance of human thought leads towards better solutions over the search space.

Selection Strategy: In a swarm of size m, three randomly selected particles undergo a tri-competition in terms of their fitness values, resulting in one winner and two losers; the superior loser is denoted $l_1$ and the inferior one $l_2$. Through this selection process, there are $K$ (= m/3) distinct competitions, and hence three distinct groups, namely the winner group, the superior loser group, and the inferior loser group, each of size $K$. Let $X_{w,k}(t)$, $X_{l_1,k}(t)$, $X_{l_2,k}(t)$ and $V_{w,k}(t)$, $V_{l_1,k}(t)$, $V_{l_2,k}(t)$ represent the positions and velocities of the winner and the two losers, respectively, in the k-th round of competition ($k = 1, 2, \ldots, K$) at iteration $t$. The selection strategy of particles under tri-competition, along with their inherited learning strategy, is presented in Fig. 2.


Fig. 2 Swarm’s tri-competition mechanism in ICSO and the upgradation of winners and losers

Inherited Learning Strategy: The particles in each distinct group formed by the selection strategy learn through different inherited learning strategies, as discussed below, which are mainly motivated by the concept of inheritance.

Winner group: Since this group includes the top-performing particles (viz. the winner of each tri-competition), they act as guides for the loser particles, like the most experienced person (the grandfather) in a family. These particles, being the best individuals in the swarm, need the least attention for improvement; therefore, the particles in the winner group are transferred directly to the next generation without any alteration.

Superior loser group: The particles in this group are the average individuals. They are assigned two tasks: firstly, they improve themselves by learning from the winners, and secondly, they guide the inferior losers to improve their performance (like a father who simultaneously learns from the grandfather and teaches the son). The velocity and position of a superior loser ($l_1$) are updated by (1) and (2), respectively, as follows:

$$v_{l_1,k}(t+1) = R_1(k,t) \otimes v_{l_1,k}(t) + R_2(k,t) \otimes \left( x_{w,k}(t) - x_{l_1,k}(t) \right) + \varphi_1 R_3(k,t) \otimes \left( \bar{x}_k(t) - x_{l_1,k}(t) \right) \quad (1)$$

$$x_{l_1,k}(t+1) = x_{l_1,k}(t) + v_{l_1,k}(t+1) \quad (2)$$

Here $\bar{x}_k(t)$ is the mean position of the whole swarm. The factor $\varphi_1$ governs the effect of the mean position in maintaining the diversity that helps in escaping from getting trapped in local optima. Moreover, $R_1(k,t)$, $R_2(k,t)$, and $R_3(k,t)$ are three randomly generated vectors at the k-th round of competition in generation $t$.


Inferior loser group: The particles in this group are the least efficient and hence require special guidance for performance improvement. These particles have no social responsibilities other than improving themselves; thus, each utilizes the experience of a superior loser as well as of the winners (like a son who learns from both the father and the grandfather). The superior loser acts as the primary mentor, which is reflected in the middle term of (3). Since inexperienced individuals also need to be guided by experienced ones, the inferior losers are additionally allowed to learn from the mean position of the winners, as given in the last term of (3). The velocity and position of an inferior loser ($l_2$) are updated by (3) and (4), respectively, as follows:

$$v_{l_2,k}(t+1) = R_4(k,t) \otimes v_{l_2,k}(t) + R_5(k,t) \otimes \left( x_{l_1,k}(t) - x_{l_2,k}(t) \right) + \varphi_2 R_6(k,t) \otimes \left( \bar{x}_w(t) - x_{l_2,k}(t) \right) \quad (3)$$

$$x_{l_2,k}(t+1) = x_{l_2,k}(t) + v_{l_2,k}(t+1) \quad (4)$$

Here $\varphi_2$ is the factor governing the effect of $\bar{x}_w(t)$, and $R_4(k,t)$, $R_5(k,t)$, and $R_6(k,t)$ are three randomly generated vectors at the k-th round of competition in generation $t$. The above strategies are incorporated to construct the proposed ICSO algorithm, whose entire working mechanism is presented through a flow diagram in Fig. 3.
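A condensed NumPy sketch of one ICSO generation under Eqs. (1)–(4) is given below. It is a minimal illustration assuming minimization, random tri-grouping, and element-wise random vectors; boundary handling and the exact bookkeeping of the authors' Matlab implementation are omitted.

```python
# Sketch of one ICSO generation: tri-competition plus cascaded updates.
import numpy as np

def icso_step(X, V, f, phi1=0.1, phi2=0.1):
    """X, V: (m, d) float arrays of positions/velocities; f: objective."""
    m, d = X.shape
    fit = np.apply_along_axis(f, 1, X)
    # Random tri-grouping; sort each trio as [winner, superior, inferior].
    trios = np.random.permutation(m)[: 3 * (m // 3)].reshape(-1, 3)
    trios = np.array([t[np.argsort(fit[t])] for t in trios])
    x_mean = X.mean(axis=0)                  # mean position of the whole swarm
    xw_mean = X[trios[:, 0]].mean(axis=0)    # mean position of the winner group
    for w, l1, l2 in trios:                  # winners pass through unchanged
        r1, r2, r3 = np.random.rand(3, d)
        V[l1] = r1*V[l1] + r2*(X[w] - X[l1]) + phi1*r3*(x_mean - X[l1])   # Eq. (1)
        X[l1] = X[l1] + V[l1]                                             # Eq. (2)
        r4, r5, r6 = np.random.rand(3, d)
        V[l2] = r4*V[l2] + r5*(X[l1] - X[l2]) + phi2*r6*(xw_mean - X[l2]) # Eq. (3)
        X[l2] = X[l2] + V[l2]                                             # Eq. (4)
    return X, V
```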

4 Experimental Results and Discussions

4.1 Experimental Setup

In order to evaluate the performance of the proposed ICSO algorithm, a set of 7 benchmark functions from CEC2008 is considered; the reason for choosing this set is to test the efficiency of ICSO in solving problems of varied character. The ICSO algorithm is implemented in Matlab R2013a on a PC with a Dual Core i7 2.00 GHz CPU, 4 GB RAM, and the 32-bit Microsoft Windows 7 operating system. The experiments were conducted 25 times, and in each run the maximum number of function evaluations (FEs) for CEC2008 is fixed using (5):

$$\text{Maximum FEs} = 3000 \times \text{dimension of the problem} \quad (5)$$

The benchmark functions are all scalable, i.e., the dimension can be user-defined; in this study, the dimension of all benchmark functions is fixed at 1000. The parameters $\varphi_1$, $\varphi_2$, and $m$ in ICSO are set as reported in [14].


Fig. 3 Flowchart of ICSO algorithm

4.2 Performance Comparison

In this section, ICSO is deployed to solve the CEC2008 LSGO problems under the parameter settings recommended in the last section. The optimum solutions achieved by ICSO are compared with those of CSO [14] and some other state-of-the-art algorithms, namely CCPSO2 [23], multilevel cooperative coevolution (MLCC) [22], the separable covariance matrix adaptation evolution strategy (sep-CMA-ES) [34], the efficient population utilization strategy for particle swarm optimizer (EPUS-PSO) [29], and DMS-PSO [25].

Statistical Tests

Mean, Standard Deviation and t-test: To analyze and investigate the results, three types of statistical measures are considered.


The experimental outcomes in terms of mean, standard deviation (Std), and t-values of errors are reported in Table 1, where the overall best mean and least Std are emphasized in boldface. To confirm the existence of significant differences between ICSO and the other algorithms, a t-test with significance level α = 0.05 has been carried out; ICSO is significantly better than another algorithm if the corresponding t-value is boldfaced, and ties are highlighted in bold italics. Further, the column of the table under the heading w/t/l denotes the win, tie, and loss totals of ICSO over each specific algorithm in the sense of t-values. From this column it is observed that the win total of ICSO is maximum.

Average Ranking test according to Friedman test: Following the Friedman test, the average ranking of n different algorithms in solving m different functions can be calculated through the following steps (a small sketch is given after this list):

a. First, each of the n algorithms is used to solve all m functions, forming an m-tuple vector of solutions for that algorithm.
b. For each function, a relative ranking (from 1 to n) of the algorithms is made.
c. The average rank of each algorithm is the mean of its relative rankings over all functions.

The average ranking comparison of ICSO with all the other algorithms is reflected in Table 1, from which it is observed that ICSO attains the best ranking and supersedes the others, including CSO.

Best Count test: The 'Best Count' of an algorithm is the number of functions for which the algorithm provides the best result compared with the other algorithms. For each algorithm, the Best Count is reported just right of the average ranking in Table 1. The highest count of ICSO indicates that it outperforms the others overall.

Convergence Analysis: The convergence of ICSO is compared with that of its immediate competitor, CSO, by allowing both to run from the same seed in order to ensure a fair comparison. The seven benchmark functions of CEC2008 are considered, and the convergence graphs are pictured in Fig. 4, in which each subfigure corresponds to one function. From this figure it can be concluded that, sooner or later, ICSO converges closer to the optimal solution than CSO; in the few cases where ICSO initially could not beat CSO, it gradually did so later.
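As referenced above, the Friedman-style average ranking can be computed mechanically. The sketch below is illustrative; it assumes lower error is better and averages tied ranks.

```python
# Sketch of the Friedman-style average ranking over m functions.
import numpy as np
from scipy.stats import rankdata

def average_ranks(results):
    """results[i][j] = mean error of algorithm j on function i."""
    results = np.asarray(results)                       # shape (m, n)
    ranks = np.apply_along_axis(rankdata, 1, results)   # rank 1..n per function
    return ranks.mean(axis=0)                           # average rank per algorithm
```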

Table 1 Comparison of ICSO versus others in solving CEC 2008 benchmark problems (per algorithm: Mean, Std, and t-value of errors on f1–f7; w/t/l is the win/tie/loss total of ICSO over that algorithm)

Algorithm    Stat     f1         f2         f3         f4         f5         f6         f7          w/t/l   Best count   Avg. ranking
ICSO         Mean     9.50E−25   1.76E+01   9.77E+02   4.14E+02   2.22E−16   7.81E−14   −1.40E+04   –       2            2
             Std      2.23E−26   5.84E−01   9.23E−02   1.03E+01   0.00E+00   3.41E−15   6.23E+01
             t-Value  –          –          –          –          –          –          –
CSO          Mean     1.66E−22   3.76E+01   9.81E+02   5.21E+02   2.22E−16   8.306E−13  −1.38E+04   5/2/0   0            2.85
             Std      1.18E−23   1.18E+00   6.49E−01   2.95E+01   0.00E+00   1.673E−14  3.37E+02
             t-Value  −6.99E+01  −7.63E+01  −2.98E+01  −1.72E+01  0.00E+00   −2.20E+02  −1.60E+00
CCPSO2       Mean     5.18E−13   7.82E+01   1.33E+03   1.99E−01   1.18E−03   1.02E−12   −1.43E+04   5/0/2   0            3.57
             Std      9.61E−14   4.25E+01   2.63E+02   4.06E−01   3.27E−03   1.68E−13   8.27E+01
             t-Value  −2.70E+01  −7.13E+00  −6.71E+00  2.01E+02   −1.80E+00  −2.80E+01  1.45E+01
MLCC         Mean     8.45E−13   1.087E+02  1.79E+03   1.37E−10   4.18E−13   1.06E−12   −1.47E+04   5/0/2   2            3.71
             Std      5.00E−14   4.754E+00  1.58E+02   3.37E−10   2.78E−14   7.68E−14   1.51E+01
             t-Value  −8.44E+01  −9.51E+01  −2.60E+01  2.01E+02   −7.51E+01  −6.39E+01  5.48E+01
Sep-CMA-ES   Mean     7.81E−15   3.65E+02   9.10E+02   5.31E+03   3.94E−04   2.15E+01   −1.25E+04   5/1/1   1            4.85
             Std      1.52E−15   9.02E+00   4.54E+01   2.48E+02   1.97E−03   3.19E−01   9.36E+01
             t-Value  −2.57E+01  −1.92E+02  7.38E+00   −9.86E+01  −1.00E+00  −3.37E+02  −6.67E+01
EPUS-PSO     Mean     5.53E+02   4.66E+01   8.37E+05   7.58E+03   5.89E+00   1.89E+01   −6.62E+03   7/0/0   0            6
             Std      2.86E+01   4.00E−01   1.52E+05   1.51E+02   3.91E−01   2.49E+00   3.18E+01
             t-Value  −9.67E+01  −2.05E+02  −2.75E+01  −2.37E+02  −7.53E+01  −3.80E+01  −5.28E+02
DMS-PSO      Mean     0.00E+00   9.15E+01   8.98E+09   3.83E+03   0.00E+00   7.75E+00   −7.50E+03   5/0/2   2            4.28
             Std      0.00E+00   7.13E−01   4.38E+08   1.70E+02   0.00E+00   8.92E−02   1.63E+01
             t-Value  2.13E+02   −4.01E+02  −1.02E+02  −1.00E+02  7.85E+84   −4.35E+02  −5.04E+02

Fig. 4 The convergence profiles during 5 × 10^6 fitness evaluations (FEs) of CSO and ICSO on 1000D CEC2008 benchmark functions

5 Conclusion

In this paper, a new inherited competitive swarm optimizer (namely ICSO) is proposed. The synergy of the 'method of inheritance' from human learning principles with CSO underpins the strength of the proposed algorithm. Unlike CSO, ICSO updates two-thirds of the population using an inherited technique in a cascade manner, and it is especially designed to handle large-scale global optimization problems. The experimental results and statistical analysis conclude that ICSO delivers superior results and outclasses many state-of-the-art algorithms, including CSO, in terms of both solution quality and rate of convergence. The inherited mechanism indeed makes the proposed algorithm more robust and effective.

References

1. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
2. Yang, Y., Pedersen, J.O.: A comparative study on feature selection in text categorization. In: Proceedings of International Conference on Machine Learning, pp. 412–420. Morgan Kaufmann Publishers (1997)
3. Chen, W.N., Zhang, J., Lin, Y., Chen, E.: Particle swarm optimization with an aging leader and challengers. IEEE Trans. Evol. Comput. 17(2), 241–258 (2013)
4. Ratnaweera, A., Halgamuge, S.K., Watson, H.C.: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 8(3), 240–255 (2004)
5. Zhan, Z.-H., Zhang, J., Li, Y., Chung, H.-H.: Adaptive particle swarm optimization. IEEE Trans. Systems Man Cybern. B Cybern. 39(6), 1362–1381 (2009)
6. Hu, M., Wu, T., Weir, J.D.: An adaptive particle swarm optimization with multiple adaptive methods. IEEE Trans. Evol. Comput. 17(5), 705–720 (2013)
7. Juang, C.-F.: A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Trans. Systems Man Cybern. B Cybern. 34(2), 997–1006 (2004)
8. Mendes, R., Kennedy, J., Neves, J.: The fully informed particle swarm: simpler, maybe better. IEEE Trans. Evol. Comput. 8(3), 204–210 (2004)
9. Liang, J.J., Qin, A., Suganthan, P.N., Baskar, S.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10(3), 281–295 (2006)
10. Cheng, R., Sun, C., Jin, Y.: A multi-swarm evolutionary framework based on a feedback mechanism. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 718–724. IEEE (2013)
11. Goh, C., Tan, K., Liu, D., Chiam, S.: A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design. Eur. J. Oper. Res. 202(1), 42–54 (2010)
12. Hartmann, S.: A competitive genetic algorithm for resource-constrained project scheduling. Naval Res. Logistics (NRL) 45(7), 733–750 (1998)
13. Whitehead, B., Choate, T.: Cooperative-competitive genetic evolution of radial basis function centers and widths for time series prediction. IEEE Trans. Neural Netw. 7(4), 869–880 (1996)
14. Ran, C., Yaochu, J.: A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 45(2), 191–204 (2015)
15. Clerc, M., Kennedy, J.: The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002)
16. Trelea, I.C.: The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf. Process. Lett. 85(6), 317–325 (2003)
17. Tseng, L.-Y., Chen, C.: Multiple trajectory search for large scale global optimization. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 3052–3059. IEEE (2008)
18. LaTorre, A., Muelas, S., Pena, J.M.: Large scale global optimization: experimental results with MOS-based hybrid algorithms. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 1–8. IEEE (2013)
19. Potter, M.A., Jong, K.A.D.: A cooperative coevolutionary approach to function optimization. In: Proceedings of the International Conference on Evolutionary Computation, pp. 249–257 (1994)
20. Yang, Z., Tang, K., Yao, X.: Differential evolution for high-dimensional function optimization. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 3523–3530. IEEE (2007)
21. Yang, Z., Tang, K., Yao, X.: Large scale evolutionary optimization using cooperative coevolution. Inf. Sci. 178(15), 2985–2999 (2008)
22. Yang, Z., Tang, K., Yao, X.: Multilevel cooperative coevolution for large scale optimization. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 1663–1670. IEEE (2008)
23. Li, X., Yao, Y.: Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans. Evol. Comput. 16(2), 1–15 (2011)
24. Liu, J., Tang, K.: Scaling up covariance matrix adaptation evolution strategy using cooperative coevolution. In: Proceedings of International Conference on Intelligent Data Engineering and Automated Learning, pp. 350–357. Springer (2013)
25. Liang, J., Suganthan, P.: Dynamic multi-swarm particle swarm optimizer. In: Proceedings of IEEE Swarm Intelligence Symposium, pp. 124–129. IEEE (2005)
26. LaTorre, A., Muelas, S., Peña, J.-M.: A MOS-based dynamic memetic differential evolution algorithm for continuous optimization: a scalability test. Soft. Comput. 15(11), 2187–2199 (2011)
27. Yang, Z., Tang, K., Yao, X.: Scalability of generalized adaptive differential evolution for large-scale continuous optimization. Soft. Comput. 15, 2141–2155 (2011)
28. Brest, J., Maucec, M.S.: Self-adaptive differential evolution algorithm using population size reduction and three strategies. Soft. Comput. 15(11), 2157–2174 (2011)
29. Hsieh, S.-T., Sun, T.-Y., Liu, C.-C., Tsai, S.-J.: Solving large scale global optimization using improved particle swarm optimizer. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 1777–1784. IEEE (2008)
30. Mohapatra, P., Das, K.N., Roy, S.: A modified competitive swarm optimizer for large scale optimization problems. Appl. Soft Comput. 59, 340–362 (2017)
31. Tanweer, M.R., Suresh, S., Sundararajan, N.: Human meta-cognition inspired collaborative search algorithm for optimization. In: Proceedings of the IEEE International Conference on Multisensor Fusion and Information Integration for Intelligent Systems, pp. 1–6. IEEE (2014)
32. Shi, Y.: Brain storm optimization algorithm. An optimization algorithm based on brainstorming process. Int. J. Swarm Intell. Res. (IJSIR) 2(4), 35–62 (2011)
33. Olorunda, O., Engelbrecht, A.P.: Measuring exploration/exploitation in particle swarms using swarm diversity. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 1128–34. IEEE (2008)
34. Ros, R., Hansen, N.: A simple modification in CMA-ES achieving linear time and space complexity. In: Parallel Problem Solving from Nature–PPSN X, pp. 296–305 (2008)

Performance Comparison of Metaheuristic Optimization Algorithms Using Water Distribution System Design Benchmarks

Ho Min Lee, Donghwi Jung, Ali Sadollah, Eui Hoon Lee and Joong Hoon Kim

Abstract Various metaheuristic optimization algorithms are being developed and applied to find optimal solutions to real-world problems. Engineering benchmark problems have often been used for performance comparison among metaheuristic algorithms, and the water distribution system (WDS) design problem is one of the widely used benchmarks. However, only a few traditional WDS design problems have been considered in the research community, and it is therefore very challenging to identify an algorithm's better performance over other algorithms with such a limited set of traditional benchmark problems of unknown characteristics. This study proposes an approach to generate WDS design benchmarks by changing five problem characteristic factors, which are then used to compare the performance of metaheuristic algorithms. The obtained optimization results show that WDS design benchmark problems generated with specific characteristics under control help identify the strengths and weaknesses of the reported algorithms. Finally, guidelines on the selection of a proper algorithm for WDS design problems are derived.

Keywords Metaheuristic optimization algorithms · Performance measurement · Water distribution systems

H. M. Lee · D. Jung · E. H. Lee
Research Center for Disaster Prevention Science and Technology, Korea University, Seoul, South Korea

A. Sadollah
Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran

J. H. Kim (B)
School of Civil, Environmental and Architectural Engineering, Korea University, Seoul, South Korea
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2019
N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_10


1 Introduction

Optimization can be defined as the process of finding the solution with the best fitness that satisfies a set of constraints. Various metaheuristic optimization algorithms are being developed and applied to real-world engineering problems such as truss structure design, dam operation, parameter estimation, and traffic engineering. Mathematical benchmark problems have been used for performance comparison among metaheuristic algorithms; however, engineering optimization problems have their own characteristics, and good performance on mathematical benchmark problems does not guarantee good performance on real-world engineering problems. Therefore, to evaluate the performance of a metaheuristic algorithm for a real problem, it should be verified on engineering problems with specific characteristics.
The water distribution system (WDS) design problem is one of the most widely used engineering benchmark problems, and several metaheuristic optimization algorithms have been applied to the optimal design of WDSs with various characteristics. Simpson et al. [1] applied genetic algorithms (GAs), and Maier et al. [2] applied ant colony optimization (ACO) to the optimal design of WDSs. Particle swarm optimization (PSO) and harmony search (HS) were applied by Montalvo et al. [3] and Geem [4], respectively. More recently, the water cycle algorithm (WCA) and the mine blast algorithm (MBA) were applied by Sadollah et al. [5, 6] to find optimal WDS designs. However, only a few traditional WDS design problems (e.g., the New York tunnels, Hanoi, and Balerma networks) have been considered in the research community [7–9]. Therefore, it is very difficult to establish a metaheuristic algorithm's superiority over others with such a limited set of traditional WDS design problems of unknown characteristics. Thus, in this study, engineering design problems are generated by modifying an existing WDS design benchmark and applied to the performance measurement of metaheuristic algorithms.

2 WDS Design Benchmark Generation

A WDS is one of the most critical infrastructures for human activity. The main purpose of a WDS is to supply the required quantity of water from sources to users while ensuring appropriate water quality and pressure [10]. The objective of optimal WDS design is to find the most cost-effective design among the various alternative designs while satisfying the hydraulic requirements. The objective function for the least-cost design of a WDS with nodal pressure constraints is calculated from the diameters and lengths of the pipes, as shown in Eq. (1):

$$\mathrm{Min.\ Cost} = \sum_{i=1}^{N} C_c(D_i) \times L_i + \sum_{j=1}^{M} P_j \qquad (1)$$


where $C_c(D_i)$ is the construction cost per unit length as a function of the pipe diameter; $L_i$ is the pipe length; $D_i$ is the pipe diameter; $P_j$ is the penalty function ensuring that the pressure constraints are satisfied; $N$ is the number of pipes; and $M$ is the number of nodes. If a design solution does not meet the nodal pressure requirements, the penalty function is added to the objective function, as shown in Eq. (2):

$$P_j = \alpha \left(h_{\min} - h_j\right) + \beta \quad \text{if } h_j < h_{\min} \qquad (2)$$

where $h_j$ is the nodal pressure at node $j$; $h_{\min}$ is the minimum pressure requirement at node $j$; and $\alpha$ and $\beta$ are constants of the penalty function. In this study, the GoYang network design problem, first introduced by Kim et al. [11], is used as the reference benchmark problem for generating WDS design benchmarks. The GoYang network in South Korea is one of the well-known benchmark WDSs. It consists of 21 demand nodes, one zero-demand node, 30 pipes, nine loops, and one constant 4.52 kW pump downstream of a single reservoir, adding a constant head gain of 71 m, as shown in Fig. 1. In the original design problem, each pipe is selected from eight commercial pipes with internal diameters from 80 to 350 mm, so the number of candidate designs for the whole network is 8^30. The WDS design benchmarks in this study are generated by modifying five individual characteristics of the GoYang network design problem. The number of pipes (n) and the number of candidate pipe diameter options (m) are used as problem size modification factors. The pressure constraint (p), the roughness coefficient (c), and the nodal demand multiplier (d) are considered as problem complexity modification factors. The default problem characteristic factors are set to the bold values given in Table 1. Four values are considered for each problem characteristic factor, and 20 benchmark problems are generated in this study.
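As an illustration of Eqs. (1) and (2), the following minimal Python sketch evaluates the penalized design cost; the unit-cost table, penalty constants, and the hydraulic simulation producing the nodal pressures are placeholders, not the values or tools used in this study.

```python
# A minimal sketch of the penalized objective of Eqs. (1) and (2).
# The cost table, penalty constants, and h_min are hypothetical placeholders.
UNIT_COST = {80: 37.9, 100: 38.2, 125: 40.0, 150: 42.6,
             200: 47.6, 250: 54.4, 300: 62.5, 350: 71.5}  # assumed cost per metre

def design_cost(diameters, lengths, pressures, h_min=15.0,
                alpha=1e6, beta=1e5):
    """Least-cost objective of Eq. (1) plus the penalty of Eq. (2).

    diameters : chosen diameter of each of the N pipes (mm)
    lengths   : corresponding pipe lengths (m)
    pressures : simulated pressure head at each of the M nodes (m);
                in practice these come from a hydraulic solver.
    """
    # Construction cost: sum of C_c(D_i) * L_i over all pipes, Eq. (1).
    cost = sum(UNIT_COST[d] * length for d, length in zip(diameters, lengths))
    # Penalty P_j = alpha * (h_min - h_j) + beta for each deficient node, Eq. (2).
    for h in pressures:
        if h < h_min:
            cost += alpha * (h_min - h) + beta
    return cost
```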

Fig. 1 Layout of the GoYang network


3 Performance Measurement Results

In this study, five algorithms are compared: random search (RS), genetic algorithms (GAs) [12], simulated annealing (SA) [13], harmony search (HS) [14], and the water cycle algorithm (WCA) [15]. Each metaheuristic algorithm is tested with 20 independent runs for each of the 20 cases shown in Table 1, with a maximum of 20,000 function evaluations as the stopping criterion. Because the global optimal solution of a WDS design problem is generally unknown and changes as the problem characteristics change, the ratio of the optimal solution cost obtained by an algorithm to the known worst solution cost is defined as the improvement ratio. RS found a feasible solution in 99% of the total runs, and the other algorithms found feasible solutions in all individual runs of each case.
Figures 2 and 3 show the average and standard deviation of the average improvement ratios; both statistics are calculated from feasible solutions only. First, comparing the average values of the average improvement ratio, RS shows the lowest ability to find optimal designs over the 20 design benchmarks. RS shows the smallest standard deviation among the applied metaheuristic algorithms under variation of the roughness coefficient and the nodal demand multiplier; however, it converges to solutions of low fitness, so the reliability of its performance is low. Even though SA finds feasible design solutions in all cases, it shows the second worst results in terms of the average of the average improvement ratio. The GAs show intermediate performance among the applied algorithms in both average and standard deviation: they obtain better solutions than SA but show lower performance and reliability than HS and the WCA under variation of the number of pipes and the nodal demand multiplier. HS and the WCA show similar performance and reliability on the modified GoYang network design problems, with their lowest performance and reliability under variation of the nodal demand multiplier compared with the other factors.
In summary, each metaheuristic algorithm has its own strengths and weaknesses, and as the complexity and difficulty of the design benchmarks increase, the performance and reliability of the applied algorithms weaken consistently.

Table 1 Applied factors for benchmark generation

Factors   Used values
n         30, 60, 90, 120
m         8, 10, 12, 14
p         15, 17, 19, 21
c         100, 90, 80, 70
d         1.00, 1.25, 1.50, 1.75
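To make the generation procedure concrete, the sketch below enumerates the 20 problem settings one factor at a time; treating the first value of each list as the default (the bold value in Table 1) is an assumption of this illustration.

```python
# One-factor-at-a-time generation of the 20 benchmark settings of Table 1.
# Assumption: the default value of each factor is the first entry of its list.
FACTORS = {
    "n": [30, 60, 90, 120],         # number of pipes
    "m": [8, 10, 12, 14],           # candidate diameter options
    "p": [15, 17, 19, 21],          # minimum pressure constraint (m)
    "c": [100, 90, 80, 70],         # roughness coefficient
    "d": [1.00, 1.25, 1.50, 1.75],  # nodal demand multiplier
}
DEFAULTS = {name: values[0] for name, values in FACTORS.items()}

def generate_benchmarks():
    """Yield one problem setting per (factor, value) pair: the chosen
    factor takes the given value and the others stay at their defaults."""
    for name, values in FACTORS.items():
        for v in values:
            setting = dict(DEFAULTS)
            setting[name] = v
            yield setting

assert len(list(generate_benchmarks())) == 20  # 5 factors x 4 values
```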

Fig. 2 Performance of metaheuristic algorithms (average of average improvement ratio); panels: (a) RS, (b) GAs, (c) SA, (d) HS, (e) WCA

Thus, it is important to select a proper design algorithm for a given engineering problem, and to improve existing algorithms by enhancing the optimization process with the problem characteristics in mind.


Fig. 3 Performance of metaheuristic algorithms (standard deviation of average improvement ratio); panels: (a) RS, (b) GAs, (c) SA, (d) HS, (e) WCA

4 Conclusions

Engineering benchmark problems can be used for performance comparison among metaheuristic algorithms, and the water distribution system (WDS) design problem is one of the most widely used benchmarks. However, the traditional WDS design problems cover only a limited set of problem characteristics.


Therefore, in this study, engineering design problems were generated by modifying an existing WDS design benchmark and applied to the performance measurement of metaheuristic algorithms. Each applied algorithm shows its own strengths and weaknesses, and the performance of the algorithms weakens as the size and complexity of the problems increase. This implies that finding optimal solutions to engineering problems with a metaheuristic algorithm requires an efficient approach that considers the characteristics of the problem. In addition, cost minimization was selected here as the objective function, with the nodal pressure requirement as the hydraulic constraint; however, several other objectives (e.g., system reliability and greenhouse gas emission) and constraints (e.g., water flow velocity limits and water quality requirements) exist in WDS design. Therefore, various combinations of objectives and constraints, as well as other problem modification factors, can be considered for benchmark problem generation in future studies.

Acknowledgements This work was supported by a grant from The National Research Foundation (NRF) of Korea, funded by the Korean government (MSIP) (No. 2016R1A2A1A05005306).

References

1. Simpson, A.R., Dandy, G.C., Murphy, L.J.: Genetic algorithms compared to other techniques for pipe optimization. J. Water Resour. Plan. Manag. 120(4), 423–443 (1994)
2. Maier, H.R., Simpson, A.R., Zecchin, A.C., Foong, W.K., Phang, K.Y., Seah, H.Y., Tan, C.L.: Ant colony optimization for design of water distribution systems. J. Water Resour. Plan. Manag. 129(3), 200–209 (2003)
3. Montalvo, I., Izquierdo, J., Pérez, R., Tung, M.M.: Particle swarm optimization applied to the design of water supply systems. Comput. Math. Appl. 56(3), 769–776 (2008)
4. Geem, Z.W.: Optimal cost design of water distribution networks using harmony search. Eng. Optim. 38(3), 259–277 (2006)
5. Sadollah, A., Yoo, D.G., Yazdi, J., Kim, J.H., Choi, Y.: Application of water cycle algorithm for optimal cost design of water distribution systems. In: International Conference on Hydroinformatics (2014)
6. Sadollah, A., Yoo, D.G., Kim, J.H.: Improved mine blast algorithm for optimal cost design of water distribution systems. Eng. Optim. 47(12), 1602–1618 (2015)
7. Schaake, J.C., Lai, F.H.: Linear programming and dynamic programming application to water distribution network design. MIT Hydrodynamics Laboratory (1969)
8. Fujiwara, O., Khang, D.B.: A two-phase decomposition method for optimal design of looped water distribution networks. Water Resour. Res. 26(4), 539–549 (1990)
9. Reca, J., Martínez, J.: Genetic algorithms for the design of looped irrigation water distribution networks. Water Resour. Res. 42(5) (2006)
10. Lee, H.M., Yoo, D.G., Sadollah, A., Kim, J.H.: Optimal cost design of water distribution networks using a decomposition approach. Eng. Optim. 48(12), 2141–2156 (2016)
11. Kim, J.H., Kim, T.G., Kim, J.H., Yoon, Y.N.: A study on the pipe network system design using non-linear programming. J. Korean Water Resour. Assoc. 27(4), 59–67 (1994)
12. Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Mach. Learn. 3(2), 95–99 (1988)
13. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)


14. Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. Simulation 76(2), 60–68 (2001)
15. Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110, 151–166 (2012)

Comparison of Parameter-Setting-Free and Self-adaptive Harmony Search

Young Hwan Choi, Sajjad Eghdami, Thi Thuy Ngo, Sachchida Nand Chaurasia and Joong Hoon Kim

Abstract This study compares the performance of the parameter-setting-free and self-adaptive harmony search algorithms proposed in previous studies, none of which requires the user to set the algorithm parameter values. These algorithms are parameter-setting-free harmony search, almost-parameter-free harmony search, novel self-adaptive harmony search, the self-adaptive global-based harmony search algorithm, parameter adaptive harmony search, and adaptive harmony search, each of which has a distinctively different mechanism for adaptively controlling the parameters over iterations. Conventional mathematical benchmark problems of various dimensions and characteristics and water distribution network design problems are used for the comparison. The best, worst, and average values of the final solutions are used as performance indices. The computational results show that the algorithms rank differently on each performance index depending on the characteristics of the optimization problem, such as the size of the search space. The conclusions derived in this study are expected to benefit future research on developing new optimization algorithms with adaptive parameter control, since algorithm performance can then be improved based on the problem's characteristics in a much simpler way.

Keywords Harmony search · Parameter-setting-free · Self-adaptive

Y. H. Choi
Department of Civil, Environmental and Architectural Engineering, Korea University, Seoul 136-713, South Korea

S. Eghdami · T. T. Ngo · S. N. Chaurasia
Research Center for the Disaster and Science Technology, Korea University, Seoul 136-713, South Korea

J. H. Kim (B)
School of Civil, Environmental and Architectural Engineering, Korea University, Seoul 136-713, Korea
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2019
N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_11


1 Introduction

Optimization problems arise in various fields, including mathematics and engineering, and many optimization algorithms have been developed and applied to solve them. However, the performance of these algorithms, in terms of exploitation and exploration ability, varies greatly with the setting of their own parameters, whose values directly affect the performance of the algorithm at hand. Finding the best set of parameter values is itself a challenging task. To overcome this drawback, many studies have proposed parameter control methods such as parameter-setting-free and self-adaptive approaches, and Harmony Search (HS) has likewise seen such improvements to enhance its performance.
HS was proposed by [1] and [2]. It finds solutions using a method inspired by the musical improvisation technique. Previous studies state that HS requires fewer mathematical operations than other optimization algorithms and can be easily adapted to solve various kinds of optimization problems [3, 4]. However, a drawback of the HS algorithm is that its three parameters are constant values, and it is hard to decide these values for different problems. To improve the performance of the HS algorithm, a variety of parameter-setting-free and self-adaptive HS variants have been proposed, as described below. These variants improve the operation and dynamics of the three parameters [i.e., harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and bandwidth (Bw)] and have been applied in various fields (e.g., mathematics and multidisciplinary engineering problems).
The parameter-setting-free (PSF) method automatically updates the parameter values after each iteration using its update structure [5]. That study introduced a new operating type memory (OTM), which updates the parameters according to the number of times each operation type, such as harmony memory consideration or pitch adjustment, is selected. Shivaie et al. [6] developed a self-adaptive global-based harmony search algorithm (SGHSA) inspired by the global harmony search algorithm [4]. SGHSA employs a new improvisation scheme with an adaptive bandwidth that depends on the number of generations relative to the total number of iterations: in the early generations (less than half of the iterations), the bandwidth is calculated by a dynamic bandwidth formulation, and beyond half of the iterations, the lower-boundary bandwidth value is used. Luo [7] developed the novel self-adaptive harmony search (NSHS) algorithm, which adjusts HMCR, PAR, and bandwidth for suitable parameter settings. In NSHS, HMCR is set to a constant value according to the number of decision variables. The PAR procedure is replaced by one that considers the variance of fitness, and new solutions are generated within the boundary conditions of the decision variables. NSHS applies a dynamic bandwidth whose value decreases gradually as the number of iterations increases and grows as the range of the boundary conditions expands.


The earlier PSF approach considered only HMCR and PAR; the almost-parameter-setting-free harmony search (APS-HS) proposed by [8] also includes Bw, which is controlled by the minimum and maximum decision variable values. In this study, we compare the PSF and self-adaptive harmony search methods developed to improve solution quality, applying them to mathematical benchmark problems. In addition, various performance indices are used to compare the quantitative performance of each algorithm. The comparison is expected to benefit future research work that formulates such approaches; in particular, the performance of newly proposed algorithms can be rigorously tested in a much simpler way.

2 Parameter-Setting-Free and Self-adaptive Harmony Search

2.1 Harmony Search

HS can be explained through the improvisation process of musicians: the search for an optimum harmony in music is equivalent to the search for an optimum solution. When many different musicians play their instruments, the various sounds generate one single harmony; the musicians gradually move toward a suitable harmony and finally find an aesthetically pleasing one. In HS, four parameters are used to search for the optimum solution: harmony memory (HM), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and bandwidth (Bw). These parameters are set to constant values. The search space for the instruments is limited to a memory space described as the harmony memory (HM), where the harmony memory size (HMS) represents the maximum number of harmonies to be saved in the memory space. The main operators of HS for finding better solutions within the HM are random selection (RS), memory consideration (MC), and pitch adjustment (PA), as sketched below.
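A minimal sketch of one HS improvisation step, assuming real-valued decision variables with box bounds, is:

```python
import random

def improvise(HM, hmcr, par, bw, bounds):
    """One HS improvisation combining the three operators: memory
    consideration (MC), pitch adjustment (PA), and random selection (RS)."""
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:            # MC: reuse a value from HM
            x = random.choice(HM)[i]
            if random.random() < par:         # PA: fine-tune the pitch
                x = min(max(x + random.uniform(-1.0, 1.0) * bw, lo), hi)
        else:                                 # RS: draw from the full range
            x = random.uniform(lo, hi)
        new.append(x)
    return new
```

The improvised harmony then replaces the worst member of the HM whenever its fitness is better.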

2.2 Parameter-Setting-Free Harmony Search

The parameter-setting-free harmony search (PSF-HS) was developed to remove the burden of choosing suitable parameter settings [5]. PSF-HS modifies the improvisation step of HS by updating the HMCR and PAR at every iteration for each decision variable. The study introduced the operation type memory (OTM) to update the parameters: a memory recording which HS operator (i.e., RS, MC, or PA) was used to generate each value, with the parameters (i.e., HMCR and PAR) updated according to the number of times each operator was selected. As the number of iterations increases, the HMCR generally increases, but


the PAR decreases. This trend can push HMCR to 1 and PAR to 0; to prevent this, a noise value is used to keep HMCR and PAR between 0 and 1, as in the sketch below.
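A rough sketch of the OTM-driven update is given below; since the exact update formula of [5] is not reproduced in this chapter, the counting rule and the noise handling here are only illustrative assumptions.

```python
import random

def update_parameters(otm, noise=0.01):
    """Recompute HMCR and PAR for each decision variable from the
    operation type memory (OTM), which records which operator (RS, MC,
    or PA) produced each stored value. The noise term keeps both rates
    strictly between 0 and 1, as described in the text. This counting
    rule is an illustrative assumption, not the formula of [5]."""
    hmcr, par = [], []
    for ops in otm:  # ops: list of operator labels for one variable
        n = len(ops)
        memory_based = sum(op in ("MC", "PA") for op in ops)
        adjusted = sum(op == "PA" for op in ops)
        h = memory_based / n
        p = adjusted / memory_based if memory_based else 0.5
        hmcr.append(min(max(h + random.uniform(-noise, noise), noise), 1 - noise))
        par.append(min(max(p + random.uniform(-noise, noise), noise), 1 - noise))
    return hmcr, par
```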

2.3 Almost-Parameter-Free Harmony Search

Almost-parameter-free harmony search (APS-HS) is a modified version of the original PSF-HS [5] that additionally considers a dynamic Bw alongside automatic HMCR and PAR setting. It also applies the OTM to calculate the adapted HMCR and PAR with the same formulation. In APS-HS, Bw is dynamically updated according to the maximum and minimum values in the HM.

2.4 Novel Self-adaptive Harmony Search

Novel self-adaptive harmony search (NSHS) modifies the process of determining HMCR, PAR, and Bw from constant values [7]. HMCR is set according to the dimension of the problem and scales with it; for example, a more complex problem has a larger HMCR. In the original HS, the setting of Bw is important for convergence to the optimal solution; NSHS therefore uses a dynamic Bw for fine-tuning, with a tuning range that is wider at the beginning and narrower at the end of the simulation. PA is replaced by an operator that considers the variance of fitness, and a new solution is generated within the boundary conditions of the decision variables.

2.5 Self-adaptive Global-Based Harmony Search Algorithm

The self-adaptive global-based harmony search (SGHSA) was developed to find better solutions with more effective parameter tuning [6]. SGHSA changes the pitch adjustment rule to avoid falling into a local optimum solution, and the value of the Bw parameter is dynamically reduced over subsequent generations.

2.6 Parameter Adaptive Harmony Search

Kumar et al. [9] proposed a dynamic change in the values of HMCR and PAR, yielding an improved version of harmony search called parameter adaptive harmony search (PAHS). PAHS keeps the value of HMCR small at first so that the algorithm explores each solution; the best solutions found are stored in HM as the algorithm proceeds through the generations. During the final generations, the value of HMCR increases so that the search is restricted to HM


and solutions are obtained from within HM only. Similarly, PAR has a high value during the earlier generations, making the algorithm modify the solutions either stored in HM or drawn from the feasible range; a sketch of such schedules follows.
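A hedged sketch of such linear schedules, with illustrative boundary values rather than those of Kumar et al. [9], is:

```python
def pahs_parameters(t, t_max, hmcr_lo=0.7, hmcr_hi=0.99,
                    par_lo=0.1, par_hi=0.99):
    """Linear schedules in the spirit of PAHS: HMCR grows from a small
    value (broad exploration) toward 1 (search restricted to HM), while
    PAR shrinks from a high value to a low one. The boundary values here
    are illustrative placeholders, not those of Kumar et al. [9]."""
    frac = t / float(t_max)
    hmcr = hmcr_lo + (hmcr_hi - hmcr_lo) * frac
    par = par_hi - (par_hi - par_lo) * frac
    return hmcr, par
```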

3 Application Results

To evaluate the parameter-setting-free and self-adaptive harmony search algorithms, mathematical benchmark problems are used to measure performance. Each individual simulation is repeated 50 times, and each simulation performs 50,000 function evaluations (NFEs) per problem. To eliminate the influence of the initial conditions, the same randomly generated initial solutions are used for all algorithms. For the single-objective optimization problems, 30 simulation cases (5 benchmark functions × 6 numbers of decision variables: 2, 5, 10, 30, 50, 100) are employed to compare the performance of these approaches, as listed in Table 1. The performance measures used for quantifying the performance of the compared algorithms are the best, mean, and worst values over the 50 individual runs; for a fair comparison, the initial solutions of the five algorithms are identical and randomly generated. Based on Tables 2, 3, 4, 5 and 6 in the Appendix, for the relatively low-dimensional cases (DV = 2, 5, 10), SGHSA outperforms the other algorithms: among the 20 cases [i.e., three numbers of decision variables (2, 5, and 10) × four benchmark problems (Rosenbrock, Rastrigin, Griewank, and Ackley)], SGHSA achieved first rank in 9 cases. For the large benchmark problems (DV = 30, 50, 100), NSHS shows the best performance.

Table 1 Test problems for single-objective optimization

Name                                        Dimensions              Search domain         Global optimum
Sphere function (Bowl-shaped)               2, 5, 10, 30, 50, 100   [−∞, ∞]^n             0
Rosenbrock function (Valley-shaped)         2, 5, 10, 30, 50, 100   [−30, 30]^n           0
Rastrigin function (Many local optima)      2, 5, 10, 30, 50, 100   [−5.12, 5.12]^n       0
Griewank function (Many local optima)       2, 5, 10, 30, 50, 100   [−600, 600]^n         0
Ackley function (Distinct global optimum)   2, 5, 10, 30, 50, 100   [−32.768, 32.768]^n   0


4 Discussion

This study presents a comparison of parameter-setting-free and self-adaptive harmony search algorithms to show the effect of their own optimization operators on improving the ability to find the best solution. The algorithms were applied to well-known mathematical benchmark problems and evaluated fairly through statistical analysis using the same initial solutions. Among these parameter-controlled optimization algorithms, NSHS shows the best performance in most cases, especially on the higher-dimensional benchmark problems. NSHS is an improved harmony search method that modifies the harmony memory consideration and uses a dynamic bandwidth. This approach can avoid getting stuck in local optima by using fstd (the standard deviation of fitness) of the decision variable(s). NSHS has several boundary conditions (the min/max decision variable range and the bandwidth range) and uses a method that takes the gap between decision variables into account. Therefore, this algorithm can find good solutions for continuous problems, but its performance on discrete problems has not been verified, so a discrete problem is worth simulating to evaluate this ability; the algorithm can be extended to discrete optimization problems. This study reveals a particular relationship between the search operators and problem characteristics, which should help newly proposed algorithms be tested in a much simpler way. In a future study, by considering the characteristics of these parameter-controlled optimization algorithms, a new self-adaptive algorithm can be developed and applied to various benchmark problems (e.g., continuous and discrete mathematical problems and real-world engineering problems).

Acknowledgements This work was supported by a grant from The National Research Foundation (NRF) of Korea, funded by the Korean government (MSIP) (No. 2016R1A2A1A05005306).

Appendix

See Tables 2, 3, 4, 5 and 6.

Table 2 The Sphere function optimization results (average)

Algorithm   Dimension
            2          5          10         30         50         100
Simple HS   7.53E−09   2.30E−07   1.34E−06   3.43E−03   1.86E−02   5.42E−02
PSF-HS      8.20E−09   3.76E−05   1.18E−03   2.09E−02   4.72E−02   1.26E−01
APF-HS      4.69E−09   2.41E−05   7.73E−04   2.23E−02   5.60E−02   1.45E−01
SGHSA       8.44E−09   1.24E−08   7.58E−07   6.34E−03   2.72E−02   8.80E−02
NSHS        2.17E−12   6.15E−10   4.90E−09   6.76E−08   2.03E−07   9.43E−07
PAHS        2.61E−05   1.82E−03   9.50E−03   5.66E−02   1.17E−01   2.60E−01


Table 3 The Rosenbrock function optimization results

Algorithm   Dimension
            2          5          10         30         50         100
Simple HS   5.10E−08   1.70E−05   2.09E−05   2.19E−02   5.69E−02   1.59E−01
PSF-HS      3.21E−08   3.12E−05   2.21E−03   4.35E−02   1.06E−01   2.80E−01
APF-HS      3.78E−08   1.83E−05   1.70E−03   3.53E−02   1.01E−01   2.93E−01
SGHSA       0.00E+00   1.36E−05   7.94E−05   2.20E−02   6.52E−02   2.36E−01
NSHS        0.00E+00   1.61E−05   1.95E−05   2.13E−05   1.82E−05   2.60E−05
PAHS        4.77E−03   6.37E−03   8.65E−03   2.05E−02   3.29E−02   5.91E−02

Table 4 The Rastrigin function optimization results

Algorithm   Dimension
            2          5          10         30         50         100
Simple HS   2.06E−04   4.35E−04   7.19E−04   1.44E−03   1.68E−03   3.09E−03
PSF-HS      3.05E−04   5.99E−04   7.26E−04   1.02E−03   2.25E−03   4.58E−03
APF-HS      2.33E−04   5.21E−04   1.01E−03   1.08E−03   2.09E−03   2.53E−03
SGHSA       2.10E−05   2.07E−06   1.73E−05   1.10E−03   1.48E−03   2.06E−03
NSHS        1.26E−04   2.30E−04   4.37E−04   8.64E−04   1.07E−03   1.46E−03
PAHS        4.77E−03   6.37E−03   8.65E−03   2.05E−02   3.29E−02   5.91E−02

Table 5 The Griewank function optimization results

Algorithm   Dimension
            2          5          10         30         50         100
Simple HS   1.97E−10   7.73E−06   2.62E−04   1.47E−03   2.08E−03   3.09E−03
PSF-HS      2.39E−09   5.71E−06   1.07E−04   9.58E−04   1.48E−03   2.28E−03
APF-HS      3.46E−09   5.71E−06   9.27E−05   1.03E−03   1.65E−03   2.42E−03
SGHSA       1.17E−13   2.73E−11   2.14E−04   1.58E−03   2.07E−03   3.15E−03
NSHS        2.23E−13   6.86E−11   5.58E−10   3.05E−09   5.34E−09   1.20E−08
PAHS        7.76E−06   4.43E−04   1.20E−03   2.76E−03   3.54E−03   4.95E−03

Table 6 The Ackley function optimization results

Algorithm   Dimension
            2          5          10         30         50         100
Simple HS   5.22E−05   8.15E−03   6.53E−02   1.32E−01   1.51E−01   1.76E−01
PSF-HS      1.57E−04   1.01E−02   3.81E−02   1.03E−01   1.25E−01   1.45E−01
APF-HS      1.57E−04   9.83E−03   3.59E−02   1.10E−01   1.32E−01   1.52E−01
SGHSA       1.01E−06   7.40E−06   6.37E−02   1.33E−01   1.47E−01   1.77E−01
NSHS        1.25E−06   7.59E−06   2.14E−05   7.18E−05   1.28E−04   2.90E−04
PAHS        1.55E−02   7.88E−02   1.19E−01   1.71E−01   1.83E−01   2.00E−01


References

1. Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. Simulation 76(2), 60–68 (2001)
2. Kim, J.H., Geem, Z.W., Kim, E.S.: Parameter estimation of the nonlinear Muskingum model using harmony search. JAWRA J. Am. Water Resour. Assoc. 37(5), 1131–1138 (2001)
3. Mahdavi, M., Fesanghary, M., Damangir, E.: An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 188(2), 1567–1579 (2007)
4. Omran, M.G., Mahdavi, M.: Global-best harmony search. Appl. Math. Comput. 198(2), 643–656 (2008)
5. Geem, Z.W.: Parameter estimation of the nonlinear Muskingum model using parameter-setting-free harmony search. J. Hydrol. Eng. 16(8), 684–688 (2010)
6. Shivaie, M., Ameli, M.T., Sepasian, M.S., Weinsier, P.D., Vahidinasab, V.: A multistage framework for reliability-based distribution expansion planning considering distributed generations by a self-adaptive global-based harmony search algorithm. Reliab. Eng. Syst. Saf. 139, 68–81 (2015)
7. Luo, K.: A novel self-adaptive harmony search algorithm. J. Appl. Math. (2013)
8. Jiang, S., Zhang, Y., Wang, P., Zheng, M.: An almost-parameter-free harmony search algorithm for groundwater pollution source identification. Water Sci. Technol. 68(11) (2013)
9. Kumar, V., Chhabra, J.K., Kumar, D.: Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems. J. Comput. Sci. 5(2), 144–155 (2014)

Copycat Harmony Search: Considering Poor Music Player's Followship Toward Good Player

Sang Hoon Jun, Young Hwan Choi, Donghwi Jung and Joong Hoon Kim

Abstract Harmony Search (HS), one of the most popular metaheuristic optimization algorithms, is inspired by the musical improvisation process. HS operators mimic the different behaviors music players use to make the best harmony: harmony memory consideration realizes a player's reuse of sound combinations from the good harmonies found in the past, whereas pitch adjustment is derived from fine pitch tuning. However, to the authors' best knowledge, no harmony search variant accounts for the fact that a poor music player improves by following a good performer. This study proposes a new improved version of HS called Copycat Harmony Search (CcHS), which employs a novel pitch adjustment approach with dynamic bandwidth change and a poor solution's followship toward a good solution. The performance of CcHS is compared to that of the original HS and HS variants with modified pitch adjustment on a set of well-known mathematical benchmark problems. The results obtained show that CcHS outperforms the other algorithms in most problems, finding the known global optimum.

Keywords Copycat harmony search · Improved pitch adjustment · Poor solution's followship

S. H. Jun · Y. H. Choi
Department of Civil, Environmental and Architectural Engineering, Korea University, Seoul 136-713, South Korea

D. Jung
Research Center for Disaster Prevention Science and Technology, Korea University, Seoul, South Korea

J. H. Kim (B)
School of Civil, Environmental and Architectural Engineering, Korea University, Seoul 136-713, South Korea
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2019
N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_12


1 Introduction

In mathematics and computer science, optimization refers to the process of finding the best element from some set of available alternatives. Traditionally, mathematical methods (e.g., linear programming, nonlinear programming, dynamic programming) were used to solve optimization problems. However, because of their drawbacks, such as requiring a large number of evaluations when applied to complex mathematical problems and real-life optimization problems, metaheuristic algorithms that mimic natural and behavioral phenomena are now widely used.
Harmony Search (HS) [1] is one of the most famous metaheuristic algorithms, owing to its simplicity and efficiency. It is inspired by the musical improvisation process in which musicians search for the best harmony by adjusting the pitches of their instruments. Since HS was developed, many improved versions of the algorithm have been proposed to improve its performance, such as Improved Harmony Search (IHS) [2], Global-best Harmony Search (GHS) [3], and Self-Adaptive Harmony Search (SaHS) [4]. These algorithms modified the pitch adjustment of the original HS to eliminate the drawback that arises from fixing the parameter values.
In this study, a new variant of HS called Copycat Harmony Search (CcHS) is proposed to enhance the search for the global optimum. Also, to reduce the dependency on the selection of different parameters, CcHS adapts its search automatically based on its Harmony Memory (HM). The details of the algorithm are explained in Sect. 2. The performance of CcHS is examined and compared to other HS algorithms using eight mathematical benchmark problems of 30 dimensions.

2 Copycat Harmony Search

In CcHS, poor solutions mimic good solutions in the HM. A new harmony is improvised as optimization proceeds, and it replaces the worst harmony in the HM if it is better than the worst. However, if the harmonies in the HM are nearly identical, it is likely that the best and worst harmonies in the HM will not change over iterations. To overcome this limitation, a novel adjustment is employed when the best and worst harmonies have not changed for a predefined number of iterations.
CcHS has two different ways of generating a new harmony during pitch adjustment. If the best harmony in the HM does not change during a predefined number of iterations, a new harmony is formed considering the range of good harmonies; the number of good harmonies to consider is decided by the user. In this research, the number of good harmonies (NGH) is set to 3, which means that a new harmony is created within the range of the top 3 solutions in the HM. The other strategy applies when the worst harmony is fixed for a specific number of iterations. The worst harmony in the HM is not updated when the newly searched harmony is no better than the worst; in CcHS, such newly improvised harmonies are considered as


bad solutions, because good harmonies are no longer being generated. To improvise a better solution, the harmony is formed considering the best harmony in the HM: in the pitch adjustment process, the new harmony moves toward the best value in the HM. This concept comes from swarm intelligence, making the bad solutions mimic the good one. Unlike GHS [3], the new solution is generated between the best value in the HM and a random value in the HM. For the proposed adjustment method, new parameters are introduced: the update count of the best (UCB), the update count of the worst (UCW), and the fixed number of iterations (FI). The number of iterations during which the best and worst harmonies remain unchanged is stored in the counter parameters UCB and UCW, respectively. When UCB or UCW exceeds its FI, the new harmony is adjusted as in Eqs. (1) and (2):

$$x_{i,new} = \min\!\left(HM_i^{N_{GH}}\right) + \left[\max\!\left(HM_i^{N_{GH}}\right) - \min\!\left(HM_i^{N_{GH}}\right)\right] \times rand() \qquad (1)$$

$$x_{i,new} = x_i + \left(x_{best,i} - x_i\right) \times rand() \qquad (2)$$

When the best harmony has been fixed for its FI, a new harmony is generated as in Eq. (1), where $\min(HM_i^{N_{GH}})$ and $\max(HM_i^{N_{GH}})$ are the smallest and largest of the NGH values in the HM, respectively; NGH decides how many good harmonies, including the best harmony in the HM, are considered. For the adjustment while the worst harmony has been fixed for its FI, Eq. (2) is applied: to update the worst harmony (i.e., the poorest music player), each decision variable mimics the best solution (i.e., the good player). Besides the bad solution's followship toward the good solution, to eliminate the inconvenience of setting fixed parameter values (e.g., PAR, BW), PAR is linearly increased from 0.0 to 1.0 over the iterations and BW is dynamically changed according to the HM at each iteration. Wang and Huang [4] suggested that PAR should decrease linearly as iterations proceed, but based on the results of preliminary tests, increasing PAR during optimization performed better in most cases. Also, to avoid setting a constant value of BW, a novel pitch adjustment is suggested: in CcHS, BW changes dynamically at each iteration considering the values of the variables in the HM, as follows:

$$bw_{i,t} = \max\!\left(HM_i\right) - \min\!\left(HM_i\right) \qquad (3)$$

The BW of the ith variable at the tth iteration is determined by Eq. (3), where $\max(HM_i)$ and $\min(HM_i)$ are the largest and smallest values of the ith variable in the HM. With the proposed pitch adjustment, the inconvenience of setting a specific value for BW is removed, and by using the spread of the ith variable in the HM as its BW at each iteration, pitch adjustment takes into account the memories found so far. Decision variables in the HM have different ranges at each iteration: a variable showing a large difference between its maximum and minimum values has not yet converged and needs more exploration, searching globally with a large BW calculated from its own range. Meanwhile, when the maximum and minimum values are nearly the same, the decision variable has converged


to a specific value, so exploitation should be performed, and the small BW of that variable helps. Therefore, letting the BW of each variable change dynamically according to its own status in the HM is reasonable for both global and local search; a minimal sketch follows.
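A minimal sketch of the three CcHS adjustments of Eqs. (1)-(3), assuming the HM is kept sorted best-first, is:

```python
import random

def adjust_best_stalled(HM, i, n_gh=3):
    """Eq. (1): redraw variable i within the range spanned by the top
    n_gh harmonies (HM is assumed sorted best-first here)."""
    top = [harmony[i] for harmony in HM[:n_gh]]
    lo, hi = min(top), max(top)
    return lo + (hi - lo) * random.random()

def adjust_worst_stalled(x_i, x_best_i):
    """Eq. (2): move the current value toward the best harmony's value."""
    return x_i + (x_best_i - x_i) * random.random()

def dynamic_bw(HM, i):
    """Eq. (3): bandwidth of variable i = spread of that variable in HM."""
    column = [harmony[i] for harmony in HM]
    return max(column) - min(column)
```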

3 Application and Results

In this study, the proposed algorithm is applied to seven 30-dimensional and one 2-dimensional mathematical benchmark problems (Table 1). The performance of CcHS is compared to that of the original HS, IHS, GHS, and SaHS with respect to final solution quality. Thirty independent optimizations are conducted to calculate the mean, best, and worst solution values, starting from randomly generated HMs. The parameter sets suggested in previous studies are adopted for the comparison (Table 2). Consistent values of FI for UCB and UCW and of NGH are used for all problems (FI of best = 40, FI of worst = 20, NGH = 3). The total number of function evaluations allowed is set to 50,000 for all algorithms.
Table 3 shows the results obtained on the eight mathematical benchmark problems. In most problems, CcHS outperforms the other variants of HS, finding the known global optimum. However, SaHS achieved better results for the mean and worst values than

Table 1 The details of the 8 mathematical benchmark problems (D = 30)

Sphere: $f_1(x) = \sum_{i=1}^{D} x_i^2$, with $-100 < x_i < 100$ ($f_{\min}$: 0)

Schwefel function 2.22: $f_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$, with $-10 < x_i < 10$ ($f_{\min}$: 0)

Rosenbrock's valley: $f_3(x) = \sum_{i=1}^{D-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]$, with $-30 < x_i < 30$ ($f_{\min}$: 0)

Step function: $f_4(x) = \sum_{i=1}^{D} [x_i + 0.5]^2$, with $-100 < x_i < 100$ ($f_{\min}$: 0)

Schwefel function 2.26: $f_5(x) = 418.98289\,D + \sum_{i=1}^{D} \left(-x_i \sin\sqrt{|x_i|}\right)$, with $-512 < x_i < 512$ ($f_{\min}$: 0)

Rastrigin function: $f_6(x) = \sum_{i=1}^{D} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$, with $-5.12 < x_i < 5.12$ ($f_{\min}$: 0)

Ackley function: $f_7(x) = -20\exp\!\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\!\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + 20 + e$, with $-32 < x_i < 32$ ($f_{\min}$: 0)

Six-Hump Camel-Back function: $f_8(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$, with $-3 < x_1 < 3$, $-2 < x_2 < 2$ ($f_{\min}$: −1.03162845)

Table 2 Parameter data in HS variants

Parameter   HS     IHS            GHS    SaHS   CcHS
HMS         5      5              5      50     10
HMCR        0.9    0.9            0.9    0.99   0.99
PAR         0.3    –              –      –      –
PARmin      –      0.01           0.01   0.0    0.0
PARmax      –      0.99           0.99   1.0    1.0
BW          0.01   –              –      –      –
BWmin       –      0.0001         –      –      –
BWmax       –      (xU − xL)/20   –      –      –

CcHS in the Ackley problem. The Harmony Memory Size (HMS) in SaHS is 50, as suggested in the previous study. A large HM can consider various combinations of the decision variables but requires more time for comparing each generated solution with the solutions in the HM. Since the running time of the optimization is an important factor in assessing the performance of algorithms, SaHS, despite its better results here, has a deficiency in evaluation time. The effect of HMS in CcHS should be investigated later for the best performance.

4 Conclusions

Since its introduction, HS has gained popularity and has been applied to many complex problems. To enhance its performance and address disadvantages of the original HS, many improved versions of HS have been developed. In this study, a new version of HS called Copycat Harmony Search was proposed with a novel pitch adjustment strategy: when a better solution has not been generated for a predefined number of iterations, the bad solution mimics the good solution, with a dynamic bandwidth based on the values in the Harmony Memory. The performance of the proposed algorithm was compared to that of other improved versions of HS on a set of benchmark problems, and the results showed that CcHS outperformed the other algorithms; the followship of the bad solution toward the good solution produced the improvement. In future research, CcHS's performance should be verified on real-life optimization problems.

Acknowledgements This research was supported by a grant [13AWMP-B066744-01] from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure, and Transport of the Korean government.


Table 3 Optimization results

Function                        Statistic   HS            IHS           GHS           SaHS          CcHS
Sphere                          Mean        5.67E+00      1.14E−06      2.52E−03      2.64E−11      1.34E−12
                                Best        2.72E+00      6.96E−07      6.24E−05      2.18E−14      1.78E−14
                                Worst       7.33E+00      1.44E−06      6.20E−03      1.41E−10      4.44E−12
Schwefel function 2.22          Mean        8.19E−02      4.23E−03      2.29E−02      5.12E−05      5.63E−09
                                Best        5.76E−02      3.57E−03      2.35E−03      5.93E−07      6.21E−10
                                Worst       1.03E−01      4.67E−03      4.32E−02      1.52E−04      1.70E−08
Rosenbrock's valley             Mean        1.88E+02      1.07E+02      1.94E+01      2.66E+01      7.94E+00
                                Best        8.95E+01      2.29E+01      1.16E−01      2.50E+01      3.91E−02
                                Worst       2.49E+02      1.70E+02      3.03E+01      2.75E+01      1.79E+01
Step function                   Mean        8.29E−02      1.09E−06      1.08E−04      1.03E−12      2.04E−13
                                Best        4.60E−03      6.04E−07      3.42E−07      2.10E−14      6.66E−15
                                Worst       1.67E−01      1.32E−06      5.47E−04      5.69E−12      8.03E−13
Schwefel function 2.26          Mean        2.16E+01      2.52E−02      1.55E−02      1.08E+00      8.18E−05
                                Best        1.40E+01      3.38E−03      3.38E−03      4.43E−03      8.18E−05
                                Worst       2.82E+01      1.78E−01      4.39E−02      3.35E+00      8.18E−05
Rastrigin function              Mean        3.57E−01      2.34E+00      1.93E−03      2.61E+00      2.81E−09
                                Best        4.37E−02      5.55E−01      5.47E−05      1.29E+00      1.07E−14
                                Worst       1.05E+00      5.03E+00      7.41E−03      3.83E+00      2.59E−08
Ackley function                 Mean        7.23E−01      7.80E−04      1.20E−02      9.89E−06      1.93E−05
                                Best        1.87E−01      6.13E−04      2.47E−03      1.94E−07      7.54E−08
                                Worst       1.08E+00      8.62E−04      2.25E−02      2.77E−05      1.06E−04
Six-Hump Camel-Back function    Mean        −1.03162845   −1.03162843   −1.03162522   −1.03162846   −1.03162845
                                Best        −1.03162845   −1.03162855   −1.03162845   −1.03162846   −1.03162845
                                Worst       −1.03162845   −1.03162843   −1.03162203   −1.03162846   −1.03162845

References

1. Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. Simulation 76, 60–68 (2001)
2. Mahdavi, M., Fesanghary, M., Damangir, E.: An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 188, 1567–1579 (2007)
3. Omran, M.G.H., Mahdavi, M.: Global-best harmony search. Appl. Math. Comput. 198, 643–656 (2008)
4. Wang, C.M., Huang, Y.F.: Self-adaptive harmony search algorithm for optimization. Expert Syst. Appl. 37, 2826–2837 (2010)

Fused Image Separation with Scatter Graphical Method

Mayank Satya Prakash Sharma, Ranjeet Singh Tomar, Nikhil Paliwal and Prashant Shrivastava

Abstract Image fusion and its separation is a frequently arising issue in the image processing field. In this paper, we describe image fusion and its separation using the scatter graphical method and the joint probability density function. Fused image separation using the scatter graphical method depends on the joint probability density function of the fused image. This technique gives better results than other techniques in terms of signal-to-interference ratio (SIR) and peak signal-to-noise ratio (PSNR).

Keywords Real image · Scatter · BSS · PSNR · SIR · Real mixture

1 Introduction

Separation of merged and overlapped images is a frequently arising issue in the image processing field, for example the separation of fused and overlapped images obtained in many applications, where we get a mixture consisting of two or more images and need to separate them for identification. In this paper, it is assumed that the original images are mutually statistically independent and identifiable at the time of mixing and merging, and the problem is solved by applying the scatter graphical method. To apply the scatter graphical method in the frequency domain, the Equivariant Adaptive Separation Via Independence (EASI) algorithm was extended to separate complex-valued signals, as arises when photographing objects placed behind a glass window or windscreen, since most varieties of glass have semi-reflecting properties [1]. The need to separate the contributions of the original and the virtual images to the combined, superimposed images is important in applications where reflections may create ambiguity in scene analysis.

M. S. P. Sharma (B) · N. Paliwal · P. Shrivastava
Rustan Ji Institute of Technology, Tekanpur, India
e-mail: [email protected]

R. S. Tomar
ITM University Gwalior, Gwalior, India
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2019
N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_13


We get a mixture consisting of two or more merged images, and we need to separate them for identification. Algebraically, the image mixture can be written as

$$X = KS \qquad (1)$$

$$K = \begin{bmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{bmatrix} \qquad (2)$$

where $X = [x_1, x_2]^T$ are the mixed images, $S = [s_1, s_2]^T$ are the real images, and $K$ is the mixing matrix. Fused real-image separation is a blind source separation (BSS) problem, because neither the source 2-D real signals nor the mixing coefficients are known: the observed images are defined as weighted linear combinations of the source 2-D signals, and the mixing weights are also not given [2]. If we can estimate the weighted mixing matrix, then the original unmixed images can be recovered as

$$S = K^{-1} X \qquad (3)$$

There are many other applications of image separation, namely image denoising [3, 4]; medical signal processing such as fMRI, ECG, and EEG [5–7]; feature extraction in content-based image retrieval (CBIR) [8–10]; face recognition [4, 9]; compression and redundancy reduction [11]; watermarking [12, 13]; remote sensing for cloud prediction and detection [14], where the VHRR (very high-resolution radiometer) is a technique for cloud detection in remote sensing; scientific data mining [15]; and fingerprint separation in crime investigation [16]. There are few techniques and systems that can separate a particular speaker from mixed data contained in unwanted noisy environments; similar applications are studied in digital hearing aid systems, TV meeting rooms, image recognition systems, etc. In particular, independent component analysis (ICA) and microphone-array-based approaches are targeted: microphone array approaches enhance a target image from the merged images and discard noise, using the phase differences among the image sources, which relate to the distance between the microphone and the positions of the sources. There are many approaches for digital mixed image separation, namely (1) the scatter graphical technique, (2) the SVD-based ICA technique, and (3) convolutive mixture separation; these techniques are based on blind source separation (BSS). A small sketch of the mixing model follows.
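A minimal sketch of the mixing model of Eqs. (1)-(3), assuming two grayscale images of equal size and an invertible mixing matrix, is:

```python
import numpy as np

def mix(s1, s2, K):
    """Stack two flattened grayscale images and mix them: X = K S, Eq. (1)."""
    S = np.vstack([s1.ravel(), s2.ravel()])
    return K @ S

def unmix(X, K):
    """Recover the sources from a known (or estimated) K: S = K^{-1} X, Eq. (3)."""
    return np.linalg.inv(K) @ X

# Mixing matrix with the coefficients used later in the paper.
K = np.array([[0.467, 0.23],
              [0.33,  0.667]])
```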


2 Scatter-Geometrical Based Method

The scatter graphical method is an efficient technique for separation, and we use it here for image separation. The two-dimensional blind separation problem considers the input 2-D signals (i.e., mixtures) to be linear combinations of two different source signals. The scatter graphical approach is applicable to non-sparse signals. The fused mixtures are accordingly described by Eqs. (4) and (5):

$$X_1(x, y) = k_{11}\, s_1(x, y) + k_{12}\, s_2(x, y) \qquad (4)$$

$$X_2(x, y) = k_{21}\, s_1(x, y) + k_{22}\, s_2(x, y) \qquad (5)$$

where $s_i$ and $X_i$ are the source and fused mixture signals, respectively. The signals $s_i$ are assumed to be nonnegative and normalized, i.e., $0 \le s_i \le 1$; the gains of the signals and the dynamic range are integrated into the mixing matrix. We consider the problem of blind source separation (BSS) when the hidden images are nonnegative (N-BSS). In this case, the scatter plot of the mixed data is contained within the parallelogram generated by the columns of the mixing matrix, and a shrinking algorithm for unmixing nonnegative sources aims to estimate the mixing matrix and the sources from this parallelogram:

$$x_a = \max(w_1) \qquad (6)$$

$$y_a = \max(w_2) \qquad (7)$$

where $w_1$ and $w_2$ are one-dimensional image vectors. Further calculation depends on the assumption that $Q_1 < Q_2$, where $Q_1$ and $Q_2$ are defined by

$$Q_1 = \frac{K_{21}}{K_{22}} \qquad (8)$$

$$Q_2 = \frac{K_{22}}{K_{12}} \qquad (9)$$

The scatter approach can also be described through the vertices of the parallelogram:

$$A = \left[(k_{11} + k_{12})k,\ (k_{21} + k_{22})k\right] \qquad (10)$$

$$B = \left[(k_{12} - k_{11})k,\ -(k_{21} - k_{22})k\right] \qquad (11)$$

$$C = \left[-(k_{11} + k_{12})k,\ -(k_{21} + k_{22})k\right] \qquad (12)$$

$$D = \left[-(k_{12} - k_{11})k,\ (k_{21} - k_{22})k\right] \qquad (13)$$

where A, B, C, and D are the vertices of the parallelogram.


$$(K_{11} + K_{12})K = x_a, \quad (K_{21} + K_{22})K = y_a \qquad (14)$$

$$(K_{12} - K_{11})K = x_b, \quad -(K_{21} - K_{22})K = y_b \qquad (15)$$

$$-(K_{11} + K_{12})K = x_c, \quad -(K_{21} + K_{22})K = y_c \qquad (16)$$

$$-(K_{12} - K_{11})K = x_d, \quad (K_{21} - K_{22})K = y_d \qquad (17)$$

We estimate the mixing coefficients from these vertex coordinates through the following algebraic equations:

$$k_{11}k = \frac{x_a + x_d}{2} = \frac{x_a - x_b}{2} \qquad (18)$$

$$k_{12}k = \frac{x_a + x_b}{2} = \frac{x_a - x_d}{2} \qquad (19)$$

$$k_{22}k = \frac{y_a - y_d}{2} = \frac{y_a + y_b}{2} \qquad (20)$$

$$k_{21}k = \frac{y_a - y_b}{2} = \frac{y_a + y_d}{2} \qquad (21)$$
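A small sketch of Eqs. (18)-(21), assuming the parallelogram vertex coordinates have already been read off the scatter plot, is:

```python
def estimate_mixing(xa, ya, xb, yb, xd, yd):
    """Eqs. (18)-(21): recover the scaled mixing coefficients from the
    scatter-plot parallelogram vertices A = (xa, ya), B = (xb, yb), and
    D = (xd, yd). Each returned value carries the common factor k of the
    source range [-k, k]; the equivalent alternative forms of Eqs.
    (18)-(21) are noted in the comments."""
    k11_k = (xa + xd) / 2.0   # equivalently (xa - xb) / 2
    k12_k = (xa + xb) / 2.0   # equivalently (xa - xd) / 2
    k22_k = (ya - yd) / 2.0   # equivalently (ya + yb) / 2
    k21_k = (ya - yb) / 2.0   # equivalently (ya + yd) / 2
    return k11_k, k12_k, k21_k, k22_k
```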

3 Work Done

We take four different images and fuse them pairwise, making six combinations according to C(n, 2), where n is the number of images. We separate these fused images with the scatter graphical method, then compute the peak signal-to-noise ratio (PSNR) and signal-to-interference ratio (SIR) of the difference between each original image and its separated estimate. In this paper, a scatter graphical method of blind source separation is applied to images; the experimental results show that the scatter approach can separate every image.


4 Image Separation with Scatter-Geometrical Method

We take four different gray 512 × 512 bmp images, and our aim is to estimate the mixing matrix from the original images. Let us take the two images IM1 and IM2 in Fig. 1 (Fig. 2). When two histogram-equalized real images are linearly mixed as in Eqs. (22) and (23), the predicted and observed real images no longer have uniform probability distribution functions:

$$x_1 = k_{11}\, IM_1 + k_{12}\, IM_2 \qquad (22)$$

$$x_2 = k_{21}\, IM_1 + k_{22}\, IM_2 \qquad (23)$$

In vector-matrix form, the above equations can be written as

$$X = K\, IM \qquad (24)$$

where the mixing coefficient matrix is given by

$$K = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} \qquad (25)$$

Fig. 1 Original image

Fig. 2 Fused images of IM1 and IM2


Fig. 3 Probability density function (PDF) of independent components x1 and x2

IM1 and IM2 are independent of each other. We then apply histogram equalization (uniform distribution) to the given images:

$$f_{IM}(IM_i) = \begin{cases} \dfrac{1}{2k}, & \text{if } IM_i \in [-k, k] \\ 0, & \text{elsewhere} \end{cases} \qquad (26)$$

Graphically, both the sources IM1 and IM2 and the fused images $x_1$, $x_2$ are independent of each other and have uniform distributions within the range $[-k, k]$, as shown in Fig. 3. The independent components $x_1$ and $x_2$ have uniform distributions within the range $-k$ to $k$, with magnitude $\frac{1}{2k}$:

$$F(x_1) = \frac{1}{k_{11}} f_{x_1}\!\left(\frac{X_1}{k_{11}}\right) * \frac{1}{k_{12}} f_{x_2}\!\left(\frac{X_1}{k_{12}}\right) \qquad (27)$$

$$F(x_2) = \frac{1}{k_{21}} f_{x_1}\!\left(\frac{X_2}{k_{21}}\right) * \frac{1}{k_{22}} f_{x_2}\!\left(\frac{X_2}{k_{22}}\right) \qquad (28)$$

where '*' denotes the convolution operator. Scaling the fused data, let us assume

$$\frac{1}{k_{11}} f_{x_1}\!\left(\frac{X_1}{k_{11}}\right) = f_{g_1}(g_1) \qquad (29)$$

$$\frac{1}{k_{12}} f_{x_2}\!\left(\frac{X_1}{k_{12}}\right) = f_{g_2}(g_2) \qquad (30)$$

$$f_{x_1}(x_1) = f_{g_1}(g_1) * f_{g_2}(g_2) \qquad (31)$$

Mathematically, we thus obtain the expression for the probability density function of the mixture $x_1$, and likewise for the mixture $x_2$; the graphical probability density functions (pdf) of mixtures $x_1$ and $x_2$ are shown in Fig. 4 [21].


Fig. 4 Probability distribution functions of fused images x1 and x2

Fig. 5 Fused image of 2M3

Then the resultant distributions of the observed images for $k_{12} > k_{11}$ and $k_{22} > k_{21}$ are given as the functions of $w_1$ and $w_2$ below (Figs. 5 and 6):

$$f(w_1) = \begin{cases} \dfrac{1}{4k_{11}k_{12}k^2}\left[(k_{11} + k_{12})k + w_1\right], & -(k_{11} + k_{12})k \le w_1 \le -(k_{12} - k_{11})k \\[4pt] \dfrac{1}{2k_{12}k}, & -(k_{12} - k_{11})k \le w_1 \le (k_{12} - k_{11})k \\[4pt] \dfrac{1}{4k_{11}k_{12}k^2}\left[(k_{11} + k_{12})k - w_1\right], & (k_{12} - k_{11})k \le w_1 \le (k_{11} + k_{12})k \\[4pt] 0, & \text{otherwise} \end{cases}$$

$$f(w_2) = \begin{cases} \dfrac{1}{4k_{21}k_{22}k^2}\left[(k_{21} + k_{22})k + w_2\right], & -(k_{21} + k_{22})k \le w_2 \le -(k_{22} - k_{21})k \\[4pt] \dfrac{1}{2k_{22}k}, & -(k_{22} - k_{21})k \le w_2 \le (k_{22} - k_{21})k \\[4pt] \dfrac{1}{4k_{21}k_{22}k^2}\left[(k_{21} + k_{22})k - w_2\right], & (k_{22} - k_{21})k \le w_2 \le (k_{21} + k_{22})k \\[4pt] 0, & \text{otherwise} \end{cases}$$


Fig. 6 Fused image of 3M4

5 Scatter Plot of Mixed Image

The scatter plots show the uncorrelated mixtures of the independent components; when the mixtures are uncorrelated, their distribution is no longer the same as that of the sources. The independent components are mixed using an orthogonal mixing matrix, which corresponds to a rotation of the plane. From the edges of the resulting square, we estimate the rotation that recovers the original components, since nonlinear correlations reveal the original components. Two independent components with uniform distributions are used (Figs. 7, 8, 9, 10 and 11).

Fig. 7 Scatter plot of mixtures X1 and X2 (1M2) (K11 = 0.467, K12 = 0.23, K21 = 0.33, K22 = 0.667); the horizontal axis is labeled X1 and the vertical axis X2


Fig. 8 Scatter plot of mixtures X1 and X2 (2M3, 3M4) (K11 = 0.467, K12 = 0.23, K21 = 0.33, K22 = 0.667); the horizontal axis is labeled X1 and the vertical axis X2

Fig. 9 Separated images 1M2


6 Results and Discussion

For real-image separation of fused images, this technique has been evaluated on six fused real-image pairs, and its performance is analyzed in terms of signal-to-interference ratio (SIR) and peak signal-to-noise ratio (PSNR). These merged images, for k11 = 0.467, k12 = 0.29, k21 = 0.33, and k22 = 0.67, are generated using four randomly chosen real images in bitmap form (Tables 1 and 2). A sketch of the two metrics follows.
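A minimal sketch of both metrics, assuming 8-bit images; the paper does not state its exact SIR formula, so the linear power-ratio definition below is an assumption:

```python
import numpy as np

def psnr(original, separated, peak=255.0):
    """Peak signal-to-noise ratio (dB) between an original image and its
    separated estimate."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(separated, dtype=float)
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sir(source, estimate):
    """Signal-to-interference ratio; one common linear definition is the
    power of the true source over the power of the residual. This exact
    formula is an assumption, not taken from the paper."""
    s = np.asarray(source, dtype=float)
    r = np.asarray(estimate, dtype=float) - s
    return np.sum(s ** 2) / np.sum(r ** 2)
```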


Fig. 10 Separated images 2M3 (psnr = 8.1890, SIR = 1.7532E+003; psnr = 15.2778, SIR = 1.732E+003)

Fig. 11 Separated images 3M4 (psnr = 8.7393, SIR = 2.247E+003; psnr = 16.0909, SIR = 2.2254E+005)

Table 1 Estimated matrix coefficients for the image combinations

Mixture   k11    k12    k21        k22
1M2       0.52   0.23   3.30E−01   6.62E−01
1M3       0.52   0.23   0.33       6.62E−01
1M4       0.52   0.23   3.30E−01   6.62E−01
2M3       0.52   0.23   3.31E−01   6.62E−01
2M4       0.52   0.23   3.31E−01   6.62E−01

Table 2 Results with the scatter method

Mixture   PSNR1    PSNR2     SIR1       SIR2
1M2       9.6594   17.0704   1.97E+01   2.35E+01
1M3       8.6195   14.9952   17.9967    2.31E+01
1M4       8.9315   17.2053   2.00E+01   2.35E+01
2M3       8.189    15.2778   1.86E+01   2.29E+01
2M4       8.3965   16.4529   1.92E+01   2.32E+01

Actual matrix:

$$K = \begin{bmatrix} 0.465 & 0.23 \\ 0.33 & 0.667 \end{bmatrix}$$

7 Conclusion

The given technique for image separation, which depends on the scatter graphical plot, successfully separates the histogram-equalized fused real images; in this paper, we separate images with the scatter graphical method. The main problem is: how can we estimate the mixing matrix? Since image separation aims at estimating both the original images and the mixing matrix using only the observations, our aim in estimating the mixing matrix is to obtain an estimate of the source 2-D signals. With some information about the sources, and on the basis of this information, we calculate the mixing coefficients with the help of the scatter graphical method. The assumptions made in finding the mixing matrix are: (1) the image sources are independent of each other, and (2) the fused images are noise-free. In this paper, we assume that we have an idea about the distribution of the sources and the different types of graphical structures; by analyzing these structures, we can estimate the mixing coefficients easily. We take two images with weighting coefficients (k11, k12, k21, k22). All the different cases for the two observed fused images are listed below.

Mixture   Structure       Estimating coefficient
X1, X2    Straight line   k11 = k12 = k21 = k22
X1, X2    Rhombus         k11 = k22, k12 = k21

We have carefully chosen several different fused-image combinations from four different samples of proportionate mixtures of mixed images, and then calculated the PSNR and the signal-to-interference ratio of the difference between the original images and the images separated by the scatter graphical method. In this paper, the scatter graphical algorithm gives better results, in terms of PSNR and SIR, than the other techniques considered.

References
1. Singh, D.K., Tripathi, S., Kalra, P.K.: Separation of image mixture using complex ICA. In: Proceedings of ASID '06, pp. 8–12. New Delhi
2. Tonazzini, A., Bedini, L., Salerno, E.: A Markov model for blind image separation by a mean-field EM algorithm. IEEE Trans. Image Process. 15(2) (2006)
3. Kumari, M., Wajid, M.: Source separation of independent components. LRC, JUIT, 2013, SPR 621 KUM, SPM1327
4. Carasso, D., Vizel, E., Zeevi, Y.Y.: Blind source separation using mixtures scatter plot properties. In: 2009 16th International Conference on Digital Signal Processing. IEEE (2009)
5. Sternberg, S.R.: Biomedical image processing. Computer 16(1), 22–34 (1983)
6. Parra, L., Sajda, P.: Blind source separation via generalized eigenvalue decomposition. J. Mach. Learn. Res. 4, 1261–1269 (2003)
7. Zarzoso, V., Comon, P.: Robust independent component analysis for blind source separation and extraction with application in electrocardiography. In: 30th Annual International IEEE EMBS Conference, Vancouver, British Columbia, Canada, August 20–24, 2008
8. Choras, R.S.: Image feature extraction techniques and their applications for CBIR and biometrics systems. Int. J. Biol. Biomed. Eng. 1(1), 6–16 (2007)
9. Ziquan, H.: Algebraic feature extraction of image for recognition. Pattern Recogn. 24(3), 211–219 (1991)
10. Zou, W., Li, Y., Lo, K.C., Chi, Z.: Improvement of image classification with wavelet and independent component analysis (ICA) based on a structured neural network. In: 2006 International Joint Conference on Neural Networks, Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada, July 16–21, 2006
11. Chye, K.C., Mukherjee, J., Mitra, S.K.: New efficient methods of image compression in digital cameras with color filter array. IEEE Trans. Consum. Electron. 49(4), 1448–1456 (2003)
12. Jadhav, S.D., Bhalchandra, A.S.: Blind source separation based robust digital image watermarking using wavelet domain embedding. In: 2010 IEEE Conference on Cybernetics and Intelligent Systems (CIS). IEEE (2010)
13. Kundur, D., Hatzinakos, D.: A robust digital image watermarking scheme using the wavelet-based fusion. In: International Conference on Image Processing, vol. 1. IEEE Computer Society (1997)
14. Huadong, D., Yongqi, W., Yaming, C.: Studies on cloud detection of atmospheric remote sensing image using ICA algorithm. In: 2nd International Congress on Image and Signal Processing, CISP'09. IEEE (2009)
15. Ye, N. (ed.): The Handbook of Data Mining, vol. 24. Lawrence Erlbaum Associates, Publishers (2003)
16. Zhao, Q., Jain, A.K.: Model based separation of overlapping latent fingerprints. IEEE Trans. Inf. Forensics Secur. 7(3), 904–918 (2012)
17. Tonazzini, A., Bedini, L., Salerno, E.: A Markov model for blind image separation by a mean-field EM algorithm. IEEE Trans. Image Process. 15(2), 473–482 (2006)
18. Chen, F., et al.: Separating overlapped fingerprints. IEEE Trans. Inf. Forensics Secur. 6(2), 346–359 (2011)
19. Hyvärinen, A., Karhunen, J., Oja, E.: Independent Component Analysis, vol. 46. Wiley (2004)
20. Hyvarinen, A.: Blind source separation by nonstationarity of variance: a cumulant-based approach. IEEE Trans. Neural Networks 12(6), 1471–1474 (2001)

Ascending and Descending Order of Random Projections: Comparative Analysis of High-Dimensional Data Clustering

Raghunadh Pasunuri, Vadlamudi China Venkaiah and Bhaskar Dhariyal

Abstract Random projection has been used in many applications for dimensionality reduction. In this paper, a variant of the iterative random projection K-means algorithm for clustering high-dimensional data is proposed and validated experimentally. The iterative random projection K-means (IRP K-means) method [1] is a fusion of dimensionality reduction (random projection) and clustering (K-means). This method starts with a chosen low dimension and gradually increases the dimensionality in each K-means iteration; K-means is applied in each iteration on the projected data. The proposed variant, in contrast to IRP K-means, starts with the high dimension and gradually reduces the dimensionality. The performance of the proposed algorithm is tested on five high-dimensional data sets; of these, two are image and three are gene-expression data sets. A comparative analysis is carried out against K-means clustering using RP-Kmeans and IRP-Kmeans, based on the K-means objective function, that is, the mean squared error (MSE). It indicates that our variant of the IRP K-means method gives good clustering performance compared to the previous two (RP and IRP) methods. Specifically, for the AT&T Faces data set, our method achieved the best average result (9.2759 × 10^9), whereas the IRP-Kmeans average MSE is 1.9134 × 10^10. For the Yale image data set, our method gives an MSE of 1.6363 × 10^8, whereas the MSE of IRP-Kmeans is 3.45 × 10^8. For the GCM and Lung data sets we obtained a performance improvement of up to a factor of 10 in the average MSE. For the Leukemia data set, the average MSE is 3.6702 × 10^12 and 7.467 × 10^12 for the proposed and IRP-Kmeans methods, respectively. In summary, our proposed algorithm performs better than the other two methods on the given five data sets.

Keywords Clustering · High-dimensional data · K-means · Random Projection

R. Pasunuri (B) · V. China Venkaiah · B. Dhariyal School of Computer and Information Sciences, University of Hyderabad, Hyderabad, India e-mail: [email protected]
V. China Venkaiah e-mail: [email protected]
B. Dhariyal e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_14


1 Introduction

K-means [2] is a clustering algorithm in which data of n points in R^d and an integer K are given; the algorithm finds K cluster centers such that the mean squared error is minimized. It starts by initializing the centers with K randomly selected points. The centers are updated after each iteration by taking the mean of each cluster: in every iteration the means are recalculated and all points are reassigned to their closest center, which is the mean of the cluster points. The total squared error is reduced in each iteration, and the algorithm converges when it reaches a minimum of the squared error. The disadvantage of K-means is that it can be caught in a local minimum.

Random Projection (RP) [3] is a very popular and powerful technique for dimensionality reduction, which uses matrix multiplication to project the data into a lower-dimensional space. It uses a random matrix to project the original high-dimensional data into a low-dimensional subspace, in which the distances between the data points are approximately preserved. Fradkin and Madigan [4] performed a comparative analysis of the combination of PCA and RP with SVMs, decision trees, and nearest-neighbor methods. Bingham and Mannila [5] is another work in the literature in which different dimensionality reduction methods are compared for image and text data, reporting the distortion rate and computational complexity as performance measures. Fern and Brodley [6] used RP and ensemble methods to improve the clustering performance for high-dimensional data. Deegalla and Bostrom [7] applied PCA and RP to the nearest-neighbor classifier and reported the performance advantage when the number of dimensions grows quickly. An iterative version of the RP K-means algorithm is given in Cardoso and Wichert [1], which achieves some improvement over RP K-means.

A variant of the IRP K-means method that performs clustering of high-dimensional data using random projections over the iterative dimensions of the IRP K-means algorithm [1] is proposed in this work. We call this method the Variant of IRP K-means (VIRP K-means). The performance of VIRP K-means is compared with the related methods, namely IRP K-means and RP K-means. From the empirical results, we can say that the performance (mean squared error) of VIRP K-means is improved when compared to the RP K-means and IRP K-means methods. The results of the conducted experiments reveal that gradually decreasing the reduced dimensionality and then clustering in those low dimensions gives a better solution than clustering the original high-dimensional data.

The remaining contents of this paper are organized as follows: In Sect. 2, we describe K-means clustering. In Sect. 3, Random Projection (RP), RP K-means, and IRP K-means algorithms are presented. Section 4 presents the proposed VIRP K-means. Section 5 reports our experimental results. Section 6 ends with the conclusion and some future directions.


2 K-means Algorithm

K-means performs cluster analysis on low- and high-dimensional data. It is basically an iterative algorithm which takes N observations as input and divides them across K non-overlapping clusters. The clusters are identified by initializing K random points as centroids and iterating over the N observations. The centroids of the K clusters are calculated by minimizing the error function used to discriminate a point from its cluster, in this case the Euclidean distance. The smaller the error, the greater the "goodness" of that cluster. Let X = {x_i, i = 1, ..., N} be the set of N observations, and let these observations be grouped into K clusters C = {c_k, k = 1, ..., K}, where K ≪ N. The main goal of K-means is to reduce the squared Euclidean distance between the center of a cluster and the observations in the cluster. The mean of a cluster c_k is denoted by μ_k and is defined as [8, 9]:

\mu_k = \frac{1}{N_k} \sum_{x_i \in c_k} x_i \qquad (1)

where N_k is the number of observations in cluster c_k. Minimization of the error function can be done using a gradient-descent approach. It scales well with large data sets and is considered a good heuristic for optimizing the distance. The following are the steps involved in the K-means algorithm:

1. Randomly initialize K cluster centroids.
2. Calculate the Euclidean distance between each observation and each cluster center.
3. Find the closest center for each point and assign it to that cluster.
4. Find the new center, or mean, of each cluster using Eq. (1).
5. Repeat steps 2–4 until there is no change in the means.
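The update loop above is straightforward to express in code. Below is a minimal NumPy sketch of these five steps; the function name and its arguments are illustrative, not from the paper.

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Minimal K-means (Lloyd's algorithm) following steps 1-5 above."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize centroids with K randomly chosen observations.
    centers = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # Step 2: squared Euclidean distance from every point to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Step 3: assign each point to its closest center.
        labels = d2.argmin(axis=1)
        # Step 4: recompute each center as the mean of its cluster (Eq. 1).
        new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        # Step 5: stop when the means no longer change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    mse = d2[np.arange(len(X)), labels].mean()  # K-means objective (MSE)
    return labels, centers, mse
```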

3 Random Projection

Random Projection (RP) is a dimensionality reduction method that projects the data into a lower-dimensional space by a random matrix multiplication; it approximately preserves the distances between the points [4, 5]. The random projection method projects the original d-dimensional data onto a D-dimensional subspace (D ≪ d) using a random d × D orthogonal matrix P, where P_{d×D} has unit-length columns. Symbolically, it can be written as:

X_{N \times D}^{RP} = X_{N \times d} P_{d \times D} \qquad (2)

The theme of random projection is based on the Johnson–Lindenstrauss (JL) lemma. The Johnson–Lindenstrauss lemma [3] states that if N points in a vector space of dimension d are projected onto a randomly selected subspace of dimension D, then the Euclidean distances between the points are approximately preserved. More details about the JL lemma can be found in [10]. The statement of the JL lemma given in [10] is as follows:

Theorem 1 (JL Lemma) For any 0 < ε < 1 and any integer N, let D be a positive integer such that

D \ge 4 \left( \frac{\varepsilon^2}{2} - \frac{\varepsilon^3}{3} \right)^{-1} \ln N.

Then for any set V of N points in R^d, there is a map f : R^d → R^D such that for all u, v ∈ V,

(1 - \varepsilon)\, \|u - v\|^2 \le \|f(u) - f(v)\|^2 \le (1 + \varepsilon)\, \|u - v\|^2.

Here, the map f can be constructed in randomized polynomial time. A proof of this lemma is given in [10, 11].

Many researchers have proposed different methods for generating the random matrix [11, 12]. The computational cost of projecting the data is reduced by using integer entries and by using sparseness in the generation of the random matrix P. The matrix P is actually not orthogonal, but it would incur a large computational cost to make it orthogonal; however, there are almost orthogonal directions present in high-dimensional space [13], so these vectors with random directions can be considered orthogonal. In the literature, there are many algorithms for generating random projections which satisfy the JL lemma. Of these, the Achlioptas [11] algorithm is very famous and widely used. In [11], the elements of the random matrix P are defined as:

p_{ij} = \begin{cases} +1 & \text{with } \Pr = \frac{1}{2} \\ -1 & \text{with } \Pr = \frac{1}{2} \end{cases} \qquad (3)

or

p_{ij} = \begin{cases} +\sqrt{3} & \text{with } \Pr = \frac{1}{6} \\ 0 & \text{with } \Pr = \frac{2}{3} \\ -\sqrt{3} & \text{with } \Pr = \frac{1}{6} \end{cases} \qquad (4)

The computational complexity of random projection is O(d D N ) where d represents the original high-dimension of the input, D represents reduced dimensionality of the projected subspace and N is the size of the input data that is the number of samples it contains. It becomes O(cD N ), when the input X has c non-zero entries per column and is sparse [14].
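As a concrete illustration, here is a small NumPy sketch of Eq. (4): it draws a sparse Achlioptas matrix and projects the data as in Eq. (2). The scaling by 1/√D, which preserves expected squared distances, is a common convention assumed here, not something stated in the paper.

```python
import numpy as np

def achlioptas_matrix(d, D, seed=0):
    """Sparse random projection matrix with entries drawn as in Eq. (4)."""
    rng = np.random.default_rng(seed)
    vals = np.array([np.sqrt(3.0), 0.0, -np.sqrt(3.0)])
    P = rng.choice(vals, size=(d, D), p=[1/6, 2/3, 1/6])
    return P / np.sqrt(D)  # assumed scaling: distances preserved in expectation

def random_project(X, D, seed=0):
    """Eq. (2): X_RP (N x D) = X (N x d) @ P (d x D)."""
    P = achlioptas_matrix(X.shape[1], D, seed)
    return X @ P
```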

3.1 RP K-means

Several researchers have combined the K-means clustering algorithm with random projection [12, 15, 16]. The basic idea is to project the original high-dimensional data into a low-dimensional space and then perform clustering in this low-dimensional subspace, which effectively reduces the cost of each K-means iteration. The solution we get in the low-dimensional space approximates the one in the high dimension. RP K-means first initializes the cluster membership G randomly, selecting K points as cluster centers. It then generates a random matrix to project the input data X_{N×d} to D dimensions, where D < d, using the projection matrix P_{d×D}. The initial cluster centers C^{RP} are defined by the mean of each cluster in X^{RP} with the help of P and G. We apply K-means clustering on the projected data X^{RP}_{N×D} until convergence, or stop based on some stopping condition. The details of this method are described in Algorithm 1.

Algorithm 1 RP K-means [1]
Input: dimension D, data set X_{N×d}, number of clusters K
Output: cluster membership G
begin
1: Set G by taking K random points from X.
2: Set a random matrix P_{d×D}.
3: Set X^{RP}_{N×D} = X_{N×d} P_{d×D}.
4: Set C^{RP}_{K×D} by finding the mean of each cluster in X^{RP} according to G.
5: Find G with K-means on X^{RP} with C^{RP} as initialization.
6: return G

3.2 Iterative Version of RP K-means

It is an iterative algorithm [1] in which the dimension of the space is increased in each iteration so that local minima in the original space are avoided. The solution constructed in one iteration is used in the following iterations, thereby saving computations; this is analogous to cooling in simulated-annealing clustering [17]. Wrong cluster assignments are reduced as the dimensionality is increased, where a wrong cluster is defined by the Euclidean distance from the center to the point in the original space. The algorithm is the same as RP K-means, but here the projection and clustering are applied over many iterations. The projection dimension is increased in each iteration, and the clusters in the previous iterative dimension are the basis for initializing the clusters in the present dimension. The algorithm randomly selects K points from the input data set X and initializes the cluster membership G with them. The algorithm starts in dimension D1, with the randomly selected K points as initial centroids. The input data X is projected into a D1-dimensional space (D1 < d) by a random projection P1, obtaining X^{RP_1}. K-means clustering is performed on X^{RP_1} to get the new cluster membership G, and this G becomes the basis for initializing K-means in the next dimension D2. We recalculate the centroids in dimension D2 (D1 ≤ D2 < d) by using the cluster membership G obtained from K-means in dimension D1 together with X^{RP_2}, obtaining the new initial centroids C^{RP_2} in the new D2-dimensional space. In D2 we perform K-means clustering again, using C^{RP_2} as initialization. This process is repeated until the last dimension Dl (D1 ≤ D2 ≤ ... ≤ Dl < d) is reached, returning the cluster membership from Dl. The algorithm is based on the heuristic relation D1 ≤ D2 ≤ ... ≤ Dl < d, which is analogous to simulated-annealing cooling. The procedure is presented in Algorithm 2.

Algorithm 2 Iterative RP K-means
Input: list of dimensions D_a, a = 1, 2, ..., l; data set X_{N×d}; number of clusters K
Output: cluster membership G
begin
1: Select K random points from X and assign them as G.
2: for a = 1 to l do
3:   Define a random matrix P_a (d × D_a)
4:   Set X^{RP_a} (N × D_a) = X P_a
5:   Set C^{RP_a} (K × D_a) by finding the mean of each cluster in X^{RP_a} according to G.
6:   Apply K-means on X^{RP_a} with C^{RP_a} as initialization to get G.
7: end for
8: return G
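A compact sketch of this loop is given below, using scikit-learn's KMeans for step 6; the dims schedule is ascending for IRP, and simply reversing it yields the descending variant proposed in Sect. 4. Function and variable names, and the Gaussian projection matrix, are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def iterative_rp_kmeans(X, K, dims, seed=0):
    """Algorithm 2: run K-means in a sequence of randomly projected spaces.

    dims ascending -> IRP K-means; dims descending -> the proposed variant.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    # Step 1: initial membership from K random points.
    centers0 = X[rng.choice(N, size=K, replace=False)]
    labels = np.argmin(((X[:, None] - centers0[None]) ** 2).sum(-1), axis=1)
    for Da in dims:
        P = rng.standard_normal((d, Da)) / np.sqrt(Da)  # steps 3-4: project
        X_rp = X @ P
        # Step 5: centroids of the projected data from the current membership.
        C = np.array([X_rp[labels == k].mean(axis=0) if np.any(labels == k)
                      else X_rp[rng.integers(N)] for k in range(K)])
        # Step 6: K-means in the projected space, warm-started from C.
        km = KMeans(n_clusters=K, init=C, n_init=1).fit(X_rp)
        labels = km.labels_
    return labels

# Example schedules for a data set with d features:
#   IRP:  dims = [d // 8, d // 4, d // 2]   (ascending)
#   VIRP: dims = [d // 2, d // 4, d // 8]   (descending, as in Sect. 4)
```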

4 Proposed Variant of IRP-Kmeans

The proposed variant is based on iterative dimension reduction using random projections (Algorithm 2), but instead of gradually increasing the dimension, we decrease it from high to low in the random-projection part of the algorithm. Similar to IRP-Kmeans, we capture the solution constructed in one iteration and use it in the subsequent iteration; in this way, the characteristics of the previous generation are transferred to the following generation. In our experiments, we ran our method for the reduced dimensions from the list (d, d/2, d/4, d/8).

5 Experimental Study

The performance analysis is done on five high-dimensional data sets: two image data sets (AT&T, Yale) and three microarray (also called gene expression) data sets, namely GCM, Leukemia, and Lung.


Algorithm 3 Proposed Variant
Input: list of dimensions D = (d/2, d/4, d/8), data set X_{N×d}, number of clusters K
Output: cluster membership G
begin
1: Select K random points from X and assign them as G.
2: Set D_a = d/2
3: Set a random matrix P_a (d × D_a)
4: Set X^{RP_a} (N × D_a) = X P_a
5: Set C^{RP_a} (K × D_a) by finding the mean of each cluster in X^{RP_a} according to G.
6: If D_a > d/8 then
7:   D_a = D_a / 2
8:   and go to Step 3
9: Apply K-means on X^{RP_a} with C^{RP_a} as initialization to get G.
10: return G

Table 1 Specifications of data sets

Data set          No. of samples   No. of features   No. of classes
AT&T faces (ORL)  400              10304             40
Yale              165              1024              15
GCM               280              16063             2
Leukemia          72               7129              2
Lung              181              12533             2

The mean squared error (MSE), which is the objective function of K-means clustering, is taken as the measure to report the performance of the proposed method.

5.1 Data Sets

In this study, we considered five high-dimensional data sets to evaluate the performance of the proposed variation of the IRP K-means algorithm. Detailed specifications of the data sets are presented in Table 1. The AT&T Database of Faces (formerly the ORL Database) consists of a total of 400 images of 40 different persons. The Global Cancer Map (GCM) data set consists of 190 tumor samples and 90 normal tissue samples. The Leukemia data set contains 72 samples of two types: 25 acute lymphoblastic leukemia (ALL) and 47 acute myeloid leukemia (AML). The Lung cancer data set is gene expression data containing 181 samples, which are classified into malignant pleural mesothelioma (MPM) and adenocarcinoma (ADCA). The Yale data set contains 165 face images of 15 persons, 11 images per person, with a dimensionality of 1024.


Table 2 MSE for several data sets (sample average over 20 runs)

Data set          D     IRP-Kmeans (Classical normal matrix)   IRP-Kmeans (Achlioptas random matrix)
AT&T faces (ORL)  221   7.8850 × 10^8                          8.1216 × 10^8
Yale              166   1.2311 × 10^8                          1.459 × 10^8
GCM               234   4.5467 × 10^11                         4.9832 × 10^11
Leukemia          212   4.1263 × 10^11                         4.1620 × 10^11
Lung              226   10.88 × 10^10                          4.43 × 10^10

Table 3 MSE for several data sets (sample average over 20 runs)

S.No.   Data set           RP              IRP              Proposed (VIRP)
1       AT&T faces (ORL)   8.53 × 10^9     19.134 × 10^9    9.2759 × 10^9
2       Yale               1.61 × 10^8     3.45 × 10^8      1.6363 × 10^8
3       GCM                1.20 × 10^13    1.551 × 10^13    0.74438 × 10^13
4       Leukemia           4.17 × 10^12    7.467 × 10^12    3.6702 × 10^12
5       Lung               1.19 × 10^12    13.3 × 10^12     1.309 × 10^12

5.2 Results and Discussion

The system configuration used to perform the experiments is: 4 GB RAM, Intel i5 third-generation processor. Using Theorem 1, we calculated the dimension bound for the data sets considered for experimentation; the ε value is fixed at 0.99 in all the experiments. With the implementation of Cardoso and Wichert [1] and with our own implementation using the Achlioptas random matrix, we obtained almost similar MSE values for the several data sets, the main exception being the Lung data set, where the MSE differs by roughly a factor of 10; these results are presented in Table 2. The average MSE over 20 runs for the proposed variant, along with the other methods, is shown in Table 3. From this, it is evident that the proposed variant outperforms the IRP K-means method on the given five high-dimensional data sets. When compared with RP K-means, the performance of the proposed method is almost the same for all the data sets considered except GCM, for which the performance of VIRP is roughly double that of the RP K-means algorithm. The performance of the proposed VIRP method is about double that of the IRP method on the first four data sets, and it is about 10 times better for the Lung data set.


6 Conclusion and Future Directions

In this paper, we have proposed a variant of the IRP K-means algorithm that gradually decreases the dimension in each iteration, thereby preserving the inter-point distances efficiently; this is confirmed by the empirical results presented above. Our method is compared with the single Random Projection (RP) and IRP K-means (IRP) methods. Compared to these two methods, our proposed method gives the best results for the given high-dimensional data sets. Future work may involve using other dimensionality reduction techniques to generate the random matrix and verifying whether the method preserves the inter-point distances, and also a comparative analysis of the proposed method with some standard clustering algorithms.

Acknowledgements The first author would like to thank Dr. Angelo Cardoso for providing the IRP-Kmeans code.

References
1. Cardoso, A., Wichert, A.: Iterative random projections for high-dimensional data clustering. Pattern Recogn. Lett. 33, 1749–1755 (2012)
2. Lloyd, S.: Least squares quantization in PCM. IEEE Trans. Inf. Theory 28, 129–137 (1982)
3. Johnson, W., Lindenstrauss, J.: Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math. 26, 189–206 (1984)
4. Fradkin, D., Madigan, D.: Experiments with random projections for machine learning. In: KDD '03: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2003)
5. Bingham, E., Mannila, H.: Random projection in dimensionality reduction: applications to image and text data. In: KDD '01: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2001)
6. Fern, X.Z., Brodley, C.E.: Random projection for high dimensional data clustering: a cluster ensemble approach. In: Proceedings of the Twentieth International Conference on Machine Learning (2003)
7. Deegalla, S., Bostrom, H.: Reducing high-dimensional data by principal component analysis vs. random projection for nearest neighbor classification. In: Proceedings of the 5th International Conference on Machine Learning and Applications (ICMLA), FL, pp. 245–250 (2006)
8. Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recogn. 31, 651–666 (2010)
9. Alshamiri, A.K., Singh, A., Surampudi, B.R.: A novel ELM K-means algorithm for clustering. In: Proceedings of the 5th International Conference on Swarm, Evolutionary and Memetic Computing (SEMCCO), pp. 212–222. Bhubaneswar, India (2014)
10. Dasgupta, S., Gupta, A.: An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algorithms 22, 60–65 (2003)
11. Achlioptas, D.: Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J. Comput. Syst. Sci. 66, 671–687. Special Issue on PODS 2001
12. Li, P., Hastie, T.J., Church, K.W.: Very sparse random projections. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 287–296. ACM, New York, NY, USA (2006)
13. Hecht-Nielsen, R.: Context vectors: general purpose approximate meaning representations self-organized from raw data. In: Computational Intelligence: Imitating Life, pp. 43–56 (1994)
14. Papadimitriou, C.H., Raghavan, P., Tamaki, H., Vempala, S.: Latent semantic indexing: a probabilistic analysis. In: Proceedings of the 17th ACM Symposium on the Principles of Database Systems, pp. 159–168 (1998)
15. Boutsidis, C., Zouzias, A., Drineas, P.: Random projections for k-means clustering. Adv. Neural Inf. Process. Syst. 23, 298–306 (2010)
16. Dasgupta, S.: Experiments with random projection. In: Uncertainty in Artificial Intelligence: Proceedings of the Sixteenth Conference (UAI-2000), pp. 143–151 (2000)
17. Selim, S.Z., Alsultan, K.: A simulated annealing algorithm for the clustering problem. Pattern Recogn. 24, 1003–1008 (1991)
18. Magen, A.: Dimensionality reductions that preserve volumes and distance to affine spaces, and their algorithmic applications. In: Randomization and Approximation Techniques in Computer Science. Lecture Notes in Computer Science, vol. 2483, pp. 239–253. Springer (2002)

Speed Control of the Sensorless BLDC Motor Drive Through Different Controllers

Vikas Verma, Nidhi Singh Pal and Bhavnesh Kumar

Abstract Nowadays, brushless DC (BLDC) motors are gaining popularity and are replacing brushed motors in numerous applications due to their high efficiency, low maintenance, and effective operation. This paper presents sensorless speed control of a BLDC drive using the technique of zero-crossing detection of the indirect back EMF. Several controllers are employed and compared for acquiring effective control over the speed. The paper demonstrates how the performance of the sensorless BLDC drive is evaluated with different controller schemes, namely a conventional (PI) controller, an anti-windup PI, a fuzzy-based controller, and a hybrid (Fuzzy-PI) controller, at different loads and speeds. Their results are compared, with the fuzzy-based controller offering a better response in most cases. This reduces the cost and complexity without compromising the performance; the fuzzy logic controller is used to enhance robustness and reliability. The effectiveness of the work is demonstrated through simulation done in the MATLAB (version 2013) environment, and the simulation results of the sensorless drive are analyzed.

Keywords BLDC motor · Back EMF sensing · Sensorless drive · PI · Anti-windup-PI · Fuzzy logic · Hybrid (Fuzzy-PI)

1 Introduction

In industrial as well as consumer applications, brushless DC (BLDC) motors are widely used because they are efficient and reliable, require little maintenance, and are silent in operation [1, 2]. In recent times there has been high demand for this type of permanent-magnet drive. The BLDC motor is a permanent magnet synchronous motor in which the electromagnet does not move: it is replaced by a permanent magnet that rotates, while the armature is at rest [2]. Electronic commutators are employed for successful commutation of the current to the armature, but information on the rotor position is required for this. The position information is mainly obtained with position sensors, which do not perform well in high-temperature applications [3]; due to sensor failure at high temperatures, the system becomes unstable. Another drawback relates to the traditional controllers, which are generally used by industries in large numbers. These controllers are simple in structure and easy to implement, but their performance is affected by load disturbances, nonlinearity, and conditions such as parameter variations. Problems like rollover, which is related to the saturation effect, also compel the development of new schemes to attain better speed control. For the sensorless BLDC drive, the traditional controllers are therefore augmented with intelligent controllers, such as fuzzy controllers, to optimize the system performance [4, 5]. Also, to reduce the saturation effect, an anti-windup scheme is used instead of the traditional one. In recent times, different control algorithms have been incorporated with conventional controllers to achieve better control, and hybrid controllers are now widely used [6]. In these, a combination of both the traditional PI and fuzzy controllers acts accordingly: one reduces the error and disturbances due to load variation, and the other minimizes the error due to large changes in the input. This paper develops sensorless speed control of a BLDC motor drive based on indirect back-EMF zero-crossing detection with intelligent techniques such as a fuzzy logic controller. This overcomes the drawbacks related to the sensored drive as well as those of the conventional controllers. The sensorless drive is reliable and has good tracking capability over a wide range of speed; it is also cost-effective, which makes the proposed system economical.

V. Verma (B) · N. S. Pal Gautam Buddha University, Greater Noida, India e-mail: [email protected]
N. S. Pal e-mail: [email protected]
B. Kumar Netaji Shubhash Institute of Technology, Delhi, India e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_15

2 Sensorless Operation of BLDC Motor Drive

Permanent-magnet BLDC motors are controlled electronically and require rotor-position information to achieve proper commutation of the current in the stator windings. Hall sensors are used to obtain the rotor-position information, but their use is not desirable because they increase the cost and the structural complexity, and the sensors fail in high-temperature applications. Various methods exist for achieving sensorless operation, such as the third-harmonic voltage integration method, the flux estimation method, observer-based techniques, detection of freewheeling-diode conduction, and back-EMF sensing techniques [7–9]. Among all these, the most efficient is the back-EMF sensing method, which yields the proper commutation sequence in the motor and also estimates the rotor position so that the phases rotate in synchronism [10, 11]. In the sensorless drive scheme, only electrical quantities are used. At standstill the back-EMF value is zero, as it is proportional to the rotor speed; this operation therefore shows a limitation


Fig. 1 Simulink model of proposed sensorless speed control scheme of BLDCM drive

when the speed is low or zero. To tackle this, a strategy based on starting the drive in open loop is adopted [12]. The BLDC motor has magnetic-saturation characteristics, so with the variation in inductance the current value also changes, which helps in determining the initial rotor position. The drive is started with the open-loop strategy and afterwards transferred to the sensorless mode [13, 14].

3 Proposed Sensorless Speed Control Scheme of BLDCM Drive

The designed sensorless model along with the different controllers is shown in Fig. 1.

3.1 Speed Control of Sensorless BLDC Motor Drive

The signals generated by the above sensing scheme are applied to the different controllers, and the output responses of this sensorless drive with the conventional controller, the anti-windup scheme, intelligent techniques such as the fuzzy logic controller, and the hybrid (combination of conventional and fuzzy) controller are analyzed and compared.

PI Controller The structure of the traditional PI controller is shown in Fig. 2. It is the most common controller and is prominently used in industries. The gain values of this controller are obtained through the Ziegler–Nichols tuning method, as shown in Table 1.


Fig. 2 Traditional PI controller structure

Table 1 PI controller gain values

Controller   Kp      Ki
PI           0.033   10.61

Anti-windup-PI controller The performance is degraded because of the effect, which occurs from the rollover action of the traditional PI controller. In the case of a conventional PI controller, this problem arises only because of the saturation effect. This happens because of the large input value of error or due to the constant input value to the integrator. So to remove this drawback, the input value resulting from the difference of the unsaturated and saturated output is given to the integrator. This improves the output performance. For this, there is a modification in the conventional controller and named as anti-windup as shown in Fig. 3. Fuzzy Logic-Based Controller Fuzzy logic controllers involve the control logic which is based on the approach of the control with a linguistic variable. Figure 4 shows the steps involved in it. Fuzzy logic involves fuzzification, inference system, and defuzzification. The Fuzzy logic controller (FLC) is designed by using the Fuzzy Toolbox in MATLAB. In our work the logic preferred is of Mamdani type. Change in error of speed (ce) and error of speed (e) are the two inputs for this particular controller. The functions taken are in triangular membership. There are totally 49 (7 * 7) rules being developed in the rule block. Hybrid (Fuzzy-PI) Controller The specified Fuzzy-PI controller is the type of a hybrid controller that utilizes both PI and Fuzzy Logic Controllers, which provide

Speed Control of the Sensorless BLDC Motor Drive …

147

Fig. 4 Basic fuzzy logic controller structure

Fig. 5 Hybrid (Fuzzy-PI) controller structure

the best response in nonlinearity. Both the controllers also give the good response during speed tracking for steady state. The hybrid structure is shown in Fig. 5. By combining both the controllers the error and overshoot are minimized as well as it also gives the fast output response of the system. For the hybrid (Fuzzy-PI) algorithm, the structures are designed in such a way so that switching can occur smoothly. The designing is done in such a manner so that the utilization of both the controllers can be acquired by smooth switching between the lower speed and the higher speed gains.

4 Simulation Results and Discussion The different control strategies are applied to sensorless BLDC motor drive, which is verified through the simulation with all the standard specifications. This sensorless

148

V. Verma et al.

Fig. 6 Speed curve at 3000 rpm (fixed speed) under no load

Fig. 7 Speed curve with 2 Nm at fixed speed 3000 rpm

drive is simulated by using different types of controllers designed for this specific work. The output curves obtained from this sensorless BLDC drive has been compared and evaluated under the fixed speed as well as the variable speed with different loading conditions. All these conditions are described in the figures below. Figure 6 shows the output speed curves of four different controllers which are hybrid (Fuzzy-PI) controller, Fuzzy controller, anti-windup-PI and the traditional PI controller is observed at 3000 rpm (fixed speed) under the condition of no load. The output response curves ensure that the particular Fuzzy-PI controllers is fast in comparison with the other controllers and settling time is 0.016 s, second, the antiwindup-PI also shows fast response and less peak overshoot than a conventional PI controller. Figure 7 shows the output speed curves of all the four controllers used for the condition, when the motor is rotating at 3000 rpm under the 2 Nm loading condition. As Fuzzy-PI controller takes 0.012 s for settling, which is very less than another controller. Figure 8 shows the performance curves of the different controllers with the speed change of 3000–1500 rpm in 0.5 s for the condition of no load. Particularly, Fuzzy-PI controller gives a very fast response in comparison with others and the time taken for settling is 0.010 s.

Speed Control of the Sensorless BLDC Motor Drive …

149

Fig. 8 Speed curve at change in speed 3000–1500 rpm in 0.5 s without any load

Fig. 9 Speed curve at changing speed 3000–1500 rpm with 2 Nm load

Figure 9 shows the performance curves of the different controllers under a fixed load of (2 Nm) with the changing speed of 3000 rpm–1500 in 0.5 s. As under loading condition also Fuzzy-PI controller rises fast and has minimum peak overshoot and settling time of this controller is also less than the conventional controllers. Figure 10 shows the output performance curves of the different controllers at the variable changes in speed that is from 3000 rpm to 1000 rpm in 0.5 s then from 1000 rpm to 3000 in 1 s under the condition of no load. The output response shows the reliability and tracking capability of the system. In which the Fuzzy-PI controllers is much faster than the other controllers. Figure 11 shows the performance curves of the different controllers at the fixed speed of 3000 rpm under the condition of load variation from 1 to 4 Nm in 0.5 s. As with changing load the time of settling is better in case of Fuzzy-PI. In this case, even the anti-windup-PI also takes nearly same time as of PI controller. For the evaluation of the output performance of all the four controllers, which is employed in the sensorless speed control of BLDC drive is also being compared in respect of (t r ) rise time, (t s ) settling time, (%M p ) peak overshoot is shown in Table 2.

0.056

0.053

0.052

0.051

0.54

0.036

0.039

0.036

0.033

0.034

3000 rpm with 2 Nm load 3000–2000 rpm at no load 3000–2000 rpm with 2 Nm load Variable speed with no load 3000 rpm with load change of 1–4 Nm in 0.5 s

%mp

1.6

1.4

1.8

2.5

2.9

0.021

0.027

0.025

0.028

0.026

0.029

0.51

0.041

0.045

0.040

0.042

0.049

ts

tr

0.059

2.7

tr

0.037

ts

Anti-windup-PI

Controllers PI

3000 rpm at no load

Parameters

Table 2 Performance comparison of different controllers

%mp

1.9

1.6

1.7

1.4

1.2

1.5

0.012

0.015

0.014

0.016

0.014

0.019

tr

Fuzzy ts

0.021

0.023

0.020

0.021

0.022

0.024

0.01

0.04

0.05

0.03

0.11

0.02

%mp

0.006

0.004

0.003

0.005

0.007

0.009

tr

Hybrid (fuzzy-PI) ts

0.011

0.012

0.011

0.010

0.012

0.016

%mp

0.000

0.001

0.003

0.002

0.001

0.003

150 V. Verma et al.

Speed Control of the Sensorless BLDC Motor Drive …

151

Fig. 10 Speed curve at variable speed changing: 3000–2000 rpm in 0.5 s and 3000 rpm at 1 s with no load

Fig. 11 Speed curve at fixed speed of 3000 rpm with change in load from 1 to 4 Nm at 0.5 s

5 Conclusion In this paper, the sensorless speed control of three-phase BLDC motor with different types of Intelligent and conventional controllers based on the sensorless technique of back EMF sensing have been simulated using the MATLAB Version (2013) and their performance is observed. The simulation results show and depict output performances for conventional PI, anti-windup-PI, Fuzzy logic and hybrid (Fuzzy-PI) controllers. Their performance is compared in the respect of time taken to rise, the time taken to settle down and percentage of peak overshoot at a fixed speed as well as at variable speed with different loading conditions. The results obtained from the simulation shows that Fuzzy-PI shows the best performance among all controllers. Both Fuzzy as well as Fuzzy-PI shows better results than conventional controllers. Even anti-windup-PI controller also shows fast response and minimum peak overshoot than a conventional PI controller. The results of this designed model demonstrate that the system is cost-effective, reliable, and robust which makes it suitable for robotics, fuel pumps, and the industrial automation-related applications.

152

V. Verma et al.

References 1. Miller J.E.: Brushless permanent magnet dc motor drives. Power Eng. J. 2(1) (1998) 2. Bose, B.K.: Modern Power Electronics and AC Drives. Pearson Education Asia (2002) 3. Kim, T., Lee, H.-W. Ehsani, M.: Position sensorless brushless DC motor drives: review and future trends. IEEE, IET Electr. Power Appl. 1(4), 557–564 (2007) 4. Sriram, J., Sureshkumar, K.: Speed control Of BLDC motor using fuzzy logic controller based on sensorless technique. In: IEEE International Conference on Green Computing Communication and Electrical Engineering (2014) 5. Anjali, A.R.: Control of three phase bldc motor using fuzzy logic controller. Int. J. Eng. Res. Technol. 2 (2013) ISSN: 2278-0181 6. Abidin, M.F.Z., Ishak, D., Hasan, A.H.A.: A comparative study of PI, fuzzy and hybrid PI fuzzy controller for speed control of BLDC motor drives. In: Proceedings of the IEEE International Conference in Computer Application and Industrial Electronics Application, pp. 189–195. Malaysia (2011) 7. Shao, J., Nolan, D., Hopkins, T.: A novel direct back EMF detection for sensorless brushless DC motor drives. In: IEEE Conference on Applied Power Electronics Conference and Exposition Seventeenth Annual, vol. 1 (2002) 8. Mathew, T., Sam, C.A.: Closed loop control of BLDC motor using fuzzy logic and single current sensor. In: IEEE International Conference on Advanced Computing And Communication System, pp. 19–21 (2013) 9. Damordhan, P., Sandeep, R., Vasudevan, K.: Simple position sensorless starting method for brushless DC motor. IET Electr. Power Appl. 2(1), 49–55 (2008) 10. Singh, S., Singh, S.: A control scheme for position sensorless PMBLDC motor from standstill to rated speed. In: IEEE International Conference on Control, Instrumentation, Energy and Communication (2014) 11. Somantham, R., Prasad, P.V., Rajkumar, A.D.: Modelling and simulation of sensorless control of PMBLDC motor using zero crossing back emf detection. In: IEEE Intenational Symposiyum on Power Electronics, Drives, Automotive Motion (2006) 12. Lad, C.K., Chudamani, R.: Sensorless brushless DC motor drive based on commutation instants derived from the line voltages and line voltage differences. In: IEEE Annual Indian Conference (2013) 13. Damodharan, P., Vasudevan, K.: Line voltage based indirect back-emf zero crossing detection of bldc motor for sensorless operation. Int. J. Power Energy Syst. 28 (2008) 14. Damordhan, P., Vasudevan, K.: Sensorless brushless DC motor drive based on the zero crossing detection of back EMF from the line voltage difference. IEEE Trans. Energy Conv. 25(3), 661–668 (2010)

Urban Drainage System Design Minimizing System Cost Constrained to Failure Depth and Duration Under Flooding Events Soon Ho Kwon , Donghwi Jung

and Joong Hoon Kim

Abstract Recently, property damages and loss of life caused by natural disasters are increasing in urban area because of local torrential rainfall, which is mostly originated from recent global climate change. Acceleration of population concentration and increase of impervious area from urbanization worsen the situation. Therefore, it is highly important to consider system resilience which is the system’s ability to prepare, react, and recover from a failure (e.g., flooding). This study proposes a resilience-constrained optimal design model of urban drainage network, which minimizes total system cost while satisfying predefined failure depth and duration (i.e., resilience measures). Optimal layout and pipe sizes are identified by the proposed model comprised of Harmony Search Algorithm (HSA) for optimization and Storm Water Management Model (SWMM) for dynamic hydrology-hydraulic simulation. The proposed model is applied to the design of Gasan urban drainage system in Seoul, Korea, and the resilience-based design obtained is compared to the least-cost design obtained with no constraint on the resilience measures. Keywords Urban drainage system (UDS) · Resilience · Harmony search

S. H. Kwon Department of Civil, Environmental and Architectural Engineering, Korea University, Seoul, South Korea D. Jung Research Center for Disaster Prevention Science and Technology, Korea University, Seoul, South Korea J. H. Kim (B) School of Civil, Environmental and Architectural Engineering, Korea University, Seoul 136-713, South Korea e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_16

153

154

S. H. Kwon et al.

1 Introduction Urban Drainage System (UDS) is an urban water infrastructure to carry wastewater and rainwater to the outlet of an urban basin from which they are either treated or discharged to a river. UDS consists of various components such as drainage pipe, detention reservoir, and pump station. Drainage pipe delivers rainwater entering into manholes to the downstream pipes whereas detention reservoir stores the delivered rainwater for reducing the peak discharge in downstream. Pump station expels the stored rainwater from low to high elevation against the gravity finally to riverside land. Therefore, determining the size and capacity of these components is an important task for securing cost-effective functionality of a UDS. Previous studies on the optimal design of UDS drain pipes are classified into two groups: one that determines pipe sizes only with fixed pipe layout and the other that optimizes both sizes and layout. [1, 2] have developed a model that minimizes total system cost by considering both slopes of pipes and sizes in the sewer system. Other studies have developed separate algorithms for layout generation and pipe sizing for UDS [3, 4]. However, few efforts have been devoted to maximizing system resilience, especially with respect to failure depth and recovery time. In this study, we introduce a resilience-constrained UDS optimal design model that determines both layout and pipe sizes to minimize total system cost satisfying a predefined level of resilience. The maximum nodal flooding volume is used as a failure depth indicator and considered in a constraint. The proposed resilienceconstrained optimal design model is demonstrated in the Gasan urban drainage network in Seoul, Korea. The optimal results obtained under different levels of failure depths were compared with respect to total system cost and resulting total flooding volume.

2 Resilience-Constrained Optimal Design Model The proposed UDS design model minimizes total system cost satisfying a constraint on the level of failure depth as follows in (1): Minimize F 

N  i1

Ci (Di ) × L i +

N 

Pj

(1)

j1

where C i (Di ) is the unit cost of the pipe I which is a function of Di ($); L i is the length of the pipe (m); Di is the diameter of conduit (m); Pj is the penalty cost ($); N is the total number of conduits in UDS. The penalty cost was calculated based on the total flooding volume of UDS. In addition, this study is calculated the objective function by considering the constraints as follows in (3)–(5):

Urban Drainage System Design Minimizing System Cost Constrained …

155

di  Di + 0.5(m)

(3)

failure depth < 80% × MAXMAXF

(4)

failure depth < 90% × MAXMAXF

(5)

where d i is the burial depth at each node (0.5 m is added considering the freezing depth). In this study, the level of failure depth is defined as the maximum value of each time interval’s maximum nodal flooding volumes (MAXMAXF). The base level of failure depth is obtained from the MAXMAXF for the least-cost design. The proposed model with two different levels of MAXMAXF, i.e., 80 and 90% of the base MAXMAXF is applied independently to the study network.

3 Study Area The Gasan sewer network in Seoul, Korea is as shown in Fig. 1. The study network consists of 32 pipes, 32 nodes, and sub-catchments. A pumping station is located at the outlet of the sewer network for expelling the collected rainwater to the mainstream. There are five pumps in the pumping station, the first to the third pumps have the identical capacity of 100 m3 /min. The fourth and the fifth pumps have the capacity of 170 m3 /min. The first and second pumps turn on when the water depth in the front detention reservoir reaches at 0.6 and 0.8 m, respectively, and the third and fifth pumps turn on at a water depth of 1 m.

4 Application Results Table 1 indicates the unit cost of pipes. In this study, the HSA is used to the design of minimizing system cost and to reduce flooding volume in each node for UDS. Applied parameters on this model are HMCR  0.8, PAR  0.05, and number of iterations  100,000. The least design cost based on proposed resilience-constrained optimal design model is calculated by considering constraint (see Table 2). Table 2 obtained under a different level of failure depths were compared with respect to least design cost and resulting total flooding volume. The results show that as the level of failure depth decrease, the total flooding volume is decreased. In addition, the pipe sizes are set larger because the total flooding volume decrease, the total system cost increased.

156

S. H. Kwon et al.

Fig. 1 The schematic of the sewer network

5 Conclusions This study proposes a method to apply the disaster response and management to prepare the damage and mitigate the property losses. The design of minimizing system cost in urban drainage system by integrating harmony search algorithm and stormwater management model was presented. The level of failure depth based on resilience-constrained optimal design model was calculated by considering least design cost and total flooding volume. The results of both return periods show that as the design cost increase, the total flooding volume decrease. Further research could be compared different flood damages with their corresponded design system costs by considering the importance of the buildings regarding their domestic of industrial application.

Urban Drainage System Design Minimizing System Cost Constrained …

157

Table 1 The unit cost of pipes Pipe size (m)

Unit cost ($/m)

0.25 0.30 0.35 0.40 0.45 0.50 0.60 0.70 0.80 0.90 1.00 1.10 1.20

239.20 246.68 270.04 281.11 304.86 339.10 400.25 488.14 552.89 634.52 738.17 834.09 943.18

Table 2 The result of least design cost and total flooding volume Level of failure depth (90%) Level of failure depth (80%) Total system cost Total flooding ($) volume (m3 ) 50-year 6,158,821 frequency design rainfall 100-yr frequency 6,161,564 design rainfall

Total system cost Total flooding ($) volume (m3 )

21.181

6,177,816

15.251

22.840

6,187,743

17.047

Acknowledgements This research was supported by a grant (13AWMP-B066744-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure, and Transport of the Korean government.

References 1. Mays, L.W., Yen, B.C.: Optimal cost design of branched sewer systems. Water Resour. Res. 11(1), 37–47 (1975) 2. Mays, L.W., Wenzel, H.G.: A serial DDDP approach for optimal design of multi-level branching storm sewer systems. Water Resour. Res. 12(5), 913–917 (1976)

158

S. H. Kwon et al.

3. Lui, G., Matthew, R.G.S.: New approach for optimization of urban drainage systems. J. Environ. Eng. 116(5), 927–944 (1990) 4. Tekeli, S., Belkaya, H.: Computerized layout generation for sanitary sewers. J. Water Resour. Planning Manage. 112(4), 500–515 (1986)

Analysis of Energy Storage for Hybrid System Using FLC Ayush Kumar Singh, Aakash Kumar and Nidhi Singh Pal

Abstract In this paper hybrid renewable energy resources (HRES) composed of PV, wind, and batteries as storage units use a fuzzy logic technique to control the energy between load demand and generation. The control technique using a fuzzy logic controller is simulated on MATLAB, which balances the suitable power management between intermittent energy generation by renewable sources and loads. Keywords PV · WECS · Hybrid energy system · Fuzzy · Battery power management

1 Introduction Renewable energy resources (RES) such as solar, wind energy, etc., are a hopeful option for future power generation as they are freely available and environmental friendly. Hybrid solar PV-Wind system is an efficient resource to supply power to the grid or an isolated load [1]. A wind turbine converts kinetic energy into mechanical energy and further generates AC power by the generator. Solar PV modules that convert sun energy into DC power. Use of conventional resources is not for multiple challenges. Renewable energy is the only solution to such energy challenges [2, 3]. The major drawback of this energy is that they are nature dependent so due to intermittence, uncertainty, and low availability of nature which makes system A. K. Singh (B) · N. S. Pal Department of Electrical Engineering, Gautam Buddha University, Greater Noida, India e-mail: [email protected] N. S. Pal e-mail: [email protected] A. Kumar Energy Conservation Services Jeevanam Water Technologies Maharashtra, Pune, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_17

159

160

A. K. Singh et al.

Fig. 1 Block diagram of hybrid PV-wind with battery storage system [4]

unreliable. Therefore, a hybrid topology is used to overcome intermittency and other issues of RES and make the system more reliable [4, 5]. The basic block diagram of hybrid PV-Wind with battery storage system is shown in Fig. 1. In this paper, the study of two renewable energy sources: PV generator and a PMSG-based wind turbine as the primary sources and a battery storage system as the secondary source is implemented to overcome the fluctuations from PV and wind turbine. The intelligent control-based fuzzy is implemented to control the flow of power between generation and load side.

2 Hybrid PV-Wind System with Battery Storage System The hybrid system consists of renewable resources such as wind and solar PV system and battery storage system to fullfill the desired demand. These types of systems are not always connected to the grid, but it makes also the solution for the standalone system. By combining the two renewable sources, solar and wind get better reliability and is economical to run. The PV system consists of the solar cells. Whenever a photovoltaic cell is exposed to the sun, since it is a semiconductor material, it absorbs solar energy and converts it into electrical energy. The basic PV cell equivalent circuit contains Rs as the equivalent series resistance and Rp as the equivalent parallel resistance of the PV array. PV generator is used as a renewable energy system and connected to the inverter through DC/DC boost converter. The relationship between output voltage V PV and load current I L can be expressed as [6]. In this paper, the wind turbine is the permanent magnet synchronous generator (PMSG) type. The kinetic energy of wind is converted into mechanical energy with



In this paper, the wind turbine is of the permanent magnet synchronous generator (PMSG) type. The kinetic energy of the wind is converted into mechanical energy with the help of the wind turbine (WT). The torque and power characteristics produced by the WT at various wind speeds, and the parameter variations caused by changes in wind speed, can be studied as given in [2, 7]. A PMSG has been used as the generator in this paper. The power P_wind extracted from the wind is given in [8]. A storage device is needed alongside renewable energy, with the help of which the fluctuations in renewable generation can be compensated. If excess energy is produced by the renewable sources, the battery stores the excess; whenever the renewable energy is not enough to satisfy the load, the battery provides the energy to meet the load demand. The battery is mostly used for long-term storage. It has a fixed maximum capacity, and its voltage and current ratings are provided by the manufacturer. The most important parameter is the state of charge (SOC), which refers to the percentage of charge present in the battery; the battery calculation is taken from [9]. SOC = 100% means the battery is completely charged, and SOC = 0% means the battery is completely discharged. To avoid undercharging and overcharging, the battery should not be completely discharged or overcharged; for this reason, it is necessary to determine the maximum depth of discharge. Generally, the depth varies from 30 to 80% [10]. A good intermediate value is 50%, which means that only half of the capacity of the battery will be used.
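The expression referenced in [8] above is the standard aerodynamic relation; as a hedged sketch with the usual symbols (not the paper's exact notation),

$$P_{wind} = \frac{1}{2}\,\rho\,A\,C_p(\lambda, \beta)\,V_{wind}^3$$

where $\rho$ is the air density, $A$ the rotor swept area, and $C_p$ the power coefficient, a function of the tip-speed ratio $\lambda$ and blade pitch angle $\beta$.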

3 Control Scheme
The operating conditions on which the PV output depends are irradiance and temperature; at different values of irradiance and temperature, the output of the PV will be different. For productive operation, the PV system must work at the maximum power point (MPP) of the P–V curve. Different types of MPPT techniques have been introduced [11]; the incremental conductance (IC) algorithm is used in this paper. The PV MPPT senses the current I_PV and voltage V_PV of the PV and changes the duty cycle accordingly, so that the PV extracts the maximum power throughout the day. Due to fluctuating wind speed, the variation in frequency and amplitude of a variable-speed PMSG makes its raw output unfit for the proposed operation. Here, the AC output of the WT is converted into a DC voltage with the help of a three-phase diode bridge rectifier. To extract the maximum power of the wind turbine at any wind speed, the duty cycle of the switch of the DC/DC boost converter has to be controlled. To harvest the maximum energy from the WT below rated wind speed, a variable-speed control technique is introduced [7, 12]. At different wind speeds (V_wind), the rotor speed (ω_m) of the WT differs, and the corresponding mechanical power also differs; the mechanical power of the WT depends on the rotor speed. To achieve maximum power from the variable-speed wind turbine, the rotor is operated at the optimal speed using MPPT so that the maximum power can be obtained below rated wind speed. A sketch of the IC duty-cycle update follows.
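A minimal Python sketch of the IC update, for illustration only (the step size and the boost-converter polarity, where raising the duty cycle lowers the PV-side voltage, are assumptions, not values from the paper):

def ic_mppt_step(v_pv, i_pv, v_prev, i_prev, duty, step=0.005):
    # At the MPP, dP/dV = 0, i.e., dI/dV = -I/V; nudge the boost-converter
    # duty cycle until that condition holds. Returns duty clamped to [0, 1].
    dv = v_pv - v_prev
    di = i_pv - i_prev
    if dv == 0:
        if di > 0:        # irradiance rose at constant voltage
            duty -= step  # raise the PV voltage (boost: lower duty)
        elif di < 0:
            duty += step
    elif di / dv > -i_pv / v_pv:   # left of the MPP -> raise PV voltage
        duty -= step
    elif di / dv < -i_pv / v_pv:   # right of the MPP -> lower PV voltage
        duty += step
    return min(max(duty, 0.0), 1.0)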



A three-phase, six-switch pulse width modulation (PWM) VSI has been implemented for the proposed HES model. The converter must act as a unidirectional power controller between the DC side and the load. In this control system, the power generated by the hybrid PV-wind system is transferred to the load as it is produced [6, 13]. Two control loops are applied to the inverter control: the first, internal loop controls the load current, and the second, external loop controls the DC link voltage. The main function of the internal control loop is to maintain the power quality, and that of the external control loop is to maintain the power flow in the system. MPPT is associated with the WT to obtain the optimal power at different speeds.

4 Power Management Strategy
An overall control strategy is needed in a multisource energy system for power management [14]. The pitch angle controller controls the WECS, the maximum power is extracted by MPPT, and with the help of the maximum power point tracker the output of the PV is controlled. A battery is used to compensate the fluctuations and fulfill the power demand on the load side. A fuzzy controller is used with the battery to control the power, and a bi-directional converter is used to charge and discharge the battery. With the help of the fuzzy controller, the power produced by the hybrid PV-wind system and the battery system is capable of supplying the desired power to the load.

4.1 Battery Power Management System
An intelligent control system is necessary for this nonlinear system. The main purpose of introducing the intelligent control system is to avoid insufficient operating time and to protect the battery storage system. The intelligent control system supplies the desired power to the load and also helps to compensate the fluctuating generation of the hybrid sources. The algorithm applied to this intelligent control system provides better management for the battery storage system. The fuzzy logic controller has two inputs and one output [15]. According to the value of SOC, the fuzzy logic controller decides the battery charging and discharging operation. The net output power produced by the hybrid PV-wind system and the battery is calculated as

$$P_{net} = P_{PV} + P_{wind} + P_{battery} \quad (1)$$

where P_PV is the PV power, P_wind the WECS power, and P_battery the battery power. The control strategy is that, at any time, if the power generated by PV and wind is in excess, the surplus is used to charge the battery. The power balance is then

$$P_{PV} + P_{wind} = P_{battery} + P_{load}, \quad P_{net} > 0 \quad (2)$$

Table 1 Fuzzy rule table

e \ PL | NB | NS | Z | PS | PB
NB | NB | NB | NM | NB | NB
NS | NM | NM | NS | PM | PM
Z | NM | NM | ZE | PM | PM
PS | NM | NM | PS | PM | PM
PB | PB | PB | PM | PB | PB

Table 2 PV module parameters

Maximum power (Pmax) | 9.5 kW
Voltage at MPP | 29 V
Current at MPP | 7.35 A
Open-circuit voltage | 36.3 V
Short-circuit current | 7.84 A

When the power generated by the sources is less than the power required by the load side, the battery power is used to compensate the deficit and fulfill the load demand:

$$P_{PV} + P_{wind} + P_{battery} = P_{load}, \quad P_{net} < 0 \quad (3)$$

The fuzzy logic controller decides the charging and discharging operation of the battery, which depends on the SOC. The 5 × 5 rule base used in the fuzzy controller is given in Table 1. The two inputs are the load power (PL) and the error (e) between the power generated by the PV-wind system and the load power; the output governs the state of charge (SOC). A minimal sketch of such a rule-base lookup is given below.
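This hypothetical Python sketch collapses the fuzzification to crisp label binning for brevity (the paper's actual controller uses MATLAB fuzzy membership functions, and the rule entries below are illustrative, not the full Table 1):

LABELS = ["NB", "NS", "Z", "PS", "PB"]

def label(x, full_scale):
    # Bin a signal in [-full_scale, full_scale] into one of five labels.
    idx = min(4, max(0, int((x / full_scale + 1.0) * 2.5)))
    return LABELS[idx]

# Hypothetical 5x5 rule table: RULES[e_label][pl_label] -> output label.
RULES = {e: {pl: "Z" for pl in LABELS} for e in LABELS}
RULES["PB"] = {pl: "PB" for pl in LABELS}   # large surplus -> charge hard
RULES["NB"] = {pl: "NB" for pl in LABELS}   # large deficit -> discharge hard

def battery_command(e, pl, full_scale=10e3):
    # Map error e = P_gen - P_load and load power pl to a battery action,
    # e.g. "PB" = charge strongly, "NB" = discharge strongly.
    return RULES[label(e, full_scale)][label(pl, full_scale)]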

5 Simulation Results The simulated responses of the implemented hybrid energy system with battery power management using MATLAB/Simulink are studied.

5.1 PV System
The surface temperature of the PV is considered to be 25 °C while the irradiation varies. The IC-based MPPT tracks the maximum power point throughout the day as the operating point varies with irradiance, so maximum power is extracted at each irradiation level. In this system, a 9.5 kW PV array is simulated in MATLAB; the PV module parameters are given in Table 2 and the PV waveforms in Fig. 2. Combining PV modules in various series and parallel connections gives the 500 V output.



Fig. 2 Irradiance, output power and voltage of PV

Fig. 3 Wind speed and rotor speed of WT

The variation of the irradiance of the PV is shown in Fig. 2. From time t = 0 to t = 2 s the irradiance is 1000 W/m², and from t = 2 to t = 6 s the irradiance is 850 W/m². The output voltage of the PV was 500 V, boosted up to 640 V using the boost converter. The output power of the PV is 9500 W from t = 0 to t = 2 s; at t = 2 s the power drops to 8100 W. After 2 s, the power reduces because the radiation decreased to the level of 850 W/m².

5.2 Wind System
The wind-speed and rotor-speed waveforms of the WT system are shown in Fig. 3. The speed of the rotor changes with the wind speed (V_wind): as the wind speed increases, the rotor speed (ω_m) also increases and the corresponding output power of the WT increases, and vice versa. With increasing WT output power, the rectified current and voltage also increase. The WT parameters used are given in Table 3. The wind speed is variable in nature: from t = 0 to t = 0.5 s the wind speed is 5 m/s, from t = 0.5 s to t = 3 s it is 12 m/s, and from t = 3 s to t = 6 s it is 9 m/s. The rotor speed varies as the wind speed changes. As the wind speed is very low for t = 0–0.5 s, the corresponding rotor speed is also very low. At t = 1–3 s, when the wind speed is at its maximum, the rotor

Table 3 PMSG-based WT parameters

Maximum power (Pmax) | 8.5 kW
Rated wind speed | 12 m/s
Rectified voltage at rated wind speed | 500 V
Rectified current at rated wind speed | 11.8 A

Fig. 4 WT rectified current and voltage

Fig. 5 Wind power

speed also reaches its maximum. At t = 3 s, when the wind speed decreases, the rotor speed also decreases, as shown in Fig. 3. The rectified output current and voltage also vary according to the wind speed, as shown in Fig. 4. Initially the voltage is low because of the low wind speed; at t = 1–3 s, when the wind speed is at its maximum, the voltage also reaches its maximum, and at t = 3 s, when the wind speed decreases, the voltage decreases as well. The output power of the wind turbine likewise varies with the wind speed, as shown in Fig. 5. Initially the power was zero, but at t = 1–3 s the wind speed reaches its maximum and the power reaches the maximum rated power. Further, at t = 3 s, when the wind speed decreases, the power also decreases.

Table 4 Battery parameters

Battery | Ni–MH
Voltage | 300 V
Current | 6.5 A
State of charge (%) | 60

Fig. 6 SOC (%)

5.3 Battery System
The battery simulation consists of the battery with the bi-directional DC–DC converter. The battery employed in this system is a Ni–MH battery, with the parameters given in Table 4. Initially, when the irradiance is good but the wind speed is very low, the PV and battery supply the load, and the battery starts discharging, as shown in Fig. 6. After t = 1 s, when enough power is generated from PV-wind, the battery starts charging. At t = 2 s, the PV power reduces due to the decrease in irradiance, and hence there is a small decrease in the rate of charge because the battery receives less current. Further, at t = 3 s, the wind speed decreases, so the output power of the WT also decreases and the battery receives very little current from the PV-wind system; consequently the rate of charge is almost constant. At t = 4–5 s, an extra load is added to the system. The PV-wind system is unable to satisfy the extra load, so the battery compensates the deficit power and starts discharging, which is reflected by the sharp decrease during t = 4–5 s. The battery charging voltage and current vary under different load conditions: when the load draws from the battery, the battery discharges and the current is positive; otherwise it is negative. The battery power, which varies according to the power required by the system, is shown in Fig. 7. Initially, the load is satisfied by the PV and the battery; then the battery is being charged, hence the charging condition shown at t = 1–4 s. At t = 4–5 s, an extra load is added to the system and the PV-wind system cannot satisfy the load, so the battery feeds power to satisfy the load during this interval.



Fig. 7 Battery power

Fig. 8 Battery power management

5.4 Battery Power Management
The simulation output waveform of the implemented system is based on the data provided: during the time interval 0 < t < 0.5 s the wind speed is 5 m/s; it increases to 12 m/s at t = 0.5 s and decreases again to 9 m/s at t = 3 s. The irradiation is initially 1000 W/m² for 0 < t < 2 s and reduces to 850 W/m² at t = 2 s. The demand on the load side is 10 kW throughout, but an extra load of 4 kW is added to the system during t = 4–5 s. Initially, the PV generator and the battery feed power to fulfill the load-side demand. At t = 1 s, the power produced by the PV-wind system is sufficient to fulfill the load demand, and the remaining power is used to charge the battery. At t = 2 s, the output of the PV generator decreases due to lower irradiance while the WT produces maximum power, so the PV-wind system is still capable of satisfying the load, and the remaining power is used to charge the battery. At t = 3 s, the wind power also decreases due to the lower wind speed, yet the PV-wind system is again capable of fulfilling the load demand, so the battery remains in the charging condition. But at t = 4–5 s, a 4 kW load is added to the system, and the generated power becomes insufficient to fulfill the load requirement, so the battery feeds power to the system to cover the load. The battery feeds power to the system whenever required. The battery power management is shown in Fig. 8.



6 Conclusion
In RES, the output power of solar and wind is fluctuating in nature because these energy sources are nature dependent, so a hybrid topology is used to overcome the intermittence and let the sources complement each other. In this paper, the control and operation of a balanced power flow between sources and load is discussed. The system contains a hybrid PV-wind system and a battery connected to the load. The hybrid PV-wind system and battery are connected to a common DC bus: the PV and wind sources through DC/DC boost converters, and the battery through a bi-directional converter. In MATLAB/Simulink, a 9.5 kW PV and 8.5 kW wind hybrid system has been implemented. The power generated by the hybrid PV-wind system and the battery system is capable of supplying the desired power to the load. This paper implements fuzzy control to obtain battery power management. Such an intelligent control system increases the accuracy of this nonlinear system, and its control algorithm also provides optimized, distributed energy generation.

References
1. Villalva, M.G., Gazoli, J.R., Filho, E.R.: Comprehensive approach to modelling and simulation of photovoltaic arrays. IEEE Trans. Power Electron. 24(5), 1198–1208 (2009)
2. Bae, S., Kwasinski, A.: Dynamic modeling and operation strategy for a microgrid with wind and photovoltaic resources. IEEE Trans. Smart Grid 3(4), 1867–1876 (2012)
3. Liu, X., Wang, P., Loh, P.C.: A hybrid AC/DC microgrid and its coordination control. IEEE Trans. Smart Grid 2(2), 278–286 (2011)
4. Wang, C., Nehrir, M.H.: Power management of a stand-alone wind/photovoltaic/fuel cell energy system. IEEE Trans. Energy Convers. 23(3), 957–967 (2008)
5. Ahmed, N.A., Miyatake, M., Al-Othman, A.K.: Power fluctuations suppression of stand-alone hybrid generation combining solar photovoltaic/wind turbine and fuel cell systems. Energy Convers. Manage. 49(10), 2711–2719 (2008)
6. Li, X., Hui, D., Lai, X.: Battery energy storage station (BESS)-based smoothing control of photovoltaic (PV) and wind power generation fluctuations. IEEE Trans. Sustain. Energy 4(2), 464–473 (2013)
7. Haque, M.E., Negnevitsky, M., Muttaqi, K.M.: A novel control strategy for a variable-speed wind turbine with a permanent-magnet synchronous generator. IEEE Trans. Ind. Appl. 46(1), 331–339 (2010)
8. Kasera, J., Chaplot, A., Maherchandani, J.K.: Modeling and simulation of wind-PV hybrid power system using Matlab/Simulink. In: 2012 IEEE Students' Conference on Electrical, Electronics and Computer Science (SCEECS). IEEE (2012)
9. Ayush, A.K., Gautam, S., Shrivastva, V.: Analysis of energy storage system for wind power generation with application of bidirectional converter. In: 2016 Second International Conference on Computational Intelligence & Communication Technology (CICT). IEEE (2016)
10. Tani, A., Camara, M.B., Dakyo, B.: Energy management in the decentralized generation systems based on renewable energy: ultracapacitors and battery to compensate the wind/load power fluctuations. IEEE Trans. Ind. Appl. 51(2), 1817–1827 (2015)
11. Hussein, K.H., et al.: Maximum photovoltaic power tracking: an algorithm for rapidly changing atmospheric conditions. IEE Proc. Gener. Transm. Distrib. 142(1), 59–64 (1995)
12. Yang, Y., et al.: Nonlinear dynamic power tracking of low-power wind energy conversion system. IEEE Trans. Power Electron. 30(9), 5223–5236 (2015)



13. Ciobotaru, M., Teodorescu, R., Blaabjerg, F.: A new single-phase PLL structure based on second order generalized integrator. In: Power Electronics Specialists Conference (2006)
14. Wang, T., O'Neill, D., Kamath, H.: Dynamic control and optimization of distributed energy resources in a microgrid. IEEE Trans. Smart Grid 6(6), 2884–2894 (2015)
15. Ruban, M.A.A.M., Rajasekaran, G.M., Rajeswari, M.N.: Implementation of energy management system to PV-Wind hybrid power generation system for DC microgrid application (2015)

Impact of Emission Trading on Optimal Bidding of Price Takers in a Competitive Energy Market Somendra P. S. Mathur, Anoop Arya and Manisha Dubey

Abstract All over the world, the electricity sector has emerged as a main source of GHG emissions. Emission trading schemes and renewable support schemes are the main instruments to diminish greenhouse gas emissions; they have been adopted by various countries, and some developed countries or regions are going to implement them. In its first part, this paper summarizes several obligatory greenhouse gas trading schemes adopted by various countries worldwide and future trends in carbon trading. The second part evaluates the optimal bidding of a thermal power plant in a competitive energy market with a strategy that considers the impact of CO2 emission in an emission trading market. A stochastic optimization model is presented under the assumption that the pdfs of the rivals' bidding are known. For this purpose, in a sealed auction considering the impact of CO2 emission trading, a nature-inspired new genetic algorithm approach has been employed in a day-ahead market to solve the optimization problem with symmetrical and unsymmetrical information about rivals. The feasibility of the proposed method is checked on an IEEE 30-bus system with six generators. Keywords Competitive energy market · Emission trading schemes · Genetic algorithm





1 Introduction
The electric power industry has undergone a restructuring process worldwide, and competition has increased greatly as the sector moved from monopoly to competitive market power. The main aim of the power industry is to establish a competitive electricity market through reformation. The reformation started around the mid-1980s in various countries of the world; the pioneer was Chile, where it started in 1987. Electric power industry reforms started in India when the Electricity Act 2003 and various policies, i.e., the National Electricity Policy and the Tariff Policy, were adopted by the government. Two power exchanges, Power Exchange India Ltd (PXIL) and Indian Energy Exchange Ltd. (IEX), have been operational in India since 2008. The endeavor of reformation is to change the economics of the energy market from monopoly to competitive market power, to increase fuel availability, and to develop new technologies [1, 2].
In a competitive electric power industry, all price takers have market power and can make a healthy profit via strategic bidding behavior, on which much research has been undertaken. Theoretically, to maximize profit, price takers should bid very close to their marginal cost in the competitive energy market; when price takers do this, the behavior is called strategic bidding. According to the different market mechanisms and bidding protocols, various modeling techniques have been adopted by many researchers. These modeling techniques can be classified as optimization models, game theory models, agent-based simulation models, and hybrid models. References [3, 4] describe the state of the art in strategic bidding modeling methods for price takers in a competitive energy market. In the current energy market, various factors affect the bidding strategies of price takers in the day-ahead market [5]. This paper considers the impact of emission trading schemes on the optimal bidding of price takers in a competitive energy market.
Currently, GHG emissions are a major environmental issue worldwide. Market liberalization and economic development have played an important role in raising the levels of CO2 emissions and other greenhouse gases in the atmosphere [6]. Worldwide, the energy market is recognized as a vital source of GHG emissions: generation companies account for one-third of CO2 emissions in Europe; in the Netherlands, more than 50% of emissions originate from the energy market, while in India this figure is more than 45%. The introduction of emission trading schemes for generation companies contributes to the cutback of emissions and affects energy market processes. According to their size, scope, and design, various ETSs operate worldwide. Most of them are linked to the Kyoto Protocol commitments (UNFCCC 1998) [7, 8]. Some schemes are mandatory, others are voluntary; however, they all share a common premise: emission reductions, i.e., cutting the overall cost of combating climate change. Carbon trading covers six major greenhouse gases: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6).
This paper is organized as follows: in Sect. 2, a mathematical model for price takers in a day-ahead market with the impact of emission trading is developed and represented as a stochastic optimization problem. Section 3



describes the computational procedure of the newly developed genetic algorithm technique. Section 4 illustrates the execution of the proposed method with numerical simulation. Finally, Sect. 5 concludes the paper with possible directions for future research.

2 Mathematical Formulation
2.1 Market Structure
According to their characteristics in various countries, energy markets primarily consist of a spot market and medium- and long-term trade markets suitable for practical purposes. For example, the PJM market in North America operates day-ahead, real-time, daily capacity, monthly and multi-monthly capacity, regulation, and financial transmission rights (FTRs) auction markets. The Nordic power exchange, Nord Pool, was established in 1993 and operates a day-ahead energy market. The Japan Electric Power Exchange (JEPX) started operating in 2003 and runs a power pool with a day-ahead energy market [4]. Assume a power exchange operates a day-ahead market and the ISO checks system security and stabilization for better operating conditions. The power exchange consists of M generating companies and N utility companies, which participate in demand-side bidding by submitting a nondecreasing demand function for each trading time slot t ∈ T = {1, 2, …, 24}. If the trading time slot is taken as 30 min, then T = 48. The M generating companies, which include thermal power stations, submit nondecreasing supply curves in the day-ahead market. The assumptions made for the modeling are as follows: (1) all price takers have no market power; (2) the power outputs of price takers can be accurately controlled; (3) the load prediction error is assumed to be negligible.

2.2 Cost Model
Under ETS, a price taker (agent) needs to purchase CO2 emission allowances from an emission allowance trading market at price p_CO2; the production cost function and the marginal cost function of agent i can then be represented by the following equations:

$$C_i(q_{t,i}) = (b_i + p_{CO_2}\,\eta_i)\,q_{t,i} + 0.5\,c_i\,q_{t,i}^2 \quad (1)$$

$$M_i(q_{t,i}) = (b_i + p_{CO_2}\,\eta_i) + c_i\,q_{t,i} \quad (2)$$

where
C_i(q_{t,i}): production cost of Genco i including CO2 emission;
M_i(q_{t,i}): marginal cost of Genco i;
b_i, c_i: production cost constant coefficients;
q_{t,i}: output of Genco i at hour t;
η_i: CO2 emission factor.

If agent i selects the jth strategy, the corresponding coefficient is

$$D_j = D_{min} + (D_{max} - D_{min})\,\frac{j}{K - 1} \quad (3)$$

Here, D_min and D_max are the lower and upper limits of the coefficient D. Thus, the bidding price of agent i can be represented as

$$B_i(q_{t,i}) = \alpha_i + p_{CO_2}\,\eta_i + D_j\,\beta_i\,q_{t,i} \quad (4)$$

2.3 Bidding Model
Assume a power exchange operating a day-ahead market in which m independent price takers and n load customers participate in a competitive electric power market, where the price takers submit sealed bids with a pay-as-bid MCP to the energy market. Also assume that each price taker and customer submits a nondecreasing supply/demand curve to the power exchange. The generation outputs that minimize the total purchase cost and maximize the expected profit are obtained by solving the following equations:

$$\alpha_i + p_{CO_2}\,\eta_i + D_j\,\beta_i\,q_{t,i} = R, \quad i = 1, 2, \ldots, m \quad (5)$$

$$\sum_{i=1}^{n} q_i = Q(R) \quad (6)$$

$$q_{i,min} \le q_i \le q_{i,max} \quad (7)$$

Here, α_i and β_i are the bidding coefficients of agent i, R is the MCP, and Q(R) is the aggregate pool load curve, which can be represented by the linear equation

$$Q(R) = Q_0 - kR \quad (8)$$

where Q_0 is a nonnegative constant and k represents the price elasticity of the system; for k = 0 the system is largely inelastic. When the inequality constraints are neglected, the solutions of Eqs. (5) and (6) are

$$R = \frac{Q_0 + \sum_{i=1}^{n} \frac{\alpha_i + p_{CO_2}\,\eta_i}{D_j\,\beta_i}}{k + \sum_{i=1}^{n} \frac{1}{D_j\,\beta_i}} \quad (9)$$

$$P_{t,i} = \frac{R - (\alpha_i + p_{CO_2}\,\eta_i)}{D_j\,\beta_i}, \quad i = 1, 2, \ldots, n \quad (10)$$

The solution of Eq. (10) is adjusted according to the generation output limits (7): if P_i exceeds P_i,max, then P_i is set to P_i,max, and if P_i is lower than P_i,min, then P_i is set to zero and the related price taker is detached from the problem as a noncompetitive contestant.
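To make Eqs. (9)-(10) concrete, here is a hedged numeric sketch in Python (all coefficient values are made up for illustration; the carbon price simply shifts each intercept α_i by p_CO2 · η_i):

def clear_market(alpha, beta, eta, Dj, p_co2, Q0, k):
    # Eq. (9): market clearing price R; Eq. (10): dispatch P_{t,i}.
    n = len(alpha)
    shifted = [alpha[i] + p_co2 * eta[i] for i in range(n)]
    num = Q0 + sum(shifted[i] / (Dj[i] * beta[i]) for i in range(n))
    den = k + sum(1.0 / (Dj[i] * beta[i]) for i in range(n))
    R = num / den
    P = [(R - shifted[i]) / (Dj[i] * beta[i]) for i in range(n)]
    return R, P

# Illustrative two-supplier example (all numbers hypothetical):
R, P = clear_market(alpha=[10.0, 12.0], beta=[0.05, 0.04],
                    eta=[0.9, 0.8], Dj=[1.0, 1.0],
                    p_co2=2.0, Q0=400.0, k=5.0)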

2.4 Profit Model
For the ith price taker at unit time, the profit maximization function can be represented as

$$\max\,\pi_i(\alpha_i, \beta_i) = R \times P_{t,i} - C_i(P_{t,i}) \quad (11)$$

From Eq. (11), the objective is to evaluate the α_j and β_j that maximize the profit subject to the inequality constraints expressed by Eqs. (5)-(7). Price takers do not have access to complete information about their opponents, so each price taker must estimate the other participants' unknown information. For the ith supplier, the bidding coefficients α_i and β_i can be represented by the following bivariate probability density function (pdf):

$$\mathrm{pdf}(\alpha_i, \beta_i) = \frac{1}{2\pi\,\sigma_i^{(\alpha)}\sigma_i^{(\beta)}\sqrt{1-\rho_i^2}}\exp\left\{-\frac{1}{2(1-\rho_i^2)}\left[\left(\frac{\alpha_i-\mu_i^{(\alpha)}}{\sigma_i^{(\alpha)}}\right)^2 - \frac{2\rho_i\left(\alpha_i-\mu_i^{(\alpha)}\right)\left(\beta_i-\mu_i^{(\beta)}\right)}{\sigma_i^{(\alpha)}\sigma_i^{(\beta)}} + \left(\frac{\beta_i-\mu_i^{(\beta)}}{\sigma_i^{(\beta)}}\right)^2\right]\right\} \quad (12)$$

3 Optimal Bidding by GA
3.1 Overview of GA
The genetic algorithm is a stochastic, nondeterministic method for finding the best solution to a complicated problem through optimization. It is based on the theory of survival of the fittest to obtain the best possible solution. GA starts with a string of solutions called the population (chromosomes). A string of new solutions



(offspring) is generated from the members of the current population, in the hope that the new population has higher fitness values for reproduction [9]. The procedure of the genetic algorithm can be divided into three modules: the production, evaluation, and reproduction modules. In the production module, the initial population is created using the initialization operator with randomly generated individuals. In the evaluation module, the fitness operator checks the quality of each chromosome against a maximum or minimum level to satisfy the objective. In the reproduction module, three operators are used: the selection, recombination, and mutation operators. A compact sketch combining these modules follows.
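A hedged Python sketch wiring the three modules into one loop; fitness() stands in for the profit of Eq. (11), and every parameter value is illustrative rather than the authors' setting:

import random

def run_ga(fitness, dim, pop_size=40, generations=100,
           pc=0.8, pm=0.05, lo=0.0, hi=1.0):
    # Production module: random initial population of real-coded chromosomes.
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluation module: score and rank every chromosome.
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                      # keep the two fittest (elitism)
        while len(nxt) < pop_size:
            # Reproduction module: tournament selection, uniform crossover,
            # Gaussian mutation applied gene-wise with probability pm.
            p1, p2 = (max(random.sample(scored, 3), key=fitness) for _ in range(2))
            if random.random() < pc:
                child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            child = [g + random.gauss(0, 0.1) * (random.random() < pm) for g in child]
            nxt.append([min(max(g, lo), hi) for g in child])
        pop = nxt
    return max(pop, key=fitness)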

3.2 GA Procedure
The proposed methodology for optimal bidding using the newly developed GA consists of the following steps:
Step 1 (Initialization) Read the cost coefficients of the price takers and their limits, the aggregate load and price elasticity, the convergence tolerance, k, D_min, D_max, and the emission factor.
Step 2 Set the iteration count = 1. Set the chromosome count = 1.
Step 3 (Representation) Identify the chromosomes as parents and create a random population of β_j of Eq. (12) using Monte Carlo simulation.
Step 4 Evaluate the market clearing price and the fitness of each population member using Eqs. (9) and (11), respectively.
Step 5 The "healthiest" chromosomes are sorted in decreasing order of their fitness value.
Step 6 Calculate the error function from (12) and check whether the error is within the convergence tolerance; if not, continue with the next iteration.

$$x_i(G+1) = \begin{cases} x_i'(G), & \text{if } Obj(x_i'(G)) \le Obj(x_i(G)) \\ x_i(G), & \text{otherwise} \end{cases}$$

This selection process makes sure that the average fitness of the population does not deteriorate.

3 Black-Hole Gbest DE Algorithm
John Michell and Pierre Laplace [9] identified the phenomenon of the black hole (BH). There is always a chance that a black hole may appear when a gigantic star collapses in space [2]. The black hole has very high gravitational power, as a great deal of mass is concentrated in it. The BH swallows every object that crosses its boundary, and nothing comes out of its gravitational pull, not even light. The borderline of the BH is called the event horizon E_H, which has very high gravitational power [16]. The E_H radius is named the Schwarzschild radius R_s [2, 4], and it can be computed with the help of Eq. 2:

$$R_s = \frac{2GM}{C^2} \quad (2)$$

Here, G denotes the gravity constant, M denotes the mass of the BH, and C denotes the velocity of light. During the execution of DE, candidate solutions sometimes stop searching for new feasible regions and work within a very small proximity; this situation is called stagnation. It happens when most of the candidate solutions are located in very close proximity. Due to stagnation, DE is not able to find the best feasible solution, which leads to premature convergence, or it may get stuck in a local optimum. In order to avoid stagnation in the DE algorithm and to make it more explorative, two modifications are proposed in DE:
1. A trial counter is associated with every solution to avoid stagnation.
2. The concept of the black hole is introduced in DE.
The above-mentioned modifications in the DE solution search process are applied to avoid stagnation of the population.



The first modification helps in relocating solutions which have stagnated and been unable to update their position for a predefined number of iterations called 'limit'. A 'trial' counter is coupled with each solution of the population, counting the continuous number of iterations for which the solution is not updated; if the trial counter of a solution crosses the 'limit', which is set as (D × NP)/2, then the solution is re-initialized in the search region as shown below.

if trial[i] > limit then
  Engender a new food source x_i arbitrarily in the search region as displayed in Eq. 3;
end if

$$x_{ij} = x_{lb} + rand[0, 1]\,(x_{ub} - x_{lb}) \quad (3)$$

In Eq. 3, x_i symbolizes the ith solution in the population, x_lb and x_ub are the lower and upper boundary values of x_i in the jth dimension, respectively, and rand[0, 1] is a uniformly distributed random number in the range [0, 1]. In the second modification, the black-hole (BH) phenomenon is introduced in DE after the crossover and mutation operations. To simulate this, the solution with the highest fitness is selected as the black hole and all remaining solutions are treated as stars. Analogous to other population-based algorithms, the BH and the stars are initialized randomly. The BH attracts the stars, i.e., their positions are updated based on the distance and direction of the BH. The position update of a star is depicted in Eq. 4:

$$v_i = x_i + \phi_i(x_i - x_k) + \psi_i(x_{BH} - x_i) \quad (4)$$

Here, x_i is the parent solution, x_k is an arbitrarily chosen individual solution from the population, x_BH is the most feasible solution in the population identified so far, φ_i is an arbitrary number uniformly selected from [−1, 1], and ψ_i is an arbitrary number uniformly selected from [0, C], where C is a positive constant. For a detailed illustration, refer to [8]. The search process of the BH works as follows: a star (solution) may reach a position better than the BH while moving in the direction of the BH. In such a case, the BH and the star interchange their positions, i.e., the star with the best feasible solution is elected as the new BH of the search space. There is always a probability of crossing the E_H (the radius of the BH, calculated as shown in Eq. 5) when a star moves toward the BH. Therefore, the BH absorbs every star that crosses its E_H. When a star is absorbed by the BH, it is considered dead and a new solution is engendered in the search space. The distance R_E (here the Euclidean distance, but it may be any) between the star and the BH is computed and compared with the radius of the E_H. The radius of the E_H of the BH is calculated with the help of Eq. 5:

$$E_H = \frac{f_{BH}}{\sum_{i=1}^{N} f_i} \quad (5)$$



Here, f_BH represents the fitness value of the BH (current best solution) and f_i is the fitness value of the ith star. Like DE, the BHGDE algorithm is also divided into two parts, namely mutation and crossover. The BH phenomenon is used in DE as follows: if the distance from a parent solution to the BH solution is so low that the parent solution is merged into the BH solution, a new solution is generated arbitrarily in the search region. The above-mentioned BHGDE strategy is summarized in Algorithm 2.

Algorithm 2 BHGDE Algorithm:
Initialize the parameters SF, CR, NP, and the initial population P(0);
while the termination condition is not met do
  for every solution x_i(G) ∈ P(G) do
    Assess the objective value Obj(x_i(G)) and fitness of the ith solution;
    Generate the offspring x'_i(G) through the trial vector t_i(G) and parent vector by using the mutation and crossover operators of Eq. 4;
    if Obj(x'_i(G)) is better than Obj(x_i(G)) then
      Add x'_i(G) to P(G + 1);
      if Obj(x'_i(G)) is better than Obj(x_BH(G)) then
        Exchange the positions of x' and x_BH;
      end if
    else
      (E_H denotes the radius of the event horizon of x_BH, while R_E denotes the Euclidean distance between the parent solution and the x_BH solution.)
      if (E_H > R_E) || (trial[i] > (D × NP)/2) then
        Engender a new food source x_i arbitrarily in the search space;
      else
        Add x_i(G) to P(G + 1);
      end if
    end if
  end for
end while
The solution with the best fitness is declared the identified solution (optimum) of the problem;
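Read as a whole, Algorithm 2 condenses to the following hedged Python sketch (parameter values are illustrative, not the authors' tuned settings; the event-horizon test assumes nonnegative objective values, as for this benchmark set):

import math
import random

def bhgde(obj, dim, lo, hi, n_pop=50, cr=0.9, c=1.5, max_iter=1000):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    fit = [obj(x) for x in pop]
    trial = [0] * n_pop
    limit = dim * n_pop // 2
    bh = min(range(n_pop), key=lambda i: fit[i])   # best solution acts as BH

    for _ in range(max_iter):
        for i in range(n_pop):
            k = random.choice([j for j in range(n_pop) if j != i])
            # Eq. (4): BH-guided position update, then binomial crossover.
            v = [pop[i][d]
                 + random.uniform(-1, 1) * (pop[i][d] - pop[k][d])
                 + random.uniform(0, c) * (pop[bh][d] - pop[i][d])
                 for d in range(dim)]
            u = [v[d] if random.random() < cr else pop[i][d] for d in range(dim)]
            u = [min(max(x, lo), hi) for x in u]   # keep inside the bounds
            fu = obj(u)
            if fu < fit[i]:                        # greedy selection
                pop[i], fit[i], trial[i] = u, fu, 0
                if fu < fit[bh]:
                    bh = i                         # star becomes the new BH
            else:
                trial[i] += 1
                # Eq. (5): event-horizon radius from the fitness ratio.
                eh = fit[bh] / (sum(fit) + 1e-12)
                if i != bh and (math.dist(pop[i], pop[bh]) < eh
                                or trial[i] > limit):
                    pop[i] = [random.uniform(lo, hi) for _ in range(dim)]
                    fit[i], trial[i] = obj(pop[i]), 0
                    if fit[i] < fit[bh]:
                        bh = i
    return pop[bh], fit[bh]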

4 Experimental Results and Discussions
To validate the performance of the newly proposed BHGDE, it is tested on 15 global optimization problems (f1 to f15), as displayed in Table 1 [1]. The experimental results over the test functions are compared with those of DE, HABCDE, and ODE to prove the performance of the BHGDE algorithm. The following experimental setting is adopted:
– Number of simulations = 100,
– Number of solutions NP,
– φij = rand[−1, 1],
– limit = (D × NP)/2,

Table 1 Test problems; TP: Test problem, AE: Acceptable error

TP | Objective function | Search range | Optimum value | D | AE
f1 Griewank | f1(x) = 1 + (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) | [−600, 600] | f(0) = 0 | 30 | 1.0E−05
f3 Rastrigin | f3(x) = 10D + Σ_{i=1}^{D} [x_i^2 − 10 cos(2π x_i)] | [−5.12, 5.12] | f(0) = 0 | 30 | 1.0E−05
f4 Michalewicz | f4(x) = −Σ_{i=1}^{D} sin x_i (sin(i x_i^2/π))^20 | [0, π] | f_min = −9.66015 | 10 | 1.0E−05
Cosine mixture | f(x) = Σ x_i^2 − 0.1 Σ cos(5π x_i) + 0.1D | [−1, 1] | f(0) = −D × 0.1 | 30 | 1.0E−05
f5 Step function | f5(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2 | [−100, 100] | f(−0.5 ≤ x ≤ 0.5) = 0 | 30 | 1.0E−05
f6 Inverted cosine wave | f6(x) = −Σ_{i=1}^{D−1} exp(−(x_i^2 + x_{i+1}^2 + 0.5 x_i x_{i+1})/8) × I, where I = cos(4 √(x_i^2 + x_{i+1}^2 + 0.5 x_i x_{i+1})) | [−5, 5] | f(0) = −D + 1 | 10 | 1.0E−05
f7 Levy montalvo 2 | f7(x) = 0.1 (sin^2(3π x_1) + Σ_{i=1}^{D−1} (x_i − 1)^2 (1 + sin^2(3π x_{i+1})) + (x_D − 1)^2 (1 + sin^2(2π x_D))) | [−5, 5] | f(1) = 0 | 30 | 1.0E−05
f8 Colville | f8(x) = 100 [x_2 − x_1^2]^2 + (1 − x_1)^2 + 90 (x_4 − x_3^2)^2 + (1 − x_3)^2 + 10.1 [(x_2 − 1)^2 + (x_4 − 1)^2] + 19.8 (x_2 − 1)(x_4 − 1) | [−10, 10] | f(1) = 0 | 4 | 1.0E−05
f9 Branin | f9(x) = a (x_2 − b x_1^2 + c x_1 − d)^2 + e (1 − f) cos x_1 + e | −5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15 | f(−π, 12.275) = 0.3979 | 2 | 1.0E−05
f11 Shifted Rosenbrock | f11(x) = Σ_{i=1}^{D−1} (100 (z_i^2 − z_{i+1})^2 + (z_i − 1)^2) + f_bias, z = x − o + 1, x = [x_1, …, x_D], o = [o_1, …, o_D] | [−100, 100] | f(o) = f_bias = 390 | 10 | 1.0E−01
f12 Goldstein-Price | f12(x) = (1 + (x_1 + x_2 + 1)^2 (19 − 14 x_1 + 3 x_1^2 − 14 x_2 + 6 x_1 x_2 + 3 x_2^2)) (30 + (2 x_1 − 3 x_2)^2 (18 − 32 x_1 + 12 x_1^2 + 48 x_2 − 36 x_1 x_2 + 27 x_2^2)) | [−2, 2] | f(0, −1) = 3 | 2 | 1.0E−14
Six-hump camel back | f(x) = (4 − 2.1 x_1^2 + x_1^4/3) x_1^2 + x_1 x_2 + (−4 + 4 x_2^2) x_2^2 | [−5, 5] | f(−0.0898, 0.7126) = −1.0316 | 2 | 1.0E−05
f14 Hosaki problem | f14 = (1 − 8 x_1 + 7 x_1^2 − (7/3) x_1^3 + (1/4) x_1^4) x_2^2 exp(−x_2) | x_1 ∈ [0, 5], x_2 ∈ [0, 6] | −2.3458 | 2 | 1.0E−06
Kowalik | f(x) = Σ_{i=1}^{11} [a_i − x_1 (b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 | [−5, 5] | f(0.192833, 0.190836, 0.123117, 0.135766) = 0.000307486 | 4 | 1.0E−05
f15 Meyer and Roth | f15(x) = Σ_{i=1}^{5} (x_1 x_3 t_i/(1 + x_1 t_i + x_2 v_i) − y_i)^2 | [−10, 10] | f(3.13, 15.16, 0.78) = 0.4E−04 | 3 | 1.0E−03



– The experiments for ODE and HABCDE are carried out using the parameter settings recommended by their inventors.
– The constant parameter C = 1.5 [8].

Table 2 presents the simulation results of the considered algorithms: an assessment over success rate (SR), average number of function evaluations (AFE), mean error (ME), and standard deviation (SD). The results in Table 2 indicate

Table 2 Evaluation of the outcomes of test problems, TP: Test Problem

TP | Algorithm | SD | ME | AFE | SR
f1 | BHGDE | 6.63E−07 | 9.17E−06 | 29589.01 | 100
f1 | DE | 4.52E−03 | 2.05E−03 | 64036.5 | 81
f1 | ODE | 4.78E−03 | 2.20E−03 | 63289.5 | 80
f1 | HABCDE | 7.77E−07 | 9.16E−06 | 29676.5 | 100
f2 | BHGDE | 4.56E+00 | 2.90E+00 | 187877.11 | 42
f2 | DE | 5.71E+00 | 1.46E+01 | 200050 | 0
f2 | ODE | 4.68E+00 | 1.16E+01 | 198575 | 1
f2 | HABCDE | 1.80E+00 | 4.59E+00 | 200050 | 0
f3 | BHGDE | 3.26E−06 | 5.91E−06 | 36475.65 | 100
f3 | DE | 4.84E−02 | 4.90E−02 | 167536 | 23
f3 | ODE | 5.13E−02 | 4.64E−02 | 173264 | 19
f3 | HABCDE | 6.91E−03 | 1.32E−03 | 24240.5 | 95
f4 | BHGDE | 7.81E−07 | 9.08E−06 | 15657 | 100
f4 | DE | 2.90E−02 | 5.92E−03 | 30339 | 96
f4 | ODE | 1.47E−02 | 1.49E−03 | 22941 | 99
f4 | HABCDE | 6.79E−07 | 9.23E−06 | 20823 | 100
f5 | BHGDE | 0.00E+00 | 0.00E+00 | 10418.5 | 100
f5 | DE | 5.03E−01 | 1.30E−01 | 32241 | 91
f5 | ODE | 9.60E−01 | 2.40E−01 | 41572 | 85
f5 | HABCDE | 0.00E+00 | 0.00E+00 | 13828 | 100
f6 | BHGDE | 1.54E−06 | 7.85E−06 | 46874.74 | 100
f6 | DE | 6.25E−01 | 9.54E−01 | 179149.5 | 15
f6 | ODE | 5.99E−01 | 5.62E−01 | 128240 | 42
f6 | HABCDE | 2.03E−06 | 7.61E−06 | 55396 | 100
f7 | BHGDE | 8.66E−07 | 9.04E−06 | 14201.5 | 100
f7 | DE | 1.53E−03 | 2.28E−04 | 24021.5 | 98
f7 | ODE | 2.39E−03 | 5.58E−04 | 27814.5 | 95
f7 | HABCDE | 6.39E−07 | 9.33E−06 | 18140.5 | 100
f8 | BHGDE | 2.15E−03 | 7.40E−03 | 22616.05 | 100
f8 | DE | 1.90E−01 | 6.44E−02 | 30231 | 87
f8 | ODE | 4.82E−01 | 1.12E−01 | 26523 | 89
f8 | HABCDE | 7.71E−02 | 4.68E−02 | 118157 | 51
(continued)

Table 2 (continued)

TP | Algorithm | SD | ME | AFE | SR
f9 | BHGDE | 6.43E−06 | 5.82E−06 | 26518.98 | 90
f9 | DE | 7.01E−06 | 5.98E−06 | 33752 | 84
f9 | ODE | 6.79E−06 | 6.16E−06 | 31798 | 85
f9 | HABCDE | 6.82E−06 | 5.67E−06 | 34186 | 86
f10 | BHGDE | 2.27E−04 | 1.49E−04 | 58231.36 | 92
f10 | DE | 3.17E−04 | 2.53E−04 | 56378 | 74
f10 | ODE | 3.40E−04 | 2.95E−04 | 69946 | 67
f10 | HABCDE | 1.16E−04 | 3.02E−04 | 194535 | 8
f11 | BHGDE | 3.20E+00 | 1.67E+00 | 167696.62 | 46
f11 | DE | 2.03E+00 | 1.98E+00 | 189375.5 | 6
f11 | ODE | 8.93E−01 | 4.62E+00 | 200003 | 0
f11 | HABCDE | 7.68E+00 | 4.71E+00 | 193044.5 | 9
f12 | BHGDE | 5.60E−15 | 4.83E−15 | 20182.87 | 100
f12 | DE | 4.77E−14 | 5.40E−14 | 109771.5 | 46
f12 | ODE | 4.92E−14 | 5.17E−14 | 103804.5 | 49
f12 | HABCDE | 4.81E−14 | 4.88E−14 | 106300 | 52
f13 | BHGDE | 1.49E−05 | 1.65E−05 | 99442.09 | 51
f13 | DE | 1.46E−05 | 1.86E−05 | 114680.5 | 43
f13 | ODE | 1.48E−05 | 1.72E−05 | 102724 | 49
f13 | HABCDE | 1.44E−05 | 1.68E−05 | 100655.5 | 51
f14 | BHGDE | 5.82E−06 | 5.02E−06 | 11184.07 | 95
f14 | DE | 6.43E−06 | 5.85E−06 | 14896.5 | 93
f14 | ODE | 5.79E−06 | 5.64E−06 | 18839.5 | 91
f14 | HABCDE | 6.14E−06 | 5.43E−06 | 16409.5 | 93
f15 | BHGDE | 2.83E−06 | 1.95E−03 | 3344.73 | 100
f15 | DE | 6.09E−05 | 1.96E−03 | 11802.5 | 95
f15 | ODE | 1.46E−05 | 1.95E−03 | 3788.5 | 99
f15 | HABCDE | 2.94E−05 | 1.95E−03 | 59291.5 | 97

that BHGDE generally outperforms the others in terms of accuracy, reliability, and efficiency as compared with DE, ODE, and HABCDE. In addition to these results, boxplot analyses are also carried out for AFE; the boxplots for BHGDE and the other measured algorithms are shown in Fig. 1. It can be observed that the interquartile range and the median of BHGDE are comparatively low. Further, the Mann-Whitney U (MWU) rank sum test is applied to the AFEs. As is clear from the boxplots (refer to Fig. 1), the AFEs are not uniformly distributed, so the MWU rank sum test is performed at the 5% level of significance (α = 0.05), as shown in Table 3. The MWU rank sum test is applied to identify significant differences in function evaluations between BHGDE-DE, BHGDE-ODE, and BHGDE-


Fig. 1 Boxplots of a Success rate (successful runs out of 100 runs), b Average number of function evaluations, c Mean error

Table 3 Comparison based on AFEs and the Mann-Whitney U rank sum test at a α = 0.05 significance level ('+' indicates BHGDE is significantly better, '−' indicates BHGDE is not better and '=' indicates that there is no significant difference), TP: Test Problem

TP | DE | ODE | HABCDE
f1 | + | + | =
f2 | + | + | +
f3 | + | + | −
f4 | + | + | +
f5 | + | + | +
f6 | + | + | +
f7 | + | + | +
f8 | + | + | +
f9 | + | + | +
f10 | = | + | +
f11 | + | + | +
f12 | + | + | +
f13 | + | = | =
f14 | + | + | +
f15 | + | + | +
Total number of + signs | 14 | 14 | 12



HABCDE. In Table 3, the '+' symbol indicates significantly fewer function evaluations consumed by BHGDE, while the '−' symbol indicates significantly more function evaluations used by BHGDE than by the compared algorithm. The symbol '=' shows no significant difference between the function evaluations of the algorithms. The last row shows the competitiveness of the proposed algorithm.
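For reference, a hedged sketch of how such a Mann-Whitney U comparison can be run on per-run AFE samples with SciPy (the arrays below are made-up run data, not the paper's):

from scipy.stats import mannwhitneyu

afe_bhgde = [29510, 29644, 29580, 29612, 29575]   # illustrative run data
afe_de    = [63980, 64120, 64055, 63990, 64038]

stat, p = mannwhitneyu(afe_bhgde, afe_de, alternative="two-sided")
# '+' when the difference is significant at alpha = 0.05 and BHGDE used fewer AFEs.
mark = "+" if p < 0.05 and sum(afe_bhgde) < sum(afe_de) else "="
print(f"U={stat}, p={p:.4g}, mark={mark}")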

5 Solving Path Planning Problem for Robots (PPPR) Using BHGDE
To formalize the PPPR problem, a set of principles is defined based on some assumptions. The objective is to find a collision-free, optimized path from the initial state to the target state, as listed below.
1. The problem consists of a robot and a two-dimensional space, which includes dangerous obstacles in the path.
2. The initial and final positions are determined.
3. There are several obstacles in the space, whose radius and coordinates are defined by r, x-axis, and y-axis.
4. A robot aligns itself toward the goal by following a path.
5. The path contains several handle points or segments, defined by n, at which the robot can change its rotation to left or right.
6. The points make a complete path from source S to target T, represented as (S, n_1, n_2, n_3, …, n, T).
7. If the movement of the robot results in clashing with an object, it has the ability to turn left or right by an angle of rotation.
8. If the robot reaches the target without collision, the final path to the goal position is generated. Let (x, y) be the current location of the robot at time t; then at time t + 1, the next location (x', y') is calculated as
   $$x' = x + v \cos\theta\,\delta t, \qquad y' = y + v \sin\theta\,\delta t$$
   where v is the robot's velocity, θ is the angle of rotation, and δt represents the change in time instance.
9. The distance d traveled by the robot with velocity v is defined by $d = \sqrt{(x' - x)^2 + (y' - y)^2}$.
10. The objective of the PPPR problem is to minimize the total distance covered, i.e., Σd.
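A hedged Python sketch of the resulting objective: total Euclidean path length plus a large penalty whenever a segment passes through an obstacle (the sampled-segment collision check is an illustrative stand-in, not the authors' code):

import math

def path_length(points):
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

def collides(p, q, obstacles, samples=20):
    # Check segment p->q against circular obstacles given as (cx, cy, r).
    for t in (s / samples for s in range(samples + 1)):
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any(math.dist((x, y), (cx, cy)) <= r for cx, cy, r in obstacles):
            return True
    return False

def pppr_fitness(path, obstacles, penalty=1e6):
    # path = [S, n1, ..., nk, T]; smaller is better.
    cost = path_length(path)
    for i in range(len(path) - 1):
        if collides(path[i], path[i + 1], obstacles):
            cost += penalty
    return cost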



Step 1 Model the two-dimensional workspace of the robot's movement based on the starting and finishing positions, the number of obstacles, and the number of handles.
Step 2 Set the required parameters: size of the population, maximum iterations, number of runs.
Step 3 Implement the propounded BHGDE algorithm to search the optimal path of the given space.
Step 4 Calculate the fitness values and detect collisions with obstacles, if any.
Step 5 While the maximum number of iterations has not been reached:
1. Find the next fit position.
2. Update the position of the robot locally as described by Eq. 4.
3. Make the next position the current one and move forward to the next position, until the robot reaches the target.
4. Update the solution's position globally.
5. Store the feasible results and calculate the optimized path length.
6. Increment the iteration counter t = t + 1.
Step 6 Output the optimal path and pilot the robot to reach the target position.
The simulation results of BHGDE and DE are calculated and compared with the state-of-the-art algorithms, namely Teacher Learning-Based Optimization (TLBO) [12] and Particle Swarm Optimization (PSO) [6], under the following computational environment:
– Operating System: Windows 10,
– Processor: Intel Core i5,
– Language: MATLAB 12.1,
– Maximum iterations: 5000.

The experiments have been carried out for three cases:
1. Case 1: 3 obstacles, 3 handle points, start point (0, 0) and target point (4, 6),
2. Case 2: 9 obstacles, 5 handle points, start point (0, 0) and target point (50, 50),
3. Case 3: 15 obstacles, 8 handle points, start point (0, 0) and target point (100, 100).
Figure 2 shows the simulation of BHGDE for the three cases. Table 4 shows the simulation results of the propounded BHGDE algorithm against DE, TLBO, and PSO in terms of optimal distance for all three cases. It is clear from Table 4 that the optimal distance obtained by BHGDE is smaller than that of the DE, TLBO, and PSO algorithms.

6 Conclusion
This paper focuses on a novel modification of the DE algorithm, namely the Black-Hole Gbest DE (BHGDE) algorithm. The newly proposed strategy is inspired by a unique phenomenon in space, the black-hole (BH) phenomenon. The most suitable among all the solutions is identified as the BH. The exploration capability of the

6 Conclusion This paper focuses on a novel modification in DE algorithm, namely Black-Hole Gbest DE (BHGDE) algorithm. The newly anticipated strategy is inspired by a unique phenomenon in space namely, the black-hole (BH) phenomenon. The most suitable amongst the entire solutions is identified as BH. The exploration capability of the

Black-Hole Gbest Differential Evolution Algorithm . . .

(a) 6

50

5

40

4

30

3

(b)

20

2

10

1 0

1021

0 0

2

4

−10

6

0

10

20

30

40

50

(c) 100 80 60 40 20 0 0

20

40

60

80

100

Fig. 2 Different cases considered for PPPR problem; a (case 1), b (case 2) and c (case 3)

Table 4 Compared results of the optimal path; NO: Number of obstacles, NH: Number of handles, OD: Optimal distance

NO | NH | Algorithm | OD
3 | 3 | PSO | 7.6109
3 | 3 | TLBO | 7.5984
3 | 3 | DE | 7.5965
3 | 3 | BHGDE | 7.5512
9 | 5 | PSO | 82.0904
9 | 5 | TLBO | 96.8758
9 | 5 | DE | 78.1612
9 | 5 | BHGDE | 72.0569
15 | 8 | PSO | 144.7534
15 | 8 | TLBO | 143.3134
15 | 8 | DE | 145.1194
15 | 8 | BHGDE | 142.9637



DE algorithm is improved by the absorption property of the BH. Further, a trial counter-based re-initialization strategy is incorporated to keep the population away from stagnation. To assess the proposed algorithm, it is tested on 15 benchmark functions. Based on the result analysis, it can be concluded that BHGDE is a good choice for solving complex optimization problems. The robustness of the BHGDE algorithm is also analyzed by applying it to solve the path planning problem for robots.

References
1. Ali, M.M., Khompatraporn, C., Zabinsky, Z.B.: A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Global Optim. 31(4), 635–672 (2005)
2. Doraghinejad, M., Nezamabadi-pour, H., Sadeghian, A.H., Maghfoori, M.: A hybrid algorithm based on gravitational search algorithm for unimodal optimization. In: Computer and Knowledge Engineering (ICCKE), 2012 2nd International eConference on, pp. 129–132. IEEE (2012)
3. Engelbrecht, A.P.: Computational Intelligence: An Introduction. Wiley (2007)
4. Hatamlou, A.: Black hole: a new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184 (2013)
5. Jadon, S.S., Tiwari, R., Sharma, H., Bansal, J.C.: Hybrid artificial bee colony algorithm with differential evolution. Appl. Soft Comput. 58, 11–24 (2017)
6. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 4, pp. 1942–1948. IEEE (1995)
7. Lampinen, J., Zelinka, I.: On stagnation of the differential evolution algorithm. In: Proceedings of MENDEL, pp. 76–83 (2000)
8. Mokan, M., Sharma, K., Sharma, H., Verma, C.: Gbest guided differential evolution. In: Industrial and Information Systems (ICIIS), 2014 9th International Conference on, pp. 1–6. IEEE (2014)
9. Montgomery, C., Orchiston, W., Whittingham, I.: Michell, Laplace and the origin of the black hole concept. J. Astron. Hist. Herit. 12, 90–96 (2009)
10. Price, K.V.: Differential evolution: a fast and simple numerical optimizer. In: Fuzzy Information Processing Society, 1996. NAFIPS. 1996 Biennial Conference of the North American, pp. 524–527. IEEE (1996)
11. Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A.: Opposition-based differential evolution. IEEE Trans. Evol. Comput. 12(1), 64–79 (2008)
12. Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)
13. Sharma, H., Bansal, J.C., Arya, K.V.: Fitness based differential evolution. Memetic Comput. 4(4), 303–316 (2012)
14. Sharma, H., Bansal, J.C., Arya, K.V.: Self balanced differential evolution. J. Comput. Sci. 5(2), 312–323 (2014)
15. Sharma, H., Shrivastava, P., Bansal, J.C., Tiwari, R.: Fitness based self adaptive differential evolution. In: Nature Inspired Cooperative Strategies for Optimization (NICSO 2013), pp. 71–84. Springer (2014)
16. Zhang, J., Liu, K., Tan, Y., He, X.: Random black hole particle swarm optimization and its application. In: Neural Networks and Signal Processing, 2008 International Conference on, pp. 359–365. IEEE (2008)

Fibonacci Series-Inspired Local Search in Artificial Bee Colony Algorithm Nirmala Sharma, Harish Sharma, Ajay Sharma and Jagdish Chand Bansal

Abstract Nowadays, swarm intelligence (SI)-based techniques are emerging techniques in the field of optimization. The artificial bee colony (ABC) algorithm is a significant member of the family of SI-based strategies. This research proposes a local search (LS) strategy motivated by the generation of the Fibonacci sequence. The proposed LS strategy is integrated with ABC to enhance its exploitation behavior. The proposed LS strategy is named the Fibonacci-inspired local search (FLS) strategy, and the hybridized algorithm is termed the Fibonacci-inspired artificial bee colony (FABC) algorithm. In the proposed LS strategy, the Fibonacci series equation is altered by incorporating the commitment and community-based learning elements of the ABC algorithm. To analyze the potential of the proposed strategy, it is tested over 31 benchmark optimization functions. The reported outcomes prove the validity of the proposed approach. … Keywords Swarm intelligence · Fibonacci-inspired local search · Artificial bee colony · Nature-inspired algorithms





1 Introduction
A variety of real-world optimization problems have, over the past couple of years, been solved by swarm intelligence (SI)-based strategies. Algorithms based on the social behavior of swarming creatures are emerging with the advent of computational intelligence. These algorithms apply trial-and-error techniques, with the ability to learn from the individual and the neighborhood, to find solutions to complex optimization issues. The main motivating source behind the development of SI-based algorithms is this peer-to-peer learning behavior [1]. The artificial bee colony (ABC) algorithm is a unique and stable member of the SI family. The ABC algorithm is motivated by the intelligent behavior of honeybees and their ability to communicate, by virtue of which the swarm collects honey by selecting good-quality nectar from the food sources; nectar quality serves as the parameter determining the fitness of the solutions. It is a relatively simple, fast, population-based stochastic algorithm able to solve real-world problems, as the available literature shows [2, 3]. There is, however, another side of the coin: ABC may sometimes stop proceeding toward the global optimum even though the solutions have not converged to a local optimum [1], so there is always a possibility of jumping over the actual solution. Thus, the incorporation of a local search (LS) methodology may enhance the exploitation capacity of ABC and, subsequently, decrease the chance of jumping over the actual solution. For this reason, in this paper a new LS strategy is proposed, taking inspiration from the Fibonacci series equation [4]; the proposed strategy is termed the Fibonacci-inspired local search (FLS) strategy. Further, the FLS strategy is associated with ABC in the hope of improving the exploitation capacity of ABC. The resulting algorithm is termed Fibonacci-inspired ABC (FABC). The potential of FABC is evaluated through different experiments in terms of accuracy, reliability, and consistency.
The remainder of the paper is organized as follows: the FLS strategy and its incorporation into basic ABC are described in Sect. 2. In Sect. 3, the performance of the proposed FABC is evaluated. Finally, the conclusion of the work is presented in Sect. 4.

2 Fibonacci-Inspired Local Search Strategy and Its Incorporation to ABC
In the Fibonacci sequence, every succeeding number is the sum of the preceding two numbers [4]. The sequence may start from either 0 or 1. It was introduced by the Italian mathematician Fibonacci in his book Liber Abaci [5]. The Fibonacci sequence is represented by Eq. 1:

$$f_n = f_{n-1} + f_{n-2} \quad (1)$$



where f_1 = 0 or 1 and f_2 = 1. This article proposes a new LS strategy (FLS) inspired by the above Fibonacci sequence and by Fibonacci-inspired spider monkey optimization (FSMO) [6]. The position update strategy is derived by generating a third solution from the two fittest solutions of the search space: a solution is generated from the best solution (best_sol) and the second best solution (sbest_sol) and compared with best_sol and sbest_sol; among these three solutions, two are retained based upon their fitness. The detailed working of the FLS is explained below. In the proposed FLS strategy, best_sol and sbest_sol, ranked by their respective fitness values, are considered to generate a new solution in the search space. Here, the highest-fitness solution of the search space is termed best_sol, while the second-highest-fitness solution is termed sbest_sol. To control the perturbation, if the generated arbitrary number is greater than or equal to the perturbation rate pr, a new solution is generated using Eq. 2:

$$x_{tj} = x_{best_j} + U(0, 1) \times x_{sbest_j} \quad (2)$$

Here, x_tj, x_bestj, and x_sbestj belong to the generated solution, best_sol, and sbest_sol, respectively, and U(0, 1) is an evenly distributed arbitrary number between 0 and 1. Equation 2 is inspired by the Fibonacci series Eq. 1. Further, if the newly generated solution lies outside the boundary of the search space, it is shifted to the boundary values of the search space. The objective value of the newly generated solution is then evaluated and compared with the respective values of sbest_sol and best_sol. If the objective value of the Fibonacci solution is less than that of sbest_sol but greater than that of best_sol, then sbest_sol is replaced by the newly generated solution; if the objective value of the generated solution is less than that of best_sol, then best_sol is replaced by the newly generated solution. In other words, best_sol or sbest_sol is replaced by the newly generated solution based upon the objective value. In this paper, in each iteration either best_sol or sbest_sol is allowed to update its position using the FLS strategy. The pseudocode of the proposed FLS in ABC is shown in Algorithm 1. In Algorithms 1 and 2, pr (perturbation rate) is a number between 0 and 1 which manages the amount of perturbation in best_sol and sbest_sol, U(0, 1) is an arbitrary number between 0 and 1, and D is the dimension of the problem under concern. T is the total number of iterations of the LS; the value of T is decided using an extensive analysis mentioned in the experimental settings section. The proposed FLS strategy is incorporated into ABC after the scout bee phase. Based on the above discussion, the pseudocode of the proposed Fibonacci-inspired ABC algorithm (FABC) is shown in Algorithm 3.


Algorithm 1 Fibonacci-inspired Local Search (FLS) Strategy:
Input: optimization function Min f(x), x_best and x_sbest;
Initialize iteration counter t = 0 and total iterations of LS, T;
while t < T do
  Generate a new solution x_t using Algorithm 2.
  Calculate the objective value f(x_t).
  if f(x_t) < f(x_sbest) then
    if f(x_t) < f(x_best) then
      x_best = x_t;
    else
      x_sbest = x_t;
    end if
  end if
  t = t + 1;
end while
Return x_best or x_sbest.

Algorithm 2 New solution generation:
Input: x_best and x_sbest;
for j = 1 to D do
  if U(0, 1) >= p_r then
    x_{tj} = x_{bestj} + U(0, 1) * x_{sbestj};
  else
    x_{tj} = x_{bestj};
  end if
end for
Return x_t

Algorithm 3 Fibonacci-inspired ABC algorithm:
Initialize the parameters;
while the termination criterion is not met do
  Employed bee phase.
  Onlooker bee phase.
  Scout bee phase.
  Apply the Fibonacci-inspired Local Search (FLS) Strategy using Algorithm 1.
end while
Print the best solution.
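For concreteness, the following is a minimal Python sketch of Algorithms 1 and 2 combined (ours, not the authors' code); the function and variable names and the use of NumPy are our assumptions, and the objective f and boundary handling are placeholders:

import numpy as np

def fls(f, x_best, x_sbest, lower, upper, T=25, pr=0.6):
    # Fibonacci-inspired local search refining the two fittest solutions.
    x_best, x_sbest = x_best.copy(), x_sbest.copy()
    D = len(x_best)
    for _ in range(T):
        # Algorithm 2: component-wise Fibonacci-style update, Eq. (2).
        mask = np.random.rand(D) >= pr
        x_t = np.where(mask, x_best + np.random.rand(D) * x_sbest, x_best)
        # Shift out-of-range components back to the boundary.
        x_t = np.clip(x_t, lower, upper)
        # Keep the two fittest among {x_t, x_best, x_sbest} (minimization).
        ft = f(x_t)
        if ft < f(x_sbest):
            if ft < f(x_best):
                x_best = x_t
            else:
                x_sbest = x_t
    return x_best, x_sbest

In FABC, a call of this form would follow the scout bee phase of each ABC iteration, as in Algorithm 3.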

3 Performance Evaluation of the FABC Algorithm

3.1 Benchmark Problems

To validate the proposed FABC algorithm, 31 benchmark functions of different degrees of complexity are considered for the experiments [7, 8]. The definitions and characteristics of the functions are listed in Table 1.

[Table 1 Test problems. D: Dimensions, C: Characteristic, U: Unimodal, M: Multimodal, S: Separable, N: Non-Separable, AE: Acceptable Error. The table lists the objective function, search range, optimum value, D, AE, and C for the 31 benchmark functions f1–f31: Sphere, De Jong f4, Griewank, Michalewicz, Cosine mixture, Exponential, Brown3, Schewel, Salomon problem, Axis parallel hyper-ellipsoid, Sum of different powers, Step function, Inverted cosine wave, Neumaier 3 problem (NF3), Rotated hyper-ellipsoid, Levy Montalvo 1, Levy Montalvo 2, Ellipsoidal, Beale function, Branin's function, 2D Tripod function, Shifted Sphere, Shifted Ackley, Goldstein-Price, Six-hump camel back, Easom's function, Hosaki problem, McCormick, Meyer and Roth problem, Shubert, and Sinusoidal.]


3.2 Parameter Setting

In order to validate the performance of the proposed FABC algorithm, the following experimental setting is adopted:
• Number of simulations/runs = 100,
• Colony size N_p = 50 and number of food sources SN = N_p/2,
• φ_ij = rand[−1, 1] and limit = Dimension × Number of food sources = D × SN [9],
• Terminating criteria: either the acceptable error (mentioned in Table 1) is met or the maximum number of function evaluations (set to 200,000) is reached,
• Parameter settings for the algorithms spider monkey optimization (SMO) [10], ABC [11], differential evolution (DE) [12], particle swarm optimization (PSO-2011) [13], Gbest-guided ABC (GABC) [14], modified ABC (MABC) [9], best-so-far ABC (BSFABC) [15], memetic ABC (MeABC) [16], Lévy flight ABC (LFABC) [17], disruption ABC (DiABC) [18], black hole ABC (BHABC) [19], and FSMO [6] are the same as specified in their original research papers,
• To set the termination criterion of FLS, the performance of FABC is measured on the considered test problems for different values of T, and the results in terms of success are analyzed in Fig. 1a. It is clear from Fig. 1a that T = 25 gives the best results (highest sum of success); therefore, the termination criterion is set to T = 25.
• To investigate the impact of the parameter p_r (perturbation rate of the local search, used in Algorithm 2) on the performance of FABC, its sensitivity to values of p_r in the range [0.1, 1.0] is examined in Fig. 1b. It can be seen from Fig. 1b that the algorithm is highly sensitive to p_r and that the value 0.6 gives comparatively better results; therefore, p_r = 0.6 is chosen for the experiments in this paper.

3.3 Results Comparison

To validate the performance of the suggested FABC algorithm, it is compared with the basic versions of ABC [11], DE [12], PSO-2011 [20], SMO [10], FSMO [6], and significant variants of ABC, namely MABC [9], BSFABC [15], MeABC [16], GABC [14], LFABC [17], DiABC [18], and BHABC [19]. The comparison is performed in terms of four measures: standard deviation (SD), mean error (ME), average number of function evaluations (AFE), and success rate (SR). The results are reported in Table 2. The outcomes demonstrate that FABC is more competitive than ABC and the other considered swarm-based algorithms for the greater part of the benchmark test problems (TPs), irrespective of their separability, modality, and other characteristics. The algorithms are also assessed by the Mann–Whitney U rank sum test [21] and boxplot analysis (BP) [22].


[Fig. 1 Variation in sum of success (a) with local search iteration (T), for T = 10–35, and (b) with parameter pr, for pr = 0.1–0.9]

The convergence speed of the proposed FABC is evaluated by examining the AFEs; convergence speed is inversely related to the AFE value. To minimize the effect of the randomness and nondeterministic nature of the algorithms, the reported AFE values are averaged over 100 simulations for all the considered benchmark problems. The Mann–Whitney U rank sum test is applied to the AFEs. For all considered algorithms the test is performed at the 5% significance level (α = 0.05), and the outcomes for 100 runs are recorded in Table 3. In Table 3, a '+' sign indicates that the proposed FABC algorithm is superior to the other considered algorithm, while a '−' sign indicates that the other algorithm is superior. This analysis shows that FABC is a competitive candidate in the area of SI-based algorithms. Boxplot (BP) analysis has also been performed to compare the consolidated performance of all the considered algorithms; it represents the distribution of the empirical data graphically. The boxplots for FABC and the other considered algorithms are depicted in Fig. 2. It is clear from Fig. 2 that FABC performs better than the other considered algorithms, as its median and interquartile range are quite low.
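As an aside, a pairwise comparison of this kind can be reproduced with SciPy's implementation of the test; the sketch below is purely illustrative, and the two AFE samples are hypothetical values, not figures from Table 2:

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical AFE samples over 100 runs for two algorithms on one problem.
afe_fabc = np.random.normal(18000, 1500, size=100)
afe_other = np.random.normal(21000, 1800, size=100)

stat, p = mannwhitneyu(afe_fabc, afe_other, alternative='two-sided')
sign = '+' if p < 0.05 and afe_fabc.mean() < afe_other.mean() else '-'
print(stat, p, sign)  # '+' marks FABC as superior at the 5% level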

[Table 2 Comparison of the results of test problems, TP: Test problem. For each of f1–f31, the table reports the standard deviation (SD), mean error (ME), average number of function evaluations (AFE), and success rate (SR) of FABC, ABC, PSO-2011, DE, SMO, BSFABC, MABC, LFABC, GABC, BHABC, DiABC, MeABC, and FSMO over 100 runs.]

[Table 3 Comparison based on the Mann–Whitney U rank sum test at significance level α = 0.05 and the average number of function evaluations, TP: Test Problem. For each of f1–f31, a '+' ('−') sign indicates that FABC is superior (inferior) to ABC, PSO-2011, DE, SMO, BSFABC, MABC, LFABC, GABC, BHABC, DiABC, MeABC, and FSMO, respectively. FABC obtains a '+' on the majority of the 31 problems against every competitor, the total number of '+' signs per pairwise comparison ranging from 20 to 30.]


[Fig. 2 Boxplot graphs for the average number of function evaluations (×10^5) of FABC, ABC, PSO, DE, SMO, BSFABC, MABC, LFABC, GABC, BHABC, DiABC, MeABC, and FSMO]

4 Conclusion

This article proposes a Fibonacci sequence inspired local search (FLS) strategy. In FLS, a new solution is generated using the best and second best solutions of the search space, and the two fittest among these three solutions are retained; only the best and second best solutions are updated. FLS is hybridized with the swarm intelligence (SI) based artificial bee colony (ABC) algorithm, and the proposed hybrid is named the Fibonacci-inspired ABC (FABC) algorithm. The performance of FABC is evaluated over 31 well-known benchmark functions, demonstrating its usefulness in the field of SI-based algorithms.

References

1. Karaboga, D., Akay, B.: A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 214(1), 108–132 (2009)
2. Bansal, J.C., Sharma, H., Jadon, S.S.: Artificial bee colony algorithm: a survey. Int. J. Adv. Intell. Paradigms 5(1), 123–159 (2013)
3. Karaboga, D., Basturk, B.: On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 8(1), 687–697 (2008)
4. Fibonacci, L., Sigler, L.: Fibonacci's Liber Abaci: A Translation into Modern English of Leonardo Pisano's Book of Calculation. Springer Science & Business Media (2003)
5. Pisano, L.: Liber Abaci (1202)
6. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Fibonacci series-based local search in spider monkey optimisation for transmission expansion planning. Int. J. Swarm Intell. 3(2–3), 215–237 (2017)


7. Ali, M.M., Khompatraporn, C., Zabinsky, Z.B.: A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Global Optim. 31(4), 635–672 (2005)
8. Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.P., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC: special session on real-parameter optimization. In: CEC 2005 (2005)
9. Karaboga, D., Akay, B.: A modified artificial bee colony (ABC) algorithm for constrained optimization problems. Appl. Soft Comput. 11(3), 3021–3031 (2011)
10. Bansal, J.C., Sharma, H., Jadon, S.S., Clerc, M.: Spider monkey optimization algorithm for numerical optimization. Memetic Comput. 6(1), 31–47 (2014)
11. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
12. Storn, R., Price, K.: Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11(4), 341–359 (1997)
13. Clerc, M., Kennedy, J.: Standard PSO 2011. Particle Swarm Central Site [online] http://www.particleswarm.info (2011)
14. Zhu, G., Kwong, S.: Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 217(7), 3166–3173 (2010)
15. Banharnsakun, A., Achalakul, T., Sirinaovakul, B.: The best-so-far selection in artificial bee colony algorithm. Appl. Soft Comput. 11(2), 2888–2901 (2011)
16. Bansal, J.C., Sharma, H., Arya, K.V., Nagar, A.: Memetic search in artificial bee colony algorithm. Soft Comput. 17(10), 1911–1928 (2013)
17. Sharma, H., Bansal, J.C., Arya, K.V., Yang, X.-S.: Lévy flight artificial bee colony algorithm. Int. J. Syst. Sci. 47(11), 2652–2670 (2016)
18. Sharma, N., Sharma, H., Sharma, A., Bansal, J.C.: Modified artificial bee colony algorithm based on disruption operator. In: Proceedings of Fifth International Conference on Soft Computing for Problem Solving, pp. 889–900. Springer, Berlin (2016)
19. Sharma, N., Sharma, H., Sharma, A., Bansal, J.C.: Black hole artificial bee colony algorithm. In: International Conference on Swarm, Evolutionary, and Memetic Computing, pp. 214–221. Springer (2015)
20. Kennedy, J.: Particle swarm optimization. In: Encyclopedia of Machine Learning, pp. 760–766. Springer, Berlin (2011)
21. Sharma, A., Sharma, H., Bhargava, A., Sharma, N., Bansal, J.C.: Optimal placement and sizing of capacitor using limaçon inspired spider monkey optimization algorithm. Memetic Comput. 1–21 (2016)
22. Sharma, A., Sharma, H., Bhargava, A., Sharma, N.: Optimal power flow analysis using Lévy flight spider monkey optimisation algorithm. Int. J. Artif. Intell. Soft Comput. 5(4), 320–352 (2016)

Analysis of Lightweight Block Cipher FeW on the Basis of Neural Network

Aayush Jain and Girish Mishra

Abstract Over the past few years, several lightweight ciphers have been proposed to serve the Internet of Things (IoT). FeW is one such lightweight cipher; it uses a mix of Feistel and generalized Feistel structures to achieve high efficiency in software. This paper focuses on the analysis of the lightweight block cipher FeW using a machine learning approach, which involves using an artificial neural network to search for inherent biases in the design of FeW.

Keywords Cryptography · Cryptanalysis · Machine learning · Neural network · Lightweight cipher

1 Introduction

Today's world marks the new era of IoT, where all our data travel from device to device along with our personal and confidential information. This information needs security; hence we use cryptography, the art of studying techniques for securing information either in communication or in storage [1]. Basic cryptographic algorithms solve the primitive data-security problem, but as devices all around the world become smaller day by day, advances in cryptography are needed to overcome the problems of limited resources. This advancement has taken the form of lightweight cryptography, which has been designed and proposed over the last several years. The past two decades have provided us with a number of lightweight ciphers [2–5]. FeW is one such cipher, proposed by Kumar et al. in 2014 [2]. Although with the introduction of AES we do not need another block cipher for widespread use, we do need a lightweight cipher that ensures random characteristics in


its schema, so that its exploitation using various cryptanalytic techniques [6–9] yields the attacker nothing better than the chance probability, i.e., 0.5 (50%).

Machine learning is the ability of an intelligent system to learn, improve, and automatically develop a model for a given problem in the respective domain. Several applications of machine learning techniques in cryptanalysis [10–13] have been proposed in the past. In this paper, we use the cipher output of FeW to predict each individual bit of the 64-bit plaintext using a neural network based pattern recognition technique. We also use the available intermediate round data to predict the plaintext. Although we attempt to extract meaningful patterns using this machine learning technique, the skilfully designed lightweight cipher FeW does not appear to have any exploitable biases.

The paper is organized as follows. Section 2 contains the algorithm of the lightweight cipher. Section 3 presents the methodology for representing the ciphertext. A brief discussion of the artificial neural network is given in Sect. 4. Experimental results and analysis are shown in Sect. 5. Finally, Sect. 6 concludes the paper.

2 FeW: A Lightweight Cipher

FeW is a lightweight block cipher consisting of 32 rounds. It generates a 64-bit ciphertext from an input plaintext of the same size. FeW offers two sizes for the master key MK, 80 bits and 128 bits, giving two versions of the cipher: FeW-80 with an 80-bit key and FeW-128 with a 128-bit key [2]. A round of FeW is shown in Fig. 1.

2.1 The Encryption Process

The FeW lightweight cipher [2] divides the input plaintext into two equal halves, P0 and P1. Each half is passed through the round functions and a swap function, and the results are finally concatenated to form the single 64-bit ciphertext.

Round function F. The round function F is applied to the 32-bit word P_{i+1}, and the result is XORed with P_i to obtain P_{i+2}, for 0 ≤ i ≤ 31:

P_{i+2} ← P_i ⊕ F(P_{i+1}, K_i)

where K_i is the ith round key. F uses two different weight functions, WF1 and WF2 [2]. The weight function WF1 applies the S-box shown in Table 1 four times in parallel as its nonlinear operation, followed by cyclic shifts and XOR operations as its linear mixing operation L1.


Fig. 1 Single round illustration of FeW

Table 1 Table of S-box
x     0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
S(x)  2  E  F  5  C  1  9  A  B  4  6  8  0  7  3  D

The weight function WF2 likewise applies the S-box four times in parallel, then applies cyclic shifts to the intermediate word V and XORs the shifted copies with V to obtain the output Z; its linear mixing operation L2 differs from L1.

Swap function. The outputs of the last round are swapped:

(C0, C1) ← (P33, P32)

Concatenation. The two 32-bit outputs are concatenated to form the 64-bit ciphertext:

Cm ← C0 || C1
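A minimal Python sketch of this generalized Feistel skeleton is given below. It is our own illustration, not the authors' code; the round function F is left abstract (a faithful implementation would fill in WF1, WF2, and the mixing layers from the FeW specification [2]):

MASK32 = 0xFFFFFFFF

def few_encrypt_skeleton(p0, p1, round_keys, F):
    # p0, p1: 32-bit halves of the plaintext; round_keys: K_0..K_31.
    P = [p0 & MASK32, p1 & MASK32]
    for i in range(32):
        # P_{i+2} = P_i XOR F(P_{i+1}, K_i)
        P.append((P[i] ^ F(P[i + 1], round_keys[i])) & MASK32)
    c0, c1 = P[33], P[32]   # swap the outputs of the last round
    return (c0 << 32) | c1  # Cm = C0 || C1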


2.2 Key Scheduling for 80 Bits

FeW stores the 80-bit master key in a register named MK (MK = k79 k78 ... k0) [2]. The round subkey RK0 is obtained by extracting the leftmost 16 bits of MK. Subsequent round keys are obtained using the following method:

a. For i < 64, update MK in the following steps:
   (1) MK ≪ 13
   (2) Update bits using the S-box in the following way:
       i.   [K0 K1 K2 K3] ← S[K0 K1 K2 K3]
       ii.  [K64 K65 K66 K67] ← S[K64 K65 K66 K67]
       iii. [K76 K77 K78 K79] ← S[K76 K77 K78 K79]
   (3) [K68 K69 K70 K71 K72 K73 K74 K75] ← [K68 K69 K70 K71 K72 K73 K74 K75] ⊕ [i]2
b. Increment i by 1 and extract the leftmost 16 bits of the current contents of MK as the round subkey RKi.
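The schedule can be expressed compactly in code. The sketch below is our own illustrative Python rendering of the steps above, treating MK as an 80-bit integer with k79 as the most significant bit; the nibble positions, and the use of the pre-increment value of i in the counter XOR, are our interpretations of the bit indices, not a verified reference implementation:

SBOX = [0x2, 0xE, 0xF, 0x5, 0xC, 0x1, 0x9, 0xA,
        0xB, 0x4, 0x6, 0x8, 0x0, 0x7, 0x3, 0xD]

def rotl80(x, r):
    # Rotate an 80-bit value left by r positions.
    return ((x << r) | (x >> (80 - r))) & ((1 << 80) - 1)

def sub_nibble(mk, pos):
    # Apply the S-box to the 4-bit group at bit positions pos..pos+3.
    nib = (mk >> pos) & 0xF
    return (mk & ~(0xF << pos)) | (SBOX[nib] << pos)

def key_schedule_80(mk):
    round_keys = [mk >> 64]          # RK0: leftmost 16 bits of MK
    for i in range(64):
        mk = rotl80(mk, 13)
        mk = sub_nibble(mk, 0)       # [k0..k3]
        mk = sub_nibble(mk, 64)      # [k64..k67]
        mk = sub_nibble(mk, 76)      # [k76..k79]
        mk ^= (i & 0xFF) << 68       # XOR round counter into k68..k75
        round_keys.append(mk >> 64)  # extract RK_{i+1}
    return round_keys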

3 Representation of Ciphertext or Intermediate Round Data

The main focus is to use the available ciphertext or intermediate round data to predict each of the 64 bits of the input plaintext individually. These data are just bit sequences, and random data have no usable features. We represent our feature set by the frequencies of occurrence of 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, and 6-bit subsequences, sorted for each subsequence length. For the 4-bit, 5-bit, and 6-bit subsequences we keep only the 10 most frequent values, giving a feature set of 44 dimensions. This feature set is prepared for each ciphertext sample (a ciphertext generated from a given plaintext using a fixed key and the FeW encryption algorithm). The resulting feature sets are passed to the ANN model for training and testing.
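One plausible reading of this construction is sketched below in Python (our own illustration, not the authors' code): keeping all 2 + 4 + 8 = 14 frequencies for the 1-, 2-, and 3-bit patterns plus the top-10 sorted frequencies for the 4-, 5-, and 6-bit patterns yields 14 + 30 = 44 dimensions:

from collections import Counter

def features_44(bits):
    # bits: ciphertext as a string of '0'/'1' characters.
    feats = []
    for n in (1, 2, 3, 4, 5, 6):
        counts = Counter(bits[i:i + n] for i in range(len(bits) - n + 1))
        freqs = sorted(counts.values(), reverse=True)
        top = 2 ** n if n <= 3 else 10      # all patterns, or top 10
        freqs = (freqs + [0] * top)[:top]   # pad if fewer patterns occur
        feats.extend(freqs)
    return feats                            # 2+4+8+10+10+10 = 44 values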

4 Artificial Neural Network

An artificial neural network (ANN) works similarly to our brain. A multilayer perceptron, composed of heavily connected processing units called neurons arranged in an input layer, hidden layer(s), and an output layer, is generally termed an ANN. The nodes, or neurons, work in parallel to acquire the state, or knowledge, needed to solve a particular problem. The Neural Network Pattern Recognition toolbox of MATLAB trains a neural network to assign the correct target classes to a set of given samples [14]. For training, the samples not used for training are divided into two categories, Validation and Testing. By


default, the Training set gets 70%, Validation 15%, and Testing the remaining 15% of the input samples. Performance is then evaluated on the basis of mean squared errors and the confusion plot. The number of features in a sample determines the number of neurons in the input layer. The acquired knowledge is stored in the synaptic weights, which are the interconnection strengths between neurons of different layers.

5 Methodology and Results

Bitstream data from all rounds are collected and used to try to predict each bit of the plaintext of the lightweight cipher FeW with a probability better than the chance probability. For the experiment, we took 10,000 plaintexts and generated the corresponding 10,000 ciphertexts using the same key. The intermediate round data were also collected for each of the 32 rounds of the FeW algorithm. The Pattern Recognition tool of the Neural Network (NN) toolbox in MATLAB [14] was used to predict all 64 bits of the plaintext from the ciphertext or from the intermediate round data. The data sets are split with the default values of Training (70%), Validation (15%), and Testing (15%); weights are tuned according to the error during training. A two-layer feedforward network with sigmoid hidden and output neurons is used for this problem. After validating the NN model, we take ciphertexts as sample inputs: the model takes the sample inputs and predicts, say, the first bit of the plaintext, and if the prediction accuracy is significantly higher than the chance probability, the model is considered to have learned some pattern present in the FeW algorithm. We do the same with the intermediate round samples. The method thus checks whether any bit of the plaintext can be predicted with a probability better than chance; if so, the bias could be exploited by attackers. In total, we obtain 32 sets of bitstream data, one for each of the 32 rounds of the FeW lightweight cipher, and for each set, 64 different NN models are trained and validated, each predicting one of the 64 bits of the plaintext. Figure 2 shows the detailed results of this method. We observe that the probability of guessing any bit of the 64-bit plaintext is not significantly greater than 50%; the predictions range from 45.2% to 54.3%. The complete analysis indicates that there are no biases in the lightweight cipher FeW that can be exploited by the neural network based pattern recognition approach using the mentioned features.
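An equivalent experiment can be set up outside MATLAB; the sketch below uses scikit-learn as a stand-in for the NN pattern-recognition toolbox. It is our own illustration with hypothetical arrays X (the 44-dimensional feature sets) and y (one chosen plaintext bit), not the authors' setup:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(10000, 44)        # hypothetical 44-dim feature sets
y = np.random.randint(0, 2, 10000)   # hypothetical target plaintext bit

# Hold out 30%, mirroring the 70/15/15 split (validation plus testing).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3)

# Two-layer feedforward network with sigmoid ("logistic") activations.
clf = MLPClassifier(hidden_layer_sizes=(20,), activation='logistic',
                    max_iter=500).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # accuracy near 0.5 => no exploitable bias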


Fig. 2 Success (%) in prediction of bits at 64-bit positions of plaintext

6 Conclusion

In this paper, neural network based pattern recognition (NNPR) is used to analyze the lightweight cipher FeW. NNPR exploits any biases inherently present in the schema of a lightweight cipher. The results of the discussion and experiments show that the FeW algorithm nullifies attacks by the NNPR technique. In the future, other block ciphers could be analyzed using other machine learning tools.

References

1. Stallings, W.: Cryptography and Network Security: Principles and Practice, 7th edn. Pearson Education, India (2017)
2. Kumar, M., Pal, S.K., Panigrahi, A.: FeW: a lightweight block cipher. IACR Cryptology ePrint Archive, p. 326 (2014)
3. Bogdanov, A., Knudsen, L., Leander, G., Paar, C., Poschmann, A., Robshaw, M., Seurin, Y., Vikkelsoe, C.: PRESENT: an ultra-lightweight block cipher. In: CHES, vol. 4727, pp. 450–466. Springer, Vienna, Austria (2007)
4. Banik, S., Bogdanov, A., Isobe, T., Shibutani, K., Hiwatari, H., Akishita, T., Regazzoni, F.: Midori: a block cipher for low energy. In: International Conference on the Theory and Application of Cryptology and Information Security, pp. 411–436. Springer, Berlin, Heidelberg (2014)
5. Listing of Lightweight Block Ciphers. https://www.cryptolux.org/index.php/Lightweight_Block_Ciphers. Accessed 12 Dec 2017


6. Cho, J.Y.: Linear cryptanalysis of reduced-round PRESENT. In: CT-RSA, vol. 5985, pp. 302–317. Springer (2010)
7. Collard, B., Standaert, F.X.: A statistical saturation attack against the block cipher PRESENT. In: CT-RSA, vol. 5473, pp. 195–210. Springer (2009)
8. Özen, O., Varıcı, K., Tezcan, C., Kocair, C.: Lightweight block ciphers revisited: cryptanalysis of reduced round PRESENT and HIGHT. In: Proceedings of the 14th Australasian Conference, ACISP, pp. 90–107. Brisbane, Australia (2009)
9. Wang, M.: Differential cryptanalysis of PRESENT. Cryptology ePrint Archive, p. 408 (2007)
10. Albassal, A.M.B., Wahdan, A.M.: Genetic algorithm cryptanalysis of a Feistel type block cipher. In: International Conference on Electrical, Electronic and Computer Engineering, pp. 217–221. Egypt (2004)
11. Graepel, T., Lauter, K., Naehrig, M.: ML confidential: machine learning on encrypted data. In: Kwon, T., Lee, M.K., Kwon, D. (eds.) Information Security and Cryptology—ICISC 2012. Lecture Notes in Computer Science, vol. 7839. Springer, Berlin, Heidelberg (2013)
12. Martin, Z., Hajny, J., Malina, L.: Optimization of power analysis using neural network. In: Francillon, A., Rohatgi, P. (eds.) CARDIS 2013, LNCS, vol. 8419. Springer (2014)
13. Shivgurunathan, G., Rajendran, V., Purusothaman, T.: Classification of substitution ciphers using neural networks. Int. J. Comput. Sci. Netw. Secur. 10(3), 274–279 (2010)
14. Neural Network Toolbox, MATLAB version 17b, The MathWorks Inc. (2017)

Analysis of RC4 Crypts Using PSO Based Swarm Technique

Maiya Din, Saibal K. Pal and S. K. Muttoo

Abstract The RC4 cryptosystem is a nonlinear byte-level encryption system. It is among the most widely used cryptosystems, created by Rivest for RSA Security Inc. It has two modules, known as the pseudo-random number generator and the key scheduler; the key scheduler transforms an initial permutation into a random key permutation. Computational Swarm Intelligence (CSI) is a popular branch of Artificial Intelligence (AI), and CSI-based techniques have been used to solve many hard optimization problems. In this research paper, a novel Discrete Particle Swarm Optimization (DPSO)-based technique is applied to solve RC4 stream cipher based crypts. The authors attempt to find the key bits using this DPSO-based swarm technique, reducing the number of exhaustive searches significantly. According to the obtained results, correct key bits are computed for crypts of length 100–300 characters.

Keywords Stream cipher · RC4 cryptosystem · Cryptanalysis · Swarm intelligence · Particle swarm optimization

1 Introduction

Cryptology studies methods that ensure information secrecy. Cryptography and cryptanalysis are its two branches: cryptography studies the design of cryptosystems, while cryptanalysis studies the breaking of crypts or cryptosystems to retrieve vital information. Cryptanalysis is the science of recovering the plaintext corresponding to a ciphertext without knowing the secret key used. The cryptanalysis of any cryptosystem can be formulated as an optimization problem.


Computational swarm algorithms [1, 2] are employed in an attempt to find an optimal solution of the crypto problem. RC4 is a software-oriented cryptosystem whose key size ranges from 40 to 256 bits. The RC4 Key-Scheduling Algorithm (KSA) converts an initial permutation S of {0, 1, …, N − 1} into a random key permutation, where N is the size of the permutation, and the PRGA module generates a pseudo-random output key sequence. This stream cipher is known for its speed in software and its simplicity. The system becomes more vulnerable to a cryptanalyst when two messages are encrypted with the same keystream and/or when the beginning of the output keystream is not discarded before encryption.

Artificial Intelligence is defined as the study and design of 'intelligent' agents. Intelligence is now also being derived from biological aspects of nature, in particular the collective behavior exhibited by various groups of insects, birds, and animals. Such a group is called a swarm, and the collective pattern shown by all members of the group is termed 'swarm intelligence'. Beni and Wang introduced the term "Swarm Intelligence" in 1989, referring to a set of procedures for controlling a robotic swarm in the global optimization problem. Swarm intelligence has found application in many real-life problems.

This research paper reviews swarm techniques applied in cryptanalysis in Sect. 2; Sect. 3 gives a brief description of the PSO-based swarm technique. Details of the RC4 cryptosystem are given in Sect. 4, the results achieved are discussed in Sect. 5, and the current trend of swarm techniques for solving cryptosystems is given in the last section.

2 Literature Review

In 1995, Roos [3] observed a correlation between the first byte of the RC4 keystream and the first three bytes of the key used; the keystream generated by RC4 is biased in varying degrees toward certain sequences. Mantin and Shamir [4] proved in a correlation attack that the second output byte is biased toward zero with probability 1/128, whereas it should be 1/256: when the third byte of the original state is 0 and the second byte is not equal to 2, the second output byte is always 0. In 2008, Klein [5] described a cryptanalysis of the RC4 cipher showing further correlations between the output key sequence and the applied keys. Swarm intelligence based techniques are useful in the analysis of stream ciphers such as LFSR-based cryptosystems. Heydari et al. applied cuckoo search to the automated cryptanalysis of transposition ciphers [6]. Bhateja et al. applied a PSO technique with Markov chain random walk to the analysis of Vigenere crypts [7], proposing the random walk to enhance the performance of the PSO algorithm. Bhateja et al. [8] also applied cuckoo search to the cryptanalysis of the Vigenere cipher, and Din et al. [9] applied a cuckoo search based technique to the analysis of an LFSR-based cryptosystem to find the initial bits of the LFSRs used.


For the cryptanalysis of block ciphers, Wafaa et al. presented a known-plaintext attack on DES-16 using the PSO technique [10], Jadon et al. proposed a binary PSO (BPSO) technique [11] to recover the key bits of DES-16, and Dadhich and Yadav applied an SI-based technique to the cryptanalysis of the 4-round DES cryptosystem [12].

3 Particle Swarm Optimization-Based Techniques

Many swarm intelligence based techniques have been devised and are being applied successfully in cryptanalysis. Here, a novel discrete PSO-based swarm technique is proposed for analyzing encrypted messages of the RC4 stream cipher.

3.1 Particle Swarm Optimization (PSO)

PSO was the second successful swarm intelligence model, introduced by Eberhart and Kennedy [13]. It was initially used to solve nonlinear continuous optimization problems, but its application has since been extended to many other discrete optimization problems. The PSO algorithm is based on the following two equations:

v_id(k + 1) = w * v_id(k) + c1 * r1 * (p_id − x_id(k)) + c2 * r2 * (p_gd − x_id(k))  (1)

x_id(k + 1) = x_id(k) + v_id(k + 1)  (2)

Here c1 and c2 are positive constants, r1 and r2 are random values in [0, 1], and k is the iteration index. x_i = (x_i1, x_i2, x_i3, …, x_id) is the position of the ith particle; p_i = (p_i1, p_i2, p_i3, …, p_id) represents its previous best position, and p_gd represents the global best position. v_i = (v_i1, v_i2, v_i3, …, v_id) is the velocity of the ith particle and w is the inertia weight.
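Equations (1) and (2) translate directly into code; the following NumPy sketch (ours, not from the paper) performs one velocity-and-position update for a whole swarm, with default parameter values taken from the experiments reported in Sect. 5:

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.85, c1=2.1, c2=2.1):
    # x, v, pbest: arrays of shape (n_particles, n_dims); gbest: (n_dims,)
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x = x + v                                                  # Eq. (2)
    return x, v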

3.2 Discrete PSO-Based Technique to Analyze RC4 Crypts

A discrete PSO-based technique is developed for the cryptanalysis of RC4 generator based crypts [13]. In DPSO, bits are mutated in terms of probabilities; accordingly, particle velocity and position are updated as per Eqs. (1) and (2), respectively.


for k = 0 to number of iterations
  for i = 1 to no_particles (np)
    for d = 1 to dimension (nd)
      Update velocity using Eq. (1)
      Update position using Eq. (2)
    end for
    Calculate fitness_value at the updated position
    if needed then
      Update Pbest and Pgbest
    end if
  end for
  Exit if Pgbest is achieved
end for

Fig. 1 Algorithm of DPSO technique

The particle fitness_value is based on linguistic features (monograms and bigrams) of the English language and is computed as Fitness_Value = Σ_{i=1}^{20} f_i * w_i, where f_i and w_i are the feature frequencies and the corresponding weights [14]. We considered the 10 most frequent monograms and bigrams to decipher crypts during experimentation. For English text, the cost value defined above lies in the interval [2.5L, 3.5L], where L is the length of the text. The global best particle position corresponds to the correct solution, which gives the correct key for decryption (Fig. 1). Each particle position is represented as a numeric string of length N; the dimension of a particle position is taken equal to the key length of RC4, so the key space is 2^N.
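A compact sketch of such a monogram/bigram fitness function is shown below. It is our own illustration; the feature weights here are placeholders, not the weights of [14], and a full setup would list the 10 most frequent monograms and 10 most frequent bigrams (20 features in all):

def fitness(text, weights):
    # weights: dict mapping monogram/bigram features to their weights w_i.
    score = 0.0
    for feature, w in weights.items():
        # f_i: number of occurrences of the feature in the decrypted text.
        f = sum(1 for i in range(len(text) - len(feature) + 1)
                if text[i:i + len(feature)] == feature)
        score += f * w
    return score

# Placeholder weights, e.g. for 'e', 't', ... and 'th', 'he', ...
weights = {'e': 2.0, 't': 1.5, 'th': 3.0, 'he': 2.5}
print(fitness("the theory", weights))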

4 Details of the RC4 Cryptosystem

The RC4 stream cipher [15, 16] is a variable key-size crypto algorithm based on byte-oriented operations. In this algorithm, the KSA forms the first stage, initializing the permutation vector S, and the PRGA plays the role of the stream generator, as shown in Fig. 2.

4.1 The Key-Scheduling Algorithm

The KSA generates the initial permutation S of {0, …, N − 1} from a (random) key of length l bytes; the key length lies between 5 and 32 characters, i.e., a maximum of 256 bits. The vector S is first initialized as {0, …, N − 1}


Fig. 2 RC4 cryptosystem

Fig. 3 Key-scheduling algorithm:
for i from 0 to N - 1
  S[i] = i
end for
j = 0
for i from 0 to N - 1
  j = (j + S[i] + key[i mod keylength]) mod N
  swap(S[i], S[j])
end for

and then the bytes of the key are mixed into it according to the algorithm of Fig. 3.

4.2 The Pseudo-Random Generation Algorithm

To generate a pseudo-random key sequence, the PRGA uses the permutation S produced by the KSA. Each output byte is calculated by taking the values S[i] and S[j] and looking up their sum (mod N) in S; the output bytes form the key sequence used in the enciphering process (Fig. 4). The performance of RC4 in software is remarkable since it requires only byte-oriented operations: it uses 256 bytes of memory for the array S, l bytes to store the key, and storage for the integer variables i and j. The DPSO-based technique of Sect. 3.2 is implemented and tested on RC4 crypts (length 100–300 characters) for different key lengths (40–48 bits).


5 Discussion of Experimental Results

The DPSO technique is implemented in the C programming language and tested on a 2.8 GHz PC. The program is tested on English text crypts with different keys and PSO parameters: number of particles (np) = 50–100, inertia weight (w) = 0.7–1.0, and acceleration constants (C1, C2) = 1.5–2.5, with a maximum of 10^9 iterations. The obtained results are given in Tables 1, 2 and 3. According to the results in Table 3, the CPU time taken to solve the considered crypt with a 48-bit key is 33775.62 s for the PSO parameters (np = 100, w = 0.85, C1 = 2.1, C2 = 2.1). The results show that the CPU time needed to find the correct solution increases with key length.

Fig. 4 Pseudo-random generation algorithm:
i = j = 0
do
  i = (i + 1) mod N
  j = (j + S[i]) mod N
  swap(S[i], S[j])
  Output_Byte = S[(S[i] + S[j]) mod N]
while required key sequence is generated
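The KSA of Fig. 3 and the PRGA of Fig. 4 combine into the complete cipher; the sketch below is a straightforward Python rendering (ours, for illustration) of standard RC4:

def rc4_keystream(key, length, N=256):
    # KSA: build the initial permutation S from the key (Fig. 3).
    S = list(range(N))
    j = 0
    for i in range(N):
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    # PRGA: generate the pseudo-random key sequence (Fig. 4).
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % N])
    return out

# Encryption (and decryption) is a byte-wise XOR with the keystream.
msg = b"attack at dawn"
ks = rc4_keystream(list(b"Amit1"), len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, ks))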

Table 1 Computational time for RC4 crypts (length: 300 characters)
S. No.  Key     Key length  np   Iterations  CPU time (s)
1       Amit1   40          50   2749        10040.48
2       Xero5   40          50   3826        17495.00
3       abcde   40          50   4708        20492.11
4       Amit1*  48          100  23436       33029.44
5       abcde*  48          100  29820       34985.16

Table 2 Computational time for RC4 crypts (length: 200 characters)
S. No.  Key     Key length  np   Iterations  CPU time (s)
1       Amit1   40          50   2549        9840.48
2       Xero5   40          50   3806        17085.51
3       abcde   40          50   4607        19192.31
4       Amit1*  48          100  21730       32926.13
5       abcde*  48          100  25729       34015.36


Table 3 Computational time for RC4 crypts (length: 100 characters)
S. No.  Key     Key length  np   Iterations  CPU time (s)
1       Amit1   40          50   2356        9597.13
2       Xero5   40          50   3621        16835.23
3       abcde   40          50   4416        18951.48
4       Amit1*  48          100  20549       32676.93
5       abcde*  48          100  26623       33775.62

6 Conclusion

In this paper, the analysis of RC4-based crypts is carried out by finding the correct key used in encryption. A novel DPSO-based technique has been developed to compute the correct key bits, generate the key sequence, and consequently decipher the crypt. According to the obtained results, the technique is able to solve the considered encrypted texts in reasonable time; as shown in Table 3, it solves an RC4 crypt with the 48-bit key (abcde*) in a CPU time of 33775.62 s. The technique can also be applied to other stream ciphers such as LFSR-based and Geffe generator based crypts. Parallel implementation of the proposed technique is another research direction for reducing the computational effort involved in decrypting RC4 crypts.

References

1. Panigrahi, B.K., Shi, Y., Lim, M.H.: Handbook of Swarm Intelligence Series: Adaptation, Learning, and Optimization, vol. 7. Springer, Berlin, Heidelberg (2011)
2. Yang, X.S., Cui, Z., Xiao, R., Gandomi, A.H.: Swarm Intelligence and Bio-Inspired Computations: Theory and Applications. Elsevier, London (2013)
3. Roos, A.: Class of Weak Keys in the RC4 Stream Cipher. Post in sci.crypt (1995)
4. Fluhrer, S., Mantin, I., Shamir, A.: Weakness in the Key Scheduling Algorithm of RC4, LNCS 2259, pp. 1-24. Springer, Heidelberg (2001)
5. Klein, A.: Attacks on the RC4 stream cipher. J. Des. Code Crypt. 48(3), 269-286 (2008)
6. Heydari, M., Senejani, M.N.: Automated cryptanalysis of transposition ciphers using cuckoo search algorithm. Int. J. Comput. Sci. Mob. Comput. 3(1), 140-149 (2014)
7. Bhateja, A.K., et al.: Cryptanalysis of Vigenere cipher using PSO with Markov chain random walk. Int. J. Comput. Sci. Eng. 5(5), 422-429 (2013)
8. Bhateja, A.K., et al.: Cryptanalysis of Vigenere cipher using cuckoo search. Appl. Soft Comput. 26, 315-324 (2015)
9. Din, M., et al.: Applying cuckoo search in analysis of LFSR based cryptosystem. J. Perspect. Sci. 8, 435-439 (2016)
10. Wafaa, G.A., Ghali, N.I., Hassanien, A.E., Abraham, A.: Known-plaintext attack of DES using particle swarm optimization. In: Third World Congress on Nature and Biologically Inspired Computing (NaBIC), pp. 12-16. IEEE (2011)
11. Jadon, S.S., Sharma, H., Kumar, E., Bansal, J.C.: Application of binary PSO in cryptanalysis of DES. In: Proceedings of International Conference SoCProS-2011, pp. 1061-1071


12. Dadhich, A., Yadav, S.K.: Swarm intelligence and evolutionary computation based cryptography and cryptanalysis of 4-round DES algorithm. Int. J. Adv. Res. Comput. Eng. Technol. 3(5) (2014)
13. Kennedy, J., Eberhart, R.C.: A Discrete Binary Version of the Particle Swarm Optimization, pp. 4104-4108. IEEE, Orlando (1997)
14. Norvig, P.: English Letter Frequency Counts: Mayzner Revisited. http://norvig.com/mayzner.html
15. Klein, A.: Stream Ciphers. Springer, London (2013)
16. Stinson, D.R.: Cryptography: Theory and Practice, 3rd edn. Chapman & Hall/CRC, Boca Raton (2013)

Pipe Size Design Optimization of Water Distribution Networks Using Water Cycle Algorithm P. Praneeth, A. Vasan and K. Srinivasa Raju

Abstract A simulation-optimization model, WCANET, which combines the water cycle optimization algorithm with the EPANET hydraulic simulation software to tackle the least-cost design of water distribution networks, is proposed. The developed model aims to achieve optimal design outputs within the hydraulic and design constraints of the problem. The model has been tested against two benchmark water distribution network problems, namely (1) the Hanoi water distribution network and (2) the two-loop network problem. The study shows promising results for WCANET compared to earlier studies. Analysis of the results suggests that WCANET can be considered for the planning of water distribution networks, with high economic benefits.

Keywords Water cycle algorithm · Optimization · EPANET · Water distribution systems

1 Introduction

Efficient design of water distribution systems is gaining prominence due to limited resources and complex networks that require least-cost analysis. In addition, nonlinearity and discrete variables make the design problem more complex. This necessitates exploring meta-heuristic algorithms to find close-to-optimal solutions. Many meta-heuristic algorithms have been applied to the design of water distribution networks, such as differential evolution, the shuffled frog leaping algorithm, genetic algorithms, ant colony optimization, shuffled complex evolution, and particle swarm optimization [1-8], and they have shown significantly improved results compared to traditional optimization methods such as linear and nonlinear programming.

P. Praneeth · A. Vasan (B) · K. Srinivasa Raju Department of Civil Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad Campus, Hyderabad, Telangana, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_99


This paper focuses on the application of WCANET, a simulation-optimization model that combines the water cycle algorithm (WCA) [9] with the EPANET hydraulic simulation software, to the Hanoi water distribution network and the two-loop network, and compares the results with other widely used algorithms available in the literature. The following sections cover the mathematical model for cost minimization, the water cycle algorithm and WCANET, results and discussion, and conclusions and future scope.

2 Mathematical Model Formulation

A least-cost optimization problem is defined in which the diameters of the pipes are the decision variables. The objective function used to solve the Hanoi and two-loop networks is formulated as in [8]; a minimal sketch of this objective is given below.
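The following is a rough sketch of that least-cost objective, under the assumption (consistent with [8]) that it is, at its core, the sum of diameter-dependent unit costs times pipe lengths; constraint handling is added in the WCANET model of Sect. 4. The names here are illustrative.

def network_cost(diameters, lengths, unit_cost):
    """Least-cost objective: unit cost (per metre, keyed by diameter) x pipe length."""
    return sum(unit_cost[d] * L for d, L in zip(diameters, lengths))

# Example with the two-loop data of Table 4 (all pipe lengths 1000 m):
# unit_cost = {25.4: 2, 50.8: 5, 76.2: 8, ...}
# cost = network_cost(chosen_diameters, [1000.0] * 8, unit_cost)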

3 WCA (Water Cycle Algorithm)

The water cycle algorithm, as its name suggests, draws inspiration from the water cycle in nature. It was developed by [9]. The algorithm mimics how water droplets from rain form streams, streams form rivers, and rivers eventually move downhill into the sea. The rivers are formed higher up in the mountains and flow down toward the sea. The algorithm creates rivers and streams from the raindrops (initial random solutions) and moves them towards the sea. In the algorithm, the rivers and streams are analogous to good solutions, and the sea is the best solution. The algorithm updates itself in each iteration, replacing the old sea with the new sea and, similarly, the old rivers with the new ones (if any). An evaporation condition is set in place to avoid being stuck in local optima. The step-by-step working of the algorithm is as follows (an illustrative code sketch is given after this list):

1. Choose the WCA parameters: N_sr (number of streams and rivers), d_max (evaporation constant), N_pop (population size), and max_iterations.
2. Generate the initial random population and form the initial rivers, streams, and sea.
3. Evaluate the cost of each raindrop.
4. Compute the intensity of raindrops flowing into rivers or the sea using

   NS_n = round( | Cost_n / sum_{i=1}^{N_sr} Cost_i | x N_Raindrops ),  n = 1, 2, ..., N_sr   (1)

   where NS_n is the number of streams that flow into the rivers or sea.
5. Streams flow to rivers using

   X_Stream^{i+1} = X_Stream^i + rand x C x (X_River^i - X_Stream^i)   (2)


6. Rivers flow towards the sea using

   X_River^{i+1} = X_River^i + rand x C x (X_Sea^i - X_River^i)   (3)

7. If a stream has better fitness than its river, their positions are interchanged.
8. If a river has better fitness than the sea, their positions are interchanged (similar to step 7).
9. Check the evaporation condition

   if | X_Sea^i - X_River^i | < d_max,  i = 1, 2, 3, ..., N_sr - 1   (4)

   where d_max is a small number. The above condition indicates that if a river is very close to the sea, it can be assumed to have reached the sea; to avoid being stuck in local optima, evaporation of such solutions is performed.
10. Once the evaporation condition is met, new streams are formed (raining) using Eq. (5), and to increase convergence and intensify the search near the sea, Eq. (6) is used:

   X_Stream^new = LB + rand x (UB - LB)   (5)
   X_Stream^new = X_Sea + sqrt(mu) x randn(1, N_var)   (6)

11. d_max is then iteratively reduced using

   d_max^{i+1} = d_max^i - d_max^i / max_iteration   (7)

   This iterative reduction of d_max intensifies the search around the sea (the optimum solution).
12. Steps 5 through 11 are repeated until the termination criterion is met. In this model, the termination criterion is max_iter, which is set to the desired value.
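A minimal sketch of one iteration of steps 5-10 is given below, assuming for simplicity that each stream is pre-assigned to one river; fitness evaluation and the position swaps of steps 7-8 are omitted, and all names are illustrative rather than a reference implementation.

import numpy as np

rng = np.random.default_rng()

def wca_iteration(streams, rivers, assign, sea, lb, ub, d_max, C=2.0):
    """One WCA movement step: streams chase rivers, rivers chase the sea.

    assign[i] gives the river that stream i flows toward; sea is the best solution."""
    for i, r in enumerate(assign):                                     # Eq. (2)
        streams[i] += rng.random(streams[i].shape) * C * (rivers[r] - streams[i])
    for k in range(len(rivers)):                                       # Eq. (3)
        rivers[k] += rng.random(rivers[k].shape) * C * (sea - rivers[k])
        if np.linalg.norm(sea - rivers[k]) < d_max:                    # Eq. (4): evaporation
            rivers[k] = lb + rng.random(rivers[k].shape) * (ub - lb)   # Eq. (5): raining
    return streams, rivers

# Eq. (7): d_max shrinks every iteration to intensify the search near the sea:
# d_max -= d_max / max_iteration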

4 WCANET

WCANET is a simulation-optimization model developed to integrate the water cycle algorithm with the EPANET hydraulic solver. In this model, EPANET 2.0 is used to evaluate the feasibility of the networks (solutions) obtained from the water cycle algorithm after encoding them to the nearest available discrete diameters. In the case of infeasible solutions (solutions that violate the minimum pressure requirement at a given node), a penalty value is added to the network cost, allowing infeasible solutions to be discarded; a sketch of this evaluation is given below. The penalty function approach has certain drawbacks: (1) the penalty parameters are problem dependent, and (2) the parameters need fine-tuning so that the solutions converge to the feasible domain. WCANET is applied to the Hanoi water distribution network, Vietnam, and to the two-loop network.
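A minimal sketch of this penalized evaluation is shown below; simulate_pressures is a hypothetical stand-in for the EPANET 2.0 hydraulic run (not a real EPANET API call), and the penalty constant is a placeholder that, as noted above, would need problem-specific tuning.

def penalized_cost(diameters, lengths, unit_cost, simulate_pressures,
                   h_min=30.0, penalty=1e7):
    """Network cost plus a penalty for nodes violating the minimum pressure head.

    simulate_pressures(diameters) stands in for an EPANET run returning nodal heads (m)."""
    cost = sum(unit_cost[d] * L for d, L in zip(diameters, lengths))
    heads = simulate_pressures(diameters)
    violation = sum(max(0.0, h_min - h) for h in heads)   # total head deficit
    return cost + penalty * violation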


5 Results and Discussion

5.1 Hanoi Water Distribution Network

Figure 1 presents the configuration of the Hanoi water distribution network [11]. The network consists of 32 nodes and 34 pipes organized in three loops. With 6 commercially available diameters, the resulting network yields 6^34 = 2.87 x 10^26 possible designs. Thirty meters is fixed as the minimum required pressure head for all nodes. Tables 1 and 2 present the input data for the problem.

Fig. 1 Hanoi water distribution network [10]


Table 1 Pipe data of the Hanoi water distribution network [11]

Node No.   | Demand (m3/h) | Link index | Length (m)
1 (source) | -19,940       | 1          | 100
2          | 890           | 2          | 1350
3          | 850           | 3          | 900
4          | 130           | 4          | 1150
5          | 725           | 5          | 1450
6          | 1005          | 6          | 450
7          | 1350          | 7          | 850
8          | 550           | 8          | 850
9          | 525           | 9          | 800
10         | 525           | 10         | 950
11         | 500           | 11         | 1200
12         | 560           | 12         | 3500
13         | 940           | 13         | 800
14         | 615           | 14         | 500
15         | 280           | 15         | 550
16         | 310           | 16         | 2730
17         | 865           | 17         | 1750
18         | 1345          | 18         | 800
19         | 60            | 19         | 400
20         | 1275          | 20         | 2200
21         | 930           | 21         | 1500
22         | 485           | 22         | 500
23         | 1045          | 23         | 2650
24         | 820           | 24         | 1230
25         | 170           | 25         | 1300
26         | 900           | 26         | 850
27         | 370           | 27         | 300
28         | 290           | 28         | 750
29         | 360           | 29         | 1500
30         | 360           | 30         | 2000
31         | 105           | 31         | 1600
32         | 805           | 32         | 150
           |               | 33         | 860
           |               | 34         | 950

Table 2 Pipe costs of the Hanoi water distribution network [11]

Diameter (mm) | Unit cost ($/m)
304.8         | 45.7
406.4         | 70.4
508           | 98.4
609.6         | 129.3
762           | 180.8
1016.00       | 278.3

5.2 Two-Loop Network

The optimization problem of the two-loop network (Fig. 2) was first presented and solved by Alperovits and Shamir [12]. The network consists of 7 nodes connected by 8 pipes, each of length 1000 m. A value of 130 is fixed as the Hazen-Williams coefficient for all pipes, and 30 m is fixed as the minimum required pressure head for all nodes. With 14 commercially available diameters, the resulting network yields 14^8 = 1,475,789,056 possible solutions. Tables 3 and 4 present the data for the problem.

Fig. 2 Two loop network [8]

Table 3 Network data of the two-loop network [8]

Node          | Demand (m3/h) | Ground level (m)
1 (reservoir) | -1120.00      | 210
2             | 100           | 150
3             | 100           | 160
4             | 120           | 155
5             | 270           | 150
6             | 330           | 165
7             | 200           | 160

Table 4 Pipe costs of the two-loop network [8]

Diameter (mm) | Unit cost ($/m)
25.4          | 2
50.8          | 5
76.2          | 8
101.6         | 11
152.4         | 16
203.2         | 23
254           | 32
304.8         | 50
355.6         | 60
406.4         | 90
457.2         | 130
508           | 170
558.8         | 300
609.6         | 550

5.3 Hanoi Water Distribution Network

The WCANET model is applied to the Hanoi water distribution network. The optimum cost of $6,124,284 is found in 197 iterations for the parameters n = 500 (population), N_sr = 4 (number of streams and rivers), d_max = 100 (evaporation condition), and max iterations = 1000. The number of function evaluations (NFEs) is 98,500. The convergence of the optimal solution to this problem is shown in Fig. 3. The results produced by WCANET are promising and very close to the best solution available in the literature, i.e., $6.081 million. In Table 5, it can be noted that there are other optimum cost designs, such as [1, 6], which report optimum values of $6.056 million and $6.073 million; however, those solutions have pressure violations and do not meet the requirement of a minimum 30 m pressure head at all nodes (Tables 6 and 7).


Fig. 3 Convergence plot (Hanoi water distribution network)

Table 5 Comparison of cost and NFEs obtained from previous literature to present study (Hanoi water distribution network)

S. No. | Authors                       | Optimization algorithm | NFEs    | Cost in million ($)
1      | Present study (WCANET)        | WCA                    | 98,500  | 6.124
2      | Vasan and Simonovic [3]       | DE                     | 56,201  | 6.195
3      | Suribabu [4]                  | DE                     | 48,724  | 6.081
4      | Suribabu and Neelakantan [13] | PSO                    | 6600    | 6.081
5      | Eusuff and Lansey (2003) [6]  | Shuffled leapfrog      | 26,987  | 6.073*
7      | Savic and Walters [1]         | Genetic algorithm      | 100,000 | 6.073*

*Solutions which have pressure violations

5.4 Two-Loop Network

The WCA simulation-optimization model is applied to the two-loop network. The optimum cost of $419,000 is found in 22 iterations for the parameters n = 100 (population), N_sr = 4 (number of streams and rivers), d_max = 100 (evaporation condition), and max iterations = 1000. The optimal link diameters found were [457.2, 254, 406.4, 101.6, 406.4, 254, 254, 25.4] mm.


Table 6 Comparison of solutions obtained from previous literature to present study (Hanoi water distribution network); pipe diameters in mm

Pipe No. | Savic and Walters (GA) | Suribabu (DE) | Vasan and Simonovic (DE) | Present study (WCANET)
1        | 1016   | 1016   | 1016   | 1016
2        | 1016   | 1016   | 1016   | 1016
3        | 1016   | 1016   | 1016   | 1016
4        | 1016   | 1016   | 1016   | 1016
5        | 1016   | 1016   | 1016   | 1016
6        | 1016   | 1016   | 1016   | 1016
7        | 1016   | 1016   | 1016   | 1016
8        | 1016   | 1016   | 1016   | 1016
9        | 762    | 1016   | 762    | 1016
10       | 762    | 762    | 762    | 762
11       | 762    | 609.6  | 762    | 609.6
12       | 609.6  | 609.6  | 609.6  | 609.6
13       | 406.4  | 508    | 406.4  | 508
14       | 406.4  | 406.4  | 406.4  | 304.8
15       | 304.8  | 304.8  | 304.8  | 304.8
16       | 406.4  | 304.8  | 406.4  | 304.8
17       | 508    | 406.4  | 508    | 508
18       | 609.6  | 609.6  | 609.6  | 508
19       | 609.6  | 508    | 609.6  | 508
20       | 1016   | 1016   | 1016   | 1016
21       | 508    | 508    | 508    | 508
22       | 304.8  | 304.8  | 304.8  | 304.8
23       | 1016   | 1016   | 1016   | 1016
24       | 762    | 762    | 762    | 762
25       | 762    | 762    | 762    | 762
26       | 508    | 508    | 508    | 508
27       | 304.8  | 304.8  | 304.8  | 304.8
28       | 304.8  | 304.8  | 304.8  | 304.8
29       | 406.4  | 406.4  | 406.4  | 406.4
30       | 406.4  | 304.8  | 406.4  | 304.8
31       | 304.8  | 304.8  | 304.8  | 304.8
32       | 304.8  | 406.4  | 304.8  | 1016
33       | 406.4  | 406.4  | 406.4  | 406.4
34       | 508    | 609.6  | 508    | 609.6
Cost in million | $6.195 | $6.081 | $6.195 | $6.124


Table 7 Comparison of NFEs obtained for global optimum of $419,000 from previous literature to present study (two-loop network)

S. No. | Authors                       | Optimization algorithm | Number of function evaluations
1      | Present study (WCANET)        | WCA                    | 2200
2      | Suribabu [4]                  | DE                     | 4750
3      | Suribabu and Neelakantan [11] | PSO                    | 5138
5      | Liong and Atiquzzaman [7]     | Shuffled complex       | 1019
6      | Eusuff and Lansey [6]         | Shuffled leapfrog      | 11,155
8      | Savic and Walters [1]         | GA                     | 65,000

Fig. 4 Convergence plot (two loop network)

The number of function evaluations is NFEs = 2200. The convergence of the optimal solution to this problem is shown in Fig. 4. The WCANET model reached the global optimum of $419,000 in 2200 average function evaluations, which is extremely low and efficient in comparison with the other optimization algorithms. The WCANET model produced even better results when the population size was increased to 500, reaching the optimum cost of $419,000 in 500 function evaluations.


6 Conclusion

The efficiency of the WCANET model has been successfully tested on two well-known benchmark problems: (1) the Hanoi water distribution network and (2) the two-loop network. The WCANET model showed promise by achieving the global minimum in the two-loop network test case and coming very close to the global minimum in the Hanoi water distribution network test case. WCANET showed high computational capability by achieving the desired results in fewer function evaluations compared to other available models.

References

1. Savic, D.A., Walters, G.A.: Genetic algorithms for least cost design of water distribution networks. J. Water Resour. Plan. Manage. 123(2), 67-77 (1997)
2. Abebe, A., Solomatine, D.: Application of global optimization to the design of pipe networks. In: Proceedings of International Conference Hydroinformatics, pp. 1-8 (1998)
3. Vasan, A., Simonovic, S.P.: Optimization of water distribution network design using differential evolution. J. Water Resour. Plan. Manage. 136(2), 279-287 (2010)
4. Suribabu, C.R.: Differential evolution algorithm for optimal design of water distribution networks. J. Hydroinformatics 12(1), 66-82 (2010)
5. Zecchin, A., Maier, H., Simpson, A., Leonard, M., Nixon, J.: Ant colony optimization applied to water distribution system design: comparative study of five algorithms. J. Water Resour. Plan. Manage. 133(1), 87-92 (2007)
6. Eusuff, M.M., Lansey, K.E.: Optimization of water distribution network design using the shuffled frog leaping algorithm. J. Water Resour. Plan. Manag. 129(3), 210-225 (2003)
7. Liong, S.-Y., Atiquzzaman, M.: Optimal design of water distribution network using shuffled complex evolution. J. Inst. Eng. 44(1), 93-107 (2004)
8. Ezzeldin, R., Djebedjian, B., Saafan, T.: Integer discrete particle swarm optimization of water distribution networks. J. Pipeline Syst. Eng. Pract. ASCE 5(1), 04013013 (2014)
9. Eskandar, H., Sadollah, A., Bahreininejad, A., Hamdi, M.: Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110-111, 151-166 (2012)
10. Sung, Y.-H., Lin, M.-D., Lin, Y.-H., Liu, Y.-L.: Tabu search solution of water distribution network optimization. J. Environ. Eng. Manage. 17(3), 177-187 (2007)
11. Fujiwara, O., Khang, D.B.: A two-phase decomposition method for optimal design of looped water distribution networks. Water Resour. Res. 26(4), 539-549 (1990)
12. Alperovits, E., Shamir, U.: Design of optimal water distribution systems. Water Resour. Res. 13(6), 885-900 (1977)
13. Suribabu, C.R., Neelakantan, T.R.: Design of water distribution networks using particle swarm optimization. Urban Water J. 3, 111-120 (2006)

An Improved Authentication and Data Security Approach Over Cloud Environment Ramraj Dangi and Satish Pawar

Abstract Cloud computing and its distributed network are a proper solution for data distribution, and cloud computing is the latest trend for deploying applications and sharing data. Security, however, is an issue that constantly needs enhancement to protect against intrusion and attackers. A two-factor data security mechanism is given in the latest algorithms for providing high-end security in the cloud system, in which a hardware device for key invocation provides security for authentication. The limitation of this approach is the need to carry the additional hardware everywhere; losing the device may interrupt data access. In this paper, we propose an approach that provides data security with three-way authentication and, additionally, replaces the device dependency with a secure model. The proposed approach enables effective three-factor security with low computational overhead. The parameters computed for different file sizes show the efficiency of our proposed work over the existing algorithm. A three-factor authentication approach is effective from the security standpoint compared to previously defined authentication techniques in cloud security.

Keywords Cloud security · Two-way authentication · Storage services · Key management and key distribution

1 Introduction

Cloud computing is an era of computing that refers to the delivery of computing as a service rather than a product. Using the cloud, we can use applications as utilities over the Internet, and it also allows the user to create, configure, and customize applications online.

R. Dangi (B) · S. Pawar Computer Science & Engineering Department, Samrat Ashok Technological Institute, Vidisha, India e-mail: [email protected] S. Pawar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_100


Cloud computing provides the very important feature of storage [1, 2], through which a pool of data can be stored on the cloud, mostly managed by third parties. Data on the cloud can be accessed from anywhere at any time; only a strong network connection is required [3]. A very important feature of cloud storage is the sharing of data between multiple users: a user stores his data on the cloud, and another user is able to access it from cloud storage through the network. Although cloud storage has many advantages, its main concern is securing data from unauthorized access. To address this security concern, the concept of two-factor security came into existence [4-6].

1.1 Naïve Approaches

Here we discuss some naïve approaches that enhance security protection, and we will see why these approaches do not offer good flexibility.

(a) Break the secret key and use two keys: In this approach, a key is split into two parts; one part is stored in the computer and the other in the security device. The message cannot be decrypted without both parts of the key. The security of the message depends on the whole secret key [7, 8], and the security device storing the second part of the key must always be carried. The problem with this approach is that if the adversary gets either part of the key, he can decrypt some part of the message, which is harmful.

(b) Two-level encryption: In this approach, double encryption is used. The first encryption is performed with the public key (the identity of the receiver), and after the first-level encryption, the next-level encryption is performed. At the receiver end, a security device is used to keep the secret key of the first encryption, and the second secret key is stored in computer storage. This approach also has problems: if the security device is lost, it is very difficult to recover the original message, and, as in the previous approach, a security device must always be carried.

1.2 Our Contribution

The contribution of our proposed work is to make two-factor security more flexible and robust. Our concept has many new features:

(a) The most important feature of our work is the removal of the device dependency at the second-level encryption, so no revocation is needed in our work, which saves a huge amount of time.
(b) We use two-level encryption with IBE (identity-based encryption) [9], which means the sender only uses the identity of the receiver (name or e-mail id); no other information about the receiver is required.


(c) In our proposed work, we introduce the very important feature of an e-mail-based one-time password (OTP) system to make our system more secure and flexible.
(d) In our system, we use a hashing technique for the second-level decryption instead of device security, so we do not need revocation, which makes our system faster and more flexible.
(e) The cloud cannot decrypt the data at any time, because we upload the data to the cloud after the first-level encryption.

2 Related Work

Here we discuss some work related to our proposed approach and explain why these systems are not as good and flexible as our system.

2.1 Two-Factor Security Using Two Secret Keys

Two-level encryption is mainly based on two types of systems: certificateless and certificate-based [10]. In a certificate-based system, a user chooses his public key with a secret key, and the authority generates a partial key based on the identity of the user. Encryption or signature verification needs both the public key and the user identity; at the receiver end, decryption or signature generation requires both the partial key and the secret key. This system therefore requires a costly certificate validation process. Certificateless encryption [11] removes the costly validation, but it is still not convenient, since it does not follow identity-based encryption.

2.2 Two-Factor Security with Online Authority

In this approach, an online mediator, known as a SEM (SEcurity Mediator) [12, 13], is required for secure transactions, and it provides the security capabilities. Every transaction depends on the SEM; if the SEM has revoked a user, that user cannot get the message. Consequently, in a SEM system, a poor network connection is enough to prevent a transaction from completing.

2.3 Two-Factor Security with Security Device

Two-way security authentication is an important aspect of data storage and access [14, 15]. In this system, two-level encryption is performed: the PKG gives the key for the first-level encryption to the sender and sends a secret key to the receiver for


the first-level decryption, and then the file is uploaded to the cloud. For the second level, the SDI gives another key for the second-level decryption and sends this decryption key to the receiver's security device. Without either key, the message cannot be decrypted. One big concern of this system is that if the device is stolen or lost, the message cannot be recovered. This problem is solved in the next work, two-factor security with revocation.

2.4 Two-Factor Security with Security Device with Revocation

This system is the same as the previous one, but it has one additional feature called revocation [16, 17]: when the device is stolen or lost, the receiver sends a message to the SDI, and the SDI updates the previous algorithm and generates a new security device for decryption of the message. This system seems good, but it takes double the time if the device is lost, which is the reason we build a device-independent system.

3 Problem Formulation

We saw many useful approaches to cloud security, but there are still some limitations that need to be overcome to find an optimal solution. The limitations of the existing approaches are:

1. Carrying extra hardware: A very big problem with the previous system is the need to carry hardware. In the previous system, a security device is used for the second-level security: a trusted third party, the SDI (secure device issuer), issues a secret key for the second-level decryption, which is stored in the security device. If the device is stolen, it is very difficult to get the original message back; if the device is lost, the receiver sends a message to the SDI, and the SDI updates the previous algorithm and generates a new device. This process makes the system costly.
2. High maintenance cost due to the revocation technique: There is no doubt that revocation is a very important feature, but here we have no need of revocation because we are making a device-independent system; revocation only increases the time and makes the system complex.
3. High computational usage of the security algorithm, which makes the system slower: The previous system uses a complex and large encryption algorithm, which increases the execution time and overall cost of the system.
4. Authentication remains a big concern in two-level security; to solve this, we introduce the new concept of an e-mail-based OTP system [18].


4 Proposed Work

We propose a system that removes all the previous limitations. Before explaining our work in detail, we first give an intuition of it. Our work involves the following entities:

1. Private Key Generator (PKG): A trusted third party that generates a secret key for every user.
2. Data Manager (DM): A second trusted third party, which holds the auditing details of all users.
3. Sender (Alice): The sender sends data to the receiver; the sender only knows the identity (e-mail id) of the receiver, and no other information about the receiver is required. The sender uploads the data to the cloud after the first-level encryption.
4. Receiver (Bob): Every receiver has a unique identity (e-mail id or name). The receiver first gets an OTP by e-mail; after successful submission of the OTP, the receiver can log in to the cloud server, download the file from the cloud server, and then perform the two-level decryption on it.
5. Cloud server: The cloud server plays a very important role in our system; it stores the ciphertext, which is downloaded by the receiver.

At the Sender End:

1. Generate key: In this step, the receiver's e-mail id is given as an identity to the PKG (Private Key Generator), which returns the combination of public key and private key used to perform encryption and decryption.
2. Select the file: In the second step, we select the file that we want to send to the receiver.
3. First-level encryption: In this step, we perform encryption with the ECC algorithm. We give the selected file and the public key as input to the ECC algorithm and get the encrypted file (C1) as output; we then upload the file to the cloud server.
4. Generate the hash value: In this step, we give the log file of the receiver to SHA-2 as input and get the hash value of that log file as output, which we use as the key for the second level.
5. Second-level encryption: In this step, we perform the second-level encryption through the RSA algorithm. We give C1 (the encrypted file) and the hash value (acting as a key) to RSA as input and get the encrypted file C2 as output.

At the Receiver End:

1. Generate the OTP: In this step, the receiver gets an OTP at his e-mail id; only when the receiver enters the right OTP can he log in to get the file sent by the sender.
2. Generate the hash value: Here, we generate the hash value of the receiver's log file through SHA-2, which we use as the second key to decrypt the message.


Fig. 1 a, b Flow diagrams of the sender's end and the receiver's end for data storage

Fig. 2 Comparison graph analysis of the proposed approach

3. Perform first-level decryption: In this step, we run the RSA decryption algorithm. We give C2 and the hash value as input to the RSA algorithm and get the decrypted file C1 as output.
4. Second-level decryption: In this step, we perform the second-level decryption through the ECC decryption algorithm. We give C1 and the private key as input to the ECC algorithm and get the original file as output.

Figure 1a shows the complete sender-side scenario of encrypting the data and uploading it to the server, and Fig. 1b shows the decryption process over the encrypted data. The complete flow and implementation show the efficiency of our system (Fig. 2). A sketch of the OTP and hash-key steps is given below.
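As a minimal sketch of the two device-free ingredients described above, the e-mail OTP and the SHA-2 hash of the receiver's log file, the following Python fragment uses only standard-library primitives. The ECC and RSA steps themselves are omitted (they would come from a cryptographic library), and all names here are illustrative rather than the authors' code.

import hashlib
import secrets

def generate_otp(digits=6):
    """E-mail OTP: a short random numeric code sent to the receiver's e-mail id."""
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

def second_level_key(log_file_path):
    """SHA-2 hash of the receiver's log file, used as the second-level key."""
    with open(log_file_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The hash replaces the hardware security device: the same digest can be
# recomputed independently at the sender (encryption) and receiver (decryption)
# ends, so no device and no revocation procedure are needed.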

5 Experimental Setup and Result Analysis

Experiment framework: The proposed cloud monitoring framework is implemented using the Java Apache framework, utilizing the CloudSim 3.0.3 API together with the Canvas API for graphical representation.

The advantages of the given work are:

1. One of the best advantages of our proposed work is that the system is device independent: there is no need for a device for the second-level security.

Table 1 Computation time using our authentication technique

File upload size (MB) | Computation time (ms): proposed algorithm
5                     | 57
10                    | 98
20                    | 154
75                    | 236
100                   | 298

2. Another advantage of this system is that there is no need for revocation, so the total computation time and overall cost of the system are very low in comparison to previous work.
3. We introduce the new concept of an e-mail-based OTP in this paper, which makes our system more secure than previous approaches.
4. In this system, we use very effective and fast algorithms to perform encryption and decryption, which makes our system more flexible.

Result analysis: Inputs: Different data file formats are used for input processing and data storage over the cloud component designed by us. Outputs: Data outputs are produced in a secured, encrypted manner, and effective data management and access mechanisms are applied. The computation time and computation cost are then measured over the processed data system (Table 1).

6 Conclusion

In this paper, we develop an interface that provides three-level security with a very secure e-mail-based OTP system. In our proposed work, we focus on making a very flexible and reliable interface through which anyone can share data over a cloud server. Previous work provides two-level security, but its loophole is that the interface depends on a hardware device, which makes the system costly and time-consuming. Our main aim is therefore a device-independent system with a highly secure data-sharing approach.

References

1. Chen, H.C.H., Hu, Y., Lee, P.P.C., Tang, Y.: NCCloud: a network-coding-based storage system in a cloud-of-clouds. IEEE Trans. Comput. 63(1), 31-44 (2014)
2. Wang, C., Wang, Q., Ren, K., Cao, N., Lou, W.: Toward secure and dependable storage services in cloud computing. IEEE Trans. Serv. Comput. 5(2), 220-232 (2012)


3. Acharya, S., Polawar, A., Baldawa, P., Junghare, S., Pawar, P.Y.: Internet banking two factor authentication using smartphone. Int. J. Sci. Eng. Res. 4(3) (2013). ISSN 2229-5518
4. Chu, C.-K., Chow, S.S.M., Tzeng, W.-G., Zhou, J., Deng, R.H.: Key-aggregate cryptosystem for scalable data sharing in cloud storage. IEEE Trans. Parallel Distrib. Syst. 25(2), 468-477 (2014)
5. Ferretti, L., Colajanni, M., Marchetti, M.: Distributed, concurrent, and independent access to encrypted cloud databases. IEEE Trans. Parallel Distrib. Syst. 25(2), 437-446 (2014)
6. Yang, K., Jia, X., Ren, K., Zhang, B., Xie, R.: DAC-MACS: effective data access control for multiauthority cloud storage systems. IEEE Trans. Inf. Forensics Secur. 8(11), 1790-1801 (2013)
7. Dodis, Y., Kalai, Y.T., Lovett, S.: On cryptography with auxiliary input. In: Proceedings of 41st Annual ACM Symposium on Theory of Computing, pp. 621-630 (2009)
8. Naor, M., Segev, G.: Public-key cryptosystems resilient to key leakage. In: Proceedings of 29th Annual International Cryptology Conference, pp. 18-35 (2009)
9. Matsuo, T.: Proxy re-encryption systems for identity-based encryption. In: Proceedings of 1st International Conference on Pairing-Based Cryptography, pp. 247-267 (2007)
10. Liu, J.K., Zhou, J.: Efficient certificate-based encryption in the standard model. In: Proceedings of 6th International Conference on Security and Cryptography for Networks, pp. 144-155 (2008)
11. Hwang, Y.H., Liu, J.K., Chow, S.S.M.: Certificateless public key encryption secure against malicious KGC attacks in the standard model. J. UCS 14(3), 463-480 (2008)
12. Yang, K., Jia, X., Ren, K., Zhang, B., Xie, R.: DAC-MACS: effective data access control for multiauthority cloud storage systems. IEEE Trans. Inf. Forensics Secur. 8(11), 1790-1801 (2013)
13. Yap, W.-S., Chow, S.S.M., Heng, S.-H., Goi, B.M.: Security mediated certificateless signatures. In: Proceedings of 5th International Conference on Applied Cryptography and Network Security, pp. 459-477 (2007)
14. Singhal, M., Tapaswi, S.: Software tokens based two factor authentication scheme. Int. J. Inf. Electr. Eng. 2(3), 383-386 (2012)
15. Sanka, S., Hota, C., Rajarajan, M.: Secure data access in cloud computing. In: Proceedings of the 4th IEEE International Conference on Internet Multimedia Services, December 2010
16. Liu, J.K., Liang, K., Susilo, W., Liu, J., Xiang, Y.: Two-factor data security protection mechanism for cloud storage system. IEEE Trans. Comput. 65(6) (2016)
17. Seo, J.H., Emura, K.: Efficient delegation of key generation and revocation functionalities in identity-based encryption. In: Proceedings of Cryptographers' Track RSA Conference, pp. 343-358 (2013)
18. Lacona, L.J.: Lamport's one-time password algorithm. A design pattern for securing client/service interactions with OTP. http://www.javaworld.com/article/2078022/open-source-tools/lamport-s-one-time-password-algorithm-or-don-t-talk-to-complete-strangers.html (2009)

Second Derivative-Free Two-Step Extrapolated Newton’s Method V. B. Kumar Vatti, Ramadevi Sri and M. S. Kumar Mylapalli

Abstract In this paper, the two-step extrapolated Newton's method (TENM) developed by Vatti et al. is considered, and this method is further studied without the presence of the second derivative. It is shown that the resulting method has the same efficiency index as TENM. Numerical examples show that the new method can compete with other methods.

Keywords Iterative method · Nonlinear equation · Newton's method · Convergence analysis · Higher order convergence

AMS Subject Classification 41A25 · 65K05 · 65H05

1 Introduction

Solving nonlinear equations is one of the most important problems in numerical analysis. We consider finding the zeros of a nonlinear equation

f(x) = 0   (1)

where f : D ⊂ R → R is a scalar function on an open interval D, and f(x) may be algebraic, transcendental, or a combination of both.

V. B. Kumar Vatti · R. Sri Department of Engineering Mathematics, Andhra University, Visakhapatnam, India e-mail: [email protected] R. Sri e-mail: [email protected] M. S. Kumar Mylapalli (B) Department of Mathematics, Gitam University, Visakhapatnam, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_101


The Eighteenth Order Convergent Method (EOCM) developed by Vatti et al. [1] is given by: for a given x_0, compute x_{n+1} by the iterative schemes

w_n = x_n - f(x_n)/f'(x_n)
y_n = w_n - [f(w_n)/f'(w_n)] * 1/(1 - ρ̂_n/2)
x_{n+1} = y_n - [f(y_n)/f'(y_n)] * 2/(1 + sqrt(1 - 2ρ_n))   (2)
(n = 0, 1, 2, ...)

where

ρ̂_n = f(w_n) f''(w_n) / [f'(w_n)]^2   (3)

and

ρ_n = f(y_n) f''(y_n) / [f'(y_n)]^2   (4)

The 18th-order three-step predictor-corrector Newton-Halley method (PCNH) developed by Mohamed and Hafiz (see [2-11]) is given by: for a given x_0, compute x_{n+1} using

w_n = x_n - f(x_n)/f'(x_n)
y_n = w_n - 2 f(w_n) f'(w_n) / (2[f'(w_n)]^2 - f(w_n) f''(w_n))
x_{n+1} = y_n - f(y_n)/f'(y_n) - [f(y_n)]^2 f''(y_n) / (2[f'(y_n)]^3)   (5)
(n = 0, 1, 2, ...)

The ninth-order two-step predictor-corrector Halley method (PCH) developed by Mohamed and Hafiz [10] is given by: for a given x_0, compute x_{n+1} using

y_n = x_n - 2 f(x_n) f'(x_n) / (2[f'(x_n)]^2 - f(x_n) f''(x_n))
x_{n+1} = y_n - f(y_n)/f'(y_n) - [f(y_n)]^2 f''(y_n) / (2[f'(y_n)]^3)   (6)
(n = 0, 1, 2, ...)


The ninth-order two-step extrapolated Newton's method (TENM) developed by Vatti et al. [12] is given by: for a given x_0, compute x_{n+1} using

y_n = x_n - [f(x_n)/f'(x_n)] * 1/(1 - ρ̂_n/2)
x_{n+1} = y_n - [f(y_n)/f'(y_n)] * 2/(1 + sqrt(1 - 2ρ_n))   (7)
(n = 0, 1, 2, ...)

where

ρ̂_n = f(x_n) f''(x_n) / [f'(x_n)]^2   (8)

and

ρ_n = f(y_n) f''(y_n) / [f'(y_n)]^2   (9)

In Sect. 2 of this paper, we consider the second derivative-free two-step extrapolated Newton's method, and we discuss the convergence criteria of this method in Sect. 3. A few numerical examples are considered in the concluding section for comparison.

2 Second Derivative-Free Two-Step Extrapolated Newton's Method (STENM)

Considering the ninth-order two-step extrapolated Newton's method (TENM) (7) with (8) and (9), and expanding f(y_n) about x_n and neglecting the higher powers, we obtain

f(y_n) = f(x_n) + (y_n - x_n) f'(x_n) + [(y_n - x_n)^2 / 2!] f''(x_n)
       = f(x_n) - f(x_n) + [f^2(x_n) / (2 f'^2(x_n))] f''(x_n)

on taking

y_n - x_n = -f(x_n)/f'(x_n)   (10)


in which case ρ̂_n → 0. Now,

f(y_n) = [f(x_n)/2] * ρ̂_n,  where ρ̂_n = f(x_n) f''(x_n) / [f'(x_n)]^2,

which gives

ρ̂_n = 2 f(y_n) / f(x_n)

Therefore, (8) takes the form

ρ̂_n = 2 f(x_n - f(x_n)/f'(x_n)) / f(x_n)   (using (10))   (11)

Similarly, we can have

ρ_n = 2 f(y_n - f(y_n)/f'(y_n)) / f(y_n)   (12)

and rewriting Eqs. (11) and (12) in (7), we thus have the following algorithm.

Algorithm 2.1 For a given x_0, compute x_{n+1} by the iterative schemes

y_n = x_n - [f(x_n)/f'(x_n)] * 1/(1 - ρ̂_n/2)
x_{n+1} = y_n - [f(y_n)/f'(y_n)] * 2/(1 + sqrt(1 - 2ρ_n))   (13)
(n = 0, 1, 2, ...)

where ρ̂_n and ρ_n are as given in (11) and (12). This algorithm can be called the second derivative-free two-step extrapolated Newton's method (STENM), and it requires six functional evaluations per iteration.

3 Convergence Criteria

Theorem 3.1 Let x* ∈ D be a single zero of a sufficiently differentiable function f : D ⊂ R → R on an open interval D, and let x_0 be in the vicinity of x*. Then Algorithm 2.1 has ninth-order convergence.

Proof Let x* be a single zero of (1) and

x_n = x* + e_n   (14)

Then

f(x*) = 0   (15)

If x_n is the nth approximation to the root of (1), then expanding f(x_n) about x* using Taylor's expansion, we have

f(x_n) = f(x*) + e_n f'(x*) + (e_n^2/2!) f''(x*) + (e_n^3/3!) f'''(x*) + ...
       = f'(x*) [e_n + (1/2!)(f''(x*)/f'(x*)) e_n^2 + (1/3!)(f'''(x*)/f'(x*)) e_n^3 + (1/4!)(f^iv(x*)/f'(x*)) e_n^4 + ...]
       = f'(x*) [e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + ...]   (16)

where

c_j = (1/j!) * f^(j)(x*)/f'(x*),  (j = 2, 3, 4, ...)

and

f'(x_n) = f'(x*) [1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + ...]   (17)

Now,

f(x_n)/f'(x_n) = e_n - c_2 e_n^2 - (2c_3 - 2c_2^2) e_n^3 - (3c_4 - 7c_2 c_3 + 4c_2^3) e_n^4 + o(e_n^5)   (18)

x_n - f(x_n)/f'(x_n) = x* + c_2 e_n^2 + (2c_3 - 2c_2^2) e_n^3 + (3c_4 - 7c_2 c_3 + 4c_2^3) e_n^4 + o(e_n^5)   (19)

and

f(x_n - f(x_n)/f'(x_n)) = f'(x*) [c_2 e_n^2 + (2c_3 - 2c_2^2) e_n^3 + (3c_4 - 7c_2 c_3 + 4c_2^3) e_n^4 + o(e_n^5)]   (20)

so that

ρ̂_n = 2 f(x_n - f(x_n)/f'(x_n)) / f(x_n)
    = 2 [c_2 e_n + (2c_3 - 3c_2^2) e_n^2 + (3c_4 - 10c_2 c_3 + 7c_2^3) e_n^3 + (9c_2^2 c_3 - 4c_2 c_4 - 5c_2^4 - 2(c_3 - c_2^2)^2) e_n^4 + o(e_n^5)]
    = 2 [S_1 e_n + S_2 e_n^2 + S_3 e_n^3 + S_4 e_n^4 + ...]   (21)

where S_1 = c_2, S_2 = 2c_3 - 3c_2^2, S_3 = 3c_4 - 10c_2 c_3 + 7c_2^3, and S_4 = 9c_2^2 c_3 - 4c_2 c_4 - 5c_2^4 - 2(c_3 - c_2^2)^2.

Now,

(1 - ρ̂_n/2)^(-1) = [1 - (S_1 e_n + S_2 e_n^2 + S_3 e_n^3 + S_4 e_n^4 + ...)]^(-1)
                 = 1 + S_1 e_n + (S_1^2 + S_2) e_n^2 + (2S_1 S_2 + S_1^3 + S_3) e_n^3 + (S_4 + S_2^2 + 2S_1 S_3 + 3S_1^2 S_2 + S_1^4) e_n^4 + ...   (22)

Multiplying (18) with (22), we obtain

[f(x_n)/f'(x_n)] * (1 - ρ̂_n/2)^(-1) = e_n - c_2^2 e_n^3 + o(e_n^4)   (23)

Thus, using (14) and (23), we have

y_n = x* + T   (24)

where

T = c_2^2 e_n^3   (25)

Now, expanding f(y_n) and f'(y_n) about x* using (24), we obtain

f(y_n) = f'(x*) [T + c_2 T^2 + c_3 T^3 + c_4 T^4 + ...]   (26)

f'(y_n) = f'(x*) [1 + 2c_2 T + 3c_3 T^2 + 4c_4 T^3 + 5c_5 T^4 + ...]   (27)

Now,

f(y_n)/f'(y_n) = T - c_2 T^2 - (2c_3 - 2c_2^2) T^3 - (3c_4 - 7c_2 c_3 + 4c_2^3) T^4 + o(e_n^5)   (28)

y_n - f(y_n)/f'(y_n) = x* + c_2 T^2 + (2c_3 - 2c_2^2) T^3 + (3c_4 - 7c_2 c_3 + 4c_2^3) T^4 + o(e_n^5)   (29)

and

f(y_n - f(y_n)/f'(y_n)) = f'(x*) [c_2 T^2 + (2c_3 - 2c_2^2) T^3 + (3c_4 - 7c_2 c_3 + 4c_2^3) T^4 + o(e_n^5)]   (30)

so that

ρ_n = 2 f(y_n - f(y_n)/f'(y_n)) / f(y_n)
    = 2 [c_2 T + (2c_3 - 3c_2^2) T^2 + (3c_4 - 10c_2 c_3 + 7c_2^3) T^3 + (9c_2^2 c_3 - 4c_2 c_4 - 5c_2^4 - 2(c_3 - c_2^2)^2) T^4 + o(e_n^5)]
    = 2 [P_1 T + P_2 T^2 + P_3 T^3 + P_4 T^4 + ...]   (31)

where P_1 = c_2, P_2 = 2c_3 - 3c_2^2, P_3 = 3c_4 - 10c_2 c_3 + 7c_2^3, and P_4 = 9c_2^2 c_3 - 4c_2 c_4 - 5c_2^4 - 2(c_3 - c_2^2)^2.

Now,

sqrt(1 - 2ρ_n) = 1 - 2P_1 T + 2(-P_1^2/2 - P_2) T^2 + 2(-P_3 - P_1 P_2 - P_1^3/2) T^3 + 2(-P_4 - P_2^2/2 - P_1 P_3 - (3/2)P_1^2 P_2 - (5/8)P_1^4) T^4 + ...   (32)

and

1 + sqrt(1 - 2ρ_n) = 2 [1 + M_1 T + M_2 T^2 + M_3 T^3 + M_4 T^4 + ...]   (33)

where M_1 = -c_2, M_2 = (5/2)c_2^2 - 2c_3, M_3 = -(9/2)c_2^3 + 8c_2 c_3 - 3c_4, and M_4 = 8c_2^2 c_3 - (37/8)c_2^4 + c_2 c_4 - 4c_3^2.

Now again,

[1 + sqrt(1 - 2ρ_n)]^(-1) = (1/2) [1 - M_1 T + (M_1^2 - M_2) T^2 + (2M_1 M_2 - M_3 - M_1^3) T^3 + (M_2^2 + 2M_1 M_3 - M_4 - 3M_1^2 M_2 + M_1^4) T^4 + ...]
                          = (1/2) [1 + N_1 T + N_2 T^2 + N_3 T^3 + N_4 T^4 + ...]   (34)

where N_1 = c_2, N_2 = -(3/2)c_2^2 + 2c_3, N_3 = -(3/2)c_2^3 - 4c_2 c_3 + 3c_4, and N_4 = -28c_2^2 c_3 + (107/8)c_2^4 + 5c_2 c_4 + 8c_3^2.

From (28) and (34), we obtain

2 * [f(y_n)/f'(y_n)] * [1 + sqrt(1 - 2ρ_n)]^(-1)
  = T + [N_1 - c_2] T^2 + [N_2 - N_1 c_2 - (2c_3 - 2c_2^2)] T^3 + [N_3 - N_2 c_2 + N_1 (2c_2^2 - 2c_3) + 7c_2 c_3 - 4c_2^3 - 3c_4] T^4 + o(T^5)
  = T + [0] T^2 - (5/2)c_2^2 T^3 + [N_3 - N_2 c_2 + N_1 (2c_2^2 - 2c_3) + 7c_2 c_3 - 4c_2^3 - 3c_4] T^4 + o(T^5)   (35)

Hence, using (14) and (35), we have

x_{n+1} = x* + T - [T - (5/2)c_2^2 T^3 + ...]
        = x* + (5/2)c_2^2 T^3 + ...
        = x* + (5/2)c_2^2 (c_2^2 e_n^3)^3 + ...

i.e.,

e_{n+1} = (5/2) c_2^8 e_n^9 + o(e_n^10)   (36)

Hence, we have e_{n+1} ∝ e_n^9. Therefore, Algorithm 2.1 has ninth-order convergence, and its efficiency index is 9^{1/6} = 1.442, which is the same as that of the method (6) and better than those of the methods (2) and (5), both of which have efficiency index 18^{1/8} = 1.435.

4 Numerical Examples

We consider the same examples considered by Mohammed and Hafiz [11] and compare STENM with the PCH, PCNH, EOCM, and TENM methods. The computations are carried out using the mpmath-PYTHON software, and the numbers of iterations for these methods are obtained such that |x_{n+1} - x_n| < 10^{-201} and |f(x_{n+1})| < 10^{-201} (Table 1); an illustrative mpmath sketch of Algorithm 2.1 is given below. It is evident from the tabulated values that STENM is superior to the method (6) considering the number of iterations and the rate at which STENM converged, and its convergence rate is almost the same as that of the methods (2), (5) and (7). Of these methods, STENM is free from second derivatives.
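The following is a minimal mpmath sketch of Algorithm 2.1; the paper's own program is written in mpmath-PYTHON, but the function names, tolerance, and precision chosen here are illustrative assumptions, not the authors' code.

from mpmath import mp, mpf, sqrt, cos, sin

mp.dps = 210  # working precision, matching the 10^-201 stopping criteria above

def stenm(f, df, x, tol=mpf("1e-201"), max_iter=50):
    """Second derivative-free two-step extrapolated Newton's method (Algorithm 2.1)."""
    for _ in range(max_iter):
        fx = f(x)
        rho_hat = 2 * f(x - fx / df(x)) / fx            # Eq. (11)
        y = x - (fx / df(x)) / (1 - rho_hat / 2)
        fy = f(y)
        rho = 2 * f(y - fy / df(y)) / fy                # Eq. (12)
        x_new = y - (fy / df(y)) * 2 / (1 + sqrt(1 - 2 * rho))
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

# Usage example: root of cos(x) - x starting from x0 = 1.7
# root = stenm(lambda x: cos(x) - x, lambda x: -sin(x) - 1, mpf("1.7"))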


Table 1 Comparison of different methods

f(x) = sin^2 x - x^2 + 1, x0 = 1.4, root = 1.4044916482153412
Method | No. of iterations | |x_{n+1} - x_n| | |f(x_{n+1})|
PCH    | 5 | -3.27E-201 | -5.75E-199
PCNH   | 5 | -3.27E-201 | -5.75E-199
EOCM   | 3 | -3.27E-201 | 1.07E-198
TENM   | 4 | 3.1E-200   | -5.75E-199
STENM  | 4 | 3.1E-200   | -5.75E-199

f(x) = x^2 - e^x - 3x + 2, x0 = 2, root = 0.2575302854398607
Method | No. of iterations | |x_{n+1} - x_n| | |f(x_{n+1})|
PCH    | 5 | -6.53E-201 | -7.67E-200
PCNH   | 4 | -6.53E-201 | -7.67E-200
EOCM   | 3 | 9.31E-200  | -7.67E-200
TENM   | 4 | -3.75E-200 | 1.72E-199
STENM  | 4 | -3.75E-200 | 1.72E-199

f(x) = cos x - x, x0 = 1.7, root = 0.7390851332151606
Method | No. of iterations | |x_{n+1} - x_n| | |f(x_{n+1})|
PCH    | 5 | 2.45E-201 | -3.27E-201
PCNH   | 4 | 2.45E-201 | -3.27E-201
EOCM   | 4 | 2.45E-201 | -3.27E-201
TENM   | 4 | 1.63E-201 | -3.27E-201
STENM  | 4 | 1.63E-201 | -3.27E-201

f(x) = x^3 + 4x^2 - 10, x0 = 1.3, root = 1.3652300134140968
Method | No. of iterations | |x_{n+1} - x_n| | |f(x_{n+1})|
PCH    | 5 | -4.9E-201  | 2.45E-201
PCNH   | 4 | -4.9E-201  | 2.45E-201
EOCM   | 3 | -4.9E-201  | 2.45E-201
TENM   | 4 | -3.27E-201 | 2.45E-201
STENM  | 4 | -3.27E-201 | 2.45E-201

f(x) = x^3 - 10, x0 = 2, root = 2.1544346900318837
Method | No. of iterations | |x_{n+1} - x_n| | |f(x_{n+1})|
PCH    | 5 | -4.9E-200 | -2.1E-199
PCNH   | 4 | -4.9E-200 | -2.1E-199
EOCM   | 3 | -4.9E-200 | -2.1E-199
TENM   | 4 | 6.8E-200  | 1.2E-198
STENM  | 4 | 6.8E-200  | 1.2E-198


References

1. Vatti, V.B.K., Sri, R., Mylapalli, M.S.K.: Eighteenth order convergent method for solving non-linear equations. Orient. J. Comp. Sci. Technol. 10(1), 144-150 (2017)
2. Argyros, I.K., Khattri, S.K.: An improved semilocal convergence analysis for the Chebyshev method. J. Appl. Math. Comput. 42(1-2), 509-528 (2013). https://doi.org/10.1007/s12190-013-0647-3
3. Hafiz, M.A.: Solving nonlinear equations using Steffensen-type methods with optimal order of convergence. Palestine J. Math. 3(1), 113-119 (2014)
4. Hafiz, M.A.: A new combined bracketing method for solving nonlinear equations. J. Math. Comput. Sci. 3(1) (2013)
5. Hafiz, M.A., Al-Goria, S.M.H.: Solving nonlinear equations using a new tenth- and seventh-order methods free from second derivative. Int. J. Differ. Equ. Appl. 12(3), 169-183 (2013). https://doi.org/10.12732/ijdea.v12i3.1344
6. Noor, K.I., Noor, M.A., Momani, S.: Modified Householder iterative methods for nonlinear equations. Appl. Math. Comput. 190, 1534-1539 (2007). https://doi.org/10.1016/j.amc.2007.02.036
7. Khattri, S.K., Log, T.: Constructing third-order derivative-free iterative methods. Int. J. Comput. Math. 88(7), 1509-1518 (2011). https://doi.org/10.1080/00207160.2010.520705
8. Khattri, S.K.: Quadrature based optimal iterative methods with applications in high precision computing. Numer. Math. Theor. Meth. Appl. 5, 592-601 (2012)
9. Khattri, S.K., Steihaug, T.: Algorithm for forming derivative-free optimal methods. Numer. Algorithms (2013). https://doi.org/10.1007/s11075-013-9715-x
10. Bahgat, M.S.M., Hafiz, M.A.: New two-step predictor-corrector method with ninth order convergence for solving nonlinear equations. J. Adv. Math. 2, 432-437 (2013)
11. Bahgat, M.S.M., Hafiz, M.A.: Three-step iterative method with eighteenth order convergence for solving nonlinear equations. Int. J. Pure Appl. Math. 93(1), 85-94 (2014)
12. Vatti, V.B.K., Sri, R., Mylapalli, M.S.K.: Two step extrapolated Newton's method with high efficiency index. J. Adv. Res. Dyn. Control Syst. 9(5), 08-15 (2017)

Review of Deep Learning Techniques for Gender Classification in Images Neelam Dwivedi and Dushyant Kumar Singh

Abstract Automatic gender classification from face images is a challenging as well as demanding task. It has many applications in the fields of biometrics, security, surveillance, human-computer interaction, etc. Gender recognition requires powerful image features, and researchers working in this area have proposed different methods of extracting features from images for gender recognition. Some of these features are Local Binary Patterns (LBP), the Scale-Invariant Feature Transform (SIFT), Histograms of Oriented Gradients (HOG), weighted HOG, the COSFIRE filter, etc. Beyond these, Convolutional Neural Networks (CNNs) are nowadays widely used for feature extraction and classification in different vision applications. Here, a review of the use of CNNs for gender recognition is presented. The review is supported by results derived through experiments performed on the GENDER-FERET face dataset.

Keywords Gender recognition · Classification · CNN · Deep learning

1 Introduction

Gender recognition from face images plays a significant role in computer vision. A gender identification system can be used at many gender-restricted places, such as female compartments in trains, temples, gender-specific advertisements, etc. A gender identification system may also be integrated with other automated systems, such as face recognition and human-computer interaction, to solve other problems. Changes in illumination, occlusion, noise, age, and ethnicity are some of the factors that affect the accuracy of gender recognition.

N. Dwivedi (B) · D. K. Singh CSED, Motilal Nehru National Institute of Technology Allahabad, Allahabad, India e-mail: [email protected] D. K. Singh e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_102


The gender classification problem has two classes, i.e., male and female. The majority of gender classification approaches work in three phases: feature extraction, classifier training, and recognition with the help of the trained classifier. Features for gender recognition are extracted either by appearance-based methods or by geometric methods. Appearance-based methods consider the whole image instead of only the local features corresponding to different parts of the face, while in geometric-based methods, geometric features such as the face length and width and the distance between the eyes are taken into consideration. Nearest neighbor, linear discriminant analysis, and other binary classifiers are used in the literature for gender classification. Here, a CNN-based classification technique is reviewed for automatic gender recognition on the GENDER-FERET face dataset. This paper is organized as follows: in Sect. 2, related work is presented; in Sect. 3, we present the methodology of the proposed approach and describe the proposed CNN architecture; experimental analysis with different models of CNN and, finally, the conclusion are presented in Sects. 4 and 5, respectively.

2 Related Work

The gender recognition problem has been investigated over the last few decades, and some of the state-of-the-art techniques for gender classification are presented in this section. Levi et al. [1] proposed an approach for an age recognition system using a deep convolutional neural network architecture; this architecture performs well even with a small amount of learning data. Wang et al. [2] proposed a gender recognition system using the dense Scale-Invariant Feature Transform (d-SIFT) and shape contexts of face images; the proposed scheme delivers high accuracy for gender recognition even if the faces are not separated. Ullah et al. [3] proposed a spatial Weber's Local Descriptor (WLD) for gender classification with the FERET dataset; the accuracy of their method is high, and its time complexity is low. Brunelli and Poggio [4] fused 16 geometric features of an image and utilized them to train two parallel competitive networks, one each for male and female; 168 images were used for testing, and the classification accuracy claimed by the authors is 79%. Lahoucine et al. [5] proposed a method for gender classification and face recognition using two types of facial curves, radial and level curves; these curves provide local facial shape information and are hence used as geometric features, and adaptive boosting is used to learn them. The Precise Patch Histogram (PPH) technique, based on the active appearance model, is used in [6]; in this method, the images are modeled by patches around the coordinates of certain landmarks. A very different discriminative face feature is extracted and utilized for gender classification in [7]; this method achieves high classification accuracy, but it takes more computational time and memory. Flora et al. [8] proposed dynamic motion features for gender recognition. These features are extracted from


face motion information using principal component analysis (PCA), and classification is performed using a Support Vector Machine (SVM). Guo et al. [9] used another feature extraction method, called biologically inspired features (BIFs), with an SVM on the MIT database; using these methods, they reported a correct recognition accuracy of 80%. Nguyen et al. [10] used the HOG method [11] for feature extraction and SVM classification for gender classification; the recognition accuracy of the combined images is better than using visible-light images or thermal images alone.

3 Methodology

The flowchart of the proposed approach is shown in Fig. 1. Here, gender recognition is performed on images using a CNN. First of all, features are extracted from images of persons using the CNN. The training and test sets of images are taken from the GENDER-FERET dataset. These features act as the input for the classification layers of the CNN. Once the model is trained with all the input images in the training set, the classifier is enabled and is further used for recognizing the gender in a new image. The CNN is a learning-based method for image classification and is used in various applications such as face recognition, signature verification, etc.

Fig. 1 Flowchart of the proposed approach


3.1 CNN-Based Feature Extraction Method for Gender Recognition

In the literature, researchers working in the field of gender recognition from images have used different feature extractors, such as LBP, HOG, BIFs, and weighted HOG, and have used these features to train different classifiers, such as neural networks and support vector machines (SVMs), for the said task. These trained classifiers are then used to classify the gender of an unknown image. The major drawback of the aforementioned approaches is that the same feature extractor is used at all locations in the images, whereas different information is present in different parts of an image and cannot be captured through a single descriptor. Also, the design of such feature extractors is based on the observation and knowledge of the designers for a specific problem, so the extractors merely capture several aspects of the problem. For example, the LBP method counts the number of uniform and non-uniform texture features in an image [10, 12] but cannot capture edges and their strength; the HOG feature descriptor captures only edges and edge strengths [10, 13-15]; and the BIFs method extracts image features at different bandwidths and texture directions using Gabor filters [9]. Hence, to minimize these problems, an architecture is required that can extract the features of an image by, in effect, applying different feature extraction methods at different parts. Here, for this reason, we use a CNN architecture that extracts features by learning. The proposed CNN architecture consists of five convolutional layers and one fully connected layer; the detailed description of the network is given in Table 1. In this architecture, different numbers of filters and different filter sizes are used at the different convolutional layers (NA stands for "not applicable" in Table 1). The main structure of a CNN is convolutional layers followed by rectified linear units (ReLUs) and pooling layers. In the literature, researchers have claimed that the performance of CNN architectures in many computer vision applications is better than that of traditional methods, for example in handwriting recognition [16], image classification [17], face recognition [18], image-depth estimation from a single color image [19], and person re-identification [20, 21]. In the proposed approach, the image of a person is taken as input. An input image of size 384 x 256 pixels is given to the first layer, a convolutional layer that contains 64 filters of size 3 x 3 pixels applied at a stride of 1 pixel in both the horizontal and vertical directions. The proposed CNN structure is robust to image translation, and the 64 feature maps are fed to a max-pooling layer; 64 feature maps of size 192 x 128 x 64 pixels are produced by this first stage, as shown in Table 1. To fine-tune this, a second stage with 128 filters of size 3 x 3 x 64, a stride of 1 pixel, and padding of 1, followed by another max-pooling layer, is placed after the first stage; 128 feature maps of size 96 x 64 x 128 pixels are obtained from these two stages, as shown in Table 1. Three additional convolutional layers are used for better learning of the CNN, as shown in Table 1. The third, fourth and fifth convolutional


Table 1 Detailed structure description of proposed CNN architecture for base model (NA = not available)

Layer name            | Number of filters | Filter size | Stride size | Padding size | Output size
----------------------|-------------------|-------------|-------------|--------------|----------------
Input layer           | NA                | NA          | NA          | NA           | 384 × 256 × 3
Convolutional layer 1 | 64                | 3 × 3 × 3   | 1           | 1            | 384 × 256 × 64
Rectified linear unit | NA                | NA          | NA          | NA           | 384 × 256 × 64
Max pooling layer 1   | 1                 | 2 × 2       | 2           | 0            | 192 × 128 × 64
Convolutional layer 2 | 128               | 3 × 3 × 64  | 1           | 1            | 192 × 128 × 128
Rectified linear unit | NA                | NA          | NA          | NA           | 192 × 128 × 128
Max pooling layer 2   | 1                 | 2 × 2       | 2           | 0            | 96 × 64 × 128
Convolutional layer 3 | 128               | 3 × 3 × 128 | 1           | 1            | 96 × 64 × 128
Rectified linear unit | NA                | NA          | NA          | NA           | 96 × 64 × 128
Max pooling layer 3   | 1                 | 2 × 2       | 2           | 0            | 48 × 32 × 128
Convolutional layer 4 | 256               | 3 × 3 × 128 | 1           | 1            | 48 × 32 × 256
Rectified linear unit | NA                | NA          | NA          | NA           | 48 × 32 × 256
Max pooling layer 4   | 1                 | 2 × 2       | 2           | 0            | 24 × 16 × 256
Convolutional layer 5 | 256               | 3 × 3 × 256 | 1           | 1            | 24 × 16 × 256
Rectified linear unit | NA                | NA          | NA          | NA           | 24 × 16 × 256
Max pooling layer 5   | 1                 | 2 × 2       | 2           | 0            | 12 × 8 × 256
Fully connected layer | NA                | NA          | NA          | NA           | 2
Softmax layer         | NA                | NA          | NA          | NA           | 2
Classification layer  | NA                | NA          | NA          | NA           | 2

The third, fourth, and fifth convolutional layers contain 128 filters of size 3 × 3 × 128, 256 filters of size 3 × 3 × 128, and 256 filters of size 3 × 3 × 256, respectively. The first five convolutional layers thus produce 256 feature maps of size 12 × 8 pixels. These feature maps are fed to the single fully connected layer (i.e., the output layer), which contains 2 neurons because the proposed architecture is designed for the gender recognition problem with two classes: male and female.
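As an illustration, the following is a minimal sketch of the base architecture of Table 1, written here in PyTorch for concreteness (the paper itself uses MATLAB R2017a); all names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution (stride 1, padding 1) + ReLU + 2x2 max pooling (stride 2),
    # matching one stage of Table 1
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

class GenderCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 64),     # 384 x 256 x 3  -> 192 x 128 x 64
            conv_block(64, 128),   # -> 96 x 64 x 128
            conv_block(128, 128),  # -> 48 x 32 x 128
            conv_block(128, 256),  # -> 24 x 16 x 256
            conv_block(256, 256),  # -> 12 x 8 x 256
        )
        self.classifier = nn.Linear(12 * 8 * 256, 2)  # two classes: male, female

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)  # softmax is applied inside the training loss

# quick shape check on a dummy 384 x 256 RGB input
model = GenderCNN()
out = model(torch.randn(1, 3, 384, 256))
print(out.shape)  # torch.Size([1, 2])
```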


4 Experimental Analysis

Experiments are performed on a Windows 10 system with 16 GB of physical memory and an Intel Core i7 processor. The proposed architecture is simulated in MATLAB R2017a. The FERET dataset [22], publicly available under the name GENDER-FERET, is used for training and testing the architecture. Figure 2 shows some sample faces from the GENDER-FERET dataset, which contains 946 frontal faces (473 male, 473 female). Experiments have been performed in five different phases. In Phases 1, 2, and 4, the dataset is divided into a training set of 237 male and 237 female images and a test set of 236 male and 236 female images. In Phase 3, the training set is reduced to 150 male and 150 female images, with a test set of 236 male and 236 female images. In Phase 5, the dataset is divided into a training set of 323 male and 323 female images and a test set of 150 male and 150 female images. Both the training and test sets contain faces with different expressions and backgrounds, and the face of any given person appears in either the training set or the test set, but not in both.

Fig. 2 Faces available in GENDER-FERET dataset


Table 2 TPR, FPR, precision and accuracy for phase I of the experiment

       | TPR (%) | FPR (%) | Precision (%) | Accuracy (%)
-------|---------|---------|---------------|-------------
Female | 97.46   | 79.22   | 55.79         | 59.61
Male   | 20.77   | 2.53    | 88.88         | 59.61

Table 3 TPR, FPR, precision, and accuracy for phase II of the experiment

       | TPR (%) | FPR (%) | Precision (%) | Accuracy (%)
-------|---------|---------|---------------|-------------
Female | 47.67   | 2.95    | 94.16         | 72.36
Male   | 97.04   | 52.32   | 64.97         | 72.36

4.1 Evaluation Parameters Used for Measuring the Performance

To evaluate the performance of the different CNN architectures, we have used the True Positive Rate (TPR), False Positive Rate (FPR), Precision, and Accuracy. These parameters are computed separately for male and female: when they are calculated for the male class, male is taken as the "true" class and female as the "false" class, and vice versa. TPR, FPR, Precision, and Accuracy are formulated as follows [23]:

TPR = TP / (TP + FN)  (1)

FPR = FP / (FP + TN)  (2)

Precision = TP / (TP + FP)  (3)

Accuracy = (TP + TN) / (TP + TN + FP + FN)  (4)
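A small sketch of Eqs. (1)–(4) in Python, assuming binary labels where 1 marks the class currently treated as "true" (e.g., male = 1 when evaluating the male class); the helper name is illustrative.

```python
def binary_metrics(y_true, y_pred):
    # count the four confusion-matrix cells
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tpr = tp / (tp + fn)                          # Eq. (1)
    fpr = fp / (fp + tn)                          # Eq. (2)
    precision = tp / (tp + fp)                    # Eq. (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # Eq. (4)
    return tpr, fpr, precision, accuracy
```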

4.2 Results and Discussions

We have performed experiments in five different phases by changing some parameters of the CNN architecture. The architecture of the base model is shown in Table 1. For Phase I, five convolutional layers are used, with 16, 32, 64, 128, and 256 filters in convolutional layers 1 to 5, followed by two fully connected layers with 512 and 2 neurons; the remaining parameters are the same as in the base model. For Phase II, six convolutional layers are used, with 16, 128, 128, 256, 256, and 512 filters in convolutional layers 1 to 6, respectively. The CNN architectures for Phases III, IV, and V are the same as the base model shown in Table 1. TPR, FPR, Precision, and Accuracy (in %) for each phase are shown in Tables 2, 3, 4, 5, and 6, respectively. From Tables 2 and 3, it can be observed that accuracy increases in Phase II when the number of convolutional layers is increased. It can also be


Table 4 TPR, FPR, precision, and accuracy for phase III of the experiment

       | TPR (%) | FPR (%) | Precision (%) | Accuracy (%)
-------|---------|---------|---------------|-------------
Female | 86.49   | 18.18   | 82.99         | 84.18
Male   | 81.81   | 13.50   | 85.50         | 84.18

Table 5 TPR, FPR, precision, and accuracy for phase IV of the experiment

       | TPR (%) | FPR (%) | Precision (%) | Accuracy (%)
-------|---------|---------|---------------|-------------
Female | 72.57   | 2.10    | 97.19         | 85.44
Male   | 97.89   | 27.00   | 78.37         | 85.44

Fig. 3 TPR, FPR, precision, and accuracy for all five phases of the experiment

observed that the TPR for the female class decreases while that for the male class increases; that is, for this dataset, as the number of convolutional layers increases, the behavior of the CNN architecture shifts from female-biased to male-biased. For the third phase of the experiment, five convolutional layers are again used with a softmax layer, but with more filters in each layer than in Phase I. Here, accuracy increases further with the larger number of filters, and the TPRs for male and female are similar, meaning that this architecture is not biased towards either gender. From Tables 4, 5, and 6 it can be observed that, with the same architecture, the accuracy of the CNN increases as the number of training images increases and decreases as it decreases. It can also be concluded from these tables that the behavior of the CNN in these cases (Phases III, IV, and V) is not biased towards either gender. From Figs. 3 and 4, the following conclusions can be drawn:

1. Classification accuracy is highest in Phase 5 of the experiment.
2. TPR is highest in Phase 1 when female is considered the true class and lowest when male is considered the true class; that is, in Phase 1 the CNN architecture is biased towards the female class.


Fig. 4 TPR, FPR, precision, and accuracy for all five phases of the experiment

Table 6 TPR, FPR, precision, and accuracy for phase V of the experiment

       | TPR (%) | FPR (%) | Precision (%) | Accuracy (%)
-------|---------|---------|---------------|-------------
Female | 95.33   | 2.10    | 86.33         | 90.33
Male   | 85.33   | 4.66    | 85.33         | 90.33

3. In Phase 2 of the experiment, the TPR is high when male is considered the true class and very low when female is considered the true class; in this case, the CNN is biased towards the male class.
4. In Phases 3, 4, and 5 of the experiment, the TPR is comparable whether female or male is taken as the true class. Hence, it can be concluded that the CNN architecture used in Phase 3 of the experiment is not biased towards either gender, making it the best among the architectures used in this paper for the gender identification problem.

5 Conclusions

In this paper, gender recognition using different convolutional neural network (CNN) architectures is compared. One architecture is taken as the base, and different models are created by changing parameters such as the number of fully connected layers and the number of filters. During training, features are extracted directly from the images by the CNN; for testing, the features of a new image are first extracted by the same CNN architecture and its class is then predicted by the trained CNN model. The accuracies of the models are compared, and it is found that the accuracy of the classifier increases with both the number of fully connected layers and the number of filters. The model with the best accuracy is further selected for analyzing the effect of


changes in the number of training images. From these experiments, it can be concluded that the accuracy of the classifier increases with the number of training images. The maximum accuracy obtained in this paper is 90.33%, with the base model architecture of the CNN. A further conclusion from these experiments is that the training and testing times of the classifier increase exponentially with the number of training/testing images and with the number of filters. Experimentally, it can also be concluded that the CNN architecture provided in this paper for gender identification outperforms the other architectures, both for feature extraction and for gender recognition from images.

References

1. Levi, G., Hassner, T.: Age and gender classification using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34–42 (2015)
2. Wang, J.-G., Li, J., Yau, W.Y., Sung, E.: Boosting dense SIFT descriptors and shape contexts of face images for gender recognition. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 96–102. IEEE (2010)
3. Ullah, I., Hussain, M., Ghulam, M., Aboalsamh, H., Bebis, G., Mirza, A.M.: Gender recognition from face images with local WLD descriptor. In: 2012 19th International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 417–420. IEEE (2012)
4. Poggio, B., Brunelli, R., Poggio, T.: HyperBF Networks for Gender Classification (1992)
5. Ballihi, L., Amor, B.B., Daoudi, M., Srivastava, A., Aboutajdine, D.: Boosting 3-D-geometric features for efficient face recognition and gender classification. IEEE Trans. Inf. Forensics Secur. 7(6), 1766–1779 (2012)
6. Shih, H.-C.: Robust gender classification using a precise patch histogram. Pattern Recogn. 46(2), 519–528 (2013)
7. Wu, M., Zhou, J., Sun, J.: Multi-scale ICA texture pattern for gender recognition. Electron. Lett. 48(11), 629–631 (2012)
8. Flora, J.B., Lochtefeld, D.F., Bruening, D.A., Iftekharuddin, K.M.: Improved gender classification using nonpathological gait kinematics in full-motion video. IEEE Trans. Hum.-Mach. Syst. 45(3), 304–314 (2015)
9. Guo, G., Mu, G., Fu, Y.: Gender from body: a biologically-inspired approach with manifold learning. In: Asian Conference on Computer Vision, pp. 236–245. Springer, Berlin, Heidelberg (2009)
10. Nguyen, D.T., Park, K.R.: Body-based gender recognition using images from visible and thermal cameras. Sensors 16(2), 156 (2016)
11. Nguyen, D.T., Park, K.R.: Enhanced gender recognition system using an improved histogram of oriented gradient (HOG) feature from quality assessment of visible light and thermal images of the human body. Sensors 16(7), 1134 (2016)
12. Nguyen, D.T., Cho, S.R., Pham, T.D., Park, K.R.: Human age estimation method robust to camera sensor and/or face movement. Sensors 15(9), 21898–21930 (2015)
13. Cao, L., Dikmen, M., Fu, Y., Huang, T.S.: Gender recognition from body. In: Proceedings of the 16th ACM International Conference on Multimedia, pp. 725–728. ACM (2008)
14. Nguyen, D.T., Park, K.R.: Enhanced gender recognition system using an improved histogram of oriented gradient (HOG) feature from quality assessment of visible light and thermal images of the human body. Sensors 16(7), 1134 (2016)
15. Singh, D.K.: Gaussian elliptical fitting based skin color modeling for human detection. In: 2017 IEEE 8th Control and System Graduate Research Colloquium (ICSGRC), pp. 197–201. IEEE (2017)


16. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 1097–1105 (2012)
18. Taigman, Y., Yang, M., Ranzato, M.A., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014)
19. Liu, F., Shen, C., Lin, G.: Deep convolutional neural fields for depth estimation from a single image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5162–5170 (2015)
20. Ahmed, E., Jones, M., Marks, T.K.: An improved deep learning architecture for person reidentification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3908–3916 (2015)
21. Cheng, D., Gong, Y., Zhou, S., Wang, J., Zheng, N.: Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1335–1344 (2016)
22. Phillips, P.J., Moon, H., Rizvi, S.A., Rauss, P.J.: The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1090–1104 (2000)
23. Agarwal, A., Gupta, S., Singh, D.K.: Review of optical flow technique for moving object detection. In: 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), pp. 409–413. IEEE (2016)

A Teaching–Learning-Based Optimization Algorithm for the Resource-Constrained Project Scheduling Problem Dheeraj Joshi, M. L. Mittal and Manish Kumar

Abstract In this paper, a recently introduced population-based metaheuristic known as the teaching–learning-based optimization (TLBO) algorithm is used to solve the resource-constrained project scheduling problem (RCPSP). The RCPSP is considered in its basic form, wherein activities are non-preemptive and can be executed in a single mode only. The scheduling objective is to minimize the makespan, or total project duration. The TLBO algorithm in its original form employs two phases, the teacher phase and the learner phase, to reach a global optimum. In order to increase the exploitation and exploration capabilities of the basic TLBO, the concepts of elitism and mutation as used in the genetic algorithm (GA) have been introduced. An activity list representation is used to represent a learner (solution), and a serial schedule generation scheme is used as the decoding procedure to derive a schedule from the activity list. Computational experiments on a test problem from the literature show that the proposed TLBO gives results competitive with other metaheuristics such as GA and particle swarm optimization (PSO). In addition, it offers the inherent advantage of fewer parameters to tune and can therefore be used as an effective method for solving the RCPSP and its other variants as well.

Keywords Resource-constrained project scheduling · Metaheuristic · Teaching–learning-based optimization

D. Joshi (B) Department of Mechanical Engineering, Swami Keshvanand Institute of Technology, Management & Gramothan, Jaipur, India e-mail: [email protected] D. Joshi · M. L. Mittal · M. Kumar Department of Mechanical Engineering, Malaviya National Institute of Technology, Jaipur 302017, India © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_103


1 Introduction

Most business organizations carry out developmental activities to remain competitive in the market. These activities are generally carried out through projects, which need effective planning and control for successful completion. The project planning techniques PERT and CPM, which came into practice in the late 1950s, assume unlimited availability of resources for the execution of activities, which is unrealistic. In practice, real-life problems related to R&D, maintenance, construction, software development, etc. require resources that are usually scarce, and this gives rise to what is known as the resource-constrained project scheduling problem (RCPSP). A project is a temporary endeavor consisting of different activities interconnected by precedence relations arising from technological or other constraints. Scheduling aims to find start and finish times for the activities such that the resource and precedence constraints are fulfilled and the stated objective(s) is optimized. A number of researchers have contributed significantly to the RCPSP; for a comprehensive overview of the problem and its nature, the survey by Hartmann and Briskorn [11] is worth mentioning. Initial solution techniques for the RCPSP were mainly exact approaches. However, exact approaches proved incapable of providing good solutions for practical-sized problems, which has motivated many researchers over the past few decades to develop near-optimal, heuristic solution approaches for the RCPSP. Earlier heuristics for the RCPSP are priority-rule based and can be broadly categorized into constructive heuristics and improvement heuristics; the former are single-pass methods wherein priority values are assigned to the activities either statically or dynamically. Heuristics, in general, cannot efficiently handle practical optimization problems, which may be nonlinear, discontinuous, and multi-modal. To overcome these difficulties, metaheuristics have lately been widely used in different fields, and the RCPSP is no exception. These methods are generally inspired by nature or based on some physiological phenomenon. Simulated annealing (SA) was successfully applied to the RCPSP by Boctor [2], Cho and Kim [5], and Bouleimen and Lecocq [3]. Tabu search has been applied successfully to the RCPSP by Verhoeven [17], Thomas and Salhi [16], and Bukata et al. [4]. In [18], two different representations for particle swarm optimization (PSO), priority-based and permutation-based, were considered and their relative performance analyzed. Among the most widely applied population-based metaheuristics for the RCPSP is the genetic algorithm (GA); applications include the GAs of Hartmann [9, 10], Alcaraz and Maroto [1], and Mendes et al. [13]. In recent years, other metaheuristics have also been applied to the RCPSP. Fang and Wang [7] employed the shuffled frog-leaping algorithm (SFLA). Eshraghi [6] introduced a differential evolution (DE) algorithm embedded with local search techniques and found its results competitive with GA when parameters are properly tuned. Giran et al. [8] employed the harmony search (HS) algorithm to solve scheduling problems related to construction projects.


Rao et al. [15] introduced an algorithm based on the teaching–learning process commonly seen in classrooms, and hence named it the teaching–learning-based optimization (TLBO) algorithm. The algorithm proved very competitive on various mathematical benchmark functions as well as on continuous mechanical design optimization problems. Zheng et al. [20] employed it for the multi-skill version of the RCPSP and found it more effective than other metaheuristics in the literature; Zheng et al. [19] also applied it to the stochastic RCPSP. However, TLBO has not been tested on the general class of RCPSP in which resources possess a single skill and activities are non-preemptive. From this perspective, the current work presents a TLBO algorithm for the single-mode RCPSP. To improve the performance of conventional TLBO, the techniques of elitism and mutation, as used in GA, have been incorporated. The algorithm is tested on a benchmark test problem taken from the literature. The results are quite promising in terms of solution quality, with the added advantage of requiring very few parameters to be tuned, unlike metaheuristics such as GA and PSO. The remainder of this paper is organized as follows. The next section formally introduces the RCPSP. The basic philosophy of TLBO and its implementation for the RCPSP are discussed in Sect. 3. Experimental results on a test project instance [18] are presented in Sect. 4, and conclusions in Sect. 5.

2 Problem Description

In this paper, we consider the RCPSP, in which project activities are to be scheduled to optimize a given objective subject to resource availability and precedence constraints. A project is widely represented by an activity-on-node (AON) network with a set of n activities J = {1, 2, 3, …, n}. The set P contains the precedence relations between the activities, which may arise due to technological constraints. Activities 1 and n represent the start and end of the project; they are dummy activities, i.e., they consume no time or resources (d_1 = d_n = 0). The resources are renewable and are available on a per-period basis. A resource type is denoted by k, with k = 1, 2, 3, …, K, where K is the total number of resource types. For its execution, an activity j requires r_jk units of resource type k during each period of its execution, and R_k is the total amount of resource type k available per period. F_i denotes the finish time of activity i, A(t) denotes the set of active activities in period t, and H is the planning horizon. The activity durations d_i and other parameters such as start and finish times and resource requirements are assumed to be nonnegative integers. The scheduling objective is to finish the project at the earliest, in other words, to minimize the total project duration or makespan. In light of the above description, we can represent the mathematical formulation of the RCPSP as


Minimize F_n  (1)

subject to:

F_j − F_i ≥ d_i,  ∀(i, j) ∈ P  (2)

Σ_{i ∈ A(t)} r_ik ≤ R_k,  t = 1, …, H;  k = 1, 2, …, K  (3)

F_i ≥ 0,  i ∈ {1, 2, 3, …, n}  (4)

Here, Eq. (1) is the objective function, which minimizes the completion time of the end activity and thus the makespan of the project. Precedence constraints on activities are enforced by constraint (2), whereas constraint (3) ensures that resource requirements at any time t do not exceed the available capacity. As stated earlier, the decision variables are nonnegative integers, which is ensured by constraint (4).

3 TLBO Implementation for the RCPSP

The TLBO, as introduced by Rao et al. [15], mimics the usual process of teaching and learning commonly seen in classrooms. It is a population-based algorithm that uses a group of students, called learners, as the initial population to reach a global optimum. As mentioned earlier, the algorithm in general consists of two phases, the teacher phase and the learner phase. In this work, however, we employ two additional mechanisms, elitism and mutation, to increase its exploitation and exploration capabilities. The details of the proposed TLBO and its implementation can be understood from the flowchart depicted in Fig. 1. The literature [15] shows that TLBO was originally conceived for mechanical design optimization problems with continuous design parameters; therefore, some modifications must be made before it can be applied to the RCPSP, where the decision variables are integers. This section describes the solution representation and fitness function evaluation in the proposed TLBO.

3.1 Encoding and Decoding

We select the activity list representation of Hartmann [9] to encode an individual learner. For a given project of N activities, a random list of N activities (λ = j_1, j_2, …, j_N) is generated, where the position in the list represents the priority of the corresponding activity for execution. The initial population is generated by this method. The makespan, or total project duration, serves as the fitness function for the RCPSP. To decode a given individual (activity list) into a schedule, a serial schedule generation scheme (SGS) [12] is used. In this scheme, each activity is chosen one by one according to the given priority list and scheduled at its earliest possible start time respecting the precedence and resource constraints.
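A minimal sketch of this serial SGS decoding step in Python; the data structures (duration list, predecessor sets, per-period demands and capacities) are illustrative assumptions, and the activity list is assumed precedence-feasible with a sufficiently long horizon.

```python
def serial_sgs(activity_list, dur, preds, req, cap, horizon):
    """activity_list: activities in priority order (precedence-feasible);
    dur[j]: duration; preds[j]: set of predecessors of j;
    req[j][k]: per-period demand of resource k; cap[k]: per-period availability.
    Returns finish times; the makespan is their maximum."""
    usage = [[0] * len(cap) for _ in range(horizon)]  # per-period resource usage
    finish = {}
    for j in activity_list:
        est = max((finish[p] for p in preds[j]), default=0)  # precedence constraint
        t = est
        while True:  # earliest resource-feasible start at or after est
            ok = all(usage[tt][k] + req[j][k] <= cap[k]
                     for tt in range(t, t + dur[j]) for k in range(len(cap)))
            if ok:
                break
            t += 1
        for tt in range(t, t + dur[j]):  # book the resources over the duration
            for k in range(len(cap)):
                usage[tt][k] += req[j][k]
        finish[j] = t + dur[j]
    return finish
```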


Fig. 1 Flowchart for the proposed TLBO (generate a random initial population of learners; identify the best learner as teacher; teacher phase: two-point crossover between each learner and the teacher to generate a new learner; learner phase: two-point crossover between each learner and another randomly chosen learner; self-study phase: mutation on the population; elitism: replace the worst solutions with elite solutions; update the teacher; repeat until the termination criterion is satisfied; output the teacher)


3.2 Teacher Phase

We determine the makespan of all the individuals (learners) in the initial population using the SGS and set the best learner, with minimum makespan, as the teacher. To implement the teacher phase in our algorithm, the two-point crossover [9] has been adopted in the following manner. Let λ^1 and λ^2 represent the learner and the teacher, respectively; each is simply an activity list. We generate two random integers u_1 and u_2 in the range [1, n], where n is the number of activities. Using Eqs. (5)–(7) and the values u_1 and u_2, we determine a new activity list λ^new, which is the output of the crossover. Figure 2 shows an example of the two-point-crossover-driven teacher phase for u_1 = 4, u_2 = 7, and n = 10.

λ^new_j = λ^1_j,  1 ≤ j ≤ u_1  (5)

λ^new_j = λ^2_k,  k = min{k | λ^2_k ∉ {λ^new_1, …, λ^new_{j−1}}},  u_1 + 1 ≤ j ≤ u_2  (6)

λ^new_j = λ^1_k,  k = min{k | λ^1_k ∉ {λ^new_1, …, λ^new_{j−1}}},  u_2 + 1 ≤ j ≤ n  (7)

This new student λ^new is accepted if it yields a smaller makespan than the old one; otherwise, the old student is retained in the system.
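A sketch of Eqs. (5)–(7) in Python; the function name and list representation are illustrative, and lam1 and lam2 are assumed to be permutations of the same activity set.

```python
import random

def two_point_crossover(lam1, lam2):
    """lam1: learner activity list; lam2: teacher activity list."""
    n = len(lam1)
    u1, u2 = sorted(random.sample(range(1, n + 1), 2))  # two cut points in [1, n]
    new = lam1[:u1]                                      # Eq. (5): head from learner
    new += [a for a in lam2 if a not in new][:u2 - u1]   # Eq. (6): middle from teacher,
                                                         # skipping activities already placed
    new += [a for a in lam1 if a not in new]             # Eq. (7): tail from learner
    return new
```

Because both parents are permutations of the same activities, the result is again a complete, duplicate-free activity list.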

3.3 Student Phase

In this phase, as explained earlier, a student tries to improve his or her knowledge through mutual interaction with other randomly chosen learners. To accomplish this, we apply the two-point crossover technique (as used in the teacher phase) to two randomly chosen learners. The new student (activity list) is used to calculate a new makespan using the SGS. If the value obtained is better than the previous value, the new student is accepted into the population; otherwise, the previous student is retained.

Fig. 2 An illustration of two-point crossover in the teacher phase (λ1 = learner, λ2 = teacher, λnew = new learner; shown for u1 = 4, u2 = 7, n = 10)

3.4 Self-study and Elitism

The concept of elitism has been widely used in genetic algorithms and other metaheuristics. Rao and Patel [14] incorporated and comprehensively tested elitism to improve the performance of TLBO; it retains some good individuals from one generation to the next and avoids premature convergence of the algorithm. In addition, Zheng and Wang [19] introduced a self-study phase when applying TLBO to the RCPSP with ordinal interval numbers. Self-study is analogous to mutation in GA and enhances the exploration capability of the algorithm. Inspired by this research, the proposed TLBO algorithm uses the random mutation applied by Boctor [2] in his simulated annealing approach for the RCPSP: two different activities in the activity list are selected at random and their positions swapped (a sketch follows). Following the experimental conclusions of Rao and Patel [14], an elite size of 4 is chosen in the proposed TLBO.
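A sketch of this swap move in Python (illustrative; a real implementation must re-check precedence feasibility of the resulting activity list, or restrict the swap to positions where it is preserved).

```python
import random

def swap_mutation(activity_list):
    """Randomly pick two distinct positions and exchange their activities."""
    lst = activity_list[:]                      # work on a copy
    i, j = random.sample(range(len(lst)), 2)    # two distinct positions
    lst[i], lst[j] = lst[j], lst[i]
    return lst
```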

4 Computational Results

The behavior of the developed TLBO has been tested on a typical project instance from Zhang et al. [18]. It is a project with high network complexity consisting of 25 activities plus two dummy activities. There are three types of renewable resources, with a per-period availability of 6 units each. The network diagram in Fig. 3 shows the precedence relations between the activities and the resources they require. The optimum project completion time for this problem is 64 days [18]. The proposed algorithm is coded in MATLAB R2008a (Version 7.6) and run under Windows 7 on a 2.0 GHz processor with 2.00 GB RAM. To allow comparison of the proposed TLBO with results from the literature, the maximum number of schedules generated is kept under 1000; the population size is 50 and the number of iterations is fixed at 10. Zhang et al. [18] compared the results of their proposed PSO algorithm with GA and three priority-based heuristic rules, as shown in Table 1. The proposed TLBO also found the optimum duration of 64 days, in fewer than 5 iterations, as shown in Table 1. Unlike PSO, GA, and other evolutionary metaheuristics, TLBO does not require many algorithm-specific control parameters to tune; to improve its performance one only has to control the usual parameters such as population size and number of generations.


Fig. 3 A typical project instance [18]

Table 1 Comparison of the proposed TLBO with heuristics and metaheuristics

Approach of scheduling           | Project duration
---------------------------------|-----------------
MITF (minimum total float)       | 74
SAD (shortest activity duration) | 71
MILFT (minimum late finish time) | 67
GA                               | 64
PSO                              | 64
TLBO (this paper)                | 64

5 Conclusions

This study proposes an improved TLBO algorithm, based on the teaching–learning method, for the single-mode RCPSP. A solution (individual) is encoded by an activity list, from which feasible schedules are generated using the SGS. Besides the conventional teacher and learner phases, two additional features, a self-study phase and elitism, are included in the algorithm to increase its exploitation and exploration capabilities. From the study, it is noted that the results obtained are competitive with other metaheuristic procedures and much superior to priority rule-based heuristics. The paper thus presents the TLBO algorithm as an alternative and efficient optimization method for the classical RCPSP, with reduced computational effort for parameter tuning in contrast to many other optimization algorithms.


Further work can address the development of improved forms of TLBO for multi-mode and other versions of the RCPSP. In addition, more comprehensive experiments can be conducted to test the behavior of the algorithm on RCPSP instances with different levels of complexity.

References

1. Alcaraz, J., Maroto, C.: A robust genetic algorithm for resource allocation in project scheduling. Ann. Oper. Res. 102(1–4), 83–109 (2001)
2. Boctor, F.F.: Resource-constrained project scheduling by simulated annealing. Int. J. Prod. Res. 34(8), 2335–2351 (1996)
3. Bouleimen, K., Lecocq, H.: A new efficient simulated annealing algorithm for the resource-constrained project scheduling problem. Eur. J. Oper. Res. 149(2), 268–281 (2003)
4. Bukata, L., Sucha, P., Hanzalek, Z.: Solving the resource constrained project scheduling problem using the parallel tabu search designed for the CUDA platform. J. Parallel Distrib. Comput. 77, 58–68 (2015)
5. Cho, J.H., Kim, Y.D.: A simulated annealing algorithm for resource constrained project scheduling problems. J. Oper. Res. Soc. 48(7), 735–744 (1997)
6. Eshraghi, A.: A new approach for solving resource constrained project scheduling problems using differential evolution algorithm. Int. J. Ind. Eng. Comput. 7(2), 205–216 (2016)
7. Fang, C., Wang, L.: An effective shuffled frog-leaping algorithm for resource-constrained project scheduling problem. Comput. Oper. Res. 39(5), 890–901 (2012)
8. Giran, O., Temur, R., Bekdas, G.: Resource constrained project scheduling by harmony search algorithm. KSCE J. Civil Eng. 21(2), 479–487 (2017)
9. Hartmann, S.: A competitive genetic algorithm for resource-constrained project scheduling. Naval Res. Logistics 45(7), 733–750 (1998)
10. Hartmann, S.: A self-adapting genetic algorithm for project scheduling under resource constraints. Naval Res. Logistics 49(5), 433–448 (2002)
11. Hartmann, S., Briskorn, D.: A survey of variants and extensions of the resource-constrained project scheduling problem. Eur. J. Oper. Res. 207(1), 1–14 (2010)
12. Kolisch, R.: Serial and parallel resource-constrained project scheduling methods revisited: theory and computation. Eur. J. Oper. Res. 90(2), 320–333 (1996)
13. Mendes, J.J.M., Goncalves, J.F., Resende, M.G.C.: A random key based genetic algorithm for the resource constrained project scheduling problem. Comput. Oper. Res. 36(1), 92–109 (2009)
14. Rao, R.V., Patel, V.: An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 3(4), 535–560 (2012)
15. Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)
16. Thomas, P.R., Salhi, S.: A tabu search approach for the resource constrained project scheduling problem. J. Heuristics 4(2), 123–139 (1998)
17. Verhoeven, M.G.A.: Tabu search for resource-constrained scheduling. Eur. J. Oper. Res. 106(2–3), 266–276 (1998)
18. Zhang, H., Li, H., Tam, C.: Particle swarm optimization for resource-constrained project scheduling. Int. J. Project Manage. 24, 83–92 (2006)
19. Zheng, H., Wang, L.: An effective teaching–learning-based optimization algorithm for RCPSP with ordinal interval numbers. Int. J. Prod. Res. 53(6), 1777–1790 (2015)
20. Zheng, H., Wang, L., Zheng, X.L.: Teaching–learning-based optimization algorithm for multi-skill resource constrained project scheduling problem. Soft Comput. 21(6), 1537–1548 (2017)

A Tabu Search Algorithm for Simultaneous Selection and Scheduling of Projects Manish Kumar , M. L. Mittal, Gunjan Soni and Dheeraj Joshi

Abstract In this paper, the problem of simultaneous selection and scheduling of projects is considered. The problem accounts for time-sensitive profits, interdependencies, a fixed planning horizon, and due dates of the projects. A 0–1 integer programming model is presented, whose objective is to maximize the total expected profit from the portfolio. The problem being NP-hard, a solution approach is required to solve it in reasonable computational time; thus, a TS algorithm with a new move strategy is developed. This strategy provides a structured neighborhood whose size varies with the size of the problem. The algorithm is run with three different tabu list sizes to find the tabu length that best balances exploration and exploitation during the search, and computational experiments are used to compare these three forms of the proposed TS algorithm. Fifteen test instances with different complexity levels have been developed to check the performance of the proposed TS algorithm. From the results, it is clear that tabu search is quite promising for this problem. Finally, some future research directions are suggested.

Keywords Project selection and scheduling · 0–1 integer programming · Meta-heuristic methods · Tabu search

1 Introduction

The project portfolio selection and scheduling problem (PPSSP) is the problem of selecting the right mix of projects from a pool of available candidate projects and scheduling them simultaneously. The earlier approach to project selection focuses on the

M. Kumar (B) · M. L. Mittal · G. Soni · D. Joshi Department of Mechanical Engineering, Malaviya National Institute of Technology, Jaipur 302017, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_104


selection of projects first and then their scheduling in light of available resources, time, and other constraints. In this serial selection-and-scheduling exercise, it is sometimes infeasible to schedule all the selected projects. To deal with this infeasibility, some projects may be dropped or replaced, the schedule may be relaxed, or resource limits may be extended, resulting in a suboptimal portfolio [3, 4]. Thus, project scheduling needs to be considered simultaneously with project selection; this combined problem is known as the project portfolio selection and scheduling problem (PPSSP). Various objectives may be considered for the PPSSP, but maximizing the expected benefit or NPV has been the most popular and primary objective. Chen and Askin [2] and Tofighian and Naderi [13] treated project return as time dependent. Different types of interdependencies between projects have been considered by several authors [6, 8, 9, 11, 13]; although considering project interdependencies makes the decision-making more complex, it tends to increase the overall benefit of the portfolio. The resource-constrained project scheduling problem (RCPSP) is NP-hard [5], so the PPSSP is even more complex. Various heuristics [2–4] have been developed in the literature, and efforts have also been made to develop metaheuristics for the PPSSP [1, 6, 8, 13]. Tabu search (TS), developed by Glover [7], has been applied to the RCPSP by various authors, e.g., Lambrechts et al. [10], Mika et al. [12], and Waligóra [14]. The TS algorithm is easy to adapt, as it can be applied directly to the problem, and its parameter tuning is relatively simple. In this study, a 0–1 integer model is formulated for the resource-constrained PPSSP, with the objective of maximizing the total expected profit from the portfolio, and a tabu search algorithm is proposed to solve it. The algorithm is developed with three different tabu lengths, and three sets of problems, each containing five instances, are solved using the proposed TS algorithm. The contents of the paper are organized as follows. The mathematical formulation of the PPSSP is presented in Sect. 2. In Sect. 3, the proposed tabu search algorithm is described. Section 4 presents the procedure for the computational experiments used to check the performance of the algorithm. The results are discussed in Sect. 5, and conclusions, with some further research directions, are presented in Sect. 6.

2 Problem Statement

In this paper, the PPSSP is formulated as a 0–1 integer programming model that selects a subset of projects from the available candidate projects in light of time, resource, and other constraints, and simultaneously determines the schedule of the selected projects so as to maximize the expected profit of the portfolio. The expected profit of a project is considered to depend on its completion time, and the model ensures the completion of projects on or before their due dates. Let there be N candidate projects available for selection over a planning horizon of T time periods. Interdependencies between the projects are considered. K types of


resources are to be allocated among the selected projects; resources are limited and may be both renewable and nonrenewable.

Mathematical model: the decision variables and coefficients of the mathematical model are listed as follows.

Decision variable:
X_it = 1 if project i is selected and starts in period t; 0 otherwise.

Technological coefficients and parameters:
N = number of candidate projects (i = 1, 2, …, N)
K = number of resource types (k = 1, 2, …, K)
T = number of time periods (t = 1, 2, …, T)
P_it = expected profit if project i starts in period t
d_i = duration of project i
DD_i = due date of project i
r_ik = requirement of resource type k for project i in each time period
R_kt = availability of resource type k in period t
e = a project mutually exclusive with project i
E_i = set of projects mutually exclusive with project i

Objective function:

Max W = Σ_{i=1}^{N} Σ_{t=1}^{T−d_i+1} P_it · X_it  (1)

Constraints:

Σ_{t=1}^{T−d_i+1} X_it ≤ 1,  ∀i  (2)

Σ_{t=1}^{T−d_i+1} (t · X_it) + d_i − 1 ≤ DD_i,  ∀i  (3)

Σ_{i=1}^{N} ( Σ_{j=max{1, t−d_i+1}}^{min{t, T−d_i+1}} X_ij ) · r_ik ≤ R_kt,  ∀k, t  (4)

Σ_{t=1}^{T−d_i+1} X_it + Σ_{t=1}^{T−d_e+1} X_et ≤ 1,  ∀i, e ∈ E_i  (5)

X_it ∈ {0, 1},  ∀i, t ≤ T − d_i + 1  (6)

Equation (1) is the objective function, which maximizes the total expected profit from the portfolio; the expected profit of each project in each period is known a priori. Constraint (2) defines the start time and ensures that a selected project is completed within the planning horizon. Constraint (3) ensures completion of a selected project on or before its due date. The limits on resource availability are enforced by constraint (4). Interdependencies between the projects are handled by constraint (5). Constraint (6) defines the binary nature of the decision variables.
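For illustration, model (1)–(6) could be written with an off-the-shelf solver interface such as PuLP; the sketch below, including the data containers and function name, is an assumption and not the authors' MATLAB implementation.

```python
import pulp

def build_ppssp(N, T, K, P, d, DD, r, R, E):
    """P[i][t]: expected profit; d[i]: duration; DD[i]: due date;
    r[i][k]: per-period demand; R[k][t]: availability; E[i]: exclusive set."""
    m = pulp.LpProblem("PPSSP", pulp.LpMaximize)
    # binary start variables X_it, defined only for feasible start periods  (6)
    X = {(i, t): pulp.LpVariable(f"x_{i}_{t}", cat="Binary")
         for i in range(N) for t in range(1, T - d[i] + 2)}
    # objective (1): total expected profit
    m += pulp.lpSum(P[i][t] * X[i, t] for (i, t) in X)
    for i in range(N):
        # (2): a project starts at most once, within the horizon
        m += pulp.lpSum(X[i, t] for t in range(1, T - d[i] + 2)) <= 1
        # (3): finish on or before the due date
        m += (pulp.lpSum(t * X[i, t] for t in range(1, T - d[i] + 2))
              + d[i] - 1 <= DD[i])
    # (4): per-period resource capacity
    for k in range(K):
        for t in range(1, T + 1):
            m += pulp.lpSum(X[i, j] * r[i][k]
                            for i in range(N)
                            for j in range(max(1, t - d[i] + 1),
                                           min(t, T - d[i] + 1) + 1)) <= R[k][t]
    # (5): mutually exclusive projects
    for i in range(N):
        for e in E[i]:
            m += (pulp.lpSum(X[i, t] for t in range(1, T - d[i] + 2))
                  + pulp.lpSum(X[e, t] for t in range(1, T - d[e] + 2))) <= 1
    return m
```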


Pseudocode for the TS algorithm:

Initialization:
  Generate an initial solution (S0) randomly or using some heuristic.
  Define the neighbourhood generation scheme, tabu list size (TL), aspiration criterion, and termination criterion (ITRMax).
  Set best solution (SBest) = current solution (S) = initial solution (S0).
  Tabu list (TL) = {}; iterations (ITR) = 0.
While ITR ≠ ITRMax:
  S → SN (generate neighbourhood solutions)
  For S ∈ SN:
    If S ∉ TL:
      If fitness(S) > fitness(SBest):
        SBest = S
Return SBest

Fig. 1 Pseudocode for TS algorithm

3 Tabu Search Algorithm for PPSSP

In this section, a TS algorithm is proposed for the PPSSP. Tabu search (TS) is an artificial intelligence technique designed to avoid cycling between solutions. It is known for its strong local search, which uses a memory to avoid getting trapped in local optima; it was initially developed by Glover [7]. TS is an improvement metaheuristic: it starts from an initial solution that is then improved iteratively, accepting some non-improving moves so as to escape local optima and explore a larger area of the solution space. A memory structure called the tabu list is used to forbid some recent moves for a few iterations; the trade-off between exploration and exploitation of the solution space is controlled by the size of this list. In the proposed TS algorithm for the PPSSP, a solution representation scheme is developed to fit the algorithm to the problem. An initial feasible solution is generated randomly with respect to the constraints, and a move operator is then used to search the local neighborhood of the solution iteratively. The best neighborhood solution is accepted and the current solution updated at the end of each iteration; in the case of a non-improving move, a solution not restricted by the tabu list is accepted. The procedure continues until a predefined stopping criterion is met. The proposed tabu search (TS) algorithm is illustrated by the pseudocode given in Fig. 1. The main components of the proposed algorithm are presented as follows:


3.1 Encoding Scheme (Representation of Solution)

An encoding scheme is required to fit a meta-heuristic to a specific problem; it is problem specific and allows solutions to be handled in the algorithm and programming environment. The encoding scheme given by Ghorbani and Rabbani [6] is used for the proposed TS algorithm. According to this scheme, a P × T matrix represents a solution of a problem having P candidate projects and a planning horizon of T time periods. Table 1 shows an example encoding scheme (4 × 9), in which four projects are represented by the rows and nine time periods by the columns. If an element a_it of the matrix takes the value 1, project i is selected and started in period t. The last filled element of each row corresponds to the due date of that project. In the given example, Projects 1, 3, and 4 have been selected and started at times 2, 1, and 4, with due dates 7, 6, and 9, respectively.

3.2 Initial Solution

An initial feasible solution is required to start the algorithm; it is generally generated randomly or using a heuristic rule. For the proposed TS algorithm, the initial solution is generated randomly while satisfying the constraints.

3.3 Neighborhood Solution

Neighborhood solutions are needed to propagate the search procedure, and the size of the neighborhood depends on the size of the problem instance. The best neighborhood solution satisfying the tabu list constraint and the aspiration criterion is selected for the next iteration. During local search, neighborhood solutions of the current solution are obtained using a move operator. Two move operators can be used for the proposed TS algorithm:

Project swap: two projects are selected randomly and swapped to obtain a new neighborhood solution.

Start time change: the start time of a randomly chosen project is changed randomly.

In this study, the start-time-change operator is used to generate the neighborhood solutions (a sketch is given below). The size of the neighborhood list equals the number of projects in the instance; for each neighborhood solution, the start time of one project is changed randomly. There is no need to control the neighborhood size explicitly, as it varies with the size of the problem instance, and this operator ensures that the generated solutions remain in the vicinity of the current solution. The scheme of neighborhood generation is shown in Fig. 2.
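A sketch of the start-time-change move on the P × T binary matrix encoding (illustrative; feasibility with respect to constraints (2)–(6) is assumed to be checked or repaired elsewhere, and at least one project is assumed selected).

```python
import random

def start_time_change(sol):
    """sol: P x T list of lists with sol[i][t] = 1 iff project i starts at t."""
    neighbor = [row[:] for row in sol]          # copy the current solution
    selected = [i for i, row in enumerate(neighbor) if 1 in row]
    i = random.choice(selected)                 # pick a selected project
    old_t = neighbor[i].index(1)
    new_t = random.choice([t for t in range(len(neighbor[i])) if t != old_t])
    neighbor[i][old_t], neighbor[i][new_t] = 0, 1   # move its start time
    return neighbor
```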

Table 1 Encoding scheme for a feasible solution (rows = projects p, columns = time periods t; Projects 1, 3, and 4 start in periods 2, 1, and 4, respectively)

Project 1 | 0 1 0 0 0 0 0 0 0
Project 2 | 0 0 0 0 0 0 0 0 0
Project 3 | 1 0 0 0 0 0 0 0 0
Project 4 | 0 0 0 1 0 0 0 0 0


Fig. 2 Neighborhood generation scheme

3.4 Tabu List

This list is used to prevent the search from being trapped in local optima. The move made at each iteration is added to the list and remains restricted while it stays there; old moves are discarded as new moves enter, on a first-in first-out basis. The size of the tabu list controls the balance between exploration and exploitation during the search.
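A minimal FIFO tabu list sketch in Python; the "move" object, e.g., a (project, new start time) pair for the start-time-change operator, is an assumption.

```python
from collections import deque

class TabuList:
    def __init__(self, size):
        self.moves = deque(maxlen=size)   # maxlen enforces first-in first-out

    def add(self, move):
        self.moves.append(move)           # oldest move drops out automatically

    def is_tabu(self, move):
        return move in self.moves
```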

3.5 Aspiration Criteria

An aspiration criterion allows the tabu restriction to be overridden: a move belonging to the tabu list can still be selected if it satisfies the aspiration criterion. The most widely used criterion, which is also used in the proposed TS algorithm, is to select a tabu move if it gives a better solution than the best found so far.

3.6 Termination Criteria

Generally, an algorithm is terminated after a predetermined number of iterations or after a certain number of non-improving iterations. For the proposed TS algorithm, a predetermined number of iterations is used as the stopping criterion.


Table 2 Instance generation scheme

Instance type  | Number of projects | Project durations | Types of resources required | Resource requirement of each project | Resource availability | Interdependency level (%)
---------------|--------------------|-------------------|-----------------------------|--------------------------------------|-----------------------|--------------------------
Low complex    | 3–10               | 1–4               | 1–2                         | 10–15                                | 40–60                 | 5
Medium complex | 8–15               | 4–7               | 2–4                         | 10–15                                | 30–45                 | 15
High complex   | 10–20              | 7–10              | 4–5                         | 10–15                                | 20–30                 | 25

4 Computational Experiences

The performance of the proposed TS algorithm for the PPSSP is evaluated in this section. The algorithm is tested on a total of 15 instances of different complexities. The TS algorithm and the instance generation schemes have been coded in the MATLAB 7.12 environment on a Core i3 system running Windows 8.1 with 4 GB of RAM. The instance generation scheme, parameter settings for TS, and performance evaluation criterion are described as follows.

4.1 Test Problems

Fifteen problem instances have been generated in this study for evaluating the proposed TS algorithm. The instance generation scheme is similar to the scheme proposed by Tofighian and Naderi [13], with the addition of project due dates; it is given in Table 2. The complexity of a problem instance varies with the amount and number of resources required, the number of candidate projects, project durations, resource availability, the level of interdependencies, and the due dates. Three types of problem instances, with low, medium, and high complexity, have been generated. The length of the planning period is decided randomly between the maximum single project duration and the sum of the durations of all the projects: it is taken as 80–100%, 60–80%, and 50–60% of the total duration of all projects for low-, medium-, and high-complexity instances, respectively. The due date of a project is chosen randomly between the project duration and the end of the planning period. Each project's (time-dependent) expected profit values are generated randomly from the uniform distribution U(100, 999).


4.2 Parameter Settings

The tabu list size and the neighborhood of the current solution need to be tuned carefully for the TS algorithm to work efficiently. For the proposed TS algorithm, the neighborhood size is taken to be the same as the number of projects in the problem instance. The algorithm is run with three different tabu list sizes, taken as 0.4, 0.5, and 0.6 times the number of projects for the TS_1, TS_2, and TS_3 variants, respectively. The minimum size of the tabu list is fixed at two, and fractional tabu lengths are truncated or rounded off. Each algorithm is run for 100 iterations.
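A sketch of this sizing rule (rounding is one possible reading of "truncation and rounding off"; the function name is illustrative).

```python
def tabu_length(n_projects, factor):
    # factor is 0.4, 0.5, or 0.6 for TS_1, TS_2, TS_3; minimum size fixed at two
    return max(2, int(round(factor * n_projects)))
```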

4.3 Comparison Method

The performance of the three different forms of the proposed TS algorithm is compared using the best value obtained by any of these forms. For each problem instance, the percentage deviation is calculated by expression (7) as follows:

PD = ((B − X) / B) × 100,  (7)

where X is the objective function value obtained for a problem instance using an algorithm and B is the best value found for the problem by any of the three forms of TS algorithm.

5 Results and Discussions

Each TS variant is run 10 times on each problem instance and the outputs are averaged. The results of the three TS variants are compared using percentage deviations, shown in Table 3; overall and instance-type-wise average percentage deviations are presented in Table 4. From Tables 3 and 4, it is clear that TS_1 outperforms TS_2 and TS_3. TS_1 performs outstandingly on medium- and high-complexity instances. For low-complexity instances, TS_2 performs better than the other two, because these problem instances are small and the suitable tabu length there is 50% of the problem size; a tabu length of 40% of the problem size, corresponding to TS_1, gives better results as the problem size increases. From the results, it can be claimed that both too small and too large a tabu list lead to inferior results: a small tabu length may cause the search to be trapped in local optima, while a very large tabu length might prevent the search from entering the region of the global optimum. The trade-off between exploration and exploitation while searching the solution space can thus be controlled through the tabu length. The computational effort needed also varies with the size of the tabu list.


Table 3 Outputs of the proposed TS algorithm (percentage deviations)

Instance type | Instance          | TS_1   | TS_2   | TS_3
--------------|-------------------|--------|--------|-------
Low           | Low_complex_01    | 0      | 0      | 0
Low           | Low_complex_02    | 0      | 0      | 0
Low           | Low_complex_03    | 0.6382 | 0      | 0.2946
Low           | Low_complex_04    | 0      | 0.2961 | 0.3058
Low           | Low_complex_05    | 0      | 0      | 0
Medium        | Medium_complex_01 | 0.3114 | 0      | 0.3902
Medium        | Medium_complex_02 | 0.2183 | 0      | 1.1483
Medium        | Medium_complex_03 | 0      | 3.111  | 11.073
Medium        | Medium_complex_04 | 0.9673 | 1.0542 | 0
Medium        | Medium_complex_05 | 1.8046 | 2.6025 | 0
High          | High_complex_01   | 0.0577 | 0      | 0.0997
High          | High_complex_02   | 0      | 0.0577 | 4.6803
High          | High_complex_03   | 0      | 3.2102 | 3.1037
High          | High_complex_04   | 0.3895 | 0      | 0.3895
High          | High_complex_05   | 0      | 3.1532 | 3.1563

Table 4 Average of percentage deviations for different sets of problems

Instance | TS_1   | TS_2   | TS_3
---------|--------|--------|-------
Low      | 0.1276 | 0.0592 | 0.1201
Medium   | 0.6603 | 1.3535 | 2.5223
High     | 0.0894 | 1.2842 | 2.2859
All      | 0.2925 | 0.899  | 1.6428

Finally, the proposed TS algorithm appears to be quite promising for solving the PPSSP. However, comparisons with other solution approaches are needed to further establish its efficiency.

6 Conclusions

This study considers the problem of simultaneous selection and scheduling of projects, in which only those projects that can feasibly be scheduled optimally are selected. A 0–1 integer programming model is presented for the problem, with the total expected profit as the objective to be maximized; the model accounts for time-sensitive profits, interdependencies, a fixed planning horizon, and project due dates. A TS algorithm has been developed for solving the problem and applied with three different sizes of the tabu list. The start-time-change scheme


has been used to generate feasible neighborhood solutions respecting the resource availability, interdependency, and due date constraints. Fifteen test instances with different complexity levels have been developed to check the performance of the proposed TS algorithm. From the results, it is clear that the TS algorithm with a tabu length of 40% of the problem size outperforms the other two. This work can be extended to consider multiple objectives and the dynamic nature of the problem, and a hybrid algorithm can be developed to improve solution quality and computational effort.

References

1. Amirian, H., Sahraeian, R.: Solving a grey project selection scheduling using a simulated shuffled frog leaping algorithm. Comput. Ind. Eng. 107, 141–149 (2017)
2. Chen, J., Askin, R.G.: Project selection, scheduling and resource allocation with time dependent returns. Eur. J. Oper. Res. 193(1), 23–34 (2009)
3. Coffin, M.A., Taylor, B.W.: Multiple criteria R&D project selection and scheduling using fuzzy logic. Comput. Oper. Res. 23(3), 207–220 (1996)
4. Coffin, M.A., Taylor, B.W.: R&D project selection and scheduling with a filtered beam search approach. IIE Trans. 28(2), 167–176 (1996)
5. Demeulemeester, E.L., Herroelen, W.S.: Project Scheduling: A Research Handbook, vol. 49. Springer (2006)
6. Ghorbani, S., Rabbani, M.: A new multi-objective algorithm for a project selection problem. Adv. Eng. Softw. 40(1), 9–14 (2009)
7. Glover, F.: Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)
8. Huang, X., Zhao, T.: Project selection and scheduling with uncertain net income and investment cost. Appl. Math. Comput. 247, 61–71 (2014)
9. Huang, X., Zhao, T., Kudratova, S.: Uncertain mean-variance and mean-semivariance models for optimal project selection and scheduling. Knowl.-Based Syst. 93, 1–11 (2016)
10. Lambrechts, O., Demeulemeester, E., Herroelen, W.: A tabu search procedure for developing robust predictive project schedules. Int. J. Prod. Econ. 111(2), 493–508 (2008)
11. Liu, S.S., Wang, C.J.: Optimizing project selection and scheduling problems with time-dependent resource constraints. Autom. Constr. 20(8), 1110–1119 (2011)
12. Mika, M., Waligóra, G., Węglarz, J.: Tabu search for multi-mode resource-constrained project scheduling with schedule-dependent setup times. Eur. J. Oper. Res. 187(3), 1238–1250 (2008)
13. Tofighian, A.A., Naderi, B.: Modeling and solving the project selection and scheduling. Comput. Ind. Eng. 83, 30–38 (2015)
14. Waligóra, G.: Discrete–continuous project scheduling with discounted cash flows—A tabu search approach. Comput. Oper. Res. 35(7), 2141–2153 (2008)

A Survey: Image Segmentation Techniques Gurbakash Phonsa and K. Manu

Abstract Image segmentation is a core step of image processing. It is the technique of dividing an image into multiple regions of pixels, i.e., breaking the image down into its constituent objects and regions. The main aim of image segmentation is to represent an image in a simpler, noise-free form, and its main use is to detect objects, relevant data, and their boundaries in a digital image. Segmentation techniques split the image into small regions in order to analyse them, and they also help to distinguish different types of objects within a single image. To date, various image segmentation approaches have been suggested by researchers, each with specific strengths or small variations aimed at improvement; researchers work continuously to optimize and enhance these techniques in order to make images more recognizable, i.e., smoother and noise free. In this paper, as per our study, work by different researchers on image segmentation approaches such as thresholding, neural networks, edge-based segmentation, and region-based segmentation is presented.

Keywords Segmentation techniques · Edge detection · Boundary-based segmentation · Region-based segmentation

1 Introduction Image segmentation [1, 2] is defined as the process of splitting a single digital image into multiple regions or segments, where the set of pixels in each segment is similar with respect to some criteria such as intensity and colour, in order to identify objects and to

G. Phonsa (B) Mewar University, Chittorgarh, Rajasthan 312901, India e-mail: [email protected] K. Manu Lovely Professional University, Jalandhar 144403, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_105

Fig. 1 Image engineering process (three levels, from low to high: image processing, image analysis and image understanding)

locate objects and find boundaries [3] in an image. The main aim of image segmentation is to represent a digital image in a progressively simpler form (smooth and noise free). Image segmentation is used to find objects, boundaries and lines in a digital image. Practical applications of image segmentation range from medical applications (tumours, pathologies, measurement of tissue volume, computer-assisted surgery, treatment, diagnosis and planning) and filtering of image noise to face and fingerprint recognition, locating objects in satellite images, etc. Segmentation itself is the process of splitting or dividing the objects in an image into smaller parts [4]. In the image segmentation process, the output comes in the form of pixel regions that together cover the entire image. Each pixel in a segment is compared using predefined properties (colour and intensity of other pixels). There is no particular level at which to stop dividing the image into subparts; this depends entirely on the problem being solved, i.e. the segmentation stops only when the object of interest in the image or application has been isolated. Image segmentation belongs to a framework of image processing known as image engineering. Image techniques [6] can be clubbed together in this general framework [7], which contains three layers to process the image: • Image understanding, • Image analysis and • Image processing.

1.1 Image Processing These are the bottom-level operations, performed at the pixel level. This layer takes the image and modifies it into another form or performs transformations between images, and also improves the visual effect of the input image.


1.2 Image Analysis These are the middle-level operations of this structure, which focus on measurement. For example, PCA (Principal Component Analysis) produces a new set of images from a given set.

1.3 Image Understanding In these top-level operations, each target is studied further and related to the others, together with an explanation of the original images.

1.4 Image Segmentation As discussed before, segmentation itself is the process of splitting or dividing an image into smaller parts [4]. In image segmentation, edges, boundaries and regions are identified for processing. In the image segmentation process, the output comes in the form of pixel regions that together cover the entire image. Segmentation has various techniques, and these are used in image engineering.

2 Different Image Segmentation Techniques See Fig. 2.

2.1 Edge Detection In this approach [6], researchers define the boundaries between two regions. When we apply an edge detection method to an image, it yields curves that define the boundaries of objects and surfaces. Edge detection assumes that each object is surrounded by a closed border which is visible and can be detected in the intensity values of the image. This segmentation technique plays a prominent role in pattern recognition, image analysis, image processing and computer vision. Edge detection for segmentation involves some basic steps: • Filtering, • Enhancement and • Detection. The main edge detection methods are described below.

Fig. 2 Segmentation techniques (layer-based segmentation and block-based segmentation; the latter comprising edge- or boundary-based methods such as Roberts, Prewitt and Sobel, region-based methods such as clustering, thresholding, normalized cuts and split & merge, and soft computing methods such as genetic algorithms, neural networks and fuzzy logic)

Robert Edge Detection: The Roberts operator is used in image processing for edge detection. It was the first edge detection operator, proposed by Lawrence Roberts in 1963. It performs a simple, quick-to-compute 2-D spatial gradient measurement on an image, so the output highlights regions of high spatial gradient, which often correspond to edges. Sobel Edge Detection: Sobel is the most widely used edge detection operator, sometimes called the Sobel filter. In image processing, the Sobel edge detector defines two masks, one horizontal and one vertical, represented as matrices. The operator estimates the gradient along rows and columns using discrete values, and it is mainly used as a small, separable, integer-valued filter. Prewitt Edge Detection: Prewitt edge detection is one of the oldest and best understood edge detector operators. It estimates the edge magnitude and orientation using simple gradient masks instead of time-consuming calculations. The Prewitt operator supports a minimum of eight possible orientations, estimated over a 3 × 3 neighbourhood in eight directions.
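To make the three masks concrete, here is a minimal NumPy sketch (illustrative only, not the code used by the surveyed papers) that applies the Roberts, Prewitt and Sobel masks to a greyscale image and combines the horizontal and vertical responses into a gradient magnitude:

```python
import numpy as np

def correlate2d(img, kernel):
    """Sliding-window correlation of a greyscale image with a small kernel
    (equivalent to convolution up to mask flipping, which does not affect
    the gradient magnitude)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_magnitude(img, gx, gy):
    """Gradient magnitude from a horizontal and a vertical mask."""
    ex, ey = correlate2d(img, gx), correlate2d(img, gy)
    return np.sqrt(ex ** 2 + ey ** 2)

# Standard masks corresponding to the operators described above
roberts_gx = np.array([[1, 0], [0, -1]])
roberts_gy = np.array([[0, 1], [-1, 0]])
prewitt_gx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
prewitt_gy = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])
sobel_gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

img = np.random.rand(64, 64)   # stand-in for a greyscale input image
edges = edge_magnitude(img, sobel_gx, sobel_gy)
```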


2.2 Thresholding Methods Thresholding [8–10] is one of the most important and most widely used techniques in image segmentation. In this method, we convert a greyscale image into binary form. For segmentation, the binary image includes the required data regarding the shape and location of objects in the image. Conversion from a grey image to binary form is valuable because it reduces the complexity of the data. The thresholding methods are as follows: Global Thresholding: Global thresholding is one of the most used methods in image processing. It assumes that the histogram of the input image has two peaks, corresponding to signals from the background and the image objects, and applies a single intensity threshold. Variable Thresholding: In this method, we separate foreground objects from the background depending on the differing pixel intensities of each region in the image. Adaptive thresholding defines a threshold function T(x, y); regional and local thresholding are based upon the neighbourhood of pixel (x, y) [1, 6]. Multiple Thresholding: In this method, we segment the grey image into several separate regions. Multiple thresholding differs from local and variable thresholding in that it defines one-to-many thresholds for the given input image and divides it into distinct brightness regions corresponding to the background and several objects.
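As an illustration of global thresholding, a short sketch (an assumption-level example, not the authors' implementation) that binarises a greyscale image, with the threshold either fixed or chosen by Otsu's histogram-based method mentioned later in Table 1:

```python
import numpy as np

def global_threshold(img, t):
    """Binarise a greyscale image: 1 where intensity exceeds threshold t."""
    return (img > t).astype(np.uint8)

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()                      # grey-level probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

img = np.random.rand(64, 64)
binary = global_threshold(img, otsu_threshold(img))
```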

2.3 Region Segmentation In region-based segmentation [6, 8], we segment the image into various similar regions; it is used to determine the regions directly. Partitioning is done using the grey values of the image pixels. The two basic region-based techniques are the following: Region Growing method: This is a region-based segmentation scheme in which groups of pixels or subregions are merged into larger regions according to predefined criteria. Region growing begins with a set of seed points, and the corresponding regions grow by appending to each seed those neighbouring pixels that have similar properties such as greyscale, colour, texture or shape. Splitting and Merging: In this method, the image is taken as a single region and then repeatedly broken down until no further breakdown is possible. After the splitting process, the merging process takes place, in which each region is merged with its adjacent regions. The method starts with small regions and then merges those regions which have similar characteristics such as greyscale and variance. A quadtree (splitting data structure) is used here: adjacent regions are merged if they are similar, and the merging is repeated until the maximum possible merge is achieved.


2.4 Clustering Method Clustering [3] is among the most popular algorithms in image segmentation. It classifies data and patterns into categories. Two popular examples of clustering methods are K-means and fuzzy c-means [3]. Both methods are capable of producing a partition of images under given conditions, including the cluster number; for improved clustering, the cluster number and centres can be found from decision graphs.
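A minimal sketch of K-means intensity clustering for segmentation (illustrative; fuzzy c-means would replace the hard assignment below with membership weights):

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20, seed=0):
    """Cluster pixel intensities with k-means; return a per-pixel label map."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 1).astype(float)
    centers = rng.choice(pixels.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        # Recompute each centre as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(img.shape)

img = np.random.rand(64, 64)
segments = kmeans_segment(img, k=3)
```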

3 Experiment and Results 3.1 Edge Detection Method Robert Detection: This method is used as an operator for edge detection. It presents a quick, easy-to-calculate 2-D spatial gradient measurement on an image. The operator mainly highlights image regions of high spatial frequency, which often correspond to edges; a greyscale image is used as input to the operator. The output signifies the estimated absolute magnitude of the spatial gradient of the input image at each point [1, 6]. Prewitt Edge Detection: With this edge detector operator, we estimate the edge magnitude and orientation in an image. Prewitt edge detection uses simple gradient masks instead of time-consuming calculations; the gradient edge detection estimates the orientation from the magnitudes in the x and y directions. The Prewitt operator supports a minimum of eight possible orientations, estimated over 3 × 3 neighbourhoods in eight directions. The corresponding convolution masks are given in Figs. 3, 4 and 5 [1].

Fig. 3 Robert detection mask (Gx and Gy)
Fig. 4 Edge detection Prewitt mask (Gx and Gy)


Fig. 5 Edge detection Sobel mask [1] (Gx and Gy)
Fig. 6 Results of edge detection technique: a input image, b Sobel, c Canny, d Prewitt method

Sobel Edge Detection: Sobel is also an edge detection operator; this technique is usually used to estimate the absolute gradient magnitude at every point of a grey input image. It provides a 2-D spatial gradient measurement of the image and thus highlights regions of high spatial frequency corresponding to its edges. The Sobel operator consists of a pair of 3 × 3 convolution masks as shown in Fig. 5; the experimental results of edge detection are shown in Fig. 6.

3.2 Soft Computing Techniques Fuzzy-based Approach: The fuzzy approach is a soft computing technique. Fuzzy-logic edge detection [11, 12] considers different possibilities. In fuzzy logic, one technique describes a membership function representing the degree of membership of each neighbourhood. A fuzzy approach performs true fuzzy logic only if it is furthermore used to modify membership values. This soft computing technique is fast, but its performance can be inadequate. Fuzzy IF–THEN rules express edge detection over the neighbourhood of the central pixel of the image, and pixels are separated into fuzzy sets. In this method, homogeneity is evaluated to test the similarity of two regions through the segmentation process [13, 14]. For this method, we use if–then rules, which are defined as shown in Fig. 7. Genetic Algorithm Approach: A genetic algorithm [1, 6] has three major steps. The selection operator is the first phase; in this process, every individual in the entire population is evaluated and only the fittest ones are kept [2]. Apart from the fittest, some less fit individuals are also retained, selected with a small probability


[15], while the others are discarded from the population. After the selection operator, the crossover process takes place; it combines two individuals to produce a new individual which may be better than the previous ones. The mutation operator's main purpose is to keep the population sufficiently diverse during the optimization process by introducing changes in a small number of chromosome units. Neural Network Approach: The neural network technique differs from other artificial intelligence techniques in its ability to generalize and learn. This approach is built from several elements that are connected by links with variable weights. Artificial neural networks (ANN) are commonly used for pattern recognition. A neural network is a soft computing technique that works in layers: • Input layer, • Hidden layer and • Output layer. In the input layer, the input provided to each neuron is normalized to [0, 1], and the value of each neuron's output also lies in [0, 1]. All three layers have a fixed number of neurons; the number of neurons depends on the size of the image, i.e. it equals the image size (I × J). All neurons have one primary connection with weight equal to 1. The neurons in all layers are interconnected with the previous layer: each neuron of one layer is connected to the corresponding neuron of the previous layer within its order neighbourhood (Figs. 8 and 9).
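As a toy illustration of the fuzzy IF–THEN idea (the membership functions and the single rule below are our own illustrative assumptions, not the rule base of the cited papers), a pixel is called an edge when its 3 × 3 neighbourhood contains both strongly dark and strongly bright members:

```python
import numpy as np

def mu_dark(v):
    """Membership in the fuzzy set 'dark' for intensities in [0, 1]."""
    return np.clip(1.0 - 2.0 * v, 0.0, 1.0)

def mu_bright(v):
    """Membership in the fuzzy set 'bright' for intensities in [0, 1]."""
    return np.clip(2.0 * v - 1.0, 0.0, 1.0)

def fuzzy_edge(img):
    """Rule: IF the neighbourhood mixes dark and bright THEN it is an edge."""
    h, w = img.shape
    edge = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = img[i - 1:i + 2, j - 1:j + 2]
            # Rule strength: min of 'some neighbour dark', 'some neighbour bright'
            edge[i, j] = min(mu_dark(nb).max(), mu_bright(nb).max())
    return edge

img = np.random.rand(32, 32)
edges = fuzzy_edge(img)
```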

Fig. 7 Central neighbourhood pixels with IF–THEN–ELSE fuzzy rules

Fig. 8 ANN (neural network segmentation process)
Fig. 9 First-, second- and third-order neighbourhoods
Fig. 10 a Input image (original), b K-means, c FCM, d proposed method

Clustering: As outlined in Sect. 2.4, clustering [3] classifies data and patterns into categories, K-means and fuzzy c-means [3] being the two popular methods, and the cluster number and centres can be found from decision graphs. For illustration, the colour feature of the input image is shown in Fig. 11 [3] (see also Fig. 10).

Fig. 11 a Original image, b after segmentation

3.3 Split and Merge As described in Sect. 2.3, the image is taken as a single region and repeatedly split until no further breakdown is possible; after the splitting, similar adjacent regions are merged using a quadtree until the maximum possible merge is achieved. The entire process has three steps: • Split [1] the image into regions as far as possible. • Initialize the neighbour list of each split. • After initializing the neighbours of each split, perform the merging process and merge the split regions. Advantage: The main advantage of the IQM (Improved Quadtree) is that it reduces the neighbour-length problem during the merging process and guarantees connected regions. Region growing: This is a basic region-based [8] image segmentation method. It is also called pixel-based image segmentation, as it involves the selection of initial seed points. The method examines the neighbours of the initial seed points and determines whether each neighbouring pixel should be added to the existing region or not; this process repeats until no more matching neighbours are detected. The method is similar in concept to data clustering. Concept of seed point: In this method, the first step is to choose the set of seed points to initialize the process. Selecting the basic seed points depends on user criteria (for example, pixels evenly spaced on a grid, or the number of pixels in a particular greyscale range). After the first seed points are initialized, the regions are grown from these seed points to adjacent points according to a membership measurement (intensity, greyscale, colour, etc.) between them. Advantage: Guaranteed connected regions, multiple selection criteria and a good noise-free image as a result. A sketch of the region-growing procedure is given below.
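A minimal sketch of seeded region growing under the membership criterion just described (the tolerance value and the 4-neighbourhood are illustrative choices, not the surveyed authors' settings):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from a seed pixel, adding 4-neighbours whose intensity
    is within tol of the seed's intensity (the membership criterion)."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    region[seed] = True
    seed_val = img[seed]
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < h and 0 <= nj < w and not region[ni, nj]
                    and abs(img[ni, nj] - seed_val) <= tol):
                region[ni, nj] = True
                queue.append((ni, nj))
    return region

img = np.random.rand(64, 64)
mask = region_grow(img, seed=(32, 32), tol=0.15)
```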

Fig. 12 a Original image, b greyscale image, c binary image

3.4 Thresholding The threshold-based [16] method is an effective and simple segmentation method in which a grey image is converted into a binary image by comparing pixel intensities with one or more intensity thresholds. At present, threshold-based methods are classified into global, local and variable thresholding [10, 17]. Global thresholding works best when the image has homogeneous intensity or homogeneous contrast between the objects and the background; threshold selection is difficult when the image's contrast is low. Figure 12 shows the result of thresholding.

4 Comparison of Various Segmentation Methodologies In this paper, we have focused on showing the different techniques used by different researchers for image segmentation. The tabular representation summarizes, in abstract form, the approaches suggested by the researchers for image segmentation (Table 1).

5 Conclusion We have presented a survey of image segmentation techniques, which are mainly based on edges and boundaries. In segmentation, we simply represent the image in a more understandable, noise-free form; it is basically used to detect objects, boundaries and other relevant data in a digital image. An overview of the segmentation methodologies applied in digital image processing has been explained briefly. The techniques mentioned in this survey are used in numerous advanced missions for the identification of regions and image objects. The main focus is on the study of soft computing approaches to edge detection for image segmentation.


Table 1 Comparison of different researcher works (approach; contribution; research purpose/result; brief conclusion)

Approach: Fuzzy logic — Contribution: Li [11]
Research purpose: The main objective of this paper is to propose a new level set formulation with region competition, mainly to track and detect arbitrary combinations of particular images or of objects among the image components.
Result: The study is based on fuzzy region competition. The new formulation is well suited to Gaussian mixture modelling and to different probabilistic or Bayesian clustering approximation functions for selective level set segmentation.
Brief conclusion: Up to this paper, the fuzzy logic approach defined selective level sets using region competition, but we find that this generally does not hold in most selective segmentation.

Approach: Fuzzy logic — Contribution: Alshennawy and Ayman [12]
Research purpose: A fuzzy logic reasoning approach is proposed for edge detection in digital images without determining a threshold value.
Result: Fuzzy logic is a powerful tool for knowledge-based expert systems. In this paper, a set of fuzzy rules provides enhanced solutions to improve the quality of edges.
Brief conclusion: In this paper, fuzzy logic enhanced the quality of edges, but we still find some problems in removing the noise from the original image.

Approach: Neural networks — Contribution: Du and Gao [18]
Research purpose: Using a multi-focus image fusion algorithm, image segmentation is applied to identify a decision map (used as the image segmentation) between focused and defocused regions.
Result: The MSCNN soft computing method can correctly locate the multi-focus boundary between the focused and defocused parts and then obtain a more accurate decision map from the source images.
Brief conclusion: This is the latest paper on multiscale convolutional neural networks, but some defects remain related to the binary segmented map: images are sometimes misclassified, which leads to holes and small regions in the segmentation map.

Approach: Neural network — Contribution: Zhuang et al. [19]
Research purpose: The main principle of this paper is to apply segmentation to colour images using a pulse-coupled neural network with multichannel linking and feeding fields.
Result: The results show the performance of MPCNN for segmentation of noisy images, while the neural circuits improve the processing speed drastically.
Brief conclusion: We find that the PCNN and MPCNN methods use linking and feeding fields for colour images, but problems remain with the processing speed.

Approach: Clustering — Contribution: Chen et al. [3]
Research purpose: K-means and fuzzy c-means are both capable of partitioning images under given conditions including the cluster number; for improved clustering, these algorithms are used to find the cluster number and centres.
Result: For a verified dc value, the same input image will generate mutually consistent results. K-means and fuzzy c-means are both capable of partitioning images under conditions including the cluster number and centres.
Brief conclusion: In this paper, the cluster number and centres are found from a decision graph, built from distance and density, but a problem remains in the experiments: selecting the values of the density and distance parameters automatically based on the input image.

Approach: Clustering — Contribution: Nan et al. [20]
Research purpose: Using the Canny edge detection method, an adaptive threshold value is found from the image and edge extraction operations are performed.
Result: Both the fuzzy c-means clustering method and the mean shift approach provide better results in Canny edge detection.
Brief conclusion: We find that both fuzzy c-means clustering and the mean shift approach provide better results in Canny edge detection, but some problems related to edge detection remain.

Approach: Region growing — Contribution: Hore et al. [8]
Research purpose: In this paper, Otsu's method, an iterative method and thresholding are used to achieve the best possible threshold value; homogeneity is also defined based on pixel intensities.
Result: The result is based on the parameters F-score, precision and recall. To measure the result, the proposed method is matched with a qualitative analysis considered as the ground truth and the ideal edge.
Brief conclusion: Up to this paper, we find that this thresholding method traverses the neighbourhood pixels around the seed location to perform the two-dimensional seeded region-growing algorithm, but some defects remain when finding the intensity relation between the current pixel and the neighbouring pixels.

Approach: Edge detection — Contribution: Senthilkumaran and Rajesh [6]
Research purpose: Soft computing techniques are used to improve the efficiency of image segmentation.
Result: Soft computing techniques (genetic algorithms, fuzzy logic and neural network based approaches) are implemented on real-life examples, and the results show the efficiency of image segmentation.
Brief conclusion: Up to this paper, using soft computing techniques, the results show the efficiency of image segmentation, but some problems remain related to segmentation efficiency when soft computing techniques are applied.

Approach: Edge detection — Contribution: Alshennawy and Ayman [12]
Research purpose: A fuzzy logic reasoning approach is proposed for edge detection in digital images without determining a threshold value.
Result: Fuzzy logic is a powerful tool for knowledge-based expert systems. In this paper, a set of fuzzy rules provides enhanced solutions to improve the quality of edges.
Brief conclusion: In this paper, fuzzy logic enhanced the quality of edges, but we still find some problems in identifying the noise from the input image.

Approach: Edge detection — Contribution: Muthukrishnan and Radha [4]
Research purpose: The main purpose is edge detection that recovers the correct image, without noise, from the input image using the discontinuity of intensity levels.
Result: This edge detection technique detects the correct image without noise from the input image; the method is based on the discontinuity of intensity levels.
Brief conclusion: Up to this paper, we find that edge detection based on intensity-level discontinuity removes the noise, but the main remaining problem relates to discontinuous intensities and overlapping objects.

Approach: Thresholding — Contribution: Hore [8]
Research purpose: In this paper, Otsu's method, an iterative method and thresholding are used to achieve the best possible threshold value; homogeneity is also defined based on pixel intensities.
Result: The result is based on the parameters F-score, precision and recall. To measure the result, the proposed method is matched with a qualitative analysis considered as the ground truth and the ideal edge.
Brief conclusion: Up to this paper, we find that this thresholding method traverses the neighbourhood pixels around the seed location to perform the two-dimensional seeded region-growing algorithm, but some defects remain when finding the intensity relation between the current pixel and the neighbouring pixels.

Approach: Thresholding — Contribution: Li et al. [10]
Research purpose: In this paper, Kapur's entropy is taken as the optimal objective function, with a modified grey wolf optimizer as the tool, and fuzzy membership functions are used for local information aggregation.
Result: The experimental results show that multilevel thresholding is improved, because the difficult objective functions are solved by nondeterministic methods.
Brief conclusion: This paper defines multilevel thresholding, but the complex objective functions are solvable only by nondeterministic methods.

Approach: Thresholding — Contribution: Bhandari et al. [17]
Research purpose: Cuckoo search (CS) and wind-driven optimization (WDO) are two swarm intelligence based algorithms used for multilevel thresholding, to overcome problems related to thresholding.
Result: Kapur's entropy reveals that the two algorithms, WDO and CS, can be used efficiently and accurately on problems related to multilevel thresholding.
Brief conclusion: We find that WDO and CS can be used efficiently and accurately on multilevel thresholding problems, but some problems in multilevel thresholding remain.

Approach: Watershed — Contribution: Ghoshal et al. [21]
Research purpose: The watershed algorithm is mainly applied to describe various edge-sharpening filters and to find their effect on the output image.
Result: For better results, they use a special type of morphology for filtering and structuring elements, used to smooth shapes and remove small holes.
Brief conclusion: We find that watershed is a newer segmentation technique which combines preprocessing and postprocessing of image objects to generate the final results.

The soft computing techniques applied and experimented with here are the fuzzy-based approach, the genetic algorithm based approach and the neural network based approach.

References
1. Zaitouna, N.M., Aqel, M.J.: Survey on image segmentation techniques (2015)
2. Khan, M.W.: A survey: image segmentation techniques. Int. J. Future Comput. Commun. 3(2), 89 (2014)
3. Chen, Z., et al.: Image segmentation via improving clustering algorithms with density and distance. Proc. Comput. Sci. 55, 1015–1022 (2015)
4. Muthukrishnan, R., Radha, M.: Edge detection techniques for image segmentation. Int. J. Comput. Sci. Inf. Technol. 3(6), 259 (2011)
5. Ajam, A., et al.: A review on segmentation and modeling of cerebral vasculature for surgical planning. IEEE Access 5, 15222–15240 (2017)
6. Senthilkumaran, N., Rajesh, R.: Edge detection techniques for image segmentation—A survey of soft computing approaches. Int. J. Recent Trends Eng. 1(2), 250–254 (2009)
7. Kaur, A.: A review paper on image segmentation and its various techniques in image processing. Int. J. Sci. Res. (2012)
8. Hore, S., et al.: An integrated interactive technique for image segmentation using stack based seeded region growing and thresholding. Int. J. Electr. Comput. Eng. 6(6), 2773 (2016)
9. Smistad, E., et al.: Medical image segmentation on GPUs—A comprehensive review. Med. Image Anal. 20(1), 1–18 (2015)
10. Li, L., Sun, L., Kang, W., Guo, J., Han, C., Li, S.: Fuzzy multilevel image thresholding based on modified discrete grey wolf optimizer and local information aggregation. IEEE Access 4, 6438–6450 (2016)
11. Li, B.N., et al.: Selective level set segmentation using fuzzy region competition. IEEE Access 4, 4777–4788 (2016)
12. Alshennawy, A.A., Ayman, A.A.: Edge detection in digital images using fuzzy logic technique. World Acad. Sci. Eng. Technol. 51, 178–186 (2009)
13. Sandhar, R.K., Phonsa, G.: Distinctive feature mining based on varying threshold based image extraction for single and multiple objects. In: 2014 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT). IEEE (2014)
14. Phonsa, G., Sidhu, P.: Block based error correction code watermarking with video steganography using interpolation and LSB substitution. Int. J. Appl. Eng. Res. (IJAER) 10(11) (2015)
15. Chen, Z., et al.: Image segmentation via improving clustering algorithms with density and distance. Proc. Comput. Sci. 55, 1015–1022 (2015)
16. Karthikeyan, B., Vaithiyanathan, V., Venkatraman, B., Menaka, M.: Analysis of image segmentation for radiographic images. Indian J. Sci. Technol. 5(11), 3660–3664
17. Bhandari, A.K., et al.: Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur's entropy. Expert Syst. Appl. 41(7), 3538–3560 (2014)
18. Du, C., Gao, S.: Image segmentation-based multi-focus image fusion through multi-scale convolutional neural network. IEEE Access 5, 15750–15761 (2017)
19. Zhuang, H., Low, K.-S., Yau, W.-Y.: Multichannel pulse-coupled-neural-network-based color image segmentation for object detection. IEEE Trans. Ind. Electron. 59, 3299–3308 (2012)
20. Nan, L., Huo, H., Zhao, Y., Chen, X., Fang, T.: A spatial clustering method with edge weighting for image segmentation. IEEE Geosci. Remote Sens. Lett. 10(5) (2013)
21. Ghoshal, D., Acharjya, P.P.: Effect of various spatial sharpening filters on the performance of the segmented images using watershed approach based on image gradient magnitude and direction. Int. J. Comput. Appl. 82(6), 19–25 (2013)
22. Liu, J., et al.: A survey of MRI-based brain tumor segmentation methods. Tsinghua Sci. Technol. 19(6), 578–595 (2014)
23. Storath, M., et al.: Fast segmentation from blurred data in 3D fluorescence microscopy. IEEE Trans. Image Process. 26(10), 4856–4870 (2017)
24. Lalaoui, L., Mohamadi, T., Djaalab, A.: New method for image segmentation. Proc.-Soc. Behav. Sci. 195, 1971–1980 (2015)
25. Shimabukuro, Y.E., et al.: Estimating burned area in Mato Grosso, Brazil, using an object-based classification method on a systematic sample of medium resolution satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(9), 4502–4508 (2015)
26. Hodneland, E., et al.: Segmentation-driven image registration—application to 4D DCE-MRI recordings of the moving kidneys. IEEE Trans. Image Process. 23(5), 2392–2404 (2014)
27. Cheng, C., et al.: Outdoor scene image segmentation based on background recognition and perceptual organization. IEEE Trans. Image Process. 21(3), 1007–1019 (2012)
28. Bhargavi, K., Jyothi, S.: A survey on threshold based segmentation technique in image processing. Int. J. Innov. Res. Dev. 3(12) (2014)
29. Hu, Z., et al.: Skin segmentation based on graph cuts. Tsinghua Sci. Technol. 14(4), 478–486 (2009)
30. Muthukrishnan, R., Radha, M.: Edge detection techniques for image segmentation. Int. J. Comput. Sci. Inf. Technol. 3(6), 259 (2011)
31. Satyawana, S.: A review paper on image segmentation and object recognition procedures. Sci. Eng. Technol. 67 (2016)
32. Kumari, R., Sharma, N.: A study on the different image segmentation
33. Chuang, K.-S., et al.: Fuzzy c-means clustering with spatial information for image segmentation. Comput. Med. Imaging Graph. 30(1), 9–15 (2006)
34. Ahmed, M.N., et al.: A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 21(3), 193–199 (2002)
35. https://in.mathworks.com/discovery/image-segmentation.html
36. Phonsa, G.: Efficient feature extraction using frequency based approach for computer vision

Analysis and Simulation of the Continuous Stirred Tank Reactor System Using Genetic Algorithm Harsh Goud and Pankaj Swarnkar

Abstract The Continuous Stirred Tank Reactor (CSTR) is broadly used in the process and chemical industries to achieve higher biomass productivity and production efficiency. The reactor temperature deviates from the reference value because the dynamic characteristics of the CSTR are highly nonlinear. The chemical reaction depends on the reactor temperature, and temperature deviations may degrade product quality. Designing a control system able to reduce the effects of disturbances is therefore a challenging control problem. In this paper, the tuning of the three Proportional–Integral–Derivative (PID) controller gains is carried out by a Genetic Algorithm (GA); the controller gains are derived by minimizing an objective function. The proposed hybrid PID-GA scheme is compared with the conventional PID tuning method. The study of the proposed technique shows fast computation and a reduction in the error. The CSTR system is simulated under the proposed hybrid control scheme, and the performance analysis comprises the cases without controller, with conventional PID and with PID-GA in the Matlab environment. The simulation results show a substantial improvement in terms of the response of the CSTR, the Mean Square Error (MSE) and the Integral Time Absolute Error (ITAE). Keywords Continuous stirred tank reactor (CSTR) · PID controller · GA

H. Goud (B) · P. Swarnkar Department of Electrical Engineering, Maulana Azad National Institute of Technology, Bhopal, India e-mail: [email protected] P. Swarnkar e-mail: [email protected] H. Goud Department of Electronics and Communication, IES IPS Academy, Indore, India © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_106


1 Introduction In many chemical productions, chemical and bioprocess engineering play a significant role [1]. In recent years, research in bioprocess and chemical engineering has become an area of interest for the advancement of next-generation industrialization [2]. A mathematical model is required in control strategies to check the stability and performance of bioreactor processes. The most important type of steady-state bioprocess is the CSTR [3]. The CSTR produces maximum biomass productivity [4]. The CSTR is a widely studied nonlinear model, and it requires significant effort to design a suitable controller [5]. During the past few years, a number of nonlinear controllers have been proposed by researchers and control engineers [6]. PID controllers are suitable and feasible control solutions for many industrial processes [7]. Tuning has always been the main focus area for controllers, and thus several tuning rules are used which allow the user to compute the PID parameters for simple to complex processes. In 1942, Ziegler and Nichols [8] proposed an effective tuning method which received great attention. A conventional PID controller with fixed parameters is linear and, in particular, symmetric; it is not always very effective and may cause undesirable performance in nonlinear and time-varying systems. Soft computing based autotuning methods are used to overcome the limitations of conventional methods [9]. With the help of appropriate tuning methods, errors can be minimized and the output approaches the given set point [10]. Algorithms which use population-based, stochastic, deterministic and iterative optimization methods are termed population-based tuning methods [11]. Such a method searches for the optimal solution of a given problem with an effective adjusting scheme: the population comprises candidate solutions, which generate new offspring populations of improved performance until a stopping criterion is reached [12, 13]. Population-based optimization can be classified into Evolutionary Algorithms (EAs) and Swarm Intelligence (SI)-based algorithms [14, 15]. In this paper, the objective is to control the temperature in the CSTR. In the proposed approach, the transient response of the CSTR plant is improved by designing a suitable PID controller using the GA optimization algorithm, improving the rise time, settling time and overshoot. The remainder of this paper is organized as follows: Sect. 2 describes the system model of the CSTR. Section 3 presents the control of the CSTR using the conventional PID controller. Section 4 summarizes the intelligent tuning methods. Section 5 describes the objective function. A detailed comparative analysis based on intelligent tuning methods such as GA is given in Sect. 6, based on a comprehensive simulation study. Finally, a brief conclusion is given in Sect. 7.

2 System Model Figure 1 shows a CSTR in which a first-order chemical reaction A → B occurs. The mathematical model of the above process is given by Roffel and Betlem [13]. The mass balance for component A can be given as


Fig. 1 Schematic diagram of a continuous stirred tank reactor (inlet Fin, Tin; contents of volume V; outlet F, T, C)

$$V \frac{dC_A}{dt} = F(C_{Ain} - C_A) - V k e^{-E/RT} C_A \quad (1)$$

And the energy balance is

$$\rho V C_p \frac{dT}{dt} = F \rho C_p (T_{in} - T) + V k e^{-E/RT} C_A \Delta H + Q \quad (2)$$

where
V = reactor volume (m³)
C_A = outlet concentration of component A (kg/m³)
C_Ain = inlet concentration of component A (kg/m³)
F = total volumetric flow (m³/s)
k = pre-exponential constant (s⁻¹)
E = activation energy for the reaction (kJ/mol)
T = reactor temperature (K)
T_in = temperature of inlet flow (K)
ρ = density (kg/m³)
c_p = specific heat (kJ/kg K)
ΔH = heat of reaction, exothermic (kJ/kg)
Q = heat supplied to the reactor (kJ/s)

The CSTR transfer function is obtained by linearising the nonlinear Eqs. (1) and (2) using the parameters given in the Appendix:

$$CSTR_{tf}(s) = \frac{2.293\,s + 9.172}{s^2 + 10.29\,s + 25.17}$$
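The authors work in Matlab/Simulink; purely as an illustration, the open-loop step response of this linearised transfer function can be reproduced with SciPy:

```python
from scipy import signal

# CSTR linearised transfer function: (2.293 s + 9.172) / (s^2 + 10.29 s + 25.17)
cstr = signal.TransferFunction([2.293, 9.172], [1.0, 10.29, 25.17])

# Open-loop unit-step response
t, y = signal.step(cstr)
print("steady-state gain ~", y[-1])   # approaches 9.172 / 25.17 ~ 0.364
```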

3 Control of CSTR The schematic diagram of the feedback control loop of the CSTR is shown in Fig. 2. For self-tuning control of the CSTR, the PID-GA tuning method is shown to be particularly effective, as it is able to optimally control a multivariable system under a variety of constraints. In this paper, our main aim is to control the product temperature of the CSTR. Initially, the reference input r(t) is set, and the output c(t) of the CSTR is controlled with

Fig. 2 Schematic diagram of CSTR controller

a PID controller via the feedback channel. An error value is generated by comparing the input and output values of the CSTR. The controlled output and error are optimized by formulating the MSE and ITAE as objective functions for the GA method. The corresponding output is the new set point, or reference point, for the input of the CSTR. The CSTR is then manipulated accordingly towards the new set value until the error vanishes.

3.1 PID Controller PID controllers are described and named according to the nature of their gain parameters. The controller output is a function of these parameters:

$$u(t) = K_p e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d \frac{d e(t)}{dt} \quad (3)$$

where K_p is the proportional gain, K_i the integral gain and K_d the derivative gain (all tuning parameters), and t is the instantaneous time [12].
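A minimal discrete-time sketch of Eq. (3) (the rectangular integration and backward-difference derivative below are our own illustrative discretisation choices, not taken from the paper):

```python
class PID:
    """Discrete approximation of Eq. (3): u = Kp*e + Ki*int(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt                    # rectangular integration
        derivative = (error - self.prev_error) / dt    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Gains from Table 2 (conventional PID column)
pid = PID(kp=5.0, ki=50.0, kd=0.5)
u = pid.update(error=100.0 - 40.0, dt=0.001)
```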

These parameters together determine the stability of the CSTR, and setting wrong values can result in undesired output. Regulation command tracking refers to how well the controlled variables follow the command, and it is assessed in terms of the rise time and settling time. We emphasize the application of GA; this method (inherited from nature) computes the values of K_p, K_i and K_d based on their previous values. The following section depicts the mathematical model of its implementation.


4 Proposed Strategies The most important part of a control system is setting the parameters of the PID controller. In this paper, smart tuning is used to determine the parameters K_p, K_i and K_d of the PID controller. The various steps involved in system identification are shown in Fig. 3. GA is an optimization method built on a heuristic platform. Based on Darwin's principle of survival of the fittest, this method was introduced for optimization problems in the field of soft computing [13].

4.1 Genetic Algorithm The genetic algorithm (GA) is an optimization tool built on heuristic approaches. Based on Darwin's principle of survival of the fittest, this method was introduced for optimization problems in soft computing [14]. The first set of candidate solutions is termed the initial population, and every individual is a candidate solution. The population of candidates is studied simultaneously, and the next generation of solutions is produced by following the steps of GA [15]. GA proceeds by iteratively applying operators to the selected initial population, with further steps devised based on the evaluation of this population. The typical routine of GA is described in the following pseudocode:
1. Randomly generate the initial population.
2. Employ the fitness function for evaluation.
3. Select chromosomes with superior fitness as parents.
4. Generate the new population by parents' crossover with a probability function.
5. Mutate chromosomes with a small probability to protect the system from early trapping.
6. Repeat from step 2.
7. Terminate the algorithm when the satisfaction criterion is met.

Fig. 3 Flow diagram of CSTR control using smart controller
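A sketch of this routine for tuning the three PID gains, using the GA settings of Table 1 (population 50, 500 iterations, arithmetic crossover, three variables). The fitness function here is a placeholder that would, in the actual scheme, simulate the closed loop and return the ITAE of Eq. (4); the gain bounds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
LOW = np.array([0.0, 0.0, 0.0])       # lower bounds for Kp, Ki, Kd (assumed)
HIGH = np.array([20.0, 200.0, 1.0])   # upper bounds (assumed)

def itae_cost(gains):
    """Placeholder fitness: in the real scheme this simulates the closed loop
    with these PID gains and returns the ITAE; a dummy quadratic stands in."""
    target = np.array([10.0, 100.0, 0.01])
    return float(np.sum((gains - target) ** 2))

def ga_tune(pop_size=50, iters=500, mut_rate=0.1):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 3))
    for _ in range(iters):
        cost = np.array([itae_cost(ind) for ind in pop])
        parents = pop[np.argsort(cost)[: pop_size // 2]]  # fittest half as parents
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b           # arithmetic crossover
            if rng.random() < mut_rate:                   # mutation avoids early traps
                child += rng.normal(0, 0.05) * (HIGH - LOW)
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmin([itae_cost(ind) for ind in pop])]

kp, ki, kd = ga_tune()
```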

5 Objective Function A fitness function with the objective of minimizing the MSE and ITAE is used to tune the K_p, K_i and K_d values of the PID. Each chromosome of the GA population is evaluated by the objective function exactly once. Chromosomes are formed from three values, corresponding to the three gains to be adjusted in order to obtain satisfactory behaviour of the PID regulator. To find the PID gain values, we derived the following fitness function:

$$\min f(K_p, K_i, K_d) = \int_0^t \left| T_{ref} - T_{(K_p, K_i, K_d)} \right| dt \quad (4)$$

where K_p, K_i and K_d are the proportional, integral and derivative gains of the PID, respectively, T_ref is the desired value of the CSTR temperature and T_(K_p,K_i,K_d) is the actual temperature obtained with the given values of K_p, K_i and K_d. This function is in the form of ITAE, which aims to settle the system as fast as possible; it may therefore allow a higher overshoot of T_(K_p,K_i,K_d), which can be reduced by lowering the upper bound of K_p, at the cost of an increased settling time.

6 Result Analysis This section presents the results obtained with Matlab programming, and the detailed comparative analysis shows the effectiveness of the proposed control scheme. The given values are C_A (concentration of component in the reactor) = 0.114 lbmol/ft³, T_j (jacket temperature) = 100 °F and T (temperature of CSTR) = 40 °F. The initial parameters for the graphical analysis are as shown in Tables 1, 2 and 3.


Table 1 Parameters of GA for the CSTR
No. of population: 50
No. of iterations: 500
Crossover operator: Arithmetic
No. of variables to be optimized: 03

Table 2 The PID parameters of PID and GA-PID for the CSTR
Parameters   PID     PID-GA
Kp           5       9.992
Ki           50      99.9787
Kd           0.500   0.0115

Table 3 Comparison of different methods for MSE and ITAE
        Without controller   PID         PID-GA
MSE     983.2018             72.1939     69.7599
ITAE    30455.1507           3019.7268   2921.0182

Fig. 4 Performance comparison using temperature versus time graph at reference value 100 °F (CSTR temperature response over 0–5 s: no controller, PID controller, GA-PID)

6.1 CSTR Performance with Different Controllers The time response curves in Fig. 4 show the performance of the CSTR control mechanism. A comparison has been made between no controller, conventional PID and GA-PID. Initially, the temperature of the CSTR is taken as 40 °F, and the reference temperature is set to 100 °F. With GA-PID, error minimization takes approximately 0.35 s to achieve the desired temperature of 100 °F.

Fig. 5 Comparison of concentration of CSTR system for different techniques (concentration in lbmol/ft³ versus time: no controller, PID controller, GA-PID)
Fig. 6 Performance comparison using error versus time graph for different techniques (comparative error graph: no controller, PID controller, GA-PID)

The concentration response of the CSTR system without a controller, with the PID controller and with GA-PID is shown in Fig. 5: the PID controller reaches stability at approximately 1.5 s, while GA-PID reaches steady state at approximately 0.4 s. The error settles close to zero after 0.5 s, as revealed in Fig. 6. Without a controller, the MSE and ITAE error values are high, at 983.2018 and 30,455.1507, respectively. This makes clear that the CSTR plant with the PID controller needs to be improved to further reduce the error. Figure 6 shows the impact of the manipulating variable.


Table 4 Comparison of PID and PID-GA methods for response parameters
Response parameter   Without controller   PID       PID-GA
Rise time (s)        0.4650               0.1600    0.0600
Overshoot (%)        47.4243              25.7865   12.6682
Peak time (s)        0.8850               0.3550    0.1550
Settling time (s)    1.2049               1.5875    0.4267

Table 4 reveals that the settling time, rise time, overshoot and peak time are all reduced by the addition of the PID controller and, further, by PID-GA. It is also noticed that the MSE and ITAE error values are reduced, as shown in Table 3.

7 Conclusion This paper presents a mechanism to control the product temperature of a CSTR when there is a sudden change in the inflow temperature and flow rate. The controller needs to be tuned to keep the output temperature of the CSTR at the given set point. The comparison of PID with the GA-tuned method shows that this process can improve the steady-state performance of the system by selecting an appropriate objective function. The objective function is optimized using artificial intelligence methods, and the steady-state performance of the CSTR is improved by optimizing the rise time, settling time and overshoot through the mathematical formulation of the objective function with ITAE and MSE. It is noticed that the performance of the conventional PID controller is only moderate in comparison with PID-GA.


Appendix

Parameters of CSTR in SI units
Variable   Value               Unit
Ea         75.363              kJ/mol
K0         5.4E+19             mol/s
dH         −10,467.05 × 10⁻²   kJ/mol
U          425.872             W/m²K
Rho*Cp     474.197             kcal/m³
R          8.315               J/mol K
V          21.238              m³
F          0.0236              m³/s
Caf        2.114               mol/m³
Tf         288.70              K
A          113.43              m²

References
1. Pottmann, M., Seborg, D.E.: Identification of nonlinear processes using reciprocal multiquadric functions. J. Proc. Control 2(4), 189–203 (1992)
2. Wang, L., Willatzen, M.: Modeling of nonlinear responses for reciprocal transducers involving polarization switching. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 54(1), 177–189 (2007)
3. Zhao, Y., Skogestad, S.: Comparison of various control configurations for continuous bioreactors. Ind. Eng. Chem. Res., 697–705 (1997)
4. Marsili-Libelli, S.: Parameter estimation of ecological models. Ecol. Model. 62(4), 233–258 (1992)
5. Bellgardt, K.H.: Bioreaction Engineering Modeling and Control, 1st edn. Springer, Berlin (2000)
6. Chang, W.: Nonlinear CSTR control system design using an artificial bee colony algorithm. Simul. Model. Pract. Theory 31, 1–9 (2013)
7. Wallam, F., Memon, A.Y.: A robust control scheme for nonlinear non-isothermal uncertain jacketed continuous stirred tank reactor. J. Process Control 51, 55–67 (2017)
8. Jeng, J.C., Ge, G.P.: Disturbance-rejection-based tuning of proportional-integral-derivative controllers by exploiting closed-loop plant data. ISA Trans. 62, 312–324 (2016)
9. Nichols, N.B., Ziegler, J.G.: Optimal settings for automatic controllers. Trans. ASME J. Dyn. Syst. Meas. Control 115(2B), 220–222 (1993)
10. Kanth, V., Latha, K.: Optimization of PID controller parameters for unstable chemical systems using soft computing technique. Int. Rev. Chem. Eng. 3(3), 350–358 (2011)
11. Sánchez, H.S., Visioli, A., Vilanova, R.: Optimal Nash tuning rules for robust PID controllers. J. Franklin Inst. 354(10), 3945–3970 (2017)
12. Kashan, M.H., Nahavandi, N., Kashan, A.H.: DisABC: a new artificial bee colony algorithm for binary optimization. Appl. Soft Comput. 12(1), 342–352 (2012)
13. Mengshoel, O.J., Goldberg, D.E.: The crowding approach to niching in genetic algorithms. Evol. Comput. 16(3), 315–354 (2008)
14. Holland, J.H.: Adaptation in Natural and Artificial Systems, 1st edn. The MIT Press, London (1992)
15. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning, 1st edn. Addison-Wesley, Boston, USA (1989)

Fuzzy Logic Controlled Variable Frequency Drives Kartik Sharma, Anubhav Agrawal and Shuvabrata Bandopadhaya

Abstract In this paper, a fuzzy-based PWM controller is proposed for performance and efficiency enhancement in variable frequency drives (VFD). VFDs are used for controlling the operation of induction motors, which are generally nonlinear in nature; such nonlinear loads are a major cause of harmonics in the power supply and need attention. In the present paper, a fuzzy-based closed-loop system for a PWM-controlled VFD has been developed. The fuzzy rule base has been designed considering the instantaneous values of rotor speed as the speed-change fuzzy input. The closed-loop motor control system is designed in the MATLAB/Simulink environment for analysis. It is clear from the results that there is a significant improvement in the performance of the closed-loop drive. Keywords Variable frequency drives · Fuzzy logic controller · Induction motor · PWM generator

K. Sharma (B) · A. Agrawal · S. Bandopadhaya School of Engineering and Technology, BML Munjal University Gurgaon, Gurgaon 122413, India e-mail: [email protected] A. Agrawal e-mail: [email protected] S. Bandopadhaya e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_107

1 Introduction Speed control of induction motors has proved very effective for performance improvement and cost reduction [1]. The use of a variable frequency drive is a very efficient solution to save energy and reduce emissions, leading to a 53% reduction of CO2 emissions [2, 3]. Variable frequency drives benefit the system through process, quality and control advancements [4]. There are many drawbacks associated with conventional drives, such as noise and harmonics on the supply network, repair and maintenance, and motor insulation problems at high frequency [5].


A big drawback of present induction motor speed control is the sensorless estimation of the motor output. Considering the demerits of variable frequency drives, the major goal is to build a decisive variable frequency drive which can regulate the speed of the motor by varying the frequency and the voltage. Fuzzy logic is an efficient tool for motion control and also increases the quality of the control operations [6]. Fuzzy logic is a suitable artificial intelligence tool because of its stability properties, which make information processing better [7, 8]. The rule base is set to act on inputs taken from the motor speed, and the output is used to generate the reference signal for the PWM generator [9]. PWM technology is preferred in variable drives as it provides a more sinusoidal current output to control the frequency and voltage supplied to the AC machine [10]. The fuzzy logic controller predicts the output for various operating conditions, thus providing optimum results [11]. The fuzzy logic controller, in combination with the pulse width modulation generator, is used as a single control unit for pulse generation to the inverter circuit, due to its ability to handle nonlinearities [12–14]. The output pulses from the PWM generator are converted into an AC source consumable by the 3-phase 4-pole induction motor using a 2-level IGBT inverter. The main objective of this research is to develop a closed-loop model for motor control using a fuzzy logic controller and a PWM signal generator.

2 System Model In this section, the basic electric drive model is explained and later the proposed model is discussed.

2.1 Basic Block Diagram of Electrical Drives The electric drive discussed here is a speed control mechanism for the induction motor that varies the frequency of the electrical power supplied to it. The electric drive, in general, is closed loop in nature, and this scheme involves the speed control mechanism of the machine. AC motors are the most commonly used machines, and speed control of such motors is possible by varying the frequency of the AC supply. The basic block diagram of an AC drive is shown in Fig. 1. The source block takes either a fixed DC or AC supply depending upon the type of scheme: a two-stage conversion has a fixed AC input supply (AC–DC–AC), while inverter control has a DC input supply (DC–AC). In this study, a DC supply is given as input to the system and converted into pulse signals. The power electronic converter converts the energy source into the form required by the motor, and the power modulator balances the flow of power from source to motor. In the control unit, the reference signal (given by the user) and the feedback signal (sensed by the sensor) are compared and pulses are generated. The sensor is one of the major units

Fig. 1 Basic drive block diagram

that accomplishes closed-loop operation by sensing essential parameters such as current, voltage, speed or torque. The control unit generates commands that regulate the power modulator, and it ensures the stability and optimum performance of the whole system. The complexity of the control unit structure depends on the performance requirements. The limitations associated with conventional electrical drives are poor dynamic response and, above all, the conduction of harmonics into the control unit; these harmonics can lead to additional losses and reduced efficiency. The proposed model mitigates the demerits of conventional electrical drives and upgrades the capability of the whole system.

2.2 Detailed Block Diagram of the Proposed Model The model proposed for improving the performance of the variable frequency drive involves a fuzzy logic controller for PWM generation in a closed-loop operation. The PWM feeds a pulse signal as input to the 2-level inverter, which sends the waveform to the motor. The proposed model runs in closed loop: feedback on the motor speed is taken, and the signal is given in the form of a sine wave to the PWM generator, which sends pulses to the inverter for motor operation. The PWM technique is most suitable for induction motor control as it easily powers AC devices with the available DC current; the IGBTs are used to regulate the frequency and voltage supplied to the motor. The variation of the duty cycle in the PWM signal appears to the induction motor as an AC signal, as sketched below. The advantage of using a fuzzy logic controller is that it is more robust, customizable, fast and dynamic, making the process resemble a human thinking process (Fig. 2).
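To illustrate how a duty-cycle variation encodes the AC reference (the 50 Hz reference and 2 kHz carrier below are illustrative values, not taken from the paper), a sinusoidal PWM compares a sine reference with a triangular carrier:

```python
import numpy as np

def spwm(t, f_ref=50.0, f_carrier=2000.0, m=0.8):
    """Sinusoidal PWM: the gate is high while the sine reference (amplitude m)
    exceeds a triangular carrier, so the pulse duty cycle tracks the sine."""
    reference = m * np.sin(2 * np.pi * f_ref * t)
    # Triangle wave in [-1, 1] at the carrier frequency
    carrier = 2.0 * np.abs(2.0 * ((t * f_carrier) % 1.0) - 1.0) - 1.0
    return (reference > carrier).astype(int)

t = np.arange(0.0, 0.04, 1e-5)   # two 50 Hz cycles at 10 us resolution
gate = spwm(t)                   # pulse train fed to one inverter leg
```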

Fig. 2 Block diagram of proposed drive scheme

3 Fuzzy Logic Technique The fuzzy logic controller trains the system in such a way that a combination of inputs results in an output at the final step, acting on the rules fed into the system as "if-then" rules. A fuzzy logic system in any application always has three steps: Fuzzification—the process of converting crisp sets into fuzzy sets, done using membership functions. Fuzzy rule inference—in this stage, "if-then" rules are used to relate the output fuzzy sets to the input fuzzy sets; the Mamdani fuzzy inference system is most commonly used, due to the natural way it accepts human input. Defuzzification—the final step, which produces a crisp output based on the fuzzy rule base. The input we have specified for our fuzzy logic is the speed error, and the output is the frequency change. The speed error ranges from −1500 to 1500 rpm, whereas the frequency change ranges from −50 to 50 Hz, with Gaussian membership functions. The following steps show the design of the fuzzy logic toolbox: the first step is creating the toolbox by specifying the number of inputs and outputs (Fig. 3). Fuzzy logic membership functions: the membership functions must overlap for the fuzzy rule set (Figs. 4 and 5). Fuzzy logic rules: the system predicts a suitable output from the input data using the rule base (Fig. 6).
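A minimal pure-Python sketch of the three stages for this controller (the Gaussian set centres and the three rules below are illustrative assumptions; the paper's full rule base is given in Fig. 6):

```python
import numpy as np

def gauss(x, c, sigma):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Fuzzification: speed error in [-1500, 1500] rpm, three illustrative sets
err_sets = {"neg": (-1500, 600), "zero": (0, 600), "pos": (1500, 600)}
# Output: frequency change in [-50, 50] Hz, three illustrative sets
out_sets = {"dec": (-50, 20), "hold": (0, 20), "inc": (50, 20)}
# Rule base: if error is X then frequency change is Y (assumed rules)
rules = {"neg": "dec", "zero": "hold", "pos": "inc"}

def flc(speed_error):
    """Mamdani-style inference with centroid defuzzification."""
    y = np.linspace(-50, 50, 501)
    aggregated = np.zeros_like(y)
    for err_name, (c, s) in err_sets.items():
        strength = gauss(speed_error, c, s)                # fuzzification
        oc, os_ = out_sets[rules[err_name]]
        clipped = np.minimum(strength, gauss(y, oc, os_))  # rule inference
        aggregated = np.maximum(aggregated, clipped)       # aggregation
    return np.trapz(aggregated * y, y) / np.trapz(aggregated, y)  # defuzzification

df = flc(speed_error=400.0)   # positive error -> positive frequency change
```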


Fig. 3 Fuzzy logic toolbox design

4 Results and Discussion The desired results were obtained when the fuzzy logic controller was implemented for pulse generation through the PWM controller. The results were compared with the conventional model and conclusions were drawn (Figs. 7 and 8). The AC machine used is a 3-phase, 4-pole squirrel cage induction motor rated at 4 kW and 50 Hz. The performance of the drive is tested and analyzed on several parameters, namely the rotor speed, electromagnetic torque, stator current, and rotor current (Figs. 9, 10, 11, and 12).


Fig. 4 Fuzzy logic input membership functions

5 Conclusion The responses of the conventional drive system and the fuzzy logic controlled drive were examined and presented. The comparisons were made using the acquired waveforms and response characteristics. The quality criteria are better when the model is operated with the fuzzy logic controller: the steady state is reached earlier with the FLC, and the model is more stable than the conventional drive. The investigation supports the use of a fuzzy logic controller for reducing complexity and enhancing energy saving in drive operations.


Fig. 5 Fuzzy logic output membership functions


Fig. 6 Fuzzy logic rule base


Fig. 7 Three-phase sinusoidal reference signal waveform

Fig. 8 PWM signal input to the induction motor


Fig. 9 Rotor speed comparison of the conventional method versus FLC

Fig. 10 Electromagnetic torque comparison of the conventional method versus FLC


Fig. 11 Rotor current comparison of the conventional drive versus FLC

Fig. 12 Stator current comparison of the conventional drive versus FLC

References

1. Bose, B.K.: Recent advances and applications of power electronics and motor drives. In: The 7th WSEAS International Conference on Electrical Power Systems, High Voltage, Electrical Machines, Venice (2007)
2. Mohan, N., Robbins, W.P., Underland, T.M., Nilsen, R., Mo, O.: Simulation of power electronic and motion control systems—An overview. Proc. IEEE 8 (1994)
3. Slemon, G.R.: Electrical machines for variable frequency drives. Proc. IEEE 8 (1994)
4. Lorenz, R.D., Lipo, T.A., Novotny, D.W.: Motion control with induction motors. Proc. IEEE 8 (1994)
5. Harashima, F.: Power electronics and motion control—A future perspective. Proc. IEEE 8 (1994)
6. Bolognani, S., Zigliotto, M.: Fuzzy logic control of a switched reluctance motor drive. IEEE Trans. Ind. Appl. 5 (1996)
7. Thomas, D.E., Armstrong-Helouvry, B.: Fuzzy logic control, a taxonomy of demonstrated benefits. Proc. IEEE (1995)
8. Tripura, P., Srinivasa Kishore Babu, Y.: Fuzzy logic speed control of three phase induction drive. World Acad. Sci. Eng. Technol. 5 (2011)
9. Deken, K.F.: Dealing with line harmonics in PWM variable frequency drives. I&CS (1983)
10. Win, T., Sabai, N., Maung, H.N.: Analysis of variable frequency three phase induction motor drive. World Acad. Sci. Eng. Technol. 18 (2008)
11. Volosencu, C.: Stability analysis of a speed fuzzy control system for electrical drives. In: The 7th European Congress on Intelligent Techniques and Soft Computing EUFIT’99. ELITE Foundation, Aachen (1999)
12. Volosencu, C.: Fuzzy control of electric drives. In: Proceedings of the WSEAS International Conference, Santander (2008)
13. Mechernene, A., Zerikat, M., Hachblef, M.: Fuzzy speed regulation for induction motor associated with field-oriented control. IJ-STA 2(2), 804–817 (2008)
14. Tunyasrirut, S., Suksri, T., Srilad, S.: Fuzzy logic control for a speed control of induction motor using space vector pulse width modulation. World Acad. Sci. Eng. Technol. 25(14), 71–77 (2007)

Butterfly-Fat-Tree Topology-Based Fault-Tolerant Network-on-Chip Design Using Particle Swarm Optimization P. Veda Bhanu, Pranav Venkatesh Kulkarni, U. Anil Kumar and J. Soumya

Abstract As the number of Intellectual Property (IP) cores integrated on-chip increases, communication between them becomes a challenge. To mitigate this issue, a packet-based switching technique known as Network-on-Chip (NoC) has been proposed. In deep sub-micron technology, NoCs are more sensitive to fabrication variations and tolerance accumulation; hence, there is a need to develop reliable and efficient fault-tolerant NoC designs. This paper presents a novel fault-tolerant NoC design for the Butterfly-Fat-Tree (BFT) topology with flexible spare core placement using a Particle Swarm Optimization (PSO) based metaheuristic technique. Experiments have been performed on several benchmarks reported in the literature, (i) by varying the network size with a fixed fault percentage in the network and (ii) by varying the percentage of faults while fixing the network size. The results show improvement in terms of communication cost in BFT networks. Keywords Network-on-chip · Butterfly-fat-tree · Fault tolerance · Particle swarm optimization · Communication cost

1 Introduction The advancements in silicon technology have increased the number of transistors being packed on Integrated Chips (ICs). With the scaling of ICs in a System-on-Chip


(SoC), the communication complexity has increased [1]. Hence, there is a need to develop an efficient and reliable communication architecture. The traditional bus-based architectures in SoCs have limited bandwidth capabilities and cannot handle the increased bandwidth requirements [2]. Due to these limitations, Network-on-Chip (NoC) has been proposed as a viable solution to address current application requirements [3]. The major components of a NoC are cores, routers, and links. Communication in a NoC is based on a packet-based switching technique in which different cores communicate through routers via the links available in the architecture [4]. Designing a NoC that can satisfy the requirements of incoming applications with high inter-core data bandwidth is crucial. Such designs incur significant power dissipation in the system, which may lead to failures; hence, there is a need to build a reliable system from unreliable components without introducing excessive overhead. Highly scaled NoCs are prone to transient, intermittent, and permanent faults [5]. Transient faults can be caused by temporary interferences like crosstalk, voltage noise, and radiation. Intermittent faults can be caused by marginal or unstable hardware and occur repeatedly at the same location, often in bursts. Permanent faults are caused by failed transistors or broken wires, resulting in logic faults and delay faults, respectively. In this paper, we consider permanent faults while trying to find the placement for a spare core in a Butterfly-Fat-Tree (BFT) topology-based NoC. This work considers only core failures; however, the spare core placement is done by taking a faulty network as input, where some router positions are not available for mapping. We have also assumed that the most communicating core has the highest probability of failure [6]; other core failures in the given application core graph are left as an extension of this work. The core placement problem in NoC has been shown to be NP-hard [7]. Most of the works reported in the literature target mesh topology-based NoC design. The BFT topology has the advantages of a smaller diameter and a higher bisection width compared to mesh and mesh-of-tree architectures [8]; it also requires less area than a mesh architecture and is easy to implement. This motivated us to find the best suitable position for the spare core in the BFT topology. We have experimented by varying the fault percentage in the network and by varying the BFT network size to check the applicability of our approach. The rest of the paper is organized as follows. Section 2 presents a brief literature survey. Section 3 presents an overview of the BFT topology. Section 4 describes the problem formulation. Section 5 presents a brief overview of PSO. Section 6 discusses the formulation of PSO for spare core placement. Section 7 presents the experimental results. Section 8 draws the conclusion.

2 Related Work Many works have aimed to develop different techniques to map cores onto various topologies. Since mapping in NoC is an NP-hard problem, most of the techniques used are based on heuristics. In [9], a two-phase


heuristic mapping algorithm is developed for mapping. The main idea behind this technique is to keep the two most communicating cores on adjacent nodes and arrange the remaining clusters around these cores. In [10], a branch and bound algorithm was proposed to map cores onto a tile-based NoC architecture to optimize the total communication energy. A mapping algorithm was proposed in [11] to map cores onto a mesh architecture under bandwidth constraints while minimizing the average communication delay; the NMAP algorithm divides the traffic among different routes to reduce the bandwidth requirements and thereby the latency. In [12], the SUNMAP tool was proposed, which takes a core graph as input, automatically selects the best topology, and produces the final mapping result, considering objectives such as minimizing the communication delay, area, and energy consumption subject to bandwidth and area constraints. A two-stage variation-aware mapping scheme was proposed in [13] for fault-tolerant NoCs with redundant cores, where various task mapping solutions are generated at the design stage and, at runtime, one of the mapping solutions is used based on the availability of the physical cores. Narrowing down to the BFT topology, the approach followed in [14] assumes a fault-free environment and maps cores based on the Kernighan–Lin partitioning technique, where the frequently communicating cores are placed in the same partition. Most of the works reported in the literature have considered mesh-based architectures for fault tolerance; no approaches have been reported that consider faults in a BFT-based NoC design. Our work mainly concentrates on faults in NoC while performing core mapping onto the BFT topology using a Discrete Particle Swarm Optimization (DPSO) technique to optimize the communication cost.

3 An Overview of Butterfly-Fat-Tree Topology In a BFT structure, the cores or processing elements are placed at the leaf-level routers. Each router is labelled by a pair of coordinates S(a, b), where a denotes the level of the router and b represents the position of the router within that level. A 4 × 4 BFT architecture is shown in Fig. 1. The locations of cores at the leaf level are given by (0, 1), (0, 2), (0, 3), …, (0, N). In general, if there are N functional cores at the lowest (leaf) level, they are connected to N/4 routers.

4 Problem Formulation The set of tasks implemented by the processing elements or IP cores of an application is represented in the form of a core graph, defined as follows. Definition 1 The directed graph G(C, E) corresponds to the communication traces of the application. – Each vertex ci ∈ C corresponds to a core in the application core graph.


Fig. 1 4 × 4 butterfly-fat-tree architecture

– An edge eij represents communication from core ci to core cj. The edge is labelled with a value BWij equal to the bandwidth requirement of the communication between ci and cj. Definition 2 The BFT network of size m × n with the available number of routers to place the cores of the input application core graph. Using the above definitions, the problem of minimizing the communication cost is solved by mapping the cores (including the spare core) of the application core graph onto the available routers in the BFT network using the DPSO technique. The communication cost is the product of the number of hops required for cores ci and cj to communicate and the corresponding bandwidth requirement BWij, summed over all edges, as given by Eq. (1):

$$\text{Communication cost} = \sum_{\forall\,\text{edges}} (\text{Number of Hops} \times BW_{ij}) \quad (1)$$
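A minimal sketch of how Eq. (1) could be evaluated is given below; the edge list, the mapping, and the bft_hops helper are hypothetical stand-ins, since the actual hop count depends on the routing path through the BFT levels.

```python
# Minimal sketch of Eq. (1). The core graph is a list of edges
# (src_core, dst_core, bandwidth); `mapping` assigns each core to a router.
def communication_cost(edges, mapping, hops):
    """Sum of (hop count * bandwidth) over all edges of the core graph."""
    return sum(hops(mapping[ci], mapping[cj]) * bw for ci, cj, bw in edges)

def bft_hops(ra, rb):
    """Placeholder hop distance between two routers; a real implementation
    would walk up the butterfly-fat-tree to the lowest common ancestor."""
    return 0 if ra == rb else 2  # assumption: 2 hops via one common parent

edges = [(0, 1, 70), (1, 2, 362)]        # hypothetical core graph
mapping = {0: "r1", 1: "r1", 2: "r2"}    # cores 0 and 1 share router r1
print(communication_cost(edges, mapping, bft_hops))  # 0*70 + 2*362 = 724
```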

5 Particle Swarm Optimization Particle Swarm Optimization (PSO) is a population-based stochastic technique designed and developed by Eberhart and Kennedy in 1995 [15], inspired by the social behaviour of bird flocking or fish schooling in search of food. In a PSO system, each particle represents a solution; multiple particles coexist and collaborate simultaneously. In the problem space, each particle adjusts its flight according to its own experience as well as that of its companions. The quality of each particle in the PSO search space is determined by its fitness value. Since PSO is successful in the continuous domain, many researchers have applied the discrete version of PSO


as well. This inspired us to apply a Discrete Particle Swarm Optimization (DPSO) formulation to our approach as part of the fault-tolerant NoC design. The DPSO basics are explained below. Each particle is treated as a point in n-dimensional space. In every run, the fitness function is evaluated at the position of the particle in the solution space. The position of the ith particle at the kth iteration is denoted as $p_k^i$. Each particle keeps track of the best value it has obtained so far, called the personal best or local best of the ith particle ($pbest_i$). Similarly, the best fitness value across the whole swarm is called the global best ($gbest_k$) for generation k. The new position of the particle is calculated by

$$p_{k+1}^i = \left(c_1 * I \oplus c_2 * (p_k^i \rightarrow pbest_i) \oplus c_3 * (p_k^i \rightarrow gbest_k)\right) p_k^i \quad (2)$$

In Eq. 2, $a \rightarrow b$ represents the minimum-length sequence of swaps to be applied on the components of a to transform it into b. For example, if a = [6, 7, 8, 9] and b = [9, 6, 7, 8], then a → b = [swap(0, 3), swap(1, 3), swap(2, 3)]. The operator ⊕ is the fusion operator, applied on two swap sequences: a ⊕ b denotes the sequence of swaps in a followed by the sequence of swaps in b. The constants $c_1$, $c_2$ and $c_3$ are the inertia, self-confidence and swarm-confidence values, respectively. The quantity $c_i * (a \rightarrow b)$ means that each swap in the sequence $a \rightarrow b$ is applied with probability $c_i$. Swap sequences such as [swap(1, 1), swap(2, 2), swap(3, 3), …, swap(n, n)] are known as identity swaps, denoted by I; they correspond to the inertia of the particle to maintain its current configuration. To generate a new particle $p_{k+1}^i$, the final swap sequence is applied on the particle $p_k^i$ as in Eq. 2. From [16], the convergence condition for this PSO is given by

$$(1 - \sqrt{c_1})^2 \le c_2 + c_3 \le (1 + \sqrt{c_1})^2 \quad (3)$$

Accordingly, we have worked with various values of c1 , c2 and c3 . The results reported in this paper are based upon the values of c1 = 0.6, c2 = 0.2 and c3 = 0.2. This completes an overview of PSO.
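As a quick sanity check (ours, not from the paper), the chosen constants can be verified against the convergence condition of Eq. (3):

```python
from math import sqrt

c1, c2, c3 = 0.6, 0.2, 0.2                      # values used in this paper
lo, hi = (1 - sqrt(c1)) ** 2, (1 + sqrt(c1)) ** 2
print(lo, c2 + c3, hi)                          # ~0.0508 <= 0.4 <= ~3.149
assert lo <= c2 + c3 <= hi                      # Eq. (3) holds
```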

6 PSO Formulation for Spare Core Placement In this section, we present the PSO formulation for selecting router positions for all the cores, along with the spare core, in the BFT network, such that the communication cost is minimized. The inputs to our formulation are an application core graph and the available router positions for mapping in the BFT network. Individual cores are assumed to be of uniform size in our formulation, for simplicity; nonuniform-sized cores can also be considered, though they may result in more area.


6.1 Particle Structure and Fitness Function A particle (P) is an array whose size equals the number of cores present in the core graph, with an extra entry for the spare core. Every consecutive group of four index values of the array is associated with one router, and the contents of the array give the core numbers of the application core graph. To accommodate a single spare core, it is assumed that a free slot exists at some router; otherwise, an additional router is required. Depending on the failure likelihood of the cores in the core graph, the structure can be extended to accommodate more spare cores. For a 4 × 4 BFT network, the particle structure is as follows: every four indices are associated with one router, e.g., indices 0–3 with router r1, indices 4–7 with router r2, indices 8–11 with router r3, indices 12–15 with router r4, and so on. For a particle Pi, the fitness function is defined as the communication cost obtained by the association of cores (including the spare core) to the different routers. The evolution of particles over generations is driven by this fitness function.

Index:    0   1   2    3 |  4   5   6   7 |   8   9  10   11 |  12  13  14   15
Core:    C1  C3  C5  C13 | C4  C2  C7  C8 | C10  C9 C11  C15 | C12  C6 C14  C16
Router:        r1        |       r2       |        r3        |        r4

6.2 Local and Global Bests During the evolution of a particle, the set of core positions that gives the minimum fitness value found by that particle so far is known as its local best (pbest). The global best (gbest) is the minimum fitness value obtained across the swarm in a generation. The evolution of particles is guided by pbest and gbest; these values are updated whenever the corresponding values in the current iteration are lower than those of the previous iteration.
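A minimal sketch of how pbest and gbest could be tracked under minimization follows; all names are illustrative assumptions rather than the authors' implementation:

```python
def update_bests(particles, fitness, pbest, gbest):
    """Keep each particle's best-so-far and the swarm-wide best (minimization).
    pbest is a list of (fitness, position) pairs or None; gbest likewise."""
    for i, p in enumerate(particles):
        f = fitness(p)
        if pbest[i] is None or f < pbest[i][0]:
            pbest[i] = (f, list(p))          # personal best of particle i
        if gbest is None or f < gbest[0]:
            gbest = (f, list(p))             # global best of the generation
    return pbest, gbest
```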

6.3 Evolution of Generation Initially, random particles are created and the fitness of each particle is evaluated. The pbest and gbest values are identical in the initial generation. Through a series of swap operations, new generations are created, and pbest and gbest are updated accordingly, which is expected to lead to optimum results. Swap Operator: For the particle structure defined in Sect. 6.1, the positions of a particle P with indices i and j are swapped. Note that the swap operator does not accept two indices that lie within the same group of four consecutive indices, i.e., indices associated with the same router.


The swap operator SO (3, 5) has been applied on P to swap positions 3 and 5 as shown below.

C1 C3 C5 C2 C4 C13 C7 C8 C10 C9 C11 C15 C12 C6 C14 C16

Swap Sequence: A sequence of swap operations applied on a particle is known as a swap sequence. For example, the swap sequence SS = SO(1, 15), SO(3, 12) applied on a particle Q creates a new particle Qnew as follows. Let the particle Q be

C16 C9 C5 C7 C4 C2 C6 C10 C8 C3 C11 C1 C12 C13 C14 C15

SO (1,15) on particle Q creates intermediate particle Qint :

C16 C15 C5 C7 C4 C2 C6 C10 C8 C3 C11 C1 C12 C13 C14 C9

SO (3,12) on Qint results in new particle Qnew :

C16 C15 C5 C12 C4 C2 C6 C10 C8 C3 C11 C1 C7 C13 C14 C9

This completes the evolution of generation.
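The following sketch reproduces the swap-sequence example above in Python; apply_swap is our hypothetical helper, and it enforces the same-router restriction stated in Sect. 6.3:

```python
def apply_swap(particle, i, j):
    """Swap operator SO(i, j); two indices inside the same 4-slot router
    group are rejected, as required by the particle structure."""
    if i // 4 == j // 4:
        raise ValueError("swap within one router is not allowed")
    particle[i], particle[j] = particle[j], particle[i]

Q = ["C16", "C9", "C5", "C7", "C4", "C2", "C6", "C10",
     "C8", "C3", "C11", "C1", "C12", "C13", "C14", "C15"]

for i, j in [(1, 15), (3, 12)]:     # swap sequence SS = SO(1,15), SO(3,12)
    apply_swap(Q, i, j)

print(Q[:4])  # ['C16', 'C15', 'C5', 'C12'] -- matches Qnew in the text
```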

7 Experimental Results In this section, we present the results obtained by our approach on several benchmark applications reported in the literature. We have implemented our approach on a Linux-based PC with an Intel Xeon E5-1650 V3 processor operating at 3.5 GHz and 32 GB of main memory. The results are organized as follows. Section 7.1 shows the communication cost for 8 × 8 and 16 × 16 BFT networks with no core faults.


Section 7.2 shows the communication cost in BFT topology by varying the network size (8 × 8 and 16 × 16) with 30% faults in the network. Section 7.3 shows the communication cost by varying the fault percentage (15, 30, 45 and 60%) in 8 × 8 BFT network.

7.1 Communication Cost Using our Approach in BFT Network Without any Faults In this section, we have calculated communication cost [by Eq. (1)] using our approach for 8 × 8 and 16 × 16 BFT networks without any faults in the network. This acts as the reference for comparison with spare core inclusion by varying the fault percentage and by varying the network size in BFT. Table 1 shows the communication cost of several benchmark applications with no core failure.

7.2 Communication Cost by Varying the BFT Network Size with Fixed Fault Percentage In this section, we have calculated the communication cost by varying the BFT network size (8 × 8 and 16 × 16) with fixed 30% faults in the network to check the scalability of our approach. Table 2 shows the communication cost calculated by varying the BFT network size on several application benchmarks reported in the literature. From Table 2, we can observe that, in comparison with the fault-free results (shown in Table 1), the minimum percentages of increment in communication cost for the 8 × 8 and 16 × 16 networks are 1.49 and 9.23%, respectively, and the maximum percentages of increment are 8.25 and 42.30%, respectively. On average, the communication cost overheads in the 8 × 8 and 16 × 16 networks are 7.60 and 20.31%, respectively. Hence, with a minimal increment in communication cost, the spare core can be placed using our approach, thereby providing fault tolerance.

Table 1 Communication cost without any core faults in 8 × 8, 16 × 16 BFT network

Application   No. of cores   Communication cost in 8 × 8 and 16 × 16 BFT
MPEG          12             4461
MWD           12             1440
263Encoder    12             26.89
MP3Encoder    13             18.38
263Decoder    14             20.42
VOPD          16             4444


Table 2 Communication cost by varying BFT network size with 30% faults in the network

Application   No. of cores   8 × 8    % of increment^a   16 × 16   % of increment^a
MPEG          12             4634     3.73               4915      9.23
MWD           12             1536     6.25               2496      42.30
263Encoder    12             29.31    8.25               30.65     12.22
MP3Encoder    13             19.49    5.67               19.19     4.22
263Decoder    14             20.73    1.49               27.64     26.12
VOPD          16             4536     2.02               6157      27.82
Average %                             7.60                         20.31

^a Over communication cost reported in Table 1

7.3 Communication Cost by Varying the Fault Percentage in BFT Network In this section, we have calculated the communication cost by varying the percentage of faults (15, 30, 45 and 60%) in the 8 × 8 BFT network. This experiment demonstrates the applicability of our approach as the fault percentage increases. Table 3 shows the communication cost calculated by varying the faults in the network. As we can observe from Table 3, in comparison with the fault-free approach (reported in Table 1), the minimum percentages of increment in communication cost for the 15, 30, 45 and 60% networks are 0, 1.49, 4.96 and 12.64%, respectively. The maximum percentages of increment are 5.31, 8.25, 26.73 and 89.36%, respectively. On average, the communication cost overheads in the 15, 30, 45 and 60% networks are 2.82, 7.60, 13.27 and 45.45%, respectively, over the fault-free approach.

Table 3 Communication cost by varying the fault percentage in the 8 × 8 network

Application   No. of cores   15%     % Inc.^a   30%     % Inc.^a   45%     % Inc.^a   60%     % Inc.^a
MPEG          12             4634    3.73       4634    3.73       4694    4.96       5904    24.44
MWD           12             1472    2.17       1536    6.25       1568    8.16       8128    82.28
263Encoder    12             28.40   5.31       29.31   8.25       29.76   9.64       43.81   38.62
MP3Encoder    13             19.36   5.06       19.49   5.69       19.18   4.16       21.04   12.64
263Decoder    14             20.56   0.68       20.73   1.49       27.87   26.73      27.36   25.36
VOPD          16             4444    0.00       4536    2.02       6007    26.01      41796   89.36
Average %                            2.82               7.60               13.27              45.45

^a % of increment over communication cost reported in Table 1


This shows the applicability of our proposed approach with increasing fault percentage in the network. As the fault percentage increases, the number of positions available for mapping decreases, making the search space smaller. Our approach still finds a spare core position despite the reduced search space. The overhead in communication cost grows as the fault percentage increases, since the number of router positions available for mapping decreases.

8 Conclusion In this paper, we have proposed a discrete particle swarm optimization based metaheuristic technique to select the best position for the spare core in the BFT network. The applicability of our approach has been demonstrated by varying the fault percentage and also by varying the network size with fixed faults for different application benchmarks reported in the literature. Future works include addressing multiple core failures and proposing exact methods. Acknowledgements This work is partially supported by the research project No. ECR/2016/001389 dated 06/03/2017, sponsored by the Science and Engineering Research Board, Government of India.

References

1. Lee, H., Chang, N., Ogras, U., Marculescu, R.: On-chip communication architecture exploration: a quantitative evaluation of point-to-point, bus, and network-on-chip approaches. ACM Trans. Des. Autom. Electron. Syst. 12(3), 23 (2007)
2. Benini, L., De Micheli, G.: Networks on chips: a new SoC paradigm. IEEE Comput. 35(1), 70–78 (2002)
3. Dally, W., Towles, B.: Route packets, not wires: on-chip interconnection networks. In: Proceedings of the 38th Design Automation Conference, DAC 2001, pp. 684–689. Las Vegas, Nevada, USA (2001)
4. Soumya, J., Chattopadhyay, S.: Application-specific network-on-chip synthesis with flexible router placement. J. Syst. Archit. 59(7), 361–371 (2013)
5. Radetzki, M., Feng, C., Zhao, X., Jantsch, A.: Methods for fault tolerance in networks-on-chip. ACM Comput. Surv. 46(1), 8 (2013)
6. Khalili, F., Zarandi, H.: A fault-tolerant core mapping technique in networks-on-chip. IET Comput. Digital Tech. 7(6), 238–245 (2013)
7. Donath, W.: Complexity theory and design automation. In: Proceedings of the 17th Design Automation Conference, DAC ’80, pp. 412–419. Minneapolis, MN, USA (1980)
8. Sahu, P.K., Manna, K., Chattopadhyay, S.: Application mapping onto butterfly-fat-tree based network-on-chip using discrete particle swarm optimization. Int. J. Comput. Appl. 115(19), 13–22 (2015)
9. Koziris, N., Romesis, M., Tsanakas, P., Papakonstantinou, G.: An efficient algorithm for the physical mapping of clustered task graphs onto multiprocessor architectures. In: Proceedings of the 8th Euromicro Workshop on Parallel and Distributed Processing, pp. 1–8. IEEE, Rhodos, Greece (2000)
10. Hu, J., Marculescu, R.: Energy-aware mapping for tile-based NoC architectures under performance constraints. In: Proceedings of the ASP-DAC Asia and South Pacific Design Automation Conference, pp. 233–239. IEEE, Kitakyushu, Japan (2003)
11. Murali, S., De Micheli, G.: Bandwidth-constrained mapping of cores onto NoC architectures. In: Proceedings Design, Automation and Test in Europe Conference and Exhibition, pp. 896–901. IEEE, Paris (2004)
12. Murali, S., De Micheli, G.: SUNMAP: a tool for automatic topology selection and generation for NoCs. In: 41st Design Automation Conference, pp. 914–919. IEEE, San Diego, CA, USA (2004)
13. Zhang, L., Yang, J., Xue, C., et al.: A two-stage variation-aware task mapping scheme for fault-tolerant multi-core Network-on-Chips. In: 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–4. IEEE, Baltimore, MD, USA (2017)
14. Sahu, P.K., Shah, N., Manna, K., Chattopadhyay, S.: An application mapping technique for butterfly-fat-tree network-on-chip. In: 2011 Second International Conference on Emerging Applications of Information Technology, pp. 383–386. IEEE, Kolkata, India (2011)
15. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: IEEE International Conference on Neural Networks, pp. 1942–1948 (1995)
16. Luo, G., Zhao, H., Song, C.: Convergence analysis of a dynamic discrete PSO algorithm. In: First International Conference on Intelligent Networks and Intelligent Systems, pp. 89–92. IEEE (2008)

Big Data Classification Using Scale-Free Binary Particle Swarm Optimization Sonu Lal Gupta, Anurag Singh Baghel and Asif Iqbal

Abstract Due to the data explosion, Big Data is all around us. The curse of dimensionality in Big Data poses a great challenge for data classification problems. Feature selection is a crucial process for selecting the most important features, increasing classification accuracy and reducing time complexity. Traditional feature selection approaches suffer from various limitations, so Particle Swarm Optimization (PSO)-based feature selection approaches have been proposed to overcome them; however, classical PSO shows premature convergence when the number of features increases or the dataset has many categories/classes. In this paper, a topology-controlled Scale-Free Particle Swarm Optimization (SF-PSO) is proposed for feature selection in high-dimensional datasets. A Multi-Class Support Vector Machine (MC-SVM) is used as the machine learning classifier, and the obtained results show the superiority of our proposed approach in big data classification. Keywords Big data · Feature selection · Evolutionary computation (EC) · Particle swarm optimization (PSO) · Scale-Free binary particle swarm optimization (SF-BPSO)

1 Introduction We live in an era of Big Data, where data explosion has become ubiquitous. Big Data is the large amount of data being processed in data mining environments; in other words, it is a collection of datasets so large and complex that they become difficult to process using on-hand database management tools or traditional data processing applications. In the year 2001, Doug Laney was the first to talk about three V's in Big Data management, namely volume, variety, and velocity,


which was summarized in the year 2011 by Gartner [1] as high volume, high velocity, and high variety. Later on, researchers added four more V's, namely veracity, validity, viability, and volatility, so Big Data is now characterized by seven V's known as the Big Data spectrum. Volume is the prime characteristic of Big Data. Volume is most often interpreted as the amount of data, but this is not the whole picture: the volume of big data can take the form of the amount, size, or scale of the data [2]. The size of data has two dimensions, horizontal and vertical. Horizontally, it is the number of instances or records available in the dataset; vertically, it is the number of features or attributes present in the dataset. The total number of features contained in the dataset defines its dimensionality. The issues related to the volume of big data include processing performance, the curse of modularity, class imbalance, the curse of dimensionality, feature engineering, nonlinearity, etc. In machine learning classification problems, many datasets carry hundreds of thousands of features, and feature selection is a crucial step in any classification task. Not all features possess discriminative power for the classification process [3]: only a few features are relevant and important for classification, and the rest contribute little, as most of the time they are noisy and redundant. Further, identifying those few important features is a cumbersome task, and a large number of features also increases the complexity of the classifier in terms of both time and space. The process of selecting a compact feature subset from the complete list of extracted features, to reduce the computational complexity without hampering the classification accuracy, is called feature selection. In big data classification problems, the curse of dimensionality is a major hindrance for the underlying machine learning classification model: the performance of the classifier degrades with the increasing number of features, in terms of both accuracy and time. In general, there are 2^n possible feature subsets for a dataset with n features, a number that grows exponentially with n. Due to this large search space, traditional exhaustive feature selection techniques are unable to solve the feature selection problem in a reasonable time with good classification accuracy. Traditional feature selection approaches suffer from various limitations. First, a predefined fixed number of most relevant features must be chosen from all available features for the classification model, which is a major drawback because it is very difficult to know the optimal number of features in advance. Second, to minimize feature redundancy, features are ranked on the basis of some informative measure and then eliminated from the bottom; but, again, it is not known how many features need to be eliminated. Third, different heuristics are utilized to select appropriate features, often customized for individual classifiers, so it is very difficult to design a generalized model. Due to these limitations, Evolutionary Computation (EC)-based techniques have been proposed to solve feature selection problems. EC-based approaches are highly flexible and automatically select the optimal set of features without specifying their number. They work on the whole set of available features, so feature


interaction is also considered during the optimal feature selection process. Another important benefit is that these approaches can be easily incorporated into any learning classifier, plug-and-play style. The size of the feature set is also handled efficiently, as EC-based approaches are computationally fast. Some of these approaches, used in the research community for feature selection in Big Data classification, are Particle Swarm Optimization (PSO) [4, 5], Ant Colony Optimization (ACO) [6], the harmony search algorithm [7], and the cuckoo search algorithm [8]. PSO, proposed by Kennedy and Eberhart [9], is one of the most explored and extensively used EC-based approaches for feature selection because of its algorithmic simplicity, effectiveness, and high computational efficiency [10]. PSO is a nature-inspired algorithm in which a flock of particles moves in a search space constrained by some parameters. Particles learn through the experience of neighboring particles in a collaborative and adaptive manner and thereby achieve optimality. In PSO, a particle's learning is highly influenced by its neighboring particles according to the network topology used in the search. Both the topological structure and the learning mechanism of the particles have a great impact on the performance of the PSO algorithm [11]. Various topologies have been suggested by researchers, but studies reveal that no single topology works best for all problems: the optimal topology is always problem dependent, and different problems need different topologies. The two most widely used topologies are global best (gbest) and local best (lbest). The gbest topology is a fully connected graph in which each particle is connected with all other particles. In the lbest topology, each particle is connected to only a limited number of neighboring particles, generally two or three. Another major topology is the ring topology, in which each particle is connected to its two nearest neighbors. Topologies like Von Neumann, stars, wheels, cycles, pyramids, and random graphs are also used in the literature. Recently, scale-free topologies, discovered in scale-free networks in natural systems, have been applied to the population of particles in PSO and their performance has been investigated. These studies establish the superiority of scale-free PSO in comparison to other topological structures. In this paper, we propose a feature selection approach using scale-free topological structures in binary PSO, capable of tackling high-dimensional datasets with higher classification accuracy at a minimal number of features. The remainder of this paper is organized as follows. Section 2 presents a literature review of various feature selection approaches using PSO and its variants. The classical PSO, binary PSO, and our proposed SF-PSO are described in Sect. 3. In Sect. 4, experimental results are presented to show the efficiency of our proposed approach. Finally, conclusions and the scope of future work are given in Sect. 5.


2 Literature Review In this section, a brief literature review of various feature selection approaches for big data classification using PSO and its variants is presented. Zhang et al. [12] studied the scale-free topology in fully informed PSO using a modified BA influence model; the proposed algorithm was tested on different test functions, which showed the approach to be effective and more natural. Fong et al. [13] proposed a swarm search to find optimal features, utilizing several swarm-based algorithms like PSO, BAT, and WSA for selecting an optimal feature set over some high-dimensional datasets; however, the dimensionality considered was limited to 875 features. A new PSO approach named PSO-LSRG was developed by Tran et al. [4], which combines a new local search on pbest with a gbest-reset mechanism for feature selection in high-dimensional datasets. A Selectively Informed Particle Swarm Optimization (SIPSO) was proposed by Gao et al. [14], in which a particle's learning is guided through hub and non-hub particles. Hub particles are more influential and play the key role in convergence toward the optimal solution, while non-hub particles are less influential and have more freedom to move in the search space, maintaining its diversity. SIPSO was tested on eight benchmark functions, and high-quality results were obtained with a speedy convergence rate. The importance of topological structure for the performance of PSO was established by Liu et al. [11]; the selection of an optimal topology is always problem specific and depends heavily on computational capacity. Based on the experimental results, formulae were also developed to choose parameters for an optimal topology. A variant of swarm optimization known as Competitive Swarm Optimization (CSO) was proposed by Gu et al. [5] to select relevant features from a larger feature set; the experimental results show that CSO outperforms the traditional PCA-based method and other existing algorithms. Fong et al. [15] developed a new feature selection algorithm named Accelerated Particle Swarm Optimization (APSO) for Big Data stream mining. They introduced a lightweight incremental feature selection method able to select features on the fly. Experiments were conducted on five different big datasets with a large number of features, and high accuracy was achieved within a reasonable time.

3 Proposed Approach In this section, we describe the classical PSO, binary PSO, and scale-free PSO in detail.


3.1 Classical PSO Particle swarm optimization is a problem-solving technique presented by Eberhart and Kennedy in 1995 [9], based on the social behavior of bird flocking or fish schooling. PSO uses a swarm of particles to represent possible solutions, and each particle i of the swarm is represented by a set of three vectors: the position vector $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$, the best position vector $P_i = [p_{i1}, p_{i2}, \ldots, p_{iD}]$, and the velocity vector $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]$. The position and velocity vectors are initialized randomly within the search space. The particles then move within the search space, in each iteration updating velocity and position by

$$v_{id}(t+1) = w \, v_{id}(t) + c_1 \epsilon_1 (p_{id} - x_{id}) + c_2 \epsilon_2 (p_{gd} - x_{id}) \quad (1)$$
$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) \quad (2)$$
$$w = (w_{\min} - w_{\max}) \, \frac{\text{maxiter} - \text{iter}}{\text{maxiter}} + w_{\max} \quad (3)$$

where $c_1$ and $c_2$ are the acceleration coefficients, $w$ is the inertia factor, and $\epsilon_1$ and $\epsilon_2$ are uniformly distributed random numbers generated between 0 and 1. The index t denotes the iteration number. In each iteration, every particle i is updated using Eqs. 1 and 2. The fitness function representing the given problem is calculated for each particle, and the best position in each iteration is determined by taking the minimum or maximum of the fitness function, depending on the nature of the problem. Position vectors are again updated using Eqs. 1 and 2, and the process is repeated until the stopping condition is satisfied.
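A minimal sketch of Eqs. (1)–(3) for a single dimension follows; the pbest/gbest values and the loop structure are illustrative assumptions, not the authors' implementation, while the constants come from Table 2:

```python
import random

def inertia(wmin, wmax, it, maxiter):
    """Eq. (3): inertia weight scheduled over the iterations."""
    return (wmin - wmax) * (maxiter - it) / maxiter + wmax

def step(x, v, p, g, w, c1=2.05, c2=2.05):
    """Eqs. (1) and (2) for one dimension of one particle."""
    e1, e2 = random.random(), random.random()
    v = w * v + c1 * e1 * (p - x) + c2 * e2 * (g - x)   # velocity update
    return x + v, v                                     # position update

x, v = 0.0, 0.1
for it in range(100):
    w = inertia(0.95, 0.99, it, 100)                    # Table 2 values
    x, v = step(x, v, p=1.0, g=2.0, w=w)                # pbest/gbest illustrative
```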

3.2 Binary PSO Binary PSO is an extended version of PSO, developed by Kennedy and Eberhart, used to solve optimization problems in binary space [16]. The velocity of a particle in BPSO is described in terms of the number of bits changed in each iteration. Thus, if a particle does not move, all of its bits remain un-flipped, while flipping all the binary bits moves the particle farthest. Equation (1) for updating the velocity remains the same in BPSO; however, Eq. (2) is redefined to update the position as Eq. (4), where S(·) denotes the sigmoid function and rand is a uniformly distributed random number over [0, 1].

$$x_{id}(t+1) = \begin{cases} 0 & \text{if } rand \ge S(v_{id}(t+1)) \\ 1 & \text{if } rand < S(v_{id}(t+1)) \end{cases} \quad (4)$$

$$S(v_{id}(t+1)) = \frac{1}{1 + e^{-v_{id}(t+1)}} \quad (5)$$
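A minimal sketch of the BPSO position update of Eqs. (4) and (5) follows; the velocity clamping to [Vmin, Vmax] is taken from Table 2, while the function names are our own:

```python
import math
import random

def sigmoid(v):
    """Eq. (5): transfer function mapping a velocity to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def bpso_position(v, vmin=-6.0, vmax=6.0):
    """Eq. (4): sample the new bit; velocity clamped as in Table 2."""
    v = max(vmin, min(vmax, v))
    return 1 if random.random() < sigmoid(v) else 0

bits = [bpso_position(v) for v in (-6, 0, 6)]  # mostly 0, fifty-fifty, mostly 1
print(bits)
```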

3.3 Scale-Free PSO In classical PSO, particles form a fully connected network and interact with each other at regular intervals, which is not the case in real-life networks. In 1999, Barabasi and Albert [17] proposed the scale-free BA model and showed that many real-life and human-made networks are neither completely connected nor homogeneous regular networks; rather, they follow scale-free topological structures. In scale-free networks, the node degrees exhibit a power law distribution; examples include citation networks, the World Wide Web, the Internet, software engineering networks, and online social networks. Only a few densely connected nodes are highly influential, and the rest of the nodes are of low degree and less influential. The impact of scale-free topologies on the performance of PSO has been investigated and found to be much more effective than canonical PSO. A scale-free network works on the principles of “growth” and “preferential attachment”. If a network initially has m0 connected nodes and, at any moment, a new node of degree m (m < m0), i.e., a node with m edges to different existing nodes, joins the network, then the new node connects to an existing node i with probability $P_i$ depending upon the degree $K_i$ of node i, as given by Eq. 6:

$$P_i = \frac{K_i}{\sum_j K_j} \quad (6)$$

where j ranges over all existing nodes. Figure 1 depicts a scale-free network of 200 nodes in which the size of each node is proportional to its degree. Figure 2 shows the frequency of nodes of various degrees, which clearly follows the power law distribution.
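A minimal sketch of growth with preferential attachment (Eq. 6) follows; the seed-ring construction and the roulette-wheel selection are our own illustrative choices:

```python
import random

def ba_network(m0=3, m=2, n=50):
    """Grow a scale-free network: each new node attaches to m existing nodes
    with probability proportional to their degree, as in Eq. (6)."""
    edges = [(i, (i + 1) % m0) for i in range(m0)]   # seed ring of m0 nodes
    degree = {i: 2 for i in range(m0)}               # each seed node has 2 edges
    for new in range(m0, n):
        targets = set()
        while len(targets) < m:
            # roulette-wheel selection implementing P_i = K_i / sum_j K_j
            r = random.uniform(0, sum(degree.values()))
            acc = 0.0
            for node, k in degree.items():
                acc += k
                if acc >= r:
                    targets.add(node)
                    break
        degree[new] = m
        for t in targets:
            degree[t] += 1
            edges.append((new, t))
    return degree, edges

degree, _ = ba_network()
print(sorted(degree.values(), reverse=True)[:5])  # a few hubs dominate
```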

4 Experiment and Results In this paper, a Scale-Free Binary Particle Swarm Optimization (SF-BPSO) based feature selection approach is proposed, and the obtained results are compared with Conventional Binary Particle Swarm Optimization (C-BPSO) to evaluate the performance of the proposed approach. Both the SF-BPSO and C-BPSO approaches are tested on six high-dimensional datasets [18] in order to validate the models. Generally, a feature selection approach becomes biased if the number of instances/features is very low [19], which raises questions about the validity of the model. Therefore, for this experiment, we have considered datasets having varying


Fig. 1 An example of scale-free network

Fig. 2 Power law distribution of degree of nodes in scale-free network

features, instances, and classes for proper validation. A description of the datasets is given in Table 1, and the parameters of both SF-BPSO and C-BPSO are shown in Table 2. If the proposed approach works efficiently on all datasets, the model is not biased and can work on any dataset. A sigmoid function is used to convert the continuous values of both approaches into discrete values, and then these


Table 1 Description of datasets

S. no.   Datasets          Total features   Total instances/samples   Total classes
1        DLBCL             5469             77                        2
2        Prostate cancer   10,509           102                       2
3        9 tumors          5726             60                        9
4        11 tumors         12,533           174                       11
5        SRBCT             2308             83                        4
6        Lung cancer       56               32                        3

Table 2 Parameters of C-BPSO and SF-BPSO

Parameter name           Value                       PSO type
Inertia factor           ωmin = 0.95, ωmax = 0.99    C-BPSO
Acceleration constants   c1 = c2 = 2.05              SF-BPSO and C-BPSO
Velocity                 Vmin = −6, Vmax = 6         SF-BPSO and C-BPSO
Number of particles      50                          SF-BPSO and C-BPSO
Maximum iteration        100                         SF-BPSO and C-BPSO

discrete values select the features from the dataset. Once the features are selected, the dataset with the selected features is split into training and testing sets: 70% of the dataset is used for training, while the remaining 30% is used for testing. In this paper, a Multi-Class Support Vector Machine (MC-SVM) classifier is used for the classification. The classification accuracy obtained from the MC-SVM model, together with the feature selection information, forms the fitness function in both the SF-BPSO and C-BPSO cases, and our objective is to minimize this fitness function, which ultimately yields high classification accuracy with a minimum number of features. Further, the classification accuracy in both cases is also compared with the classification accuracy at full features. Table 3 shows the experimental results of C-BPSO and SF-BPSO. “Full” in Table 3 means the classification accuracy using the MC-SVM classifier at full features. The third column of Table 3 reports the best accuracy at “Full” features, at the features selected using C-BPSO, and at the features selected using SF-BPSO. The fourth column reports the average number of features selected over 30 independent runs using C-BPSO and SF-BPSO. According to Table 3, the average number of features selected by C-BPSO is in all cases smaller than the total number of features, while the average number selected by SF-BPSO is in all cases significantly smaller than the total. Approximately 20% of the features are selected for “Lung Cancer”, while less than 1% of the features are selected for the rest of the datasets using the SF-BPSO approach. For C-BPSO, approximately 23% of the features are selected for “Lung Cancer”, while approximately 40–50% are selected for the rest of the datasets. Along with the feature reduction, the classification accuracies of both C-BPSO and SF-BPSO improve in all cases, with significant improvement observed for “Lung Cancer” and “9 Tumors”.
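A minimal sketch of one fitness evaluation, assuming scikit-learn's SVC as the MC-SVM, is shown below; the weighting alpha between accuracy and feature count is our assumption, since the paper does not state the exact combination:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def fitness(bits, X, y, alpha=0.9):
    """Evaluate one binary particle: train an MC-SVM on the selected columns
    of the numpy array X and return a value to minimize. alpha weights
    accuracy against feature count (our assumption)."""
    cols = np.flatnonzero(bits)
    if cols.size == 0:
        return 1.0                                       # no features: worst case
    Xtr, Xte, ytr, yte = train_test_split(
        X[:, cols], y, test_size=0.3, random_state=0)    # 70/30 split, as in the text
    acc = SVC().fit(Xtr, ytr).score(Xte, yte)            # SVC is multi-class by default
    return alpha * (1 - acc) + (1 - alpha) * cols.size / X.shape[1]
```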

Table 3 Average results of MC-SVM for 30 independent runs

Datasets          Method    Best accuracy (in %)   Average features selected
DLBCL             Full      100                    5469
                  C-BPSO    100                    2359.6
                  SF-BPSO   100                    5
Prostate cancer   Full      96.77                  10,509
                  C-BPSO    100                    4729.36
                  SF-BPSO   100                    14.62
9 Tumors          Full      27.78                  5726
                  C-BPSO    44.44                  2593.87
                  SF-BPSO   77.78                  32.46
11 Tumors         Full      75                     12,533
                  C-BPSO    76.92                  5996.5
                  SF-BPSO   90.39                  74.11
SRBCT             Full      100                    2308
                  C-BPSO    100                    880.37
                  SF-BPSO   100                    18.75
Lung cancer       Full      30                     56
                  C-BPSO    100                    13
                  SF-BPSO   100                    11.33

Since the classification accuracies at “Full” features for “DLBCL” and “SRBCT” are already 100%, no changes in classification accuracy are observed on these datasets. Approximately 3% improvement in classification accuracy is observed for “Prostate Cancer”, and approximately 17 and 50% improvements are observed for “9 Tumors” using C-BPSO and SF-BPSO, respectively. For “11 Tumors”, approximately 2 and 15% improvements are observed for C-BPSO and SF-BPSO, respectively, while both C-BPSO and SF-BPSO achieve a 70% improvement in classification accuracy for the “Lung Cancer” dataset. The results indicate that SF-BPSO outperforms C-BPSO in selecting relevant features while maintaining a high level of classification accuracy.

5 Conclusion and Future Work A large number of features leads to over-fitting and poor accuracy. Additionally, with a large number of features, the system becomes complex and time-consuming. Dimensionality reduction is therefore the natural remedy to the over-fitting problem, and it also makes the system computationally cheaper. In this paper, a new feature selection approach based on topology-controlled Scale-Free Binary Particle Swarm Optimization (SF-BPSO) is proposed. A multi-objective fitness function, comprising the number of selected features and the classification accuracy, is used to maximize the classification accuracy at the cost of a minimum number of features. The proposed model is validated by testing on six high-dimensional datasets, and the classification


accuracy on these datasets is also compared with the classification accuracy obtained through Conventional Binary Particle Swarm Optimization (C-BPSO) as well as at “Full” features. We have considered datasets with varying features, instances, and classes in this experiment for proper validation of the proposed model. The results obtained from these models are very promising. Improvement in classification accuracy can be seen for both C-BPSO and SF-BPSO as compared to the accuracy at “Full” features, and a significant reduction in features is observed for SF-BPSO as compared to C-BPSO. During the experiment, it was also observed that C-BPSO performs less efficiently in terms of classification accuracy when the number of classes/categories is larger, while SF-BPSO outperforms it in all contexts. As shown in Table 3, the classification accuracies for “9 Tumors” and “11 Tumors” are 44.44 and 76.92% for C-BPSO, while they are 77.78 and 90.39% for SF-BPSO. Hence, from the experiment, we can conclude that SF-BPSO significantly reduces the features while improving the classification accuracy. This huge reduction in features makes the SF-BPSO and MC-SVM model fast, efficient, and applicable to real-time applications. In the future, the impact of particles' learning strategies on the performance of PSO will be investigated.

References

1. Beyer, M., Laney, D.: The importance of ‘big data’: a definition. Gartner Research, Stamford, CT, USA, Tech. Rep. G00235055 (2012)
2. L’heureux, A., Grolinger, K., Capretz, M.: Machine learning with big data: challenges and approaches. IEEE Access (2017)
3. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003)
4. Tran, B., Xue, B., Zhang, M.: Improved PSO for feature selection on high-dimensional datasets. In: Asia-Pacific Conference on Simulated Evolution and Learning, pp. 503–515 (2014)
5. Gu, S., Cheng, R., Jin, Y.: Feature selection for high-dimensional classification using a competitive swarm optimizer. Soft Comput. 1–12 (2016)
6. Li, Y., Wang, G., Chen, H., Shi, L., Qin, L.: An ant colony optimization based dimension reduction method for high-dimensional datasets. J. Bionic Eng. 10(2), 231–241 (2013)
7. Diao, R., Shen, Q.: Feature selection with harmony search. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 42(6), 1509–1523 (2012)
8. Rodrigues, D., Pereira, L., Almeida, T., Papa, J., Souza, A., Ramos, C., Yang, X.-S.: BCS: a binary cuckoo search algorithm for feature selection. In: IEEE International Symposium on Circuits and Systems (ISCAS) (2013)
9. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995)
10. Engelbrecht, A.: Computational Intelligence: An Introduction. Wiley (2007)
11. Liu, Q., Wei, W., Yuan, H., Zhan, Z.-H., Li, Y.: Topology selection for particle swarm optimization. Inf. Sci. 363, 154–173 (2016)
12. Zhang, C., Yi, Z.: Scale-free fully informed particle swarm optimization algorithm. Inf. Sci. 181, 4550–4568 (2011)
13. Fong, S., Zhuang, Y., Tang, R., Yang, X.-S., Deb, S.: Selecting optimal feature set in high-dimensional data by swarm search. J. Appl. Math. (2013)
14. Gao, Y., Du, W., Yan, G.: Selectively-informed particle swarm optimization. Sci. Rep. 5 (2015)
15. Fong, S., Wong, R., Vasilakos, A.: Accelerated PSO swarm search feature selection for data stream mining big data. IEEE Trans. Serv. Comput. 9(1), 33–45 (2016)
16. Kennedy, J., Eberhart, R.: A discrete binary version of the particle swarm algorithm. In: 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, vol. 5, pp. 4104–4108 (1997)
17. Barabasi, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509–512 (1999)
18. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml/index.php. Accessed 9 Dec 2017
19. Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artif. Intell. 97(1–2), 273–324 (1997)

Face Recognition: Novel Comparison of Various Feature Extraction Techniques Yashoda Makhija and Rama Shankar Sharma

Abstract A face recognition system identifies a face in a picture. This involves extracting features from an image and then recognizing it despite lighting, expression, pose, aging, and transformations (translation, rotation, and scaling), which is a tough task. In the following research paper, a comprehensive literature review of various kinds of feature extraction technologies is presented. To provide a comprehensive review, we classify existing feature extraction technologies and give a detailed description of specific approaches within each class. These strategies are grouped into four noteworthy classes, specifically feature-based, appearance-based, template-based, and part-based approaches. The motivation for our work is the unavailability, in previous surveys, of a comprehensive and direct independent comparison of all of the feasible algorithm implementations. After considerable exploration of these strategies, we observe that different feature extraction technologies provide the leading results for different applications of image processing. Keywords Feature extraction · Principal component analysis (PCA) · Linear discriminant analysis (LDA) · Elastic bunch graph matching (EBGM) · Local binary pattern histogram (LBPH)

1 Introduction Biometric-based procedures have emerged as the most promising alternative for recognizing people in recent years. Rather than confirming individuals and allowing them access to physical and virtual spaces via PINs, passwords, plastic cards, tokens, keys, and the like, these techniques consider a person's physiological and behavioral attributes to determine and verify his or her identity. Passwords and PINs are difficult to remember and can be stolen or guessed; keys, plastic cards, and tokens can be lost, forgotten, or duplicated; magnetic cards can become corrupted and unreadable. A person's natural qualities, however, cannot be lost, forgotten, stolen, or forged. Face recognition is among the most important applications of image processing. It is a task that humans perform effortlessly. This simplicity is dangerously misleading, as automatic face recognition is a challenge that is still a long way from being resolved. Even after 30 years of thorough research and numerous surveys published in conferences associated with this field, we still cannot claim that artificial intelligence systems can perform equivalently to humans. Facial recognition is a demanding and engaging problem that has captivated researchers from different backgrounds: computer vision, pattern matching, neural networks, psychology, and computer graphics. Therefore, the survey literature on facial recognition technologies is extensive and diverse. Face recognition has applications in a wide range of industry areas such as law enforcement, human–machine interaction, virtual reality, and surveillance. The face recognition problem is thus not confined to computer vision; it is also relevant to pattern matching, neural networks, and computer graphics.

2 Related Work Face recognition is a very tough and challenging task, and its true challenge lies in building an automated recognition system that matches the human ability to recognize faces. Human capability, however, is limited when dealing with a huge amount of unknown face data, so an automatic recognition system with almost unbounded memory and high speed is required. The very first research in this area was initiated in the 1960s by Woodrow W. Bledsoe, who, together with Helen Chan and Charles Bisson, started working on face recognition by computer [10]. Bledsoe subsequently designed and implemented a semi-automated system. In 1973, Kanade developed a complete automatic face recognition system, using an algorithm that automatically extracted sixteen facial features and achieved a success rate of 45–75% [14]. In the 1980s, Mark Nixon established a geometric measurement for the distance between the eyes [21]. He also directed work on automatic gait recognition and proposed that aging should also be considered in biometrics. After that, the technique that proved to be a milestone in face recognition, utilizing eigenfaces, was introduced by Sirovich and Kirby in 1986 [17, 29]. Their technique was based on Principal Component Analysis (PCA), but its performance decreased under larger changes in pose and illumination. Kirby and Sirovich utilized PCA to portray faces, and Turk and Pentland later extended it to recognize faces [30]. In PCA, we deal with the totality of the data without paying any attention to the underlying structure, whereas in the case of


Linear Discriminant Analysis (LDA) or Fisherface [4], a set of projection vectors is computed using scatter matrices to minimize the within-class scatter and maximize the between-class scatter. The LDA technique requires considerably more computation, and so Incremental Linear Discriminant Analysis (ILDA) [16] was formulated. Independent Component Analysis (ICA) [2] is a generalization of PCA; Bartlett and Sejnowski presented its use for face recognition. Gabor filters are used to extract features from images through their texture components. Ebied et al. [11] established the use of linear and nonlinear feature extraction techniques for face recognition systems; Kernel-PCA extends PCA by performing a nonlinear mapping into a higher dimensional feature space. Kamencay et al. [13] presented a new face recognition technique using the SIFT-PCA method and also studied the impact of a graph-based segmentation algorithm on the recognition rate. The feature-based method proposed by Wiskott, Elastic Bunch Graph Matching (EBGM) [31], which depends on Gabor wavelets, performs well in general and also mitigates the issues of illumination and pose variation. The Support Vector Machine (SVM) method is a binary classification method [24]. The hidden Markov model for face recognition was first conceptualized by Samaria [26, 27] and was later extended to the 2D Discrete Cosine Transform (DCT) and the Karhunen–Loeve Transform by Nefian [9, 20]. Active Shape Models (ASM) and Active Appearance Models (AAM) were presented by Cootes [4, 7, 8] for face representation. KPCA, KFA [19], neural networks [18], hidden Markov models, LFA, and Laplacianfaces [12] are some of the other methods applied to face recognition by various researchers in this field. Bay [3] proposed the SURF descriptor as a novel scale- and rotation-invariant detector. Khotanzad [15] proposed Zernike moments and pseudo-Zernike moments, which extract global features, are insensitive to noise and other variations, and show good performance. 2D images posed problems for face recognition because of variations in illumination, pose, and expression, and therefore 3D face recognition was introduced; the first work on identifying humans from 3D faces was published by Cartoux [5].

3 Face Recognition System Structure Recognition of a face usually has three phases. The first is detecting the face in a complex background and localizing its accurate position. The next is extracting the facial features, followed by normalization, to associate the face with the face images saved in the training phase. Finally, classification or matching occurs, on the criterion that the comparison between the database images and the test image crosses a threshold (Fig. 1). The input to a face recognition process is either a picture or a video clip, and the result is either identification or verification of the person who appears in that image or video.


Fig. 1 Face recognition system

3.1 Face Detection The principal approaches to face detection are the following [34].

3.1.1 Knowledge-Based Approaches

These techniques try to capture our knowledge of facial features and then transfer that knowledge into a set of rules. It is simple to figure out some easy rules: for example, a face has two symmetric eyes, and the area around the eyes is normally darker in color than the cheek area. We can also describe facial features such as the distance separating the eyes or the intensity difference between the eye region and the lower part of the face. A difficulty with these techniques is the complication of forming a suitable set of rules.

3.1.2 Feature-Invariant Approaches

We make use of these approaches to overcome the limitations of our intuitive knowledge about faces. This strategy first tries to discover eye-analogue pixels, so that undesirable pixels can be separated from the image. Following this partitioning process, it treats each eye-analogue region as a candidate for being an eye. Then, a set of rules is applied to decide on the possible pair of eyes. Once the eyes have been detected, the algorithm computes the facial region as a rectangle, and the four vertices of the face are calculated by using certain functions. The possible face is then normalized to a fixed size and alignment. After that, the candidate face areas are verified by a reinforcement neural network.

3.1.3 Template Matching Approaches

Template-based matching techniques aim to describe a face using a function, searching for a standard template that defines each face. Various features can be described separately; for example, a face can be separated into eyes, nose, mouth, and face contour. Other templates use brightness and darkness cues to detect a face area. These templates are then compared with the input images to detect faces. It is difficult to attain valuable outcomes under varying pose, lighting, scale, and shape.

3.1.4 Appearance-Based Approaches

The layout of appearance-based strategies is learned from example images. In general, appearance-based strategies depend on statistical analysis and machine learning methods to find the significant attributes of face images. These techniques are additionally utilized as part of feature extraction for face recognition.

3.2 Feature Extraction Techniques This is one of the most important stages in face recognition. Certain image processing strategies extract feature points such as the nose, eyes, and mouth, and the extracted points are then used as input data to the application. These techniques generally fall into four classifications, namely, feature-based, part-based, appearance-based, and template-based [23].

3.2.1 Holistic/Appearance-Based Methods

Holistic-based techniques are also called appearance-based techniques. These methodologies try to recognize a face by utilizing global representations, i.e., information depending on the whole image rather than a local feature patch, to derive a concise representation for recognition. These methods have drawn more attention than the other categories [6, 23]. In the subsequent part, we discuss the well-known eigenface [30] (implemented using PCA) and Fisherface [4] (implemented using LDA) approaches, and certain other transformation techniques such as Independent Component Analysis (ICA) [2]. To extract the feature vectors, we utilize methods like Principal Component Analysis (PCA) and Independent Component Analysis (ICA). – Eigenface and Principal Component Analysis One of the classical feature extraction methods is PCA [30]. Kirby et al. [29] utilized the PCA technique to depict the face in an image, and Turk et al. [30] later expanded this technique to face recognition. This technique is also utilized by the Eigenfaces (EF) method for reducing dimension [1]. PCA evicts the data that is no longer required and decomposes the face image into uncorrelated components called Eigenfaces (EF) [29]. It is one of the most famous techniques for face recognition [10]. Eigenfaces are an arrangement of eigenvectors for human face recognition. They seek to capture the variance in a set of face images and then encode and compare images of individual faces utilizing this data in a holistic manner. The aim is to extract the relevant data from a face image, encode it efficiently, and compare one face's encoding against a database of faces encoded in the same way.


We can approximate faces by utilizing the top eigenfaces, those with the biggest eigenvalues, which account for the most variation among the set of face images. To increase computational efficiency, we keep only the first few eigenfaces and discard those with the lowest eigenvalues [25]. PCA, however, keeps even the undesirable variations due to changes in illumination and expression [4]. As noted by Moses et al. [1], "the change between the images of the same face because of illumination and lighting is always larger than image variations due to a change in person's identity." Therefore, Fisher's linear discriminant analysis was proposed. – Linear Discriminant Analysis (LDA) Linear Discriminant Analysis and Principal Component Analysis (PCA) are closely linked, as both techniques depend on linear transformations and seek to categorize the image data. LDA is well known for separating the image data into within-class and between-class scatter. The LDA technique has two stages: training and classification. In the former stage, the Fisherspace is constructed using the training samples. In the latter stage, we project an input image into the same Fisherspace and then categorize that image using an appropriate classifier. According to the Fisher criterion, to categorize the images we need to maximize the ratio of the determinant of the between-class scatter matrix Sb to the determinant of the within-class scatter matrix Sw; maximizing Sb while minimizing Sw is the goal of LDA [32] (Fig. 2). By creating a projection of the dataset, the LDA method tries to maximize the ratio of the determinant of the between-class scatter matrix to the determinant of the within-class scatter matrix of the projected data. The former matrix, also called the extra-personal matrix, represents the variation in appearance due to different identities, while the latter, also called the intrapersonal matrix, represents the changes in the appearance of the same individual due to changes in illumination and facial expression. This way, Fisherface can maintain discriminability by projecting away small changes in illumination and facial expression.
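To make the two projections concrete, the following is a minimal numpy sketch of the eigenface (PCA) computation and of the scatter matrices behind the Fisher criterion, on a toy stack of flattened face images. The array sizes, component counts, and random data are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy data: 40 flattened 32x32 "face images" from 8 subjects (illustrative).
rng = np.random.default_rng(0)
X = rng.random((40, 32 * 32))
labels = np.repeat(np.arange(8), 5)

# Eigenfaces (PCA): keep the eigenvectors with the largest eigenvalues.
mean_face = X.mean(axis=0)
A = X - mean_face
_, s, Vt = np.linalg.svd(A, full_matrices=False)   # rows of Vt = eigenfaces
k = 10
eigenfaces = Vt[:k]
weights = A @ eigenfaces.T        # each face encoded by k PCA coefficients

# Fisher criterion (LDA): between-class vs. within-class scatter of the
# PCA coefficients; maximize |W^T Sb W| / |W^T Sw W|.
Sw = np.zeros((k, k)); Sb = np.zeros((k, k))
mu = weights.mean(axis=0)
for c in np.unique(labels):
    Wc = weights[labels == c]
    mu_c = Wc.mean(axis=0)
    Sw += (Wc - mu_c).T @ (Wc - mu_c)
    diff = (mu_c - mu).reshape(-1, 1)
    Sb += Wc.shape[0] * (diff @ diff.T)
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
fisher_dirs = np.real(eigvecs[:, np.argsort(eigvals.real)[::-1][:7]])
```

Projecting the PCA coefficients onto fisher_dirs yields the Fisherface-style representation: identity differences are stretched while within-person variation is compressed.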

Fig. 2 a Two-dimensional space with scattered points, b bad separation, c good separation [23]

3.2.2 Feature/Geometry-Based Techniques

We studied above the holistic-based methods, which depend on information from the whole face image; from that, we can conclude that appearance-based techniques make use of statistical analysis, whereas feature-based techniques draw on areas such as pattern recognition, computer vision, and image processing. In the following section, we discuss a successful technique for face recognition, the Local Binary Pattern Histogram (LBPH). – Local Binary Pattern Histogram. The Local Binary Pattern (LBP) operator [22] is successfully used as a texture descriptor. The face is composed of various micropatterns, which are described using this operator. The operator assigns a label to every pixel of an image by thresholding the 3 × 3 neighborhood of the pixel against its center value and treating the result as a binary number. A histogram of these labels is then created and used as the texture description. Generally, the LBP operator discussed above has the following form:

$\mathrm{LBP}(x_c, y_c) = \sum_{n=0}^{7} 2^n \, S(i_n - i_c)$

where the sum runs over the eight neighbors of the central pixel c, $i_c$ and $i_n$ denote the gray-level values at the center pixel c and its nth neighbor, and $S(u) = 1$ if $u \ge 0$ and $0$ otherwise. When evaluating the LBP histogram, we have an individual bin for each uniform pattern, and we can considerably decrease the number of bins, without losing much data, by assigning all nonuniform patterns to a single bin. Ojala et al. deduced that, of the 256 possible 8-bit patterns, only 58 are uniform; across the considered images, about 90% of the neighborhoods are uniform, and the remaining 10% usually contain noise. A particular face image is divided into different local regions, texture descriptors are extracted from every region individually, and a global descriptor of the face is later computed by concatenating these texture descriptors.
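The following is a minimal Python sketch of the basic (non-uniform) LBP operator and histogram just defined, on illustrative data; a practical descriptor would add the uniform-pattern binning and the per-region concatenation discussed above.

```python
import numpy as np

def lbp_code(img, r, c):
    """3x3 LBP code at (r, c): threshold the eight neighbours against the
    centre pixel and weight the results by powers of two."""
    center = img[r, c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]       # clockwise neighbours
    return sum(2 ** n * int(img[r + dr, c + dc] >= center)
               for n, (dr, dc) in enumerate(offs))

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels."""
    h, w = img.shape
    codes = [lbp_code(img, r, c)
             for r in range(1, h - 1) for c in range(1, w - 1)]
    return np.bincount(codes, minlength=256)

# Toy usage on a random 8-bit "image".
img = np.random.default_rng(1).integers(0, 256, (16, 16))
hist = lbp_histogram(img)
```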

3.2.3 Template-Based Techniques

This approach uses templates that have been designed in advance to select facial features from an image by means of a particular energy function; the perfect match of the template gives the least energy. In template-based approaches, an eye is first detected in the image. Then, by relating the eye to the other features of the face image, the various features can be detected. The template has its maximum correlation with the eye area.

3.2.4 Part-Based Techniques

In contrast to feature-based techniques, part-based techniques find important parts in the face image data and combine these part-based data with certain machine learning techniques for improved recognition. In the following section, we explore an approach that depends on SIFT (Scale-Invariant Feature Transform) features extracted from the facial data. – Scale-Invariant Feature Transform (SIFT). In 2004, Lowe introduced a descriptor called SIFT that is robust to variations due to scale, occlusion, noise, and rotation, and is highly distinctive. SIFT detection and representation involves four stages: Step 1: finding scale-space extrema; Step 2: keypoint localization and filtering; Step 3: orientation assignment; and Step 4: keypoint descriptor computation.
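As a usage illustration, OpenCV wraps all four SIFT stages behind a single call; the sketch below assumes opencv-python version 4.4 or later (where SIFT is in the main module) and a hypothetical image file name.

```python
import cv2

# 'face.jpg' is a hypothetical input file.
img = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)
assert img is not None, "image file not found"
sift = cv2.SIFT_create()
# detectAndCompute runs scale-space extrema detection, keypoint
# localization/filtering, orientation assignment, and descriptor computation.
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint is described by a 128-dimensional vector.
```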

4 Recent Approaches – Gabor Image Representation: This method is quick and robust. It also underlies an improved version of the original Viola–Jones face detector and provides improved accuracy. Expression recognition can also be achieved using Gabor filters. The representation relies on frequency and orientation parameters. – Elastic Bunch Graph Matching (EBGM): EBGM is a feature-based face recognition technique. Certain facial features are selected through manual interaction, and a bunch graph is built from these features; the nodes of the bunch graph symbolize facial landmarks. Recognition measures the distance between the features of a given test image and those of the nearest training image by searching for the closest match among all training images. – Kernel Principal Component Analysis (KPCA): Kernel methods are utilized in an expansion of PCA known as KPCA, proposed by Scholkopf [28]. Using KPCA, it is possible to extract the nonlinear features of an image: the kernel technique gives linear PCA a nonlinear character, i.e., a nonlinear mapping of the data. – Active Appearance Model (AAM): Shape information alone, as used in ASM, is not adequate for a complete image representation. AAM therefore represents and matches both the texture and shape variations of the training image dataset. AAM has been shown to be exceptionally quick for head tracking and can recognize face motion in real time.
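To illustrate the frequency and orientation parameters of the Gabor representation mentioned in the first item above, the sketch below builds a small filter bank with OpenCV; the kernel size and parameter values are illustrative choices, not taken from the text.

```python
import cv2
import numpy as np

# Four orientations; the spatial frequency is set through the wavelength `lambd`.
thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
bank = [cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=t,
                           lambd=10.0, gamma=0.5, psi=0) for t in thetas]

img = np.random.default_rng(2).random((64, 64)).astype(np.float32)
# Filter responses whose magnitudes serve as texture/expression features.
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in bank]
```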


5 Conclusion Face recognition has numerous applications in various domains of image processing and is a difficult task to perform. A great amount of success has been achieved in this field over the past four decades, and huge progress has been made with encouraging results. In the current scenario, face recognition systems have attained considerable maturity under restricted conditions. However, face recognition is still far from its ideal goal, which is to emulate the human vision system adequately.

References 1. Adini, Y., Moses, Y., Ullman, S.: Face recognition: the problem of compensating for changes in illumination direction. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 721–732 (1997) 2. Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: Face recognition by independent component analysis. IEEE Trans. Neural Netw. 13(6), 1450–1464 (2002) 3. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. Comput. Vis. ECCV 2006, 404–417 (2006) 4. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces versus fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997) 5. Cartoux, J.-Y., LaPresté, J.-T., Richetin, M.: Face authentification or recognition by profile extraction from range images. In: Proceedings of Workshop on Interpretation of 3D Scenes, pp. 194–199. IEEE (1989) 6. Chao, Wei-Lun: Face Recognition. GICE, National Taiwan University, Taipei (2007) 7. Cohn, J.F., Zlochower, A.J., Lien, J.J., Kanade, T.: Feature-point tracking by optical flow discriminates subtle differences in facial expression. In: Proceedings of Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 396–401. IEEE (1998) 8. Cootes, T.F., Taylor, C.J.: Statistical models of appearance for computer vision. Technical Report, Wolfson Image Analysis Unit, University of Manchester, UK (1999) 9. Cottrell, G.W., Fleming, M.: Face recognition using unsupervised feature extraction. In: Proceedings of the International Neural Network Conference, pp. 322–325 (1990) 10. De Carrera, P.F., Marques, I.: Face recognition algorithms. Master's thesis in Computer Science, Universidad Euskal Herriko (2010) 11. Ebied, H.M.: Evaluation of CIE-XYZ system for face recognition using kernel-PCA. In: International Joint Conference on Advances in Signal Processing and Information Technology, pp. 137–143. Springer (2012) 12. He, X., Shuicheng, Y., Hu, Y., Partha, N., Hong-Jiang, Z.: Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005) 13. Kamencay, P., Breznan, M., Jelsovka, D., Zachariasova, M.: Improved face recognition method based on segmentation algorithm using SIFT-PCA. In: 2012 35th International Conference on Telecommunications and Signal Processing (TSP), pp. 758–762. IEEE (2012) 14. Kanade, T.: Picture processing system by computer complex and recognition of human faces. Kyoto University, Kyoto (1973) 15. Khotanzad, A., Hong, Y.H.: Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 12(5), 489–497 (1990) 16. Kim, T.-K., Wong, S.-F., Stenger, B., Kittler, J., Cipolla, R.: Incremental linear discriminant analysis using sufficient spanning set approximations. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR'07, pp. 1–8. IEEE (2007)


17. Kirby, M., Sirovich, L.: Application of the Karhunen–Loeve procedure for the characterization of human faces. IEEE Trans. Pattern Anal. Mach. Intell. 12(1), 103–108 (1990) 18. Lawrence, S., Giles, C.L., Tsoi, A.C., Back, A.D.: Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997) 19. Liu, C.: Capitalize on dimensionality increasing techniques for improving face recognition grand challenge performance. IEEE Trans. Pattern Anal. Mach. Intell. 28(5), 725–737 (2006) 20. Nefian, A.V., Hayes, M.-H.: Hidden Markov models for face detection and recognition. In: IEEE International Conference on Image Processing, vol. 1, pp. 141–145 (1998) 21. Nixon, M.: Eye spacing measurement for facial recognition. In: Applications of Digital Image Processing VIII, vol. 575, pp. 279–286. International Society for Optics and Photonics (1985) 22. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002) 23. Pali, V., Goswami, S., Bhaiya, L.P.: An extensive survey on feature extraction techniques for facial image processing. In: 2014 International Conference on Computational Intelligence and Communication Networks (CICN), pp. 142–148. IEEE (2014) 24. Phillips, P.J.: Support vector machines applied to face recognition. Adv. Neural Inf. Process. Syst. 803–809 (1999) 25. Ravi, K., Kattaswamy, M.: Face recognition using PCA and eigen face approach (2014) 26. Samaria, F.S., Harter, A.C.: Parameterisation of a stochastic model for human face identification. In: Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pp. 138–142. IEEE (1994) 27. Samaria, F.S.: Face recognition using hidden Markov models. Ph.D. thesis, University of Cambridge, Cambridge, UK (1994) 28. Schölkopf, B., Smola, A., Müller, K.-R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10(5), 1299–1319 (1998) 29. Sirovich, L., Kirby, M.: Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A 4(3), 519–524 (1987) 30. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991) 31. Wiskott, L., Krüger, N., Kuiger, N., Von Der Malsburg, C.: Face recognition by elastic bunch graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 775–779 (1997) 32. Yambor, W.S., Draper, B.A., Beveridge, J.R.: Analyzing PCA-based face recognition algorithms: eigenvector selection and distance measures. In: Empirical Evaluation Methods in Computer Vision, pp. 39–60. World Scientific (2002) 33. Yang, M.-S.: Kernel eigenfaces versus kernel fisherfaces: face recognition using kernel methods. In: FGR, vol. 2, p. 215 (2002) 34. Yang, M.-H., Kriegman, D.J., Ahuja, N.: Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002)

Performance Analysis of Hidden Terminal Problem in VANET for Safe Transportation System Ranjeet Singh Tomar, Mayank Satya Prakash Sharma, Sudhanshu Jha and Brijesh Kumar Chaurasia

Abstract Vehicular Ad Hoc Networks (VANETs) are evolving as an emerging technology to meet many demands in real-time applications. In this connection, vehicular nodes communicate in real time with one another and with roadside infrastructure to provide numerous applications, varying from safe travel to assistance while driving on the roads and Internet access during travel. Most issues of concern regarding Mobile Ad Hoc Networks (MANETs) also concern VANETs. In this paper, we describe the hidden terminal problem among vehicle nodes with the help of the NS-2 tool. We also analyze vehicular communication performance for a limited communication range and show, with the help of the NetSim tool, that throughput increases between different vehicle nodes in the VANET system during communication. Keywords Vehicular ad hoc network (VANET) · Hidden terminal problem · Mobile ad hoc network (MANET) · Roadside unit (RSU) · Global positioning system (GPS) · Intelligent transportation system (ITS)

1 Introduction As wireless communication is widely used for communication between two or more devices placed at different locations, it also encourages the development of modern devices as self-working or self-healing networks instead of a pre-installed
R. S. Tomar · M. S. P. Sharma (B) · S. Jha · B. K. Chaurasia ITM University Gwalior, Gwalior, India e-mail: [email protected] R. S. Tomar e-mail: [email protected] S. Jha e-mail: [email protected] B. K. Chaurasia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_111


center. A network that does not contain any center or previously designed structure is called an unplanned network. An ad hoc network is a collection of autonomous mobile nodes, and unplanned Vehicular Ad Hoc Networks (VANETs) are a subgroup of unplanned Mobile Ad Hoc Networks (MANETs). VANET is one of the most impactful vehicular environments for developing and providing Intelligent Transportation Systems (ITS) with a view to safe and comfortable vehicle transit for road users. The ability of different vehicle drivers to communicate through a VANET has a great impact on coordinating them and avoiding mishaps while driving, through alerts for imminent accidents, information about traffic jams in different areas, eased passage in emergency cases, speed control, and more. Besides all these safety features, VANET enables much more comfort in other areas of daily activity, such as Internet access, e-commerce, weather information, and other multimedia-based applications [1]. A VANET is a class of mobile, specially self-arranged network of vehicles in which the hubs communicate with one another to exchange data. Vehicles are furnished with sensors, and GPS is employed to find the position and velocity of a vehicle and of all other vehicles within reach. Vehicle-related message exchange protects all individuals from accidents in a dangerous vehicular environment and enhances safety on roads and highways. The two fundamental kinds of messages exchanged between vehicles are emergency messages and status messages. The function of the GPS system is to obtain real data such as the place, acceleration, and velocity of every vehicle. Every vehicle broadcasts status messages directly to its neighbors; such a message can likewise be called a beacon message. Pre-crash notices and post-crash warnings in a vehicular environment are given by emergency messages [1]. This paper concentrates on the spread of emergency messages between vehicles. Dedicated Short-Range Communication (DSRC) is used in VANET and operates at 5.9 GHz for vehicles. DSRC has 7 channels, a range of 1000 m, and an information rate of 27 Mbps; the 7 channels operate at 10 MHz each, and 5 MHz is used as a guard band [1, 2]. Among the seven DSRC channels, one channel is used as a control channel for safety applications, and the remaining six are used as service channels for non-safety and business applications. To enable the short-range wireless system, either a radio interface or an onboard unit is utilized. The main purpose of switching between these two channel types (control channel and service channel) is to continue the communication process so that no important message signal is missed. In this paper, we discuss hidden terminal problems in vehicular communication and provide a way to resolve the problem [2]. Owing to these promising applications, vehicular ad hoc networking has generated emerging industrial interest and technology analysis. There are several applications of VANETs, such as electronic tolling, emergency notification, safety enhancement, and dissemination of informative messages. Various forms of wireless communication technology have been considered for vehicular ad hoc networking; Wi-Fi (802.11-based) is one of them. Vehicles with a wireless interface use either the IEEE 802.11g or the IEEE 802.11b standard to access the medium.
Communication between vehicle nodes that are geographically out of the reach of radio


transmission is formed over single and multiple hops through intermediate vehicle nodes. The network topology is highly dynamic in the vehicular ad hoc network scenario. Without wired communication media and with the deficiency of VANET infrastructure, multi-hop routing transmission protocols leave the network in the power of several classes of attacks, including destruction, modification, and active interference with the delivery of messages. On the other hand, in contrast to mobile sensor nodes, a vehicle can deliver adequately high power in an inter-vehicle communication network system; nonetheless, energy consumption is another issue. Privacy and security are the challenges and key factors in inter-vehicle communication systems [1–3].

2 Hidden Terminal Problem in VANET Carrier sensing signifies the ability of a terminal to observe the channel and determine whether the channel is busy or not. At first sight, it appears that with carrier-sense multiple access, collisions can be avoided altogether. Indeed, if all terminals deliver their message packets only when the channel is idle, and select a random retransmission time whenever they find the channel busy, then it seems that a collision can happen only when two or more terminals start transmitting concurrently, a scenario that is quite unlikely. The modified CSMA system, whose principles of operation were delineated above, goes by the name CSMA/CA, where CA stands for collision avoidance. The acronym means that collisions are sought to be avoided, not that they are eliminated altogether. As a result of the retransmission process of the carrier-sense multiple access system, the collisions that do occur are not harmful: when a collision occurs, the ACK or RTS/CTS information is not received, and the transmitting point defers its transmission to a later time [4]. In Fig. 1, sources T1 and T2 are out of each other's radio range, i.e., they are hidden from one another. Therefore, they may access the medium at the same time, making the receivers r1–r3 experience a data collision and rendering them incapable of decoding either of the two packets. This phenomenon is known as the hidden terminal problem in wireless ad hoc networks. Carrier-Sense Multiple Access (CSMA) in general suffers a recognized degradation in performance from hidden terminals, particularly in a large network that contains many transmitters and one receiver [1]. The hidden terminal problem in the case of unicast transmission is well investigated, in contrast to the vehicular environment. Many analyses of the performance of VANETs state that the hidden terminal is a problem; however, they fail to indicate its severity and its impact on performance in broadcasting settings. Studies of the design and performance characteristics of VANETs, in which vehicles use 802.11p, discuss the hidden terminal problem but do not position or appraise how critical the matter is [1]. One main difficulty in assessing the results of the hidden terminal problem in VANETs


Fig. 1 Hidden terminal problems in VANET

is that there is no common definition of the problem once a broadcast ad hoc network is considered. Furthermore, signals suffer from attenuation in a real system, and thus it is hard to define a "communication range" and to decide when two nodes are "out of radio range". This is relevant for road traffic safety and motivates a mechanism for the solution of the hidden terminal problem in VANETs.
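To make the effect concrete, the following is a small Monte Carlo sketch of the scenario in Fig. 1 under simplifying assumptions of ours (slotted time, two senders, one shared receiver); the slot count, start probability, and packet length are illustrative, not taken from the paper.

```python
import random

def overlap_fraction(n_slots=200_000, p_start=0.01, tx_len=8, hidden=True):
    """Fraction of slots in which the two senders' packets overlap at the
    receiver. With hidden=False the senders can carrier-sense each other,
    so a sender defers while the other is transmitting and only same-slot
    starts can collide; hidden senders cannot sense each other at all."""
    rem = [0, 0]                 # remaining packet slots per sender
    overlaps = 0
    for _ in range(n_slots):
        busy = [rem[0] > 0, rem[1] > 0]
        for i in (0, 1):
            senses_other = (not hidden) and busy[1 - i]
            if rem[i] == 0 and not senses_other and random.random() < p_start:
                rem[i] = tx_len
        if rem[0] > 0 and rem[1] > 0:
            overlaps += 1        # a corrupted slot at the receiver
        rem = [max(r - 1, 0) for r in rem]
    return overlaps / n_slots

# Hidden senders overlap far more often than mutually sensing ones.
print(overlap_fraction(hidden=True), overlap_fraction(hidden=False))
```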

3 Road Traffic Scenario in VANET ITS safety applications aim to pass reports about dangerous events on the road to all surrounding vehicles. Examples of such applications are the emergency electronic brake light or the road merge warning. A significant issue with such applications is the decision of when a warning ought to be shown to drivers. Since the recipient vehicle may be far from where the harmful event occurred, an oversized number of false warnings may be shown to the drivers. This results in driver desensitization, which can cut back the safety benefits. Whereas previous analysis has provided ways to handle false warnings by estimating the relevance of the reports, these strategies do not take all the vital factors into account and are not easily applicable to novel applications [3, 5]. In the Vehicular Ad Hoc Network (VANET), the amount of interference among neighboring nodes and communication links is governed by the density of the vehicles in the locality and the transmission probabilities of the terminals. It is evident that vehicles on roads are likely to be distributed nonhomogeneously, shaped by speed limits or by traffic signals at different locations in the road area. The assumption of a homogeneous distribution of nodes made in most existing work on Mobile Ad Hoc Networks (MANETs) therefore seems incompatible with VANETs. In this paper, an original approach is presented, studying the practical form


of vehicle distribution in the urban area together with its performance. Specifically, we introduce a stochastic traffic model to describe the common mode of flow of vehicular traffic as well as the randomness of particular vehicles, from which we may obtain the mean dynamics and the probability distribution of vehicular network density. As illustrative examples, we demonstrate how the density information from the stochastic traffic model can be utilized to derive the throughput performance of various routing strategies under different channel access protocols. We confirm the accuracy of the analytical results through extensive simulations. The results presented in this paper show the application of the given methodology to modeling protocols and their performance, and shed insight on the performance analysis of other transmission protocols and different network designs for vehicular networks. Furthermore, from the results shown in this paper, it is straightforward that the maximum performance of the network can be optimized according to spatial location. Such data and information can be computed by roadside nodes and then broadcast to road users for optimized multi-hop packet transfer in the communication network [6–8].

4 Simulation and Analysis of Hidden Terminal Problem Vehicular Ad Hoc Networks (VANETs) are evolving as an emerging technology to meet many demands in real-time applications. In this connection, vehicular nodes communicate in real time with one another and with roadside infrastructure to provide numerous applications, varying from safe travel to assistance while driving and Internet access. Most issues of concern regarding Mobile Ad Hoc Networks (MANETs) also concern VANETs. In this section, we analyze the hidden terminal problem among vehicle nodes with the help of the NS-2 tool. We also analyze vehicular communication for a particular limited communication range.

4.1 Results and Discussion In Fig. 2, we use four nodes, and every node sends messages to the others. Node 0 is the source node and node 3 is the destination node. The message is broadcast from node 0 to node 3 and node 2; communication is held from node 0 to node 2 via node 3, but node 1 and node 0 do not communicate in the vehicular network. In Fig. 3, we use the same four nodes as in Fig. 2, but with one difference: all nodes are active, yet the path between node 2 and node 3 is broken. In Fig. 4, we deploy many nodes, each with its own communication range, transferring data to every other node; the nodes exhibit the hidden terminal problem in the communication network.


Fig. 2 Four-vehicle node communication in the VANET system

Fig. 3 Four-vehicle node communication in different scenarios

In Fig. 5, we deploy many nodes that communicate with each other, each with its own communication range, but some nodes exhibit the hidden node problem in the vehicular communication environment.


Fig. 4 Hidden terminal problem and solution in vehicle nodes

Fig. 5 Hidden terminal problem and solution in vehicle nodes

4.2 Throughput Analysis Figure 6 shows the relation between time and throughput. Initially the throughput increases; after some time (around 100 ms) it decreases and then becomes constant. This throughput shows that messages are transferred between vehicle one and vehicle six, and how many messages are


Fig. 6 Relation between time and throughput

Fig. 7 Relation between time and throughput

transferred to each other correctly. The graph also represents the data packet transmission from one vehicle to another. Figure 7 shows the same relation between time and throughput: initially the throughput increases until about 150 ms, after which it decreases and then becomes constant. The throughput shows that messages are transferred or communicated


Fig. 8 Relation between time and throughput

to vehicle three and vehicle six, and it also shows the number of messages transferred to each other correctly. This graph likewise represents the data packet transmission from one vehicle to another. In Fig. 8, the overall throughput of the network is shown, i.e., the overall throughput of the vehicle nodes communicating with each other. Initially the throughput increases; after some time it decreases and then becomes constant.

5 Conclusion In this paper, we have presented the hidden terminal problem in a vehicular environment with the help of the NS-2 tool. We have also analyzed vehicular communication performance for different communication ranges and analyzed the throughput using the NetSim tool. We simulated the hidden terminal problem in VANET and found that, when the simulation is performed within a limited communication range, throughput increases between different vehicles in the VANET environment. We simulated communication among different vehicle nodes, successfully transferred data packets over roadside communication, and observed that throughput between the vehicles increases efficiently in the vehicular environment.


References 1. Singh, S., Virk, A.K.: Hybrid solution for hidden terminal problem on VANETs. Int. J. Sci. Res. (IJSR) 03, 1594–1600 (2014) 2. Devi, M.S., Malar, K.: Improved performance modeling of intelligent alert message diffusion in VANET. In: International Conference on Advanced Computing (ICoAC), pp. 463–467 (2013) 3. Ho, I.W.-H., Leung, K.K., Polak, J.W.: A methodology for studying VANET performance with practical vehicle distribution in urban environment. Netw. Internet Archit. (2012) 4. Sharanappa, P.H., Mahabaleshwar, S.K.: Performance analysis of CSMA, MACA and MACAW protocols for VANETs. Int. J. Future Comput. Commun. 3(2) (2014) 5. Papadimitratos, P., Gligor, V., Hubaux, J.P.: Securing vehicular communications: assumptions, requirements, and principles. In: Workshop on Embedded Security in Cars (ESCAR) (2006) 6. Qiu, H.J.F., Ho, I.W.-H., Tse, C.K.: A stochastic traffic modeling approach for 802.11p VANET broadcasting performance evaluation. In: International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), pp. 1077–1083 (2012) 7. Booysen, M.J., Zeadally, S., van Rooyen, G.-J.: Impact of neighbor awareness at the MAC layer in a vehicular ad-hoc network (VANET). In: IEEE International Symposium on Wireless Vehicular Communications (WiVeC), pp. 1–5 (2013) 8. Kumar, V., Mishra, S., Chand, N.: Applications of VANETs: present and future. Commun. Netw. 5(1) (2013)

Effect of Various Distance Classifiers on the Performance of Bat and CS-Based Face Recognition System Preeti and Dinesh Kumar

Abstract Due to increasing risks, mainly in surveillance, security, and authentication, it has become imperative to pay more attention to Face Recognition (FR) systems. Any FR system consists of three main subdivisions: Feature Extraction, Feature Selection, and Classification. This paper uses a combination of DCT (Discrete Cosine Transform) and PCA (Principal Component Analysis), i.e., DCTPCA, for feature extraction, followed by the Bat and Cuckoo Search algorithms for feature selection. The aim here is to use different classifiers, such as Euclidean Distance (ED), Manhattan Distance (MD), Canberra Distance (CD), and Chebyshev Distance (ChD), for the classification purpose and to compare them to find which among these best suits a given dataset. The results not only reveal the efficiency of the Bat-based feature selection algorithm over Cuckoo Search but also show how effective the Euclidean Distance classifier is over the other classifiers for the Yale_Original database, and the Manhattan Distance classifier for the Yale_Extended database. Keywords DCTPCA · Bat algorithm · Cuckoo search · Distance classifiers · Face recognition

1 Introduction Face recognition (FR) has multiple applications in human–computer interaction and robust security systems. These applications have attracted new developments in other research areas like artificial intelligence and image processing [1, 2]. Various FR algorithms have been proposed; the success of any face recognition system mainly depends upon how effectively we represent the face. A face recognition system consists of three sub-processes: Feature Extraction, Feature Selection,
Preeti (B) · D. Kumar Department of CSE, G J U S& T, Hisar, Haryana, India e-mail: [email protected] D. Kumar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_112


and Classification/Recognition. Feature extraction and selection methods are used to eliminate redundant and extraneous features in order to reduce the space requirements, to make the system more effective in terms of computation cost and speed, and eventually to recognize faces accurately [3]. This paper presents combined DCTPCA for feature extraction; the Cuckoo Search and Bat algorithms for feature selection; and four classifiers, namely Euclidean Distance (ED), Manhattan Distance (MD), Canberra Distance (CD), and Chebyshev Distance (ChD) [4, 5]. The main contribution of this paper, besides analyzing the performance of two nature-inspired feature selection algorithms, i.e., Cuckoo Search [6, 7] and the Bat algorithm [8], is to compare the performance of four distance classifiers over two variations of the Yale face database. The remainder of the paper is structured as follows: Sect. 2 contains the related work, while the proposed work is described in the next section. The experiments and results are presented in Sect. 4. Lastly, the conclusion is given in Sect. 5.

2 Related Work Sodhi et al. [9] in 2013 described an FR system based on PCA and LDA. They compared PCA and LDA and also analyzed the performance of different distance classifiers [10]. It was observed that combinations of standard distance classifiers such as Euclidean, Mahalanobis, and City-Block perform better than the individual distance classifiers. Chakrabarti et al. [11] in 2014 proposed an efficient method for facial expression recognition based on dimensionality reduction and eigenspace methods. The technique was applied to the JAFFE database, and four distance classifiers were compared: Euclidean, Cosine, Mahalanobis, and Manhattan distance. Gawande et al. [12] in 2014 proposed a human face identification system using PCA, with four distance classifiers: the Euclidean, City-Block, Squared Euclidean, and Squared Chebyshev distances. They found that the ED classifier is the best among all. Rao et al. [13] in 2015 proposed a real-time FR system using PCA for various human beings and compared various distance classifiers. Mondal et al. [14] in 2017 presented a face recognition approach combining PCA-based feature extraction with a minimum distance classifier; the approach was tested on the ORL and Yale databases. Abbas et al. [15] also discussed a PCA-based face recognition system, analyzed using three distance classifiers, namely City-Block, Euclidean, and Squared Euclidean distance. Malhotra and Kumar [16] proposed an optimized face recognition system based on the Cuckoo Search (CS) approach; the new approach was compared with the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) and produced better results. Preeti and Kumar [17] also proposed a new face recognition system based on the Bat algorithm, whose performance was better when compared with other algorithms such as GA, CS, and PSO. Although many FR systems have been proposed in the past, there is always a need for a good face recognition system. This paper evaluates the effect of different


distance classifiers on the performance of the Bat and CS-based face recognition system.

3 Proposed Work 3.1 Face Recognition (FR) System The FR system used in this paper is shown in Fig. 1. As already discussed, any FR system consists of three subprocesses. The first and main process is feature extraction. Many methods for it are present in the literature, such as PCA [18, 19], Independent Component Analysis (ICA) [20], the Discrete Cosine Transform (DCT), Linear Discriminant Analysis (LDA) [21], and the Discrete Wavelet Transform (DWT) [22]. In this paper, PCA and the DCTPCA combination have been used for feature extraction, which helps in increasing the efficiency and overall recognition rate of the system. Thereafter, in the second step, the best features, responsible for giving the maximum possible recognition rate, are selected using the CS [16] and Bat [17] algorithms. The last step is classification or recognition using various distance classifiers. These are ED, MD, CD, and ChD. The recognition rate is considered as the metric for measuring the performance of the face recognition system.

Fig. 1 Face recognition system used: read an image from the database; perform pre-processing such as histogram equalization and adding motion blur; extract features using PCA or DCTPCA; select features using CS [4, 11] or Bat [5]; apply different distance classifiers for recognition; stop
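As a concrete illustration of the DCTPCA combination described above, here is a hedged numpy/scipy sketch as we read the idea: keep the low-frequency block of each image's 2-D DCT, where most facial energy concentrates, then reduce those coefficients further with PCA. The block size, component count, and random input are illustrative assumptions; the authors' exact settings may differ.

```python
import numpy as np
from scipy.fft import dctn

def dct_pca_features(images, n_dct=8, n_pca=20):
    """Flatten the low-frequency n_dct x n_dct DCT block of each image,
    then project the coefficients onto the top n_pca PCA directions."""
    coeffs = np.array([dctn(img, norm='ortho')[:n_dct, :n_dct].ravel()
                       for img in images])
    mean = coeffs.mean(axis=0)
    A = coeffs - mean
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    basis = Vt[:n_pca]
    return A @ basis.T, mean, basis

# Toy usage on random "images"; real use would pass database face images.
imgs = np.random.default_rng(3).random((30, 64, 64))
feats, mean, basis = dct_pca_features(imgs)
```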

3.2 Distance Classifiers Classification, or recognition/matching, is the last step of any FR system. In this paper, we use distance as the metric for recognition, where minimum distance corresponds to maximum similarity [12]. For the experiments, we use four distance classifiers. These are:

Euclidean Distance: This metric is the most commonly used; it follows the Pythagorean theorem and is also known as the L2 distance. It takes the root of the summed squared differences between the coordinates of a pair of objects:

$D(P, Q) = \sqrt{\sum_{i=1}^{d} |P_i - Q_i|^2}$

Manhattan Distance: This distance metric is also known as the rectilinear, City-Block, taxicab, L1 distance, or L1 norm. It is the sum of absolute differences between the coordinates of a pair of objects:

$D(P, Q) = \sum_{i=1}^{d} |P_i - Q_i|$

Chebyshev Distance: Named after Pafnuty Chebyshev, this metric is also called the chessboard, maximum, or L∞ distance. It is the maximum of the absolute differences between the coordinates of a pair of objects:

$D(P, Q) = \max_{1 \le i \le d} |P_i - Q_i|$

Canberra Distance: Developed by Lance and Williams [23], it is the sum of absolute fractional differences between the coordinates of a pair of objects:

$D(P, Q) = \sum_{i=1}^{d} \frac{|P_i - Q_i|}{|P_i| + |Q_i|}$
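For reference, the four metrics above and the minimum-distance matching rule can be written compactly as follows; the names are ours and the snippet is a sketch, not the authors' implementation.

```python
import numpy as np

def euclidean(p, q): return np.sqrt(np.sum((p - q) ** 2))
def manhattan(p, q): return np.sum(np.abs(p - q))
def chebyshev(p, q): return np.max(np.abs(p - q))
def canberra(p, q):
    num, den = np.abs(p - q), np.abs(p) + np.abs(q)
    nz = den > 0
    return np.sum(num[nz] / den[nz])   # skip 0/0 terms

def classify(test_vec, train_vecs, train_labels, dist=euclidean):
    """Minimum distance corresponds to maximum similarity: return the
    label of the nearest training vector under the chosen metric."""
    d = [dist(test_vec, t) for t in train_vecs]
    return train_labels[int(np.argmin(d))]
```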

4 Experimental Results and Discussions In this paper, we use two variations of the Yale face database for the experiments. These are the Yale_Original face database (available at http://vision.ucsd.edu/datasets/yale_face_dataset_original/yalefaces.zip) and the Yale_Extended face database (available at http://vision.ucsd.edu/extyaleb/CroppedYaleBZip/CroppedYale.zip).


The Yale_Original face database includes 15 individuals, each having 11 images that vary in facial expression or configuration, giving a total of 165 grayscale images in GIF format. The Yale_Extended face database has 16,128 images of 28 human subjects under 9 poses and 64 illumination conditions [24]; we consider 10 subjects, each with 9 poses under one lighting condition, representing the pose variations. Two cases were considered for the experiments. In the first case, 5 images per class are used for training and the leftover images for testing, for both databases; in the second case, 4 images per class are used for training and the rest for testing for Yale_Extended, and 6 images per subject for training for Yale_Original. The first step is to enhance the image, which was achieved using histogram equalization. The next step is feature extraction using the PCA and DCTPCA methods, depending on the face recognition technique. We consider three FR techniques: PCA, DCTPCA + CS [16], and DCTPCA + Bat [17], as shown in Fig. 1. Different parameters of both optimization techniques were varied during the experiments; Table 1 shows the values for which optimal results were obtained and which are used in our FR system. The first experiment was executed to see the effect of the different distance classifiers on the face recognition system based on PCA, DCTPCA + CS, and DCTPCA + Bat using

Fig. 2 Analysis of FR techniques on Yale_Extended database for case 1 using: a ED; b MD; c CD and d ChD

Table 1 Parameters used for both the algorithms
S. No.  Name of the parameter              Value
1       Number of bats/nests               15
2       Pulse rate r                       0.5
3       Loudness L                         1.5
4       Number of iterations               25
5       Probability of abandon nest (Pa)   0.4
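For illustration, a simplified binary Bat-style selection loop using the population size, pulse rate, loudness, and iteration count of Table 1 might look as follows; these update rules are a generic sketch of ours, and the exact formulation used by the authors in [17] may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def bat_feature_selection(fitness, n_feats, n_bats=15, iters=25,
                          loudness=1.5, pulse_rate=0.5):
    """Search over boolean feature masks; `fitness` scores a mask, e.g.,
    by the recognition rate the selected features achieve."""
    pop = rng.random((n_bats, n_feats)) < 0.5
    scores = np.array([fitness(m) for m in pop])
    b = int(scores.argmax())
    best, best_score = pop[b].copy(), scores[b]
    for _ in range(iters):
        for i in range(n_bats):
            cand = pop[i].copy()
            move = rng.random(n_feats) < 0.5       # pull bits toward the best
            cand[move] = best[move]
            if rng.random() > pulse_rate:          # occasional local flips,
                flip = rng.random(n_feats) < min(loudness / n_feats, 1.0)
                cand[flip] = ~cand[flip]           # scaled by the loudness
            s = fitness(cand)
            if s >= scores[i]:
                pop[i], scores[i] = cand, s
                if s > best_score:
                    best, best_score = cand.copy(), s
    return best

# Toy usage: prefer masks selecting about 10 of 40 features.
mask = bat_feature_selection(lambda m: -abs(int(m.sum()) - 10), n_feats=40)
```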


the Yale_Extended face database. Figure 2 shows the performance of the four classifiers ED, MD, CD, and ChD on the Yale_Extended face database for case 1. The results reveal that the recognition rate increases with the number of features and, beyond a certain number of features, reaches a constant value for all four classifiers. The outcome shows that DCTPCA + Bat achieves a better recognition rate than the other methods, with fewer features, in all cases. Figure 3 compares the performance of the different classifiers for both cases of the Yale_Extended database, taking DCTPCA + Bat as the face recognition technique; the Manhattan Distance classifier gives the best results, with fewer features, in both cases. The second experiment was performed to analyze the performance of the different distance classifiers in the presence of motion blur, which we add during the training phase of the FR system. Figure 4 reveals the outcome of the

Fig. 3 Analysis of four distance classifiers on both the cases of Yale_Extended database: a case 1; b case 2

Fig. 4 Analysis of FR techniques by adding motion blur during training on Yale_Extended database case 1 using: a ED; b MD; c CD and d ChD


Fig. 5 Analysis of four distance classifiers on both the cases of the Yale_Extended database in the presence of motion blur: a case 1; b case 2

different classifiers on case 1 of the Yale_Extended face database. The results demonstrate the efficacy of DCTPCA + Bat not only in the absence of motion blur but also in its presence, for all distance classifiers, in both cases. Figure 5 compares the performance of the different distance classifiers for the DCTPCA + Bat face recognition technique; even in the presence of motion blur, the Manhattan Distance classifier gives the best results in both cases. The last experiment was performed on the Yale_Original face database, in the presence as well as the absence of motion blur; here also, motion blur is added during the training phase. Figures 6 and 8 show the performance of the various classifiers and face recognition methods without and with motion blur for case 1. The results show the efficacy of the DCTPCA + Bat method, which proves better than the other methods for all classifiers and for both cases. Figures 7 and 9 demonstrate the effectiveness of the Euclidean Distance classifier, both in the absence and in the presence of motion blur, for both cases of the Yale_Original database. Table 2 shows the performance of the four distance classifiers for a fixed number of features on the Yale databases. From the table, it is observed that face recognition using DCTPCA + Bat has the maximum recognition rate in all cases. It is also observed that the Euclidean distance classifier gives a good recognition rate in the majority of cases on the Yale_Original database, while the Manhattan Distance classifier produces better results on the Yale_Extended database, in the presence as well as the absence of motion blur. The effect of motion blur was also studied during the experiments; we found that for the Yale databases the recognition rate increases in the presence of motion blur.

5 Conclusions This paper evaluates the performance of various distance classifiers in a face recognition system. The distance classifiers and face recognition methods are tested on standard face databases, namely Yale_Original and Yale_Extended. The outcome

Table 2 Analysis of different classifiers based on recognition rate for the fixed number of features for both the variations of the Yale database in the presence as well as the absence of motion blur. Rows: the four distance classifiers (ED, MD, CD, ChD), each evaluated with the PCA, DCTPCA + CS, and DCTPCA + Bat methods. Columns: Yale_Extended case 1 and case 2 (C:10; NF:4) and Yale_Original case 1 and case 2 (C:15; NF:5), each reported without (W_O_M) and with (W_M) motion blur. D_C distance classifier; W_O_M without motion blur; W_M with motion blur


Fig. 6 Analysis of FR techniques on Yale_Original database for case 1 using: a ED; b MD; c CD and d ChD

Fig. 7 Analysis of four distance classifiers on both the cases of the Yale_Original database in the absence of motion blur: a case 1; b case 2

illustrates that DCTPCA + Bat delivers superior results with fewer features, both with and without motion blur, when compared with the PCA and DCTPCA + CS algorithms. For case 1 of the Yale_Original database, the DCTPCA + Bat method produces an 88% recognition rate without motion blur, while the DCTPCA + CS and PCA methods yield 84% and 76%, respectively, with 6 features using the Euclidean Distance classifier. The same holds in the presence of motion blur, where DCTPCA + Bat again yields the maximum recognition rate using the Euclidean Distance classifier. The results also depict the performance of the various classifiers. The Euclidean distance classifier has the maximum recognition rate for the Yale_Original database when compared with the other three classifiers. For case 2 of the Yale_Original database, with 6 features and the DCTPCA + Bat face recognition technique, the Euclidean Distance classifier


Fig. 8 Analysis of FR techniques by adding motion blur during training on Yale_Original for case 1 using: a ED; b MD; c CD and d ChD

Fig. 9 Analysis of four distance classifiers on both the cases of the Yale_Original database in the presence of motion blur: a case 1; b case 2

For case 2 of the Yale_Original database, with 6 features and the DCTPCA + Bat face recognition technique, the Euclidean distance classifier produces an 82% recognition rate, Manhattan distance gives 77%, Canberra distance 76%, and Chebyshev distance 77% when used without motion blur. On the other hand, for the same database in the presence of motion blur, the Euclidean distance classifier produces an 87% recognition rate, Manhattan distance gives 80%, Canberra distance 76%, and Chebyshev distance 81%, using only 6 features. The same holds for the Yale_Extended face database, except that there the Manhattan classifier produces the best results among the four. In the presence of motion blur for case 2, Manhattan distance gives a 70% recognition rate, the Euclidean distance classifier produces 68%, Canberra distance 60%, and Chebyshev distance 64%, using only 5 features.
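To make the comparison concrete, the sketch below implements the four distance measures as a nearest-neighbour matcher over extracted feature vectors. This is an illustrative reading, not the authors' code: the feature-extraction step (DCT-PCA with Bat-selected features) is assumed to have already produced the vectors, and the function names are ours.

  import numpy as np

  def euclidean(a, b):                               # ED
      return np.sqrt(np.sum((a - b) ** 2))

  def manhattan(a, b):                               # MD
      return np.sum(np.abs(a - b))

  def canberra(a, b):                                # CD
      # The small epsilon guarding against division by zero is our addition.
      return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + 1e-12))

  def chebyshev(a, b):                               # ChD
      return np.max(np.abs(a - b))

  def recognize(test_vec, train_vecs, train_labels, dist=euclidean):
      # Assign the label of the closest training feature vector.
      distances = [dist(test_vec, v) for v in train_vecs]
      return train_labels[int(np.argmin(distances))]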



An Improved TLBO Leveraging Group and Experience Learning Concepts for Global Functions

Jatinder Kaur, Surjeet Singh Chauhan and Pavitdeep Singh

Abstract In this paper, we propose a variant of the teaching-learning-based optimization (TLBO) algorithm, which leverages group learning and the experience learning of different learners to enhance the overall performance of the original TLBO algorithm. The modified algorithm is based upon the concept of microteaching, in which a class (called a population) is divided into smaller groups (called sub-populations); the algorithm is run individually on each sub-population, and the sub-populations are merged after a certain number of generations to improve the diversity of the population. Within each sub-population, the algorithm uses the mean values of all the learners within that group and exploits the learning experience of other individuals to find the optimum value. The group concept incorporated in the algorithm greatly benefits exploitation, whereas exploration benefits from the random regrouping of sub-populations and the learning-experience mechanism. The proposed algorithm is tested on several benchmark functions, and the results show that GTLBOLE performs well when compared with other established algorithms, including other variants of teaching-learning-based optimization. Keywords Optimization techniques · Single-objective optimization problem · Teaching-learning-based optimization (TLBO) · Group teaching-learning-based optimization using learning experience (GTLBOLE)

J. Kaur (B) · S. S. Chauhan
Chandigarh University, Mohali, India
e-mail: [email protected]
S. S. Chauhan
e-mail: [email protected]
P. Singh
Royal Bank of Scotland, Gurgaon, India
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2019 N. Yadav et al. (eds.), Harmony Search and Nature Inspired Optimization Algorithms, Advances in Intelligent Systems and Computing 741, https://doi.org/10.1007/978-981-13-0761-4_113


1 Introduction

In the past, population-based techniques were developed to solve single- and multiple-objective optimization problems, which helped in decision-making for industrial and other related systems. However, setting the correct algorithmic parameters is the most challenging task for the precise working of these algorithms: proper parameter selection assists in finding global optimal solutions rapidly, and a slight alteration in the algorithm parameters can modify the effectiveness of the technique. Significant research has been done to solve optimization problems, including the Genetic Algorithm (GA) [6] (leveraging the Darwinian theory of the survival of the fittest and the theory of evolution of living beings), Particle Swarm Optimization (PSO) [8] (leveraging the foraging behavior of a swarm of birds), the Artificial Bee Colony (ABC) [1, 7] (a swarm-intelligence algorithm leveraging the foraging behavior of bees), Differential Evolution (DE) [4, 10] (which works similarly to GA with specialized crossover and selection methods), and Harmony Search (HS) [5] (leveraging the concept of music improvisation by a musician). However, in some instances the difficulty of parameter selection increases with modifications and hybridization of these algorithms. Rao et al. demonstrated that teaching-learning-based optimization, which does not require any algorithm-specific parameters to be set, yields better optimization results compared with other population-based techniques. Recently, considerable work has been done on improving and modifying the original TLBO algorithm. A few of the algorithms derived from TLBO are: the elitist teaching-learning-based optimization algorithm [9] proposed by Rao in 2012, which applies the concept of elitism to the original teaching-learning-based optimization algorithm; LETLBO, proposed by Zou [11], which utilizes the experience of other learners to arrive at the global optimum solution; the VTTLBO technique, proposed by Chen et al. [2], which uses a dynamic, triangle-shaped population size to reduce the computing cost of the original TLBO; and SAMCCTLBO, proposed by Chen et al. [3], which leverages the concepts of multi-class cooperation and simulated annealing for faster convergence to the global optimum solution.

2 Teaching-Learning-Based Optimization (TLBO) Algorithm

TLBO has recently been proposed as a parameter-less, population-based evolutionary optimization algorithm based upon the concept of imparting knowledge through teacher-student interaction. It is considered parameter-less because there are no algorithm-specific parameters that must be set to run it. In the past few years, it has been successfully applied to a number of optimization problems. The TLBO algorithm can be divided into two important phases: the Teacher Phase (or the teacher-student interaction phase) and the Learner Phase (or the student-student


interaction phase). The optimized or best solution among the different individuals within the population represents the teacher of the class. The teacher interacts with the students to help them improve their knowledge (i.e., marks or grades); the quality and knowledge of the teacher play a vital role in shaping the outcomes of the learners. Additionally, learners constantly learn through interaction with other learners in the class. We now briefly touch upon the basic concepts of these two phases.

2.1 Teacher Phase

In the entire population, the best individual (having the optimal objective value) is considered the teacher. The positions of the learners are updated based upon the mean values of the learners over the different subjects. Let X_i = (x_{i,1}, x_{i,2}, ..., x_{i,n}) represent the ith learner with n subjects (or dimensions), and let X_{Mean} and X_{Teacher} represent the mean and best positions of the current class, respectively. The teacher phase can then be expressed as:

X_{new,i} = X_{old,i} + rand(0,1) (X_{Teacher} - T_F X_{Mean})    (1)

where X_{new,i} and X_{old,i} are the new and old (previous) positions of the ith learner, X_{Teacher} is the position of the current teacher, and rand(0,1) is a random function that generates a value in the range [0,1]. In each iteration of the teacher phase, new positions are computed for all learners. If X_{new,i} is superior to X_{old,i} in terms of objective function value, X_{new,i} is accepted and carried forward to the learner phase; otherwise, X_{old,i} remains unchanged. T_F is the teaching factor, which takes the value 1 or 2, decided randomly by the algorithm according to Eq. (2):

T_F = round[1 + rand(0,1) (2 - 1)]    (2)
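As a minimal Python sketch (not the authors' code), one teacher-phase sweep under Eqs. (1) and (2) can be written as follows; the objective f is assumed to be minimized, and since the paper does not state whether rand(0,1) is drawn once or per dimension, a per-dimension draw is assumed here.

  import numpy as np

  def teacher_phase(X, f):
      # One teacher-phase sweep over population X (shape N x n); f is minimized.
      teacher = X[np.argmin([f(x) for x in X])]     # best learner acts as teacher
      x_mean = X.mean(axis=0)                       # class mean over each subject
      for i in range(len(X)):
          tf = np.random.randint(1, 3)              # Eq. (2): TF is 1 or 2
          x_new = X[i] + np.random.rand(X.shape[1]) * (teacher - tf * x_mean)
          if f(x_new) < f(X[i]):                    # greedy acceptance
              X[i] = x_new
      return X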

2.2 Learner Phase

In the learner phase, two individuals i and j are randomly selected one after another such that i ≠ j. The process of updating learner i is shown in Eq. (3):

X_{new,i} = X_{old,i} + rand(0,1) (X_{old,i} - X_{old,j})    if f(X_{old,i}) < f(X_{old,j})
X_{new,i} = X_{old,i} + rand(0,1) (X_{old,j} - X_{old,i})    otherwise    (3)

Accept X_{new,i} if it is superior to X_{old,i}. Here, X_{new,i} represents the new position of the ith learner, X_{old,i} and X_{old,j} are the old positions of the ith and jth learners, respectively, and rand(0,1) is as defined for the teacher phase.
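Continuing the sketch above, the learner phase of Eq. (3) can be written as follows (again an illustrative reading, with a per-dimension random draw assumed):

  def learner_phase(X, f):
      # One learner-phase sweep implementing Eq. (3).
      n_pop, n_dim = X.shape
      for i in range(n_pop):
          j = np.random.choice([k for k in range(n_pop) if k != i])
          r = np.random.rand(n_dim)
          if f(X[i]) < f(X[j]):                 # i is better: move away from j
              x_new = X[i] + r * (X[i] - X[j])
          else:                                 # j is better: move toward j
              x_new = X[i] + r * (X[j] - X[i])
          if f(x_new) < f(X[i]):                # keep only improving moves
              X[i] = x_new
      return X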


3 TLBO Using Grouping and Learning from Others

Many methods improve the overall performance of TLBO by altering the learner and teacher phases; the approach proposed here mimics the micro-teaching concept and uses a grouping strategy to improve global performance. In this section, a variant of the teaching-learning-based optimization algorithm (GTLBOLE) is proposed, which embeds the concepts of group learning and the learning experience of other learners into the algorithm. Figure 1 depicts the flowchart of the GTLBOLE algorithm in its entirety. The improved algorithm has the following main parts.

3.1 Inspiration

The inspiration behind our method is to leverage the concept of micro-learning to improve the scores of learners within small groups, together with learning from other experienced learners, thus providing an effective and efficient population-based algorithm. In this paper, groups within a class are introduced into TLBO and combined with learning from other experienced learners within each group, so as to maintain a proper trade-off between exploration and exploitation capabilities and, as a consequence, enhance global optimization performance.

3.2 Teacher Phase of GTLBOLE

During the teacher phase of GTLBOLE, the individual having the best solution among all the groups is chosen as the teacher (X_{Teacher}) for the current generation. The learners belonging to the various groups within the population improve their performance by shifting toward the best solution (the teacher). The algorithm uses the current mean of each group when computing the new position of an individual within that group, as given in Eq. (4):

X_{new,i} = X_{old,i} + rand(.) (X_{Teacher} - mean_{group})    (4)

where X_{new,i} and X_{old,i} represent the new and old positions of the ith learner, respectively; rand(.) is a random function that generates a value between 0 and 1; X_{Teacher} is the best individual of the current generation; and mean_{group} is the mean value of the group. The new position X_{new,i} is accepted only if it is better than the old or previous one X_{old,i}; otherwise X_{old,i} is retained. The pseudocode of the teacher phase of GTLBOLE is given in Algorithm 1.

Fig. 1 Flowchart of the GTLBOLE algorithm


Algorithm 1 (Teacher phase of GTLBOLE)
Begin
  Determine the number of groups
  Determine the best teacher (X_Teacher) among all the groups
  Calculate the mean values (mean_group) of all the groups
  for i = 1 : NG        (NG is the number of groups)
    for j = 1 : GS      (GS is the size of a group)
      Calculate X_new,i = X_old,i + rand(.)(X_Teacher - mean_group)
      if (X_new,i is better than X_old,i)
        Accept X_new,i
      else
        Reject X_new,i
      endif
    endfor  (population within a group)
  endfor  (groups)
End
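A sketch of Algorithm 1 in the same Python style as above, under our assumption that each group is a NumPy array of shape (GS, n) (this data layout is ours, not the paper's):

  def gtlbole_teacher_phase(groups, f):
      # Teacher phase of GTLBOLE: one global teacher from all groups,
      # but each learner moves relative to its own group mean, Eq. (4).
      population = np.vstack(groups)
      teacher = population[np.argmin([f(x) for x in population])]
      for g in groups:
          mean_group = g.mean(axis=0)
          for i in range(len(g)):
              x_new = g[i] + np.random.rand(g.shape[1]) * (teacher - mean_group)
              if f(x_new) < f(g[i]):            # accept only improvements
                  g[i] = x_new
      return groups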

3.3 Learner Phase of GTLBOLE

In the learner phase of GTLBOLE, the learners dynamically improve their learning capabilities by either exploiting the standard learner-phase strategy or a customized learning process that draws on the experience of other learners within the group. This process is repeated for all the groups within the population. Under this technique, some learners acquire knowledge from a randomly selected learner within their group, and the remaining learners acquire knowledge from two differently selected individuals by way of mutual experience sharing within the group. This helps maintain the diversity of the individuals during the learner phase of the algorithm, which in turn moves the solution toward the global optimum. For the ith learner, if rand(.) < 0.5, the learner is updated according to Eq. (3); the new value of the ith learner is accepted only if it is superior to the previous one, which is otherwise retained. On the contrary, if rand(.) >= 0.5, two learners (the kth and lth) that differ from the ith learner are randomly selected within the group to compute the new value of the ith learner. Depending upon whether the kth or the lth learner is better, the new value of the ith learner is computed according to Eq. (5) or (6), respectively:

X_{new,i} = X_{old,i} + rand(.) (X_{old,k} - X_{old,l})    (5)

X_{new,i} = X_{old,i} + rand(.) (X_{old,l} - X_{old,k})    (6)

where rand(.) is a random function similar to the one defined in the teacher phase. Again, X_{new,i} and X_{old,i} represent the new and old values of the ith learner, respectively. The positions of the two randomly selected learners, the kth and the lth, are represented by X_{old,k}


and X_{old,l} within the group. Algorithm 2 gives the pseudocode of the learner phase of GTLBOLE. (The branches below follow the prose description above: rand(.) < 0.5 selects the standard Eq. (3) update.)

Algorithm 2 (Learner phase of GTLBOLE)
Begin
  for i = 1 : NG        (NG is the number of groups)
    for j = 1 : GS      (GS is the size of a group)
      if (rand(.) < 0.5)    % randomly select the learning approach
        Update the learner according to Eq. (3)
      else
        Select two learners X_k and X_l
        if the kth learner is better than the lth learner
          X_new,i = X_old,i + rand(.)(X_old,k - X_old,l)
        else
          X_new,i = X_old,i + rand(.)(X_old,l - X_old,k)
        endif
      endif
    endfor  (population within a group)
  endfor  (groups)
End
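A matching sketch of Algorithm 2 under the same assumed data layout; groups of size at least three are assumed so that two distinct peers k and l different from i can be drawn:

  def gtlbole_learner_phase(groups, f):
      # Learner phase of GTLBOLE, mixing Eq. (3) with Eqs. (5)-(6).
      for g in groups:
          gs, n_dim = g.shape
          for i in range(gs):
              r = np.random.rand(n_dim)
              others = [p for p in range(gs) if p != i]
              if np.random.rand() < 0.5:        # standard update, Eq. (3)
                  j = np.random.choice(others)
                  if f(g[i]) < f(g[j]):
                      x_new = g[i] + r * (g[i] - g[j])
                  else:
                      x_new = g[i] + r * (g[j] - g[i])
              else:                             # experience-based, Eqs. (5)-(6)
                  k, l = np.random.choice(others, size=2, replace=False)
                  if f(g[k]) < f(g[l]):         # kth learner is better
                      x_new = g[i] + r * (g[k] - g[l])
                  else:
                      x_new = g[i] + r * (g[l] - g[k])
              if f(x_new) < f(g[i]):            # keep only improving moves
                  g[i] = x_new
      return groups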

3.4 Grouping Strategy

The algorithm uses a grouping strategy to create various groups and applies the teacher and learner phases to these groups. The diversity of the groups is improved by regrouping the existing groups after a certain number of generations (a period T), which is an input to the algorithm. In a real classroom, to improve the effectiveness of the teaching-learning process, the students are made to change their respective positions after a certain period of time. Along the same lines, the algorithm regroups the existing individuals after a certain number of generations; the process is simple, as individuals are allocated to groups on a random basis. Figure 2 depicts nine students attending a class that is divided into three groups. In the first period the individuals are grouped randomly: the first group is allocated learner 1, learner 5 and learner 6; the second group is allocated learner 4, learner 8 and learner 9; and the third group contains learner 2, learner 3 and learner 7. After a period T (a certain number of generations), the groups are merged, and new groups are created on a random basis for subsequent processing. By changing the learners within the groups, the diversity of the groups is changed, which is key to finding the global optimum of the solution. The pseudocode of the merging and regrouping strategy is shown in Algorithm 3.


Fig. 2 Grouping strategy of GTLBOLE

Algorithm 3 (Merging and Regrouping Strategy of GTLBOLE)
Begin    % merging and regrouping strategy
  Determine the period T
  Determine the maximum number of generations Gen_max
  if (Gen_curr (the current generation) is less than Gen_max)
    if (mod(Gen_curr, T) is equal to 0)
      Merge the groups into a single population
      Recreate the groups by selecting from the population randomly
    else
      Apply the teacher and learner phases on the existing groups
    endif
  else
    Exit the algorithm
  endif
End
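The merging-and-regrouping step reduces to a random permutation of the merged population. A sketch under the same assumed data layout as the earlier snippets, with the surrounding generation loop shown in comments:

  def regroup(groups):
      # Merge all groups and randomly redistribute the individuals
      # into equally sized groups (the regrouping step of Algorithm 3).
      population = np.vstack(groups)
      idx = np.random.permutation(len(population))
      gs = len(groups[0])
      return [population[idx[g * gs:(g + 1) * gs]]
              for g in range(len(groups))]

  # Generation loop (sketch): regroup every T generations, then run
  # the two phases on the current groups.
  # for gen in range(gen_max):
  #     if gen % T == 0:
  #         groups = regroup(groups)
  #     groups = gtlbole_teacher_phase(groups, f)
  #     groups = gtlbole_learner_phase(groups, f)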

4 Simulation Results and Discussion

This section covers the experimental tests on various benchmark functions, categorized into unimodal and multimodal functions, used to validate the proposed GTLBOLE algorithm. Well-known existing optimization techniques


like FDR-PSO (an extension of the PSO algorithm), jDE (an improved version of the original differential evolution (DE) algorithm), the self-adaptive DE algorithm (SaDE), and TLBO are used for comparing the performance of the GTLBOLE algorithm.

4.1 Experimental Results and Comparisons

We have used two sets of test functions, comprising ten different benchmark functions listed in Tables 1 and 2, for the experiments. The simulation results for the ten benchmark functions for FDR-PSO, jDE, SaDE, and TLBO are taken from the literature.

Parameter Settings As the results retrieved from the literature were obtained over 30 and 50 dimensions (30D and 50D, where D represents the dimensionality of the global function) for the various algorithms, we ran our GTLBOLE algorithm with 30D and 50D to compare the mean solutions and standard deviations.

Table 1 Unimodal functions

Function         Formula                                                              Range            Fmin  Acceptance
f1 (Sphere)      F1(x) = Σ_{i=1}^{D} x_i^2                                            [−100, 100]      0     1E−8
f2 (Quadric)     F2(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)^2                              [−100, 100]      0     1E−8
f3 (Sum square)  F3(x) = Σ_{i=1}^{D} i x_i^2                                          [−10, 10]        0     1E−8
f4 (Zakharov)    F4(x) = Σ_{i=1}^{D} x_i^2 + (Σ_{i=1}^{D} 0.5 i x_i)^2
                         + (Σ_{i=1}^{D} 0.5 i x_i)^4                                  [−10, 10]        0     1E−8
f5 (Rosenbrock)  F5(x) = Σ_{i=1}^{D−1} [100 (x_{i+1} − x_i^2)^2 + (x_i − 1)^2]        [−2.048, 2.048]  0     5

Table 2 Multimodal functions

Function          Formula                                                             Range              Fmin  Acceptance
f6 (Ackley)       F6(x) = 20 − 20 exp(−(1/5) sqrt((1/D) Σ_{i=1}^{D} x_i^2))
                          − exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + e                    [−32.768, 32.768]  0     1E−6
f7 (Rastrigin)    F7(x) = Σ_{i=1}^{D} (x_i^2 − 10 cos(2π x_i) + 10)                   [−5.12, 5.12]      0     10
f8 (Weierstrass)  F8(x) = Σ_{i=1}^{D} (Σ_{k=0}^{kmax} [a^k cos(2π b^k (x_i + 0.5))])
                          − D Σ_{k=0}^{kmax} [a^k cos(2π b^k · 0.5)],
                          a = 0.5, b = 3, kmax = 20                                   [−0.5, 0.5]        0     5
f9 (Griewank)     F9(x) = Σ_{i=1}^{D} x_i^2 / 4000 − Π_{i=1}^{D} cos(x_i / √i) + 1    [−600, 600]        0     0.05
f10 (Schwefel)    F10(x) = 418.9829 D − Σ_{i=1}^{D} x_i sin(√|x_i|)                   [−500, 500]        0     600
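For reference, a few of the benchmark functions from Tables 1 and 2 written out in Python; these are straightforward transcriptions of the standard definitions, not code from the paper.

  import numpy as np

  def sphere(x):        # f1: global minimum 0 at x = 0
      return np.sum(x ** 2)

  def rosenbrock(x):    # f5: global minimum 0 at x = 1
      return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

  def rastrigin(x):     # f7: global minimum 0 at x = 0
      return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

  def griewank(x):      # f9: global minimum 0 at x = 0
      i = np.arange(1, x.size + 1)
      return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0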


Moreover, to reduce statistical errors during simulation of the algorithm, 30 independent runs were performed and their mean results were used for comparison. In Tables 1 and 2, the Range column defines the lower and upper bounds, Fmin gives the theoretical global minimum of the function, and Acceptance gives the acceptable value for a solution. Most of the parameters are taken from the relevant literature; the population size is set to 50, the group size to 10, and the period T to 50. The maximum number of function evaluations (FEs) used during simulation is 300,000.

Comparison of solution accuracy The mean optimum values, standard deviations, mean function evaluations (mFEs), and success ratios for the unimodal and multimodal functions for 30D are given in Tables 3 and 4, respectively. Similarly, Tables 5 and 6 show the performance results for the unimodal and multimodal functions for 50 dimensions (50D), respectively. Tables 3 and 5 show that GTLBOLE is the best in terms of mean solutions and mean function evaluations, irrespective of the dimensionality of the population. It is able to find

Table 3 Mean, Std., Mean FEs, and Successful Ratio of the 30 runs obtained by different methods for 30D unimodal functions

Algorithm          f1          f2          f3          f4          f5
FDR-PSO   Mean   2.77E−145   5.47E−12    8.73E−142   4.71E−17    2.12E+00
          Std.   3.61E−145   4.54E−12    1.48E−141   7.60E−17    2.19E+00
          mFEs   128180.70   266238.00   124255.00   204416.30   282830.70
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
jDE       Mean   1.96E−125   1.68E−11    1.19E−126   1.76E−19    1.28E−04
          Std.   2.98E−125   1.56E−11    1.67E−126   1.92E−19    1.75E−04
          mFEs   29735.3     235111.70   28106.00    155334.70   199621.00
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
SaDE      Mean   2.00E−110   2.20E−09    8.26E−114   1.53E−18    2.02E+01
          Std.   3.45E−110   2.46E−09    1.43E−113   1.42E−18    1.09E+00
          mFEs   27603.00    276928.7    25804.30    156240.00   NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%
TLBO      Mean   0.00E+00    1.87E−123   0.00E+00    3.80E−79    1.02E+01
          Std.   0.00E+00    3.03E−123   0.00E+00    6.42E−79    5.20E−01
          mFEs   7027.3      29019.30    6567        54680.00    NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%
GTLBOLE   Mean   0.00E+00    0.00E+00    0.00E+00    1.10E−280   2.21E+01
          Std.   0.00E+00    0.00E+00    0.00E+00    0.00E+00    2.64E+00
          mFEs   4207.7      6522        3936.3      14036.00    NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%


Table 4 Mean, Std., Mean FEs, and Successful Ratio of the 30 runs obtained by different methods for 30-D multimodal functions

Algorithm          f6          f7          f8          f9          f10
FDR-PSO   Mean   7.11E−15    2.89E+01    3.00E−03    3.36E−02    3.03E+03
          Std.   0.00E+00    1.49E+01    4.55E−03    1.03E−02    1.11E+03
          mFEs   135095.00   NaN         63473.30    103761.70   NaN
          Ratio  100.00%     0.00%       100.00%     100.00%     0.00%
jDE       Mean   3.55E−15    0.00E+00    0.00E+00    0.00E+00    3.82E−04
          Std.   0.00E+00    0.00E+00    0.00E+00    0.00E+00    0.00E+00
          mFEs   35680.00    31547.30    15453.30    15325.30    21506.30
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
SaDE      Mean   3.55E−15    0.00E+00    0.00E+00    0.00E+00    3.82E−04
          Std.   0.00E+00    0.00E+00    0.00E+00    0.00E+00    0.00E+00
          mFEs   33929.00    51625.00    8248        16249.70    25712.00
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
TLBO      Mean   3.55E−15    1.26E+01    0.00E+00    0.00E+00    4.57E+03
          Std.   0.00E+00    5.85E+00    0.00E+00    0.00E+00    4.07E+02
          mFEs   8760.7      19517.00    2157.3      3586.3      NaN
          Ratio  100.00%     33.30%      100.00%     100.00%     0.00%
GTLBOLE   Mean   2.37E−15    0.00E+00    0.00E+00    0.00E+00    4.62E+03
          Std.   2.05E−15    0.00E+00    0.00E+00    0.00E+00    6.39E+02
          mFEs   4644.7      936         646         2265.3      NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%

the optimal solutions for 4 out of 5 functions. "NaN" indicates that the algorithm failed to converge within the given maximum number of function evaluations. For the multimodal functions (f6 to f10) shown in Tables 4 and 6, GTLBOLE does not yield the best results on every function; jDE and SaDE perform better on f10. GTLBOLE nevertheless finds the solutions of the various functions in a smaller number of mean function evaluations than the other algorithms under consideration.

5 Conclusions

This paper presents a modified TLBO algorithm called GTLBOLE, which combines group learning with the learning experience of other learners to incorporate the micro-learning concept into the original TLBO algorithm. The mean value of each group is used in the teacher phase to update the positions of the learners, while a single global teacher chosen across all the groups is used when calculating the new positions.


Table 5 Mean, Std., Mean FEs, and Successful Ratio of the 30 runs obtained by different methods for 50D unimodal functions

Algorithm          f1          f2          f3          f4          f5
FDR-PSO   Mean   6.40E−67    7.73E−02    1.18E−66    7.52E−03    2.58E+01
          Std.   1.06E−66    2.86E−02    1.07E−66    1.22E−02    1.40E+00
          mFEs   154168.00   NaN         151954.30   NaN         NaN
          Ratio  100.00%     0.00%       100.00%     0.00%       0.00%
jDE       Mean   2.63E−87    2.19E−01    8.77E−87    2.61E−05    1.91E+01
          Std.   4.40E−87    9.72E−02    1.51E−86    1.51E−05    1.96E+00
          mFEs   44472.30    NaN         39868.30    NaN         NaN
          Ratio  100.00%     0.00%       100.00%     0.00%       0.00%
SaDE      Mean   1.69E−76    1.42E−01    1.52E−77    1.75E−05    3.94E+01
          Std.   8.49E−77    9.26E−02    2.63E−77    8.44E−06    3.20E+00
          mFEs   41812.30    NaN         39199.00    NaN         NaN
          Ratio  100.00%     0.00%       100.00%     0.00%       0.00%
TLBO      Mean   0.00E+00    7.26E−85    0.00E+00    2.11E−39    3.56E+01
          Std.   0.00E+00    1.22E−84    0.00E+00    3.10E−39    3.45E−01
          mFEs   7566.7      42666.70    7255.3      101644.70   NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%
GTLBOLE   Mean   0.00E+00    0.00E+00    0.00E+00    2.40E−175   2.82E+01
          Std.   0.00E+00    0.00E+00    0.00E+00    0.00E+00    2.12E+00
          mFEs   5165.7      10027.7     5462.3      32452.30    NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%

The diversity of the population within the different groups is increased by the regrouping strategy, which can easily be controlled by changing the period parameter defined at the beginning of the algorithm. Numerous experiments were performed on test functions (both unimodal and multimodal) in this paper. Comparing the experimental results, a few observations can be made about the modified algorithm. First, while it may not be the best for all the test functions, it shows excellent solution accuracy and a high convergence speed. Future work will include tweaking the algorithm to change the period value dynamically based upon the learners' positions, in order to further improve the diversity of the population. Group size also plays an important role in convergence speed; finding the right group size and regrouping strategy for a given problem will certainly attract researchers' attention in the future.


Table 6 Mean, Std., Mean FEs, and Successful Ratio of the 30 runs obtained by different methods for 50D multimodal functions

Algorithm          f6          f7          f8          f9          f10
FDR-PSO   Mean   1.89E−14    5.77E+01    5.41E−01    9.03E−03    6.69E+03
          Std.   4.10E−15    9.49E+00    8.60E−01    7.92E−03    1.72E+02
          mFEs   165983.00   NaN         93960.30    121100.00   NaN
          Ratio  100.00%     0.00%       100.00%     100.00%     0.00%
jDE       Mean   7.11E−15    0.00E+00    9.18E−02    0.00E+00    6.36E−04
          Std.   0.00E+00    0.00E+00    1.59E−01    0.00E+00    0.00E+00
          mFEs   51069.00    56013.70    57106.70    21667.00    39878.3
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
SaDE      Mean   7.11E−15    6.63E−01    0.00E+00    0.00E+00    6.36E−04
          Std.   0.00E+00    1.15E+00    0.00E+00    0.00E+00    0.00E+00
          mFEs   49733.70    93548.70    12453.30    21508.00    52637.00
          Ratio  100.00%     100.00%     100.00%     100.00%     100.00%
TLBO      Mean   3.55E−15    2.26E+01    0.00E+00    0.00E+00    8.06E+03
          Std.   0.00E+00    8.10E+00    0.00E+00    0.00E+00    2.43E+02
          mFEs   9280.7      NaN         2626        3943.3      NaN
          Ratio  100.00%     0.00%       100.00%     100.00%     0.00%
GTLBOLE   Mean   2.37E−15    0.00E+00    0.00E+00    0.00E+00    9.01E+03
          Std.   2.05E−15    0.00E+00    0.00E+00    0.00E+00    5.07E+02
          mFEs   5997.7      1171.3      1060        2768.3      NaN
          Ratio  100.00%     100.00%     100.00%     100.00%     0.00%

References

1. Akay, B., Karaboga, D.: A modified artificial bee colony algorithm for constrained optimization problems. Appl. Soft Comput. 10 (2010)
2. Chen, D., Lu, R., Zou, F., Li, S.: Teaching-learning-based optimization with variable-population scheme and its application for ANN and global optimization. Neurocomputing 173, 1096–1111 (2015)
3. Chen, D., Zou, F., Wang, J., Yuan, W.: SAMCCTLBO: a multi-class cooperative teaching-learning-based optimization algorithm with simulated annealing. Soft Comput. (2015). https://doi.org/10.1007/s00500-015-1613-9
4. Efrén, M., Mariana, E., Rubí, D.: Differential evolution in constrained numerical optimization: an empirical study. Inf. Sci. 180, 4223–4262 (2010)
5. Geem, Z., Kim, J., Loganathan, G.: A new heuristic optimization algorithm: harmony search. Simulation 76, 60–70 (2001)
6. Holland, J.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975)
7. Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005)
8. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of IEEE International Conference on Neural Networks, Piscataway, pp. 1942–1948 (1995)
9. Rao, R.V., Patel, V.: An elitist teaching learning based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 3, 535–560 (2012)


10. Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11, 341–359 (1997)
11. Zou, F., Wang, L., Hei, X., Chen, D.: Teaching learning-based optimization with learning experience of other learners and its application. Appl. Soft Comput. 37, 725–736 (2015)

Author Index

A Aakash Kumar, 159 Aakif Fairooze Talee, 293 Aayush Jain, 1041 Abbasi, M. Y., 293 Adis Alihodzic, 305 Aditya Bakshi, 879 Aditya Dixit, 539 Aditya Narayanan, G., 761 Aggarwal, A. K., 193 Agrawal, A., 547 Ajay Sharma, 1023 Akhil Gupta, 957 Akshay Goel, 661 Ali Sadollah, 97 Alok Singh, 279 Amandeep Kaur, 906 Aman Kumar, 621 Aman Mehta, 661 Amardeep Singh, 819 Amitav Chakraborty, 405 Amol Prakash, 679 Ananya Gupta, 203 Anil Kumar, U., 633, 1165 Animesh Jha, 897 Anita Pal, 325, 335, 347 Anjali Munde, 319 Ankit Agrawal, 239 Anoop Arya, 171 Anubhav Agrawal, 1153 Anubhav Namdeo, 393 Anupam Shukla, 539 Anupam Yadav, 27 Anurag Singh Baghel, 1177 Anusuman, S. K., 441

Apoorva Mishra, 539 Arindam Debroy, 471 Arun Kumar Misra, 57 Arup Abhinna Acharya, 371 Ashish Singla, 783, 809, 819 Ashok Kumar Suhag, 521 Ashwin Dhawad, 783 Asif Iqbal, 1177 Avinash Chandra Pandey, 661 Ayush Kumar Singh, 159 Azzam, R., 557 B Baghel, T., 557 Baier, K., 557 Balyan, L. K., 633 Bhaskar Dhariyal, 133 Bhavnesh Kumar, 143 Boominathan Perumal, 217 Brijesh Kumar Chaurasia, 1199 C Claudio Piciarelli, 869 Cyril Joe Baby, 217 D Deepti Bala Mishra, 371 Deval Verma, 193 Dhanesh G. Kurup, 77 Dheeraj Joshi, 1101, 1111 Dinesh Khanduja, 761, 999 Dinesh Kumar, 1209 Donghwi Jung, 97, 113, 153, 249, 259 Dushyant Kumar Singh, 1089



1236 E Eva Tuba, 305 Eui Hoon Lee, 97 F Fatemeh Jafari, 39 G Gaurav Dhiman, 857, 909 Gaurav Dwivedi, 621 Gaurav Srivastava, 279 Gayathri Narayanan, 77 Geetanjali Raghav, 527 Geetha Kuntoji, 383 Girish Mishra, 969, 1041 Gopi Nath Kaki, 599 Gunjan Soni, 751, 1111 Gupta, A., 547 Gupta, S. C., 181 Gurbakash Phonsa, 1123 Gur Mauj Saran Srivastava, 707 Gurvinder Singh, 939 Gursimran Kaur, 415 Gurvinder S. Virk, 783 H Harendra Pal Singh, 869 Hari Om, 687 Harish Sharma, 449, 797, 1009, 1023 Haris Smajlovic, 305 Harkiran Kaur, 415, 611 Harsh Goud, 1141 Himanshu Agarwal, 193 Himanshu Monga, 957 Himanshu Singh, 633 HitkulKarmanya Aggarwal, 67 Ho Min Lee, 97, 249, 259 I Indu Bala, 27, 697 Ishan Chawla, 809 J Jafar Yazdi, 39 Jagdish Chand Bansal, 449, 1023 Jasmeet Kalra, 527 Jatinder Kaur, 1221 Jaya Mukhopadhyay, 347 Joong Hoon Kim, 39, 97, 105, 113, 153, 249, 259 Juneja, A., 547 K Karmanya Aggarwal, 67

Author Index Kartik Sharma, 1153 Kavita Sharma, 1009 Kawaljeet Singh, 611 Kedar Nath Das, 85, 371 Komal Singh, 509 Krishna Mohan, M., 797 Krishna Murthy, S. V. S. S. N. V. G., 969 Kritika Gaur, 697 Kumar Gourab Mallik, 589 Kumar Mylapalli, M. S., 1077 Kumar, N., 771 Kumar Vatti, V. B., 1077 Kunal Govil, 621 Kusum Deep, 797 L Lokesh Singh, 731 M Madhvi Shakya, 49 Maheshwar Dwivedy, 67 Maheshwar Pathak, 357, 669 Maiya Din, 679, 741, 1049 Mandal, S., 383 Manisha Rajoriya, 579 Manoj Singh, 449 Manu, K., 383, 1123 Mathur, R., 547 Manik Tandon, 761 Manisha dubey, 171 Manish Kumar, 1101, 1111 Mayank Chhabra, 621 Mayank Satya Prakash Sharma, 119, 1199 Milan Tuba, 305 Mittal, M. L., 1101, 1111 Mohammad Hasan Shahid, 227 Mohammad Meftah Alrayes, 57 Mohd Salim Qureshi, 599 Mohit Agrawal, 707, 761 Mohit Kumar, 647, 845 Mousavi, S. Jamshid, 39 Mridul Narayan Tulsian, 761 Mukesh Chand, 17 Muttoo, S. K., 741, 1049 N Neelam Dwivedi, 1089 Neha Sethi, 939 Neha Yadav, 67 Neeraj Saini, 999 Neeraj Tyagi, 57 Nidhi Singh Pal, 143, 159 Nikhil Paliwal, 119 Nilanjan De,, 335



Nirmala Sharma, 1023 Nisha Rathee, 717 Nitin Sharma, 49

Romana Capor Hrosik, 305 Rishabh Srivastava, 897 Rohit Singh, 17, 569

O Om Ji Shukla, 751

S Sabahat Ali Khan, 293 Sachchida Nand Chaurasia, 105, 249, 259 Sachin Ahuja, 1, 835 Saibal K. Pal, 741, 1049 Sajal Mukhopadhyay, 347 Sajjad Eghdami, 105 Sakshi Kukreja, 717 Sakshi Vij, 717 Sandeep K. Raghuwanshi, 979 Sandeep Kumar, 1009 Sang Hoon Jun, 113 Sankar Prasad Mondal, 269 Sanoj Kumar, 869 Santanu Roy, 85 Sarmah, S. P., 471, 485 Sarsij Tripathi, 239 Satish Pawar, 1069 Saurav Dhand, 783 Shahin Ara Begum, 203 Shakti Chourasia, 449 Shakti Raj Chopra, 957 Shandilya, A. M.., 181 Shashank Agarwal, 923 Shivam Mahendru, 923 Shivani Sharma, 1 Shobhit Nigam, 889 Shruti Garg, 979 Shubhra Aakanksha,, 579 Shuvabrata Bandopadhaya, 1153 Shwetank Avikal, 17, 569 Shyam Sundar, 249 Siddharth Gupta, 509 Siddheshwar Mukhede, 679 Sinha, M. K., 557 Somendra P. S. Mathur, 171 Sonia Gupta, 647 Soniya Lalwani, 797 Sonu Lal Gupta, 1177 Soumya, J., 1165 Soon Ho Kwon, 153 Sreedhara, B. M., 383 Srinivasa Raju, K., 1057 Srinivas, M. B., 423 Sucheta Singh, 579 Sudhanshu Jha, 1199 Sujil, A., 751 Sumit Roy, 405 Sumonta Ghosh, 325 Sushma Gupta, 599

P Pal, S. K., 969 Pandiri Venkatesh, 279 Pankaj Swarnkar, 599, 1141 Param Deep Singh, 979 Paras Kalura, 527 Pavitdeep Singh, 1221 Philomina Simon, 431, 441 Pooja Bansal, 227 Prabhujit Mohapatra, 85 Pradeepika Verma, 687 Pranav Venkatesh Kulkarni, 1165 Praneeth, P., 1057 Prashant Sharma, 1009 Prashant Shrivastava, 119 Pratibha Joshi, 357, 669 Pratibha Tiwari, 509 Preeti, 1209 Preet Kamal, 835 Priyank Srivastava, 761, 999 Prosanta Sarkar, 325, 335 Pulak Samanta, 461 R Rachhpal Singh, 989 Raghav, G., 547 Raghunadh Pasunuri, 133 Rahul Banerjee, 405 Rahul Maurya, 897 Rajashree Mishra, 371 Rajeev Tripathi, 57 Rajen B. Bhatt, 217 Rajendra Singh Chillar, 717 Rajesh Kondabala, 949 Rajesh Kumar, 751 Rakhee, 423 Ramadevi Sri, 1077 Rama Shankar Sharma, 497, 1189 Ramraj Dangi, 1069 Ranjeet Singh Tomar, 119, 1199 Rashi Jain, 497 Rashmi Priyadarshini, 647 Rashmi Rashmi, 17, 569 Rathore, P., 485 Rausheen Bal, 879 Ravinder Ahuja, 897 Rekhram Janghel, 731

Sutirtha Kumar Guha, 589 Sunanda Gupta, 879 Surjeet Singh Chauhan, 1221 Surya Prakash, 751 Susheem Kashyap, 527 Surjit Singh, 939 Sukavanam, N., 771 Syed Abou Iltaf Hussain, 269 Swathi Jamjala Narayanan, 217 T Tarun Shrivastava, 181

V Vadlamudi China Venkaiah, 133 Vasan, A., 1057 Vashu Gupta, 661 Veda Bhanu, P., 1165 Verma, M. K., 557 Vignesh, R., 431 Vijay Kumar, 857, 949 Vikash Kumar Singh, 347 Vikas Verma, 143 Vishal Sharma, 527 Vishnu P. Agrawal, 999 Vivek Chawla, 697 Y Yashoda Makhija, 1189 Young Hwan Choi, 105, 113
