Feature Selection and Enhanced Krill Herd Algorithm for Text Document Clustering

Text document (TD) clustering is a recent trend in text mining in which TDs are separated into several coherent clusters, where all documents in the same cluster are similar. This book puts forward a new method for solving the TD clustering problem, which is established in two main stages: (i) a new feature selection method based on a particle swarm optimization algorithm with a novel weighting scheme, together with a detailed dimension reduction technique, is proposed to obtain a new subset of more informative features in a low-dimensional space. This new subset is subsequently used to improve the performance of the text clustering (TC) algorithm and to reduce its computation time. The k-mean clustering algorithm is used to evaluate the effectiveness of the obtained subsets. (ii) Four krill herd algorithms (KHAs), namely, the (a) basic KHA, (b) modified KHA, (c) hybrid KHA, and (d) multi-objective hybrid KHA, are proposed to solve the TC problem; each algorithm represents an incremental improvement on its predecessor. For the evaluation process, seven benchmark text datasets with different characterizations and complexities are used. The findings presented here confirm that the proposed methods and algorithms delivered the best results in comparison with similar methods reported in the literature.




Studies in Computational Intelligence 816

Laith Mohammad Qasim Abualigah

Feature Selection and Enhanced Krill Herd Algorithm for Text Document Clustering

Studies in Computational Intelligence Volume 816

Series editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland (e-mail: [email protected])

The series “Studies in Computational Intelligence” (SCI) publishes new developments and advances in the various areas of computational intelligence—quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output. The books of this series are submitted to indexing to Web of Science, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink.

More information about this series at http://www.springer.com/series/7092


Laith Mohammad Qasim Abualigah
Universiti Sains Malaysia
Penang, Malaysia

ISSN 1860-949X  ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-10673-7  ISBN 978-3-030-10674-4 (eBook)
https://doi.org/10.1007/978-3-030-10674-4
Library of Congress Control Number: 2018965455

© Springer Nature Switzerland AG 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

List of Publications

Journal

1. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Alomari, O. A.: Text feature selection with a robust weight scheme and dynamic dimension reduction to text document clustering (2017). Expert Systems with Applications. Elsevier. (IF: 3.928).
2. Abualigah, L. M., Khader, A. T., Hanandeh, E. S., Gandomi, A. H.: A novel hybridization strategy for krill herd algorithm applied to clustering techniques (2017). Applied Soft Computing. Elsevier. (IF: 3.541).
3. Abualigah, L. M., Khader, A. T., Hanandeh, E. S.: A new feature selection method to improve the document clustering using particle swarm optimization algorithm (2017). Journal of Computational Science. Elsevier. (IF: 1.748).
4. Abualigah, L. M., Khader, A. T.: Unsupervised text feature selection technique based on hybrid particle swarm optimization algorithm with genetic operators for the text clustering (2017). Journal of Supercomputing. Springer. (IF: 1.326).
5. Bolaji, A. L. A., Al-Betar, M. A., Awadallah, M. A., Khader, A. T., Abualigah, L. M.: A comprehensive review: Krill Herd algorithm (KH) and its applications (2016). Applied Soft Computing. Elsevier. (IF: 3.541).
6. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Hanandeh, E. S., Alyasseri, Z. A.: A hybrid strategy for krill herd algorithm with harmony search algorithm to improve the data clustering (2017). Intelligent Decision Technologies. IOS Press. (Accepted).
7. Abualigah, L. M., Khader, A. T., Hanandeh, E. S., Gandomi, A. H.: A hybrid krill herd algorithm and k-mean algorithm for text document clustering analysis. Engineering Applications of Artificial Intelligence. Elsevier. (Under 3rd revision). (IF: 2.894).
8. Abualigah, L. M., Khader, A. T., Hanandeh, E. S.: Multi-objective modified krill herd algorithm for intelligent text document clustering. Information Systems and Applications. Springer. (Under review). (IF: 1.530).
9. Abualigah, L. M., Khader, A. T., Hanandeh, E. S., Rehman, S. U., Shandilya, S. K.: β-Hill climbing technique for improving the text document clustering problem. Current Medical Imaging Reviews. (Under review). (IF: 0.308).

Chapter

1. Abualigah, L. M., Khader, A. T., Hanandeh, E. S.: A novel weighting scheme applied to improve the text document clustering techniques. In: Innovative Computing, Optimization and Its Applications. Studies in Computational Intelligence. Springer.
2. Abualigah, L. M., Khader, A. T., Hanandeh, E. S.: Modified krill herd algorithm for global numerical optimization problems. In: Advances in Nature-Inspired Computing and Applications. Springer. (Accepted).

Conference

1. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Awadallah, M. A.: A krill herd algorithm for efficient text documents clustering. In: IEEE Symposium on Computer Applications and Industrial Electronics (ISCAIE) (2016), pp. 67–72. IEEE.
2. Abualigah, L. M., Khader, A. T., Al-Betar, M. A.: Unsupervised feature selection technique based on genetic algorithm for improving the text clustering. In: 7th International Conference on Computer Science and Information Technology (CSIT) (2016), pp. 1–6. IEEE.
3. Abualigah, L. M., Khader, A. T., Al-Betar, M. A.: Unsupervised feature selection technique based on harmony search algorithm for improving the text clustering. In: 7th International Conference on Computer Science and Information Technology (CSIT) (2016), pp. 1–6. IEEE.
4. Abualigah, L. M., Khader, A. T., Al-Betar, M. A.: Multi-objectives-based text clustering technique using k-mean algorithm. In: 7th International Conference on Computer Science and Information Technology (CSIT) (2016), pp. 1–6. IEEE.
5. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Hanandeh, E. S.: Unsupervised text feature selection technique based on particle swarm optimization algorithm for improving the text clustering. In: First EAI International Conference on Computer Science and Engineering (2017). EAI.
6. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Hanandeh, E. S.: A new hybridization strategy for krill herd algorithm and harmony search algorithm applied to improve the data clustering. In: First EAI International Conference on Computer Science and Engineering (2017). EAI.
7. Abualigah, L. M., Khader, A. T., Al-Betar, M. A., Alyasseri, Z. A., Alomari, O. A., Hanandeh, E. S.: Feature selection with β-hill climbing search for text clustering application. In: Second Palestinian International Conference on Information and Communication Technology (2017). IEEE.
8. Abualigah, L. M., Sawaiez, A. M., Khader, A. T., Rashaideh, H., Al-Betar, M. A.: β-Hill climbing technique for the text document clustering. In: New Trends in Information Technology (NTIT) (2017). IEEE.

Acknowledgements

Undertaking this Ph.D. thesis has been a life-changing experience for me, and I would not have achieved it without the guidance and support of many people. I must acknowledge the many contributions of individuals who extended a helping hand throughout this study. I am thankful to Allah (SWT) for giving me the strength to finish this study. I am grateful to my supervisor, Prof. Dr. Ahamad Tajudin Khader from the School of Computer Sciences at Universiti Sains Malaysia, for his wise counsel, helpful advice, continued support, and supervision throughout the duration of this study. I am also thankful to my co-supervisor, Dr. Mohammed Azmi Al-Betar from the Department of Information Technology, Al-Huson University College, Al-Balqa Applied University, for his assistance, and to Dr. Essam Said Hanandeh from the Department of Computer Information System, Zarqa University, for his assistance. My family deserves special thanks: words cannot express how grateful I am to my father, mother, and brothers for all of the sacrifices they have made for me. Finally, I thank all of my friends who encouraged me throughout the duration of this study.


Contents

1 Introduction
  1.1 Background
  1.2 Motivation and Problem Statement
  1.3 Research Objectives
  1.4 Contributions
  1.5 Research Scope
  1.6 Research Methodology
  1.7 Thesis Structure
  References

2 Krill Herd Algorithm
  2.1 Introduction
  2.2 Krill Herd Algorithm
  2.3 Why the KHA has been Chosen for Solving the TDCP
  2.4 Krill Herd Algorithm: Procedures
    2.4.1 Mathematical Concept of Krill Herd Algorithm
    2.4.2 The Genetic Operators
  2.5 Conclusion
  References

3 Literature Review
  3.1 Introduction
  3.2 Background
  3.3 Text Document Clustering Applications
  3.4 Variants of the Weighting Schemes
  3.5 Similarity Measures
    3.5.1 Cosine Similarity Measure
    3.5.2 Euclidean Distance Measure
  3.6 Text Feature Selection Method
  3.7 Metaheuristic Algorithms for Text Feature Selection
    3.7.1 Genetic Algorithm for the Feature Selection
    3.7.2 Harmony Search for the Feature Selection
    3.7.3 Particle Swarm Optimization for the Feature Selection
  3.8 Dimension Reduction Method
  3.9 Partitional Text Document Clustering
    3.9.1 K-mean Text Clustering Algorithm
    3.9.2 K-medoid Text Clustering Algorithm
  3.10 Meta-heuristics Algorithms for Text Document Clustering Technique
    3.10.1 Genetic Algorithm
    3.10.2 Harmony Search Algorithm
    3.10.3 Particle Swarm Optimization Algorithm
    3.10.4 Cuckoo Search Algorithm
    3.10.5 Ant Colony Optimization Algorithm
    3.10.6 Artificial Bee Colony Optimization Algorithm
    3.10.7 Firefly Algorithm
  3.11 Hybrid Techniques for Text Document Clustering
  3.12 The Krill Herd Algorithm
    3.12.1 Modifications of Krill Herd Algorithm
    3.12.2 Hybridizations of Krill Herd Algorithm
    3.12.3 Multi-objective Krill Herd Algorithm
  3.13 Critical Analysis
  3.14 Summary
  References

4 Proposed Methodology
  4.1 Introduction
  4.2 Research Methodology Outline
  4.3 Text Pre-processing
    4.3.1 Tokenization
    4.3.2 Stop Words Removal
    4.3.3 Stemming
    4.3.4 Text Document Representation
  4.4 Term Weighting Scheme
    4.4.1 The Proposed Weighting Scheme
    4.4.2 Illustrative Example
  4.5 Text Feature Selection Problem
    4.5.1 Text Feature Selection Descriptions and Formulations
    4.5.2 Representation of Feature Selection Solution
    4.5.3 Fitness Function
    4.5.4 Metaheuristic Algorithms for Text Feature Selection Problem
  4.6 Proposed Detailed Dimension Reduction Technique
  4.7 Text Document Clustering
    4.7.1 Text Document Clustering Problem Descriptions and Formulations
    4.7.2 Solution Representation of Text Document Clustering Problem
    4.7.3 Fitness Function
  4.8 Developing Krill Herd-Based Algorithms
    4.8.1 Basic Krill Herd Algorithm for Text Document Clustering Problem
    4.8.2 Summary of Basic Krill Herd Algorithm
    4.8.3 Modified Krill Herd Algorithm for Text Document Clustering Problem
    4.8.4 Hybrid Krill Herd Algorithm for Text Document Clustering Problem
    4.8.5 Multi-objective Hybrid Krill Herd Algorithm for Text Document Clustering Problem
  4.9 Experiments and Results
    4.9.1 Comparative Evaluation and Analysis
    4.9.2 Measures for Evaluating the Quality of Final Solution (Clusters)
  4.10 Conclusion
  References

5 Experimental Results
  5.1 Introduction
    5.1.1 Benchmark Text Datasets
  5.2 Feature Selection Methods for Text Document Clustering Problem
    5.2.1 Experimental Design
    5.2.2 Results and Discussions
    5.2.3 Summary of Feature Selection Methods
  5.3 Basic Krill Herd Algorithm for Text Document Clustering Problem
    5.3.1 Experimental Design
    5.3.2 Experimental Results
    5.3.3 Parameter Setting
    5.3.4 Comparison and Analysis
    5.3.5 Basic Krill Herd Algorithm Summary
  5.4 Modified KH Algorithm for Text Document Clustering Problem
    5.4.1 Experimental Design
    5.4.2 Results and Discussions
    5.4.3 Modified Krill Herd Algorithm Summary
  5.5 Hybrid KH Algorithm for Text Document Clustering Problem
    5.5.1 Experimental Setup
    5.5.2 Results and Discussions
    5.5.3 Hybrid Krill Herd Algorithm Summary
  5.6 Multi-objective Hybrid KH Algorithm for Text Document Clustering Problem
    5.6.1 Experimental Setup
    5.6.2 Results and Discussions
    5.6.3 Multi-objective Hybrid Krill Herd Algorithm Summary
  5.7 Comparing Results Among Proposed Methods
  5.8 Comparison with Previous Methods
  5.9 Conclusion
  References

6 Conclusion and Future Work
  6.1 Introduction
  6.2 Research Summary
  6.3 Contributions Against Objectives
  6.4 Future Research
  Reference

Abbreviations

ABC    Ant Colony Optimization
ASDC   Average Similarity of Documents Centroid
BCO    Bee Colony Optimization
BKHA   Basic Krill Herd Algorithm
BPSO   Binary Particle Swarm Optimization
CS     Cuckoo Search
DDF    Detailed Document Frequency
DDR    Detailed Dimension Reduction
DF     Document Frequency
DFTF   Document Frequency with Term Frequency
DR     Dimension Reduction
DTF    Detailed Term Frequency
FE     Feature Extraction
FF     Fitness Function
FS     Feature Selection
GA     Genetic Algorithm
HKHA   Hybrid Krill Herd Algorithm
HS     Harmony Search
IDF    Inverse Document Frequency
KH     Krill Herd
KHA    Krill Herd Algorithm
KHM    Krill Herd Memory
KI     Krill Individual
LFW    Length Feature Weight
MKHA   Modified Krill Herd Algorithm
NLP    Natural Language Processing
NP     Nondeterministic Polynomial time
PSO    Particle Swarm Optimization
TC     Text Clustering
TD     Text Document
TDCP   Text Document Clustering Problem
TF     Term Frequency
TFSP   Text Feature Selection Problem
VSM    Vector Space Model
WTDC   Web Text Documents Clustering

List of Figures

Fig. 1.1   Research methodology
Fig. 2.1   A flowchart of the basic krill herd algorithm (Bolaji et al. 2016)
Fig. 2.2   A schematic representing the sensing domain around a KI (Bolaji et al. 2016)
Fig. 3.1   An example of the clustering search engine results by Yippy
Fig. 4.1   The research methodology stages
Fig. 4.2   Sigmoid function used in the binary PSO algorithm
Fig. 4.3   Representation of the text clustering solution
Fig. 4.4   The flowchart of adapting the basic KH algorithm to the text document clustering problem
Fig. 4.5   The flowchart of the modified krill herd algorithm
Fig. 4.6   The flowchart of the hybrid krill herd algorithm
Fig. 4.7   The flowchart of the multi-objective hybrid krill herd algorithm
Fig. 5.1   The general design of the experiments in the first stage
Fig. 5.2   The general design of the experiments in the second stage
Fig. 5.3   A snapshot of the dataset file
Fig. 5.4   The experiments of the proposed methods in the first stage
Fig. 5.5   The number of features in each dataset using three feature selection algorithms and dimension reduction techniques
Fig. 5.6   Average computation time of the k-mean iteration (in seconds)
Fig. 5.7   The average similarity document centroid (ASDC) values of the basic text clustering algorithms for 20 runs plotted against 1000 iterations on seven text datasets
Fig. 5.8   The average similarity document centroid (ASDC) values of the text clustering using the modified KH algorithms for 20 runs plotted against 1000 iterations on seven text datasets
Fig. 5.9   The average similarity document centroid (ASDC) values of the text clustering using the hybrid KH algorithms for 20 runs plotted against 1000 iterations on seven text datasets
Fig. 5.10  The average similarity document centroid (ASDC) values of the text clustering using the multi-objective hybrid KH algorithm for 20 runs plotted against 1000 iterations on seven text datasets

List of Tables

Table 3.1   Overview of the feature selection algorithms
Table 3.2   Overview of the text clustering algorithms
Table 4.1   Terms frequency
Table 4.2   Terms weight using TF-IDF
Table 4.3   Terms weight using LFW
Table 4.4   The text feature selection problem and optimization terms in the genetic algorithm context
Table 4.5   The text feature selection problem and optimization terms in the harmony search algorithm context
Table 4.6   The text feature selection problem and optimization terms in the particle swarm optimization algorithm context
Table 4.7   The text document clustering and optimization terms in the krill herd solutions context
Table 4.8   The parameter values for different variants of the text feature selection algorithms
Table 4.9   The best configuration parameters of the TD clustering algorithms
Table 5.1   Description of the document datasets used in this research
Table 5.2   Summary of the experimental methods using the k-mean clustering algorithm
Table 5.3   Comparing the performance of the k-mean text clustering algorithms in terms of the purity and entropy measures
Table 5.4   Comparing the performance of the k-mean text clustering algorithms in terms of the precision and recall measures
Table 5.5   Comparing the performance of the k-mean text clustering algorithms in terms of the accuracy and F-measure measures
Table 5.6   The number of the best results obtained by the proposed methods in terms of the evaluation measures over all datasets
Table 5.7   The number of the best results obtained by the feature selection algorithms (GA, HS, and PSO) in terms of the evaluation measures over all datasets
Table 5.8   Summary of the experiments using DDR to adjust the threshold value
Table 5.9   Dimension reduction ratio when threshold = 25 using DDR
Table 5.10  The average ranking of the k-mean clustering algorithms based on the F-measure (a lower rank value indicates the better method)
Table 5.11  Convergence scenarios of the basic krill herd algorithm (BKHA)
Table 5.12  The results of BKHA convergence scenarios (Scenario 1 through Scenario 5)
Table 5.13  The results of BKHA convergence scenarios (Scenario 6 through Scenario 10)
Table 5.14  The results of BKHA convergence scenarios (Scenario 11 through Scenario 15)
Table 5.15  The results of BKHA convergence scenarios (Scenario 16 through Scenario 20)
Table 5.16  The versions of the proposed basic krill herd algorithms (BKHAs) for the text document clustering problem (TDCP)
Table 5.17  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using accuracy measure values
Table 5.18  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using purity measure values
Table 5.19  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using entropy measure values
Table 5.20  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using precision measure values
Table 5.21  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using recall measure values
Table 5.22  Comparing the performance of the text document clustering algorithms on the original and proposed datasets using F-measure values
Table 5.23  The best results obtained by the basic clustering algorithms in terms of the evaluation measures over all datasets
Table 5.24  Versions of the proposed modified KH algorithm
Table 5.25  Comparing the performance of the text document clustering algorithms using the average accuracy measure values
Table 5.26  Comparing the performance of the text document clustering algorithms using the average purity measure values
Table 5.27  Comparing the performance of the text document clustering algorithms using the average entropy measure values
Table 5.28  Comparing the performance of the text document clustering algorithms using the average precision measure values
Table 5.29  Comparing the performance of the text document clustering algorithms using the average recall measure values
Table 5.30  Comparing the performance of the text document clustering algorithms using the average F-measure values and their ranking
Table 5.31  The average ranking of the modified clustering algorithms based on the F-measure (a lower rank value indicates the better method)
Table 5.32  The best results obtained by the modified clustering algorithms in terms of the evaluation measures over all datasets
Table 5.33  Versions of the proposed hybrid KH algorithm
Table 5.34  Comparing the performance of the hybrid text document clustering algorithms using the average accuracy measure values
Table 5.35  Comparing the performance of the hybrid text document clustering algorithms using the average purity measure values
Table 5.36  Comparing the performance of the hybrid text document clustering algorithms using the average entropy measure values
Table 5.37  Comparing the performance of the hybrid text document clustering algorithms using the average precision measure values
Table 5.38  Comparing the performance of the hybrid text document clustering algorithms using the average recall measure values
Table 5.39  Comparing the performance of the hybrid text document clustering algorithms using the average F-measure values
Table 5.40  The average ranking of the hybrid clustering algorithms based on the F-measure (a lower rank value indicates the better method)
Table 5.41  The best results obtained by the hybrid clustering algorithms in terms of the evaluation measures over all datasets
Table 5.42  Comparing the performance of the multi-objective hybrid text document clustering algorithms using the average accuracy measure values
Table 5.43  Comparing the performance of the multi-objective hybrid text document clustering algorithms using the average purity measure values
Table 5.44  Comparing the performance of the multi-objective hybrid text document clustering algorithms using the average entropy measure values
Table 5.45  Comparing the performance of the multi-objective hybrid text document clustering algorithms using the average precision measure values
Table 5.46  Comparing the performance of the multi-objective hybrid text document clustering algorithms using the average recall measure values
Table 5.47  Comparing the performance of the multi-objective text document clustering algorithms using the average F-measure values and their ranking
Table 5.48  The average ranking of the multi-objective clustering algorithm based on the F-measure (a lower rank value indicates the better method)
Table 5.49  The best results obtained by the multi-objective clustering algorithm in terms of the evaluation measures over all datasets
Table 5.50  Comparison results between the krill herd-based methods according to the F-measure evaluation criteria
Table 5.51  The average ranking of the krill-based algorithms based on the average F-measure (a lower rank value indicates the better method)
Table 5.52  Significance tests of the basic KH algorithms on the original datasets and the proposed datasets using a t-test with α < 0.05 (bold denotes a significantly different result)
Table 5.53  Significance tests of the basic KH algorithms and the modified KH algorithm using a t-test with α < 0.05 (bold denotes a significantly different result)
Table 5.54  Significance tests of the modified KH algorithms and the hybrid KH algorithm using a t-test with α < 0.05 (bold denotes a significantly different result)
Table 5.55  Significance tests of the hybrid KH algorithms and the multi-objective hybrid KH algorithm using a t-test with α < 0.05 (bold denotes a significantly different result)
Table 5.56  Key to the comparator methods
Table 5.57  Description of the text document datasets used by the comparative methods
Table 5.58  A comparison of the results obtained by MHKHA and the best-published results


Abstract

Text document (TD) clustering is a new trend in text mining in which TDs are separated into several coherent clusters, where documents in the same cluster are similar. In this study, a new method for solving the TD clustering problem is developed in the following two stages: (i) a new feature selection method using a particle swarm optimization algorithm with a novel weighting scheme, together with a detailed dimension reduction technique, is proposed to obtain a new subset of more informative features in a low-dimensional space. This new subset is used to improve the performance of the text clustering (TC) algorithm in the subsequent stage and to reduce its computation time. The k-mean clustering algorithm is used to evaluate the effectiveness of the obtained subsets. (ii) Four krill herd algorithms (KHAs), namely, the (a) basic KHA, (b) modified KHA, (c) hybrid KHA, and (d) multi-objective hybrid KHA, are proposed to solve the TC problem; each algorithm is an incremental improvement of the preceding version. For the evaluation process, seven benchmark text datasets with different characterizations and complexities are used. The results show that the proposed methods and algorithms obtained the best results in comparison with the other methods published in the literature.


Chapter 1

Introduction

1.1 Background

With the growth of the amount of text information on Internet web pages and in modern applications, interest in the text analysis area has increased to facilitate the processing of large amounts of unorganized text information (Sadeghian and Nezamabadi-pour 2015). Text clustering (TC) is an efficient unsupervised learning technique used to deal with numerous text documents (TDs) without any foreknowledge of the documents' class labels (Prakash et al. 2014). This technique partitions a set of large TDs into meaningful and coherent clusters by collating relevant (similar) documents in the same cluster based on their intrinsic characteristics (Cobos et al. 2014). The same clusters (groups) contain relevant and similar TDs, whereas different clusters contain irrelevant and dissimilar TDs (Abualigah et al. 2016a). In the modern era, clustering is an important activity because of the size of text information on Internet web pages (Oikonomakou and Vazirgiannis 2010). Clustering is used to determine relevant TDs and facilitate TD display by groups that share the same pattern and contents (Cobos et al. 2014). The TC technique is successfully utilized in many research areas to facilitate the text analysis process, such as data mining, digital forensics analysis, and information retrieval (Forsati et al. 2013).

The vector space model (VSM) is the most common model used in TC to represent each document; in this model, each term in the TDs is a feature (word) for document representation (Salton et al. 1975; Yuan et al. 2013). The TDs are represented in a multi-dimensional space, in which the position value of each dimension corresponds to a term frequency (TF) value. Because text features are generated from the distinct text terms, even a small document may be represented by hundreds or thousands of text features. Thus, TDs have high-dimensional informative and uninformative features (i.e., irrelevant, redundant, unevenly distributed, and noisy features). These uninformative features can be eliminated using the feature selection (FS) technique (Bharti and Singh 2016b; Zheng et al. 2015).
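To make the VSM representation concrete, the following minimal Python sketch (a toy illustration; the corpus and variable names are ours, not from the book) builds the term-frequency matrix in which each TD is a vector whose dimensions hold TF values:

```python
from collections import Counter

# A toy corpus: each text document (TD) becomes one row of the VSM matrix.
docs = [
    "text clustering groups similar text documents",
    "krill herd algorithm solves the clustering problem",
    "feature selection removes uninformative noisy features",
]

tokenized = [d.split() for d in docs]
vocabulary = sorted({term for doc in tokenized for term in doc})

# Position j of row i holds the term frequency (TF) of vocabulary[j] in document i.
vsm = [[Counter(doc)[term] for term in vocabulary] for doc in tokenized]

print(len(vocabulary), "features for only", len(docs), "documents")
for row in vsm:
    print(row)
```

Even this three-document toy corpus yields a vocabulary larger than the number of documents, which hints at how quickly the dimensionality grows for real collections.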


FS techniques are nondeterministic polynomial time-hard optimization methods used to determine the optimal subset of informative text features and improve the performance of the TC method while maintaining the necessary text information (Bharti and Singh 2016b; Lin et al. 2016). Typically, these techniques are performed even without any foreknowledge of the class label of the document. Conventionally, these techniques are divided into three main types, namely, FS based on document frequency (DF), FS based on TF, and the hybrid feature technique based on DF and TF (Wang et al. 2015b). Several text-based studies rely on FS methods, such as TC (Abualigah et al. 2016b), text classification (Zheng et al. 2004), and data mining (Lin et al. 2016). Recently, metaheuristic algorithms have been successfully used in the area of text mining to solve text document clustering problems (TDCPs) and text feature selection problems (TFSPs) (BoussaïD et al. 2013). The application of FS techniques produces a new subset with numerous informative text features. However, the dimensionality is still high because all dimensions remain even after removing the uninformative features. The dimensional space of this subset must be reduced further to facilitate the TC process (Lu et al. 2015). High-dimensional feature space has become a significant challenge to the TC domain because it increases the computational time while decreasing the efficiency of TC techniques (van der MLJP and van den HH 2009). Thus, a dimension reduction (DR) technique is necessary to produce a new low-dimensional subset of useful features (Diao 2014; Esmin et al. 2015; Sorzano et al. 2014). This technique will reduce the computation time and improve the performance of the TC algorithm. The DR technique should eliminate useless (unnecessary, redundant, and noisy) text features, preserve intrinsic information, and significantly reduce the dimension of the text feature space (Bharti and Singh 2014; Raymer et al. 2000).
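As a rough illustration of the DF-based family of techniques mentioned above (a sketch under our own assumptions, not the book's proposed method), the snippet below prunes features whose document frequency falls outside a chosen band, removing both very rare and near-ubiquitous terms:

```python
def df_filter(tokenized_docs, min_df=2, max_df_ratio=0.9):
    """Keep terms whose document frequency (DF) lies in [min_df, max_df_ratio * n]."""
    n = len(tokenized_docs)
    df = {}
    for doc in tokenized_docs:
        for term in set(doc):               # count each term once per document
            df[term] = df.get(term, 0) + 1
    return {t for t, f in df.items() if min_df <= f <= max_df_ratio * n}

docs = [["krill", "herd", "clustering"],
        ["clustering", "feature", "selection"],
        ["feature", "krill", "clustering"]]
# "herd" and "selection" are too rare; "clustering" appears in every document
# and is pruned as near-ubiquitous, leaving only the mid-frequency terms.
print(sorted(df_filter(docs)))              # ['feature', 'krill']
```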

1.2 Motivation and Problem Statement

Recently, unorganized TDs on Internet web pages and in modern applications have increased exponentially, and the number of Internet users in the world has exceeded three billion (https://en.wikipedia.org/wiki/List_of_countries_by_number_of_Internet_users). These users face difficulties in obtaining the information they need easily and neatly (Bharti and Singh 2014; Uğuz 2011). The process of managing such large TD collections is the TD clustering technique, which transforms a set of large unorganized TDs into coherent and similar groups, that is, clusters, which facilitate user browsing and searching for information. From the literature on TC techniques, four main problems are identified and explained as follows.

First, TDs usually contain informative and uninformative text features. Uninformative features can confuse and mislead the TD clustering algorithm, thereby reducing its performance (Bharti and Singh 2014, 2015b). Therefore, identifying and removing these uninformative text features can improve the performance of the clustering algorithm and reduce the computation time. The main drawback of existing methods is that they focus only on selecting a new subset of text features that relies on the existing weighting scheme (score) (Bharti and Singh 2016b). The current weighting scheme has certain weaknesses in evaluating the features because it computes the weight score for all features equally using one main factor (i.e., term frequency). Thus, the distinction between informative and uninformative text features over the document is insufficient (Ahmad et al. 2015; Bharti and Singh 2016b; Cobos et al. 2010; Moayedikia et al. 2015). The weight score should be more accurate to facilitate the text FS technique, where it plays the main role in the FS procedure by distinguishing among TD features and assigning a high score to the more informative features.

Second, the high-dimensional feature space is one of the most critical weaknesses of TDs because it influences the TD clustering process by increasing the execution time and decreasing the performance of the TD clustering algorithm (Bharti and Singh 2014; Nebu and Joseph 2016). The high-dimensional feature space contains both necessary (useful) and unnecessary (useless) text features. Thus, the DR technique reduces the dimensional feature space by pruning useless text features to improve the performance of the clustering algorithm. One possible way to handle the high-dimensional feature space is the DF method. This method performs the reduction process with fixed rules (the DF of the feature) when deciding to prune useless text features (Esmin et al. 2015; Tang et al. 2005; Yao et al. 2012a). The fundamental premise of the DF method is impractical because the frequently occurring features are considered more important in the documents than the infrequently occurring features (Bharti and Singh 2015b).

Third, the main advantage of the TC algorithm is its effectiveness in guaranteeing access to accurate clusters. Over the past few years, a large proportion of researchers in the TC domain have applied metaheuristic algorithms to solve TDCPs. However, a major drawback of these algorithms is that they provide a good exploration of the search space at the cost of exploitation (Bharti and Singh 2016a). Other problems are related to unsatisfactory outcomes, such as inaccurate clusters, and to the behavior of algorithms that were selected inappropriately for the given TC instances (Bharti and Singh 2015a; Binu 2015; Forsati et al. 2015; Wang et al. 2015a). All available TC techniques based on metaheuristic algorithms still face these problems. Solving the TC problem using metaheuristic algorithms thus still needs more in-depth investigation for several important reasons (Guo et al. 2015; Mohammed et al. 2015; Wang et al. 2015c), which can be justified by the "no free lunch" theorem (Wolpert 2013; Wolpert and Macready 1997).

Fourth, the core effectiveness of TD clustering techniques relies on the similarity and distance functions of the TC algorithm. These functions are used when deciding to assign a document to an appropriate cluster based on its similarity or distance value; these decisions affect the performance of the TD clustering algorithm (Rao et al. 2016). Similarity and distance measurements are standard criteria used in the TD clustering domain as an objective function. Nevertheless, the results of these measurements differ and lead to certain challenges because of the variance between the values of the similarity and distance measures for the same document (Abualigah et al. 2016a; Forsati et al. 2013). Determining the appropriate objective function to deal with large TDs is difficult (Mukhopadhyay et al. 2014, 2015). Multi-objective functions (multiple-criteria decision making) are currently used in several domains as an alternative technique to yield better results (George and Parthiban 2015; Saha et al. 2014). However, for the TD clustering technique, multiple-criteria decision making is relatively unknown.
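The variance between the two standard measures noted in the fourth point is easy to reproduce. In the hedged sketch below (toy vectors of our own choosing), cosine similarity and Euclidean distance disagree about which of two candidate centroids is the better match for the same document:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

doc = [1.0, 1.0, 0.0]
c1 = [4.0, 4.0, 0.0]    # same direction as doc, but far away in magnitude
c2 = [1.0, 0.0, 0.0]    # close in space, but pointing in a different direction

print(cosine(doc, c1), cosine(doc, c2))        # 1.0 vs 0.707: c1 wins on cosine
print(euclidean(doc, c1), euclidean(doc, c2))  # 4.243 vs 1.0: c2 wins on distance
```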

1.3 Research Objectives

The overall aim of this study is to develop an effective TD clustering method. The main objective is to show that the improved method can outperform the other comparative methods. This research has the following objectives:

• To find the best features:
  – to enhance the weight score of the terms for the text FS technique in order to improve TD clustering;
  – to improve the text FS technique for finding a new subset of more informative features to improve TD clustering;
  – to reduce the dimension of the feature space to a low-dimensional subset of useful features to improve TD clustering.
• To improve text document clustering using the krill herd algorithm:
  – to increase the effectiveness of the TD clustering technique and to reduce its errors;
  – to improve the global search ability and its speed of convergence;
  – to enhance the quality of the initial solutions obtained by the local search strategy;
  – to increase the likelihood of obtaining an accurate decision (similarity value) between the document and the cluster centroids in the k-mean clustering algorithm.

1.4 Contributions

After the research objectives are achieved, this study will have made the following main contributions:

1. Introduced a new weighting scheme to provide a significant influence score for the informative text features within the same document. This scheme focuses on assigning a favorable term weight to facilitate the text FS technique and distinguishes among the features of the clusters by giving a high weight to essential features in the same document.
2. Adapted metaheuristic optimization algorithms (i.e., the genetic algorithm (GA), harmony search (HS), and particle swarm optimization (PSO)) to find the best features at the level of each document using a new FS method.
3. Introduced a new detailed DR technique to reduce the dimensional space of text features based on the detailed term frequency (DTF) and detailed document frequency (DDF) of each feature, in proportion to the size of its effect on the document. The DDF of each feature at the level of all documents is proportional to the size of its effect on the documents in partnership with its DTF value.
4. Adapted the basic krill herd algorithm (BKHA) and tuned its parameters for the text document clustering problem.
5. Modified the krill herd algorithm (MKHA) to improve the global search ability. The modification reorders the basic KH operators so that the crossover and mutation processes are invoked after the positions of the krill are updated.
6. Hybridized the krill herd algorithm with the k-mean algorithm (HKHA), which acts as a new operator and plays a basic role in the MKHA to improve the local search ability (see the sketch after this list). The hybridization enhances the capacity of the KHA for finding locally optimal solutions by exploiting the refining power of the k-mean clustering algorithm.
7. Introduced a multi-objective function based on the local best concept for the k-mean algorithm to enhance the capacity of the KHA by achieving an accurate local search, called the multi-objective hybrid krill herd algorithm (MHKHA).
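The hybridization in contribution 6 can be pictured with the following hedged sketch (our own simplified encoding: each krill is one flattened vector of cluster centroids, and the perturbation scheme is illustrative, not the book's exact operator). The k-mean result seeds the initial KH population so that the swarm starts near a locally refined solution:

```python
import random

def seed_population_from_kmean(kmean_solution, pop_size, sigma=0.05):
    """Seed the krill population with the k-mean solution plus small Gaussian
    perturbations, instead of starting from purely random krill individuals."""
    population = [list(kmean_solution)]               # keep the k-mean result itself
    for _ in range(pop_size - 1):
        population.append([x + random.gauss(0.0, sigma) for x in kmean_solution])
    return population

# kmean_solution: a flattened vector of cluster centroids (toy values).
for krill in seed_population_from_kmean([0.2, 0.8, 0.5], pop_size=4):
    print(krill)
```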

1.5 Research Scope

This study covers the main TC preprocessing steps (i.e., text FS and DR techniques) and the metaheuristic algorithms (i.e., different versions of the proposed KHA) used to deal with the TDCP. The methods proposed in this study are applied to large amounts of TDs from electronic pages (i.e., newsgroup documents appearing on newswires, Internet web pages, and hospital information), modern applications (technical reports and university data), and the biomedical sciences (large biomedical datasets). Note that all the datasets used in this research are written in English. These TDs (datasets) are characterized by high-dimensional informative and uninformative text features (Bharti and Singh 2014, 2015b; Zheng et al. 2015). All of the proposed methods require the number of clusters as an input parameter K. Determining the correct number of clusters for the given TD datasets is an important issue because the number of document clusters is an essential parameter in TC problems. Standard TD datasets with different sizes (i.e., number of documents, number of terms, and number of clusters), constraints, and complexities are used in the TC technique to evaluate the proposed methods.


1.6 Research Methodology

This section briefly discusses the stages of the research methodology, which are applied to achieve the research objectives for improving the TD clustering technique, as shown in Fig. 1.1. The detailed description is provided in Chap. 4.

Fig. 1.1 Research methodology

The first stage is modeling and adapting GA, HS, and PSO to solve the text FS problem (TFSP) with the novel weighting scheme and the detailed DR technique. This stage facilitates the TC task by producing a low-dimensional subset of informative text features, which reduces the computation time and improves the performance of the TD clustering algorithm.

The second stage is adapting the basic KH algorithm (BKHA) and tuning its parameters to solve the text document clustering problem (TDCP). Then, three versions of the BKHA are modified (MKHAs) to improve the global (exploration) search ability. Three versions of the KHA hybridized with the k-mean algorithm (HKHAs) are used to increase the performance of the TC technique by improving the local (exploitation) search ability. These hybrid versions use the results of the k-mean algorithm as the initial solutions in the KHA to ensure a balance between local exploitation and global exploration. Finally, a multi-objective function is applied to obtain an accurate TC technique by combining two standard measures (i.e., the cosine similarity and Euclidean distance measurements). The multi-objective function is the primary factor used to obtain an effective clustering method by deriving an accurate similarity value between the document and the cluster centroid.
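One simple way to read the combination of the two measures (a hypothetical weighted-sum scalarization; the book's actual multi-objective formulation is detailed in Chap. 4) is as a single fitness value per document-centroid pair, rewarding cosine similarity and penalizing Euclidean distance:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_fitness(doc, centroid, w=0.5):
    """Weighted sum of two objectives: maximize cosine similarity and minimize
    Euclidean distance (mapped into (0, 1] via 1 / (1 + distance))."""
    return w * cosine(doc, centroid) + (1.0 - w) / (1.0 + euclidean(doc, centroid))

# The document joins the cluster whose centroid maximizes the combined score.
doc = [1.0, 1.0, 0.0]
centroids = [[4.0, 4.0, 0.0], [1.0, 0.0, 0.0]]
best = max(range(len(centroids)), key=lambda k: combined_fitness(doc, centroids[k]))
print("assigned to cluster", best)
```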

1.7 Thesis Structure

The rest of this thesis is organized as follows:

Chapter 2 (Krill Herd Algorithm): This chapter discusses the principles of the KHA. The analogy between the clustering technique and the optimization terms is provided, and the steps of the KHA are described in detail.

Chapter 3 (Literature Review): This chapter provides an overview of the text preprocessing steps, TFSPs, and TDCPs, with particular attention to TDs. It examines several methods used to deal with the TFSP and the TDCP and reviews the KHA in terms of applications, modifications, and hybridizations across many fields.

Chapter 4 (Proposed Methodology): This chapter illustrates the modeling of the TFSP and the TDCP. It also includes a comprehensive description of the adopted research methodology, including the different weighting schemes, the metaheuristic algorithms for text FS, the DR techniques, the KHAs for TD clustering, and the sequence of the procedures conducted.

Chapter 5 (Experimental Results): This chapter presents the experiments and results of all the proposed methods and compares each method with the others.

Chapter 6 (Conclusion and Future Work): This chapter provides the research conclusions and possible future work.

References
Abualigah, L. M., Khader, A. T., & Al-Betar, M. A. (2016a). Multi-objectives-based text clustering technique using k-mean algorithm. In 7th International Conference on Computer Science and Information Technology (CSIT) (pp. 1–6). https://doi.org/10.1109/CSIT.2016.7549464.
Abualigah, L. M., Khader, A. T., & Al-Betar, M. A. (2016b). Unsupervised feature selection technique based on genetic algorithm for improving the text clustering. In 7th International Conference on Computer Science and Information Technology (CSIT) (pp. 1–6). https://doi.org/10.1109/CSIT.2016.7549453.
Ahmad, S. R., Abu Bakar, A., & Yaakub, M. R. (2015). Metaheuristic algorithms for feature selection in sentiment analysis. In Science and Information Conference (SAI) (pp. 222–226).
Bharti, K. K., & Singh, P. K. (2014). A three-stage unsupervised dimension reduction method for text clustering. Journal of Computational Science, 5(2), 156–169.


Bharti, K. K., & Singh, P. K. (2015a). Chaotic gradient artificial bee colony for text clustering. Soft Computing, 1–14.
Bharti, K. K., & Singh, P. K. (2015b). Hybrid dimension reduction by integrating feature selection with feature extraction method for text clustering. Expert Systems with Applications, 42(6), 3105–3114.
Bharti, K. K., & Singh, P. K. (2016a). Chaotic gradient artificial bee colony for text clustering. Soft Computing, 20(3), 1113–1126.
Bharti, K. K., & Singh, P. K. (2016b). Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Applied Soft Computing, 43(C), 20–34.
Binu, D. (2015). Cluster analysis using optimization algorithms with newly designed objective functions. Expert Systems with Applications, 42(14), 5848–5859.
Boussaïd, I., Lepagnot, J., & Siarry, P. (2013). A survey on optimization metaheuristics. Information Sciences, 237, 82–117.
Cobos, C., León, E., & Mendoza, M. (2010). A harmony search algorithm for clustering with feature selection. Revista Facultad de Ingeniería Universidad de Antioquia, (55), 153–164.
Cobos, C., Muñoz-Collazos, H., Urbano-Muñoz, R., Mendoza, M., León, E., & Herrera-Viedma, E. (2014). Clustering of web search results based on the cuckoo search algorithm and balanced Bayesian information criterion. Information Sciences, 281, 248–264.
Diao, R. (2014). Feature selection with harmony search and its applications (Unpublished doctoral dissertation). Aberystwyth University.
Esmin, A. A., Coelho, R. A., & Matwin, S. (2015). A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artificial Intelligence Review, 44(1), 23–45.
Forsati, R., Keikha, A., & Shamsfard, M. (2015). An improved bee colony optimization algorithm with an application to document clustering. Neurocomputing, 159, 9–26.
Forsati, R., Mahdavi, M., Shamsfard, M., & Meybodi, M. R. (2013). Efficient stochastic algorithms for document clustering. Information Sciences, 220, 269–291.
George, G., & Parthiban, L. (2015). Multi objective hybridized firefly algorithm with group search optimization for data clustering. In 2015 IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN) (pp. 125–130).
Guo, Y., Li, Y., & Shao, Z. (2015). An ant colony-based text clustering system with cognitive situation dimensions. International Journal of Computational Intelligence Systems, 8(1), 138–157.
Lin, K.-C., Zhang, K.-Y., Huang, Y.-H., Hung, J. C., & Yen, N. (2016). Feature selection based on an improved cat swarm optimization algorithm for big data classification. The Journal of Supercomputing, 1–12.
Lu, Y., Liang, M., Ye, Z., & Cao, L. (2015). Improved particle swarm optimization algorithm and its application in text feature selection. Applied Soft Computing, 35, 629–636.
Moayedikia, A., Jensen, R., Wiil, U. K., & Forsati, R. (2015). Weighted bee colony algorithm for discrete optimization problems with application to feature selection. Engineering Applications of Artificial Intelligence, 44, 153–167.
Mohammed, A. J., Yusof, Y., & Husni, H. (2015). Document clustering based on firefly algorithm. Journal of Computer Science, 11(3), 453.
Mukhopadhyay, A., Maulik, U., Bandyopadhyay, S., & Coello, C. A. C. (2014). Survey of multiobjective evolutionary algorithms for data mining: Part II. IEEE Transactions on Evolutionary Computation, 18(1), 20–35.
Mukhopadhyay, A., Maulik, U., & Bandyopadhyay, S. (2015). A survey of multiobjective evolutionary clustering. ACM Computing Surveys (CSUR), 47(4), 61.
Nebu, C. M., & Joseph, S. (2016). A hybrid dimension reduction technique for document clustering. In Innovations in Bio-inspired Computing and Applications (pp. 403–416). Berlin: Springer.
Oikonomakou, N., & Vazirgiannis, M. (2010). A review of web document clustering approaches. In Data Mining and Knowledge Discovery Handbook (pp. 931–948). Berlin: Springer.


Prakash, B., Hanumanthappa, M., & Mamatha, M. (2014). Cluster based term weighting model for web document clustering. In Proceedings of the Third International Conference on Soft Computing for Problem Solving (pp. 815–822).
Rao, A. S., Ramakrishna, S., & Babu, P. C. (2016). MODC: Multi-objective distance based optimal document clustering by GA. Indian Journal of Science and Technology, 9(28).
Raymer, M. L., Punch, W. F., Goodman, E. D., Kuhn, L. A., & Jain, A. K. (2000). Dimensionality reduction using genetic algorithms. IEEE Transactions on Evolutionary Computation, 4(2), 164–171.
Sadeghian, A. H., & Nezamabadi-pour, H. (2015). Document clustering using gravitational ensemble clustering. In 2015 International Symposium on Artificial Intelligence and Signal Processing (AISP) (pp. 240–245).
Saha, S., Ekbal, A., Alok, A. K., & Spandana, R. (2014). Feature selection and semisupervised clustering using multiobjective optimization. SpringerPlus, 3(1), 465.
Salton, G., Wong, A., & Yang, C.-S. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11), 613–620.
Sorzano, C. O. S., Vargas, J., & Montano, A. P. (2014). A survey of dimensionality reduction techniques. arXiv:1403.2877.
Tang, B., Shepherd, M., Milios, E., & Heywood, M. I. (2005). Comparing and combining dimension reduction techniques for efficient text clustering. In Proceedings of the SIAM International Workshop on Feature Selection for Data Mining (pp. 17–26).
Uğuz, H. (2011). A two-stage feature selection method for text categorization by using information gain, principal component analysis and genetic algorithm. Knowledge-Based Systems, 24(7), 1024–1032.
van der Maaten, L. J. P., Postma, E. O., & van den Herik, H. J. (2009). Dimensionality reduction: A comparative review (Technical Report 2009-005). Tilburg, The Netherlands: Tilburg Centre for Creative Computing, Tilburg University.
Wang, G.-G., Gandomi, A. H., Alavi, A. H., & Deb, S. (2015a). A hybrid method based on krill herd and quantum-behaved particle swarm optimization. Neural Computing and Applications, 1–18.
Wang, Y., Liu, Y., Feng, L., & Zhu, X. (2015b). Novel feature selection method based on harmony search for email classification. Knowledge-Based Systems, 73, 311–323.
Wang, J., Yuan, W., & Cheng, D. (2015c). Hybrid genetic-particle swarm algorithm: An efficient method for fast optimization of atomic clusters. Computational and Theoretical Chemistry, 1059, 12–17.
Wolpert, D. H. (2013). Ubiquity symposium: Evolutionary computation and the processes of life: What the no free lunch theorems really mean: How to improve search algorithms. Ubiquity, 2013(December), 2.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82.
Yao, F., Coquery, J., & Lê Cao, K.-A. (2012a). Independent principal component analysis for biologically meaningful dimension reduction of large biological data sets. BMC Bioinformatics, 13(1), 1.
Yuan, M., Ouyang, Y. X., & Xiong, Z. (2013). A text categorization method using extended vector space model by frequent term sets. Journal of Information Science and Engineering, 29(1), 99–114.
Zheng, L., Diao, R., & Shen, Q. (2015). Self-adjusting harmony search-based feature selection. Soft Computing, 19(6), 1567–1579.
Zheng, Z., Wu, X., & Srihari, R. (2004). Feature selection for text categorization on imbalanced data. ACM SIGKDD Explorations Newsletter, 6(1), 80–89.

Chapter 2

Krill Herd Algorithm

2.1 Introduction
The krill herd (KH) algorithm was introduced by Gandomi and Alavi in 2012 to solve global optimization functions (Gandomi and Alavi 2012), and its behavior makes it well suited to the text clustering problem. This section presents the modeling of the basic krill herd algorithm (KHA) for the TDCP (Abualigah et al. 2016).

2.2 Krill Herd Algorithm
Krill herd (KH) is a swarm intelligence (SI) search algorithm based on the herding behavior of krill individuals (KIs). It is a population-based approach consisting of a large number of krill, where each krill individual (KI) moves through a multidimensional space to search for nearby food and a high-density herd (swarm). When KH is used as an optimization algorithm, the positions of the KIs are treated as the design variables, and the distance of a KI from the food is the objective function (Gandomi and Alavi 2012; Mandal et al. 2014). The KH algorithm combines elements of three categories of algorithms: (1) evolutionary algorithms, (2) swarm intelligence, and (3) bacterial foraging algorithms (Bolaji et al. 2016).

2.3 Why the KHA has been Chosen for Solving the TDCP
The KH is a suitable algorithm for the TC technique for two reasons: (i) the behavior of the KHA closely resembles the behavior of the TD clustering technique, and (ii) the KH algorithm has obtained better results in solving many problems in comparison with other common algorithms published in the literature.


The compatibility between the KHA and TC involves searching for the closest food (closest centroid) and high-density groups (similar groups) (Bolaji et al. 2016). Density is one of the main factors that influence the success of all the algorithms used to achieve coherent and similar groups. If the documents in the same cluster are relevant, then the density is high, and vice versa; likewise, if the KIs are close to the food, then the density is high, and vice versa. Thus, the behavior of the KIs closely mirrors that of the TD clustering technique (both operate as a swarm). In the KHA, each KI (document) moves toward the best solution by searching for the herd (group) with high density (similar groups) and the closest food (closest centroid). These factors are used as objectives to lead each krill to an optimal herd around the food. In TC, each document moves toward the best solution by searching for the most similar cluster centroid and the cluster with high density; these factors are likewise used as objectives to lead each document to an optimal cluster around the closest centroid. The relationship between the behavior of the KHA and the behavior of TD clustering is considered a strong argument for applying the KHA to solve the TDCP.

2.4 Krill Herd Algorithm: Procedures
In the natural system, predation disperses KIs, decreases the average krill density, and increases the distance of the krill herd from the food location; this process corresponds to the initialization phase of the KH algorithm. In the TC setting, the objective function of each document is taken to be the distance or similarity from the cluster centroid, and the fitness function of each candidate solution is the total distance or similarity between all documents and the cluster centroids. The KH algorithm has three main motion components used to update the individual positions; it then applies the KH operators, which are inspired by evolutionary algorithms. The sequence of procedures of the basic KH algorithm is shown in Fig. 2.1.

2.4.1 Mathematical Concept of Krill Herd Algorithm
The KH algorithm has three main steps to update the time-dependent position of each KI, as follows:
• Movement induced by the presence of other KIs: only the neighbors within the visual field affect the movement of a KI.
• Foraging activity: the KIs search for food resources.
• Random diffusion: the net movement of each KI based on density regions (Gandomi and Alavi 2012).
The position of the ith individual is updated by the following Lagrangian model, given in Eq. (2.1).


Fig. 2.1 A flowchart of the basic krill herd algorithm (Bolaji et al. 2016)

$$\frac{dx_i}{dt} = N_i + F_i + D_i, \qquad (2.1)$$

where, for krill $i$, $N_i$ is the motion induced on the ith individual by the other KIs. This value is estimated from the local swarm density, a target swarm density, a repulsive swarm density, and the target direction, which is affected by the best KI. $F_i$ is the foraging motion of the ith KI; this value is estimated from the food attractiveness, the food location, the foraging speed, the last foraging movement, and the best fitness of the ith krill so far. $D_i$ is the physical diffusion of the ith KI, estimated from two factors: the maximum diffusion speed of the KIs and a random direction (Gandomi et al. 2013).

2.4.1.1 Movement Induced by Other Krill Individuals

Movement induced is an illusion of visual perception in which a moving individual appears to move differently because of neighbors moving nearby in the visual field. Theoretically, individuals try to maintain a high density (Bolaji et al. 2016; Wang et al. 2014a). The direction of the induced movement is defined by Eq. (2.2):

$$N_i^{new} = N^{max}\,\alpha_i + \omega_n\,N_i^{old}, \qquad (2.2)$$

where, for krill $i$, $N^{max}$ is the parameter for tuning the movement induced by other individuals, determined experimentally (see Table 5.11); $\alpha_i$ is estimated from the local swarm density by Eq. (2.3); $\omega_n$ is the inertia weight of the movement induced by other individuals, in the range $[0, 1]$; and $N_i^{old}$ is the last induced movement.

$$\alpha_i = \alpha_i^{local} + \alpha_i^{target}, \qquad (2.3)$$

where $\alpha_i^{local}$ is the effect of the neighbors on the movement of the ith individual and $\alpha_i^{target}$ is the target direction effect provided by the best KI. The effect of the neighbors can be considered an attractive or repulsive tendency between the KIs for a local search, and the normalized values can be positive or negative (Bolaji et al. 2016; Gandomi and Alavi 2012). The $\alpha_i^{local}$ is calculated by Eq. (2.4):

$$\alpha_i^{local} = \sum_{j=1}^{n} \hat{K}_{i,j}\,\hat{x}_{i,j}, \qquad (2.4)$$

where $\hat{K}_{i,j}$ is the normalized value of the objective function for the ith KI with respect to the jth neighbor and $\hat{x}_{i,j}$ is the normalized value of the related positions for the ith KI. The $\hat{K}_{i,j}$ is calculated by Eq. (2.5):

$$\hat{K}_{i,j} = \frac{K_i - K_j}{K^{worst} - K^{best}}, \qquad (2.5)$$

where $K_i$ is the objective function value of the ith KI, $K_j$ is the objective function value of the jth neighbor $(j = 1, 2, \ldots, n)$, $n$ is the number of all KIs, and $K^{best}$ and $K^{worst}$ are the best and worst objective function values of the krill individuals so far. The $\hat{x}_{i,j}$ is calculated by Eq. (2.6):

$$\hat{x}_{i,j} = \frac{x_j - x_i}{\|x_j - x_i\| + \varepsilon}, \qquad (2.6)$$

where $x_i$ is the current position, $x_j$ is the position of the jth neighbor, $\|x_j - x_i\|$ is the vector norm used in determining the neighbors of the ith KI via Eq. (2.7), and $\varepsilon$ is a small positive number added to avoid singularities (Jensi and Jiji 2016; Mandal et al. 2014). The sensing distance is calculated by Eq. (2.7):

$$d_{e,i} = \frac{1}{5n}\sum_{j=1}^{n}\|x_i - x_j\|, \qquad (2.7)$$

where $d_{e,i}$ is the sensing distance of krill $i$. Note that if the distance between two KIs is less than this value, they are neighbors. Figure 2.2 illustrates the movement of the KIs and their neighbors.

Fig. 2.2 A schematic representation of the sensing domain around a KI (Bolaji et al. 2016)

The known target vector of each KI is the one with the highest objective function. The effect of the best fitness on the ith individual, which allows the solution to move towards the current best solution, is calculated by Eq. (2.8):

$$\alpha_i^{target} = C^{best}\,\hat{K}_{i,best}\,\hat{x}_{i,best}, \qquad (2.8)$$

$$C^{best} = 2\left(rand + \frac{I}{I_{max}}\right), \qquad (2.9)$$

where $C^{best}$ is the effective coefficient of the individuals, $\hat{K}_{i,best}$ is the best objective function value of the ith KI, $\hat{x}_{i,best}$ is the best position of the ith KI, $rand$ is a random number in $[0, 1]$ used to improve the local exploration, $I$ is the current iteration number, and $I_{max}$ is the maximum number of iterations (Gandomi and Alavi 2012).
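To make the induced-motion computation concrete, the following is a minimal NumPy sketch of Eqs. (2.2)–(2.9). It assumes a population matrix X with one KI per row and an objective vector K in which lower values are better; the parameter defaults (N_max, omega_n) are illustrative placeholders rather than the tuned values reported in Table 5.11.

```python
import numpy as np

def induced_motion(X, K, N_old, I, I_max, N_max=0.01, omega_n=0.5, eps=1e-10):
    """Sketch of the motion induced by other krill, Eqs. (2.2)-(2.9)."""
    n = X.shape[0]
    denom = (K.max() - K.min()) + eps
    # Pairwise position differences and distances; sensing distance, Eq. (2.7).
    diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = x_j - x_i
    dist = np.linalg.norm(diff, axis=2)
    d_sense = dist.sum(axis=1) / (5.0 * n)
    alpha = np.zeros_like(X)
    best = K.argmin()                              # index of the best KI
    rand = np.random.rand(n)
    for i in range(n):
        neighbors = np.where((dist[i] < d_sense[i]) & (np.arange(n) != i))[0]
        for j in neighbors:                        # alpha_local, Eq. (2.4)
            K_hat = (K[i] - K[j]) / denom          # Eq. (2.5)
            x_hat = diff[i, j] / (dist[i, j] + eps)  # Eq. (2.6)
            alpha[i] += K_hat * x_hat
        # alpha_target towards the best KI, Eqs. (2.8)-(2.9).
        C_best = 2.0 * (rand[i] + I / I_max)
        K_hat_b = (K[i] - K[best]) / denom
        x_hat_b = (X[best] - X[i]) / (np.linalg.norm(X[best] - X[i]) + eps)
        alpha[i] += C_best * K_hat_b * x_hat_b
    return N_max * alpha + omega_n * N_old         # Eq. (2.2)
```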

2.4.1.2 Foraging Motion

The foraging motion of the KIs is estimated from two effects, namely, the current food location and the previous experience about the food location (Abualigah et al. 2016; Bolaji et al. 2016; Mandal et al. 2014). The food location is defined so as to attract the KIs, ideally towards the global optimum. The foraging motion of the ith individual is expressed by Eq. (2.10):

$$F_i = V_f\,\beta_i + \omega_f\,F_i^{old}, \qquad (2.10)$$

where $V_f$ is the parameter for tuning the foraging speed, determined experimentally (see Table 5.11), $\beta_i$ is the food attraction of the ith KI given by Eq. (2.11), $\omega_f$ is the inertia weight of the foraging speed in the range $[0, 1]$, and $F_i^{old}$ is the last foraging motion.

$$\beta_i = \beta_i^{food} + \beta_i^{best}, \qquad (2.11)$$

where $\beta_i^{food}$ is the food attractiveness of the ith KI, calculated by Eq. (2.12), and $\beta_i^{best}$ is the effect of the best objective function of the ith KI.

$$\beta_i^{food} = C^{food}\,\hat{K}_{i,food}\,\hat{x}_{i,food}, \qquad (2.12)$$

$$C^{food} = 2\left(1 - \frac{I}{I_{max}}\right), \qquad (2.13)$$

where $\hat{K}_{i,food}$ is the normalized value of the objective function of the food (centroid) and $\hat{x}_{i,food}$ is the normalized value of the food (centroid) position. The center of food for each iteration is calculated by Eq. (2.14):

$$x^{food} = \frac{\sum_{i=1}^{n}\frac{1}{K_i}\,x_i}{\sum_{j=1}^{n}\frac{1}{K_j}}, \qquad (2.14)$$

where $n$ is the number of the KIs, $K_i$ is the objective function value of the ith KI, and $x_i$ is the position of the ith KI. The effect of the best objective function of the ith KI is handled using Eq. (2.15):

$$\beta_i^{best} = \hat{K}_{i,best}\,\hat{x}_{i,best}, \qquad (2.15)$$

where $\hat{K}_{i,best}$ is the best previous objective function value of the ith KI and $\hat{x}_{i,best}$ is the best previously visited food position of the ith KI. Both the movement induced by other individuals and the foraging motion decrease as time (iterations) increases.
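The foraging component can be sketched in the same style. In this hedged example, X_mem and K_mem are assumed arrays holding each KI's best-visited position and its objective value (the "best previous" quantities of Eq. (2.15)), and f is the objective function used to score the virtual food center; V_f and omega_f are illustrative values only.

```python
import numpy as np

def foraging_motion(X, K, X_mem, K_mem, F_old, f, I, I_max,
                    V_f=0.02, omega_f=0.5, eps=1e-10):
    """Sketch of the foraging motion, Eqs. (2.10)-(2.15)."""
    denom = (K.max() - K.min()) + eps
    # Virtual center of food, Eq. (2.14): positions weighted by 1/K_i.
    w = 1.0 / (K + eps)
    x_food = (w[:, None] * X).sum(axis=0) / w.sum()
    K_food = f(x_food)                                   # score of the food center
    C_food = 2.0 * (1.0 - I / I_max)                     # Eq. (2.13)
    beta = np.zeros_like(X)
    for i in range(X.shape[0]):
        # beta_food, Eq. (2.12): attraction towards the food center.
        K_hat_f = (K[i] - K_food) / denom
        x_hat_f = (x_food - X[i]) / (np.linalg.norm(x_food - X[i]) + eps)
        # beta_best, Eq. (2.15): attraction towards the KI's own best memory.
        K_hat_b = (K[i] - K_mem[i]) / denom
        x_hat_b = (X_mem[i] - X[i]) / (np.linalg.norm(X_mem[i] - X[i]) + eps)
        beta[i] = C_food * K_hat_f * x_hat_f + K_hat_b * x_hat_b  # Eq. (2.11)
    return V_f * beta + omega_f * F_old                  # Eq. (2.10)
```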

2.4.1.3 Physical Diffusion

Physical diffusion is the net movement of each KI from a region of high density to a region of low density, or vice versa. The better the position of a KI, the smaller its random movement. The physical diffusion of each individual is estimated from two factors, namely, the maximum diffusion speed ($D^{max}$) and a random directional vector ($\delta$) (Abualigah et al. 2016; Gandomi and Alavi 2012; Jensi and Jiji 2016; Wang et al. 2014a). Physical diffusion for the ith KI is determined by Eq. (2.16):

$$D_i = D^{max}\left(1 - \frac{I}{I_{max}}\right)\delta, \qquad (2.16)$$

where $D^{max}$ is the parameter for tuning the diffusion speed, determined experimentally (see Table 5.11), $\delta$ is a vector of random values in $[-1, 1]$, $I$ is the current iteration, and $I_{max}$ is the maximum number of iterations.
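The diffusion term is the simplest of the three motions. The sketch below assumes D_max is the experimentally tuned diffusion speed; the default used here is illustrative.

```python
import numpy as np

def physical_diffusion(n, dim, I, I_max, D_max=0.005):
    """Sketch of the random physical diffusion, Eq. (2.16)."""
    delta = np.random.uniform(-1.0, 1.0, size=(n, dim))  # random directions
    return D_max * (1.0 - I / I_max) * delta             # decays with iterations
```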


2.4.1.4 Updating the Krill Individuals

The movement of the ith KI is influenced by the movement induced by other KIs, the foraging motion, and the physical diffusion. These factors seek to obtain the best objective function value for each KI. The foraging motion and the movement induced by other KIs contain two global and two local strategies, which work in parallel and make KH a robust algorithm (Gandomi and Alavi 2012; Bolaji et al. 2016; Wang et al. 2013). The individual positions are updated towards the best objective function value by Eq. (2.17):

$$x_i(I + 1) = x_i(I) + \Delta t\,\frac{dx_i}{dt}, \qquad (2.17)$$

$$\Delta t = C_t\sum_{j=1}^{n}(UB_j - LB_j), \qquad (2.18)$$

where $\Delta t$ is an important and sensitive constant computed by Eq. (2.18), $n$ is the total number of decision variables, $LB_j$ and $UB_j$ are the lower and upper bounds of the jth variable $(j = 1, 2, \ldots, n)$, respectively, and $C_t$ is a constant in the range $[0, 2]$ that works as a scale factor of the speed vector.
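A minimal sketch of the position update follows, combining the three motions via Eqs. (2.1), (2.17), and (2.18). The clipping to the bounds is a common practical safeguard rather than part of the equations, and C_t = 0.5 is only an illustrative choice from the admissible range [0, 2].

```python
import numpy as np

def update_positions(X, N, F, D, LB, UB, C_t=0.5):
    """Sketch of the position update, Eqs. (2.17)-(2.18)."""
    delta_t = C_t * np.sum(UB - LB)     # Eq. (2.18): scale from the variable ranges
    X_new = X + delta_t * (N + F + D)   # Eq. (2.17), with dx/dt from Eq. (2.1)
    return np.clip(X_new, LB, UB)       # keep the solutions inside the bounds
```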

2.4.2 The Genetic Operators
The genetic algorithm (GA) is a stochastic metaheuristic search method for finding global solutions in a large search space, inspired by classical evolutionary algorithms (EAs). Candidate solutions are encoded as chromosomes (genomes). Sexual reproduction swaps and reorders parts of the parents' chromosomes, producing offspring that combine genetic information from both parents; this operation is called crossover. To avoid premature convergence, the mutation operator is used to increase the diversity of the solutions (Chen et al. 2013; Wang et al. 2014b). Genetic operators are incorporated into the KH algorithm to improve its performance (Bolaji et al. 2016; Gandomi and Alavi 2012).

2.4.2.1 Crossover Operator of KH Algorithm

The crossover operator is an effective procedure for producing global solutions. It is controlled by a probability $Cr$ and a uniformly distributed random value generated in $[0, 1]$ (Wang et al. 2014b). The mth component of $x_i$, $x_{i,m}$, is determined as follows:

$$x_{i,m} = \begin{cases} x_{p,m}, & \text{if } rand < Cr\\ x_{q,m}, & \text{otherwise,} \end{cases} \qquad (2.19)$$

$$Cr = 0.2\,\hat{K}_{i,best}, \qquad (2.20)$$

where the crossover probability $Cr$ is determined by Eq. (2.20); $p$ and $q$ refer to the two solutions chosen for the crossover operator, $p, q \in \{1, 2, \ldots, i-1, i+1, \ldots, n\}$; and $Cr$ increases with decreasing fitness, where $\hat{K}_{i,best} = K_i - K^{best}$, $K_i$ is the objective function value of the ith KI, and $K^{best}$ is the best objective function value found so far.
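The following is a hedged sketch of the crossover operator. The normalization of $\hat{K}_{i,best}$ into [0, 1] (dividing by $K^{worst} - K^{best}$) is an assumption made here so that Cr behaves as a probability, consistent with the normalization used in Eq. (2.5).

```python
import numpy as np

def kh_crossover(X, K, i, eps=1e-10):
    """Sketch of the KH crossover for krill i, Eqs. (2.19)-(2.20)."""
    n, dim = X.shape
    # Eq. (2.20), with K_hat normalized into [0, 1] (assumption, see text).
    Cr = 0.2 * (K[i] - K.min()) / (K.max() - K.min() + eps)
    # Choose two distinct partners p and q, both different from i.
    candidates = [j for j in range(n) if j != i]
    p, q = np.random.choice(candidates, size=2, replace=False)
    mask = np.random.rand(dim) < Cr          # component-wise test, Eq. (2.19)
    return np.where(mask, X[p], X[q])
```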

2.4.2.2 Mutation Operator of KH Algorithm

The mutation operator is an effective strategy for reaching global solutions. This strategy is controlled by a mutation probability $Mu$ (Wang et al. 2014a). The mutation operator is determined as follows:

$$x_{i,m} = \begin{cases} x_{gbest,m} + \mu\,(x_{p,m} - x_{q,m}), & \text{if } rand < Mu\\ x_{i,m}, & \text{otherwise,} \end{cases} \qquad (2.21)$$

$$Mu = 0.05/\hat{K}_{i,best}, \qquad (2.22)$$

where the mutation probability $Mu$ is determined by Eq. (2.22); $p, q \in \{1, 2, \ldots, i-1, i+1, \ldots, S\}$; and $Mu$ takes values in $[0, 1]$ and increases with decreasing fitness.
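A matching sketch of the mutation operator is given below. Here x_gbest is assumed to be the global-best position, and μ is taken as a random scaling vector in [0, 1]; both are assumptions, since the text above does not define them explicitly.

```python
import numpy as np

def kh_mutation(X, K, i, x_gbest, eps=1e-10):
    """Sketch of the KH mutation for krill i, Eqs. (2.21)-(2.22)."""
    n, dim = X.shape
    K_hat = (K[i] - K.min()) / (K.max() - K.min() + eps)  # normalized fitness
    Mu = 0.05 / (K_hat + eps)                             # Eq. (2.22)
    candidates = [j for j in range(n) if j != i]
    p, q = np.random.choice(candidates, size=2, replace=False)
    mu = np.random.rand(dim)                   # random scaling in [0, 1] (assumed)
    mask = np.random.rand(dim) < Mu            # component-wise test
    return np.where(mask, x_gbest + mu * (X[p] - X[q]), X[i])  # Eq. (2.21)
```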

2.5 Conclusion
The krill herd algorithm is a recent population-based metaheuristic method that has been successfully applied to many different optimization problems. The concepts of the KH algorithm were described in the context of sensing food and forming a high-density herd of KIs, and the main steps of the KH algorithm were fully described. Further explanation of the KH algorithm for the text clustering technique is provided in Chap. 4.

References


Abualigah, L. M., Khader, A. T., Al-Betar, M. A., & Awadallah, M. A. (2016). A krill herd algorithm for efficient text documents clustering. In 2016 IEEE Symposium on Computer Applications and Industrial Electronics (ISCAIE) (pp. 67–72).
Bolaji, A. L., Al-Betar, M. A., Awadallah, M. A., Khader, A. T., & Abualigah, L. M. (2016). A comprehensive review: Krill herd algorithm (KH) and its applications. Applied Soft Computing.
Chen, H., Jiang, W., Li, C., & Li, R. (2013). A heuristic feature selection approach for text categorization by using chaos optimization and genetic algorithm. Mathematical Problems in Engineering, 2013.
Gandomi, A. H., & Alavi, A. H. (2012). Krill herd: A new bio-inspired optimization algorithm. Communications in Nonlinear Science and Numerical Simulation, 17(12), 4831–4845.
Gandomi, A. H., Talatahari, S., Tadbiri, F., & Alavi, A. H. (2013). Krill herd algorithm for optimum design of truss structures. International Journal of Bio-inspired Computation, 5(5), 281–288.
Jensi, R., & Jiji, G. W. (2016). An improved krill herd algorithm with global exploration capability for solving numerical function optimization problems and its application to data clustering. Applied Soft Computing, 46, 230–245.
Mandal, B., Roy, P. K., & Mandal, S. (2014). Economic load dispatch using krill herd algorithm. International Journal of Electrical Power and Energy Systems, 57, 1–10.
Wang, G., Guo, L., Gandomi, A. H., Cao, L., Alavi, A. H., Duan, H., et al. (2013). Lévy-flight krill herd algorithm. Mathematical Problems in Engineering, 2013, Article ID 682073. https://doi.org/10.1155/2013/682073.
Wang, G., Guo, L., Wang, H., Duan, H., Liu, L., & Li, J. (2014a). Incorporating mutation scheme into krill herd algorithm for global numerical optimization. Neural Computing and Applications, 24(3–4), 853–871.
Wang, G.-G., Gandomi, A. H., & Alavi, A. H. (2014b). Stud krill herd algorithm. Neurocomputing, 128, 363–370.

Chapter 3

Literature Review

3.1 Introduction
This chapter provides a full explanation of the TD clustering technique, discusses the text document clustering problem (TDCP) and the text feature selection problem (TFSP), surveys related work, and examines the KHA and its applications.

3.2 Background
TD clustering plays a significant role in machine learning and text mining. Clustering methods are introduced to classify relevant text information into different topic-related groups, each of which is called a cluster (Fan et al. 2016). Each cluster contains relevant documents, whereas different clusters contain mutually irrelevant documents (Deepa et al. 2012). The TDCP can be formulated as an optimization problem that maximizes or minimizes an objective function (Jajoo 2008). Metaheuristic methods have been applied successfully to the TDCP and are among the most effective approaches so far (Aggarwal and Zhai 2012). Clustering is an important unsupervised learning technique used in numerous analysis applications to group sets of objects (text documents) into subsets of coherent groups (clusters) (Abualigah et al. 2016a). Clustering algorithms are divided into two categories: hierarchical clustering and partitional clustering algorithms (Agarwal and Mehta 2015). In hierarchical algorithms, an object can simultaneously belong to several nested clusters at different levels of the hierarchy. Hierarchical models can be either agglomerative or divisive. Agglomerative algorithms begin with each object as a separate cluster; these clusters are subsequently merged into larger clusters. By contrast, divisive algorithms start with all dataset objects in one cluster, which is subsequently partitioned into smaller clusters.



3.3 Text Document Clustering Applications
TD clustering is the most common unsupervised learning technique used as a tool in numerous applications in various science and business fields (Ghanem and Alhanjouri 2014; Jajoo 2008). We provide a brief statement of the basic trends that use TC. Figure 3.1 shows a modern search engine that groups its results. The standard applications that use the TC technique for arranging information are as follows:
• Finding Similar Documents: This technique is used when the user sees related documents in the search results and wants to obtain documents of the same type. Its advantage is that the clustering method can obtain relevant documents that share many of the same features.
• Organizing Large Document Collections: The information retrieval system focuses on retrieving the documents relevant to a particular query, but dealing with a huge amount of unorganized documents can be difficult. The task here is to group these documents according to their shared characteristics to ease the user's browsing activity.
• Duplicate Content Detection: This technique is used in many applications to detect duplicate documents in search engines. TC plays an essential role in the reclassification of search results, the clustering of related news, and plagiarism detection.
• Recommendation System: In this system, a user is asked to read many articles to reach a particular field. The clustering of the articles is performed in real time, without external knowledge bases, and the performance is efficient and fully automatic.
• Search Optimization: Document clustering helps improve browsing efficiency and performance and allows fast access to relevant documents by comparing the user's query with the clusters rather than with all documents directly, which makes ranking the search engine results easier. Some search engines that use clustered results are SnakeT (http://snaket.di.unipi.it), Yippy (http://yippy.com), iBoogie (http://www.iboogie.com), KeySRC (http://keysrc.fub.it), and Lingo3G (http://www.carrot2.org).

3.4 Variants of the Weighting Schemes
Term weighting is an important numerical-statistics step that weights the document words (terms or features) for the TD clustering processes according to term frequency (Shah and Mahajan 2012). In other words, term-weighting schemes are used to identify the significance and importance of each feature or term at the level of the document collection.



Fig. 3.1 An example of the clustering search engine results by Yippy

A popular term-weighting scheme used in text mining is the term frequency-inverse document frequency (TF-IDF) (Bharti and Singh 2016b; Forsati et al. 2013; Moayedikia et al. 2015; Mohammed et al. 2014b). Four new term-weighting schemes have been proposed for TD clustering (Prakash et al. 2014). The authors used the k-mean algorithm to validate the proposed schemes, with experiments conducted on TD datasets. The proposed weighted pair group method with arithmetic mean improved the results in comparison with the other term-weighting schemes for TC. The results were evaluated in terms of entropy, purity, and F-measure.
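As a point of reference for the schemes surveyed below, the following is a minimal sketch of the standard TF-IDF weighting computed from a term-document count matrix; it shows one common tf × idf variant, not the novel weighting scheme of this book, which is introduced in Chap. 4.

```python
import numpy as np

def tf_idf(counts):
    """Sketch of a common TF-IDF weighting.

    counts: (n_docs, n_terms) matrix of raw term frequencies.
    Returns the weight matrix w(t_j, d_i) = tf * log(n_docs / df).
    """
    n_docs = counts.shape[0]
    df = (counts > 0).sum(axis=0)                 # document frequency per term
    idf = np.log(n_docs / np.maximum(df, 1))      # inverse document frequency
    return counts * idf

# Example: 3 documents over a 4-term vocabulary.
counts = np.array([[2, 0, 1, 0],
                   [0, 3, 0, 1],
                   [1, 1, 0, 0]])
print(tf_idf(counts))
```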


Information retrieval systems have to deal with text documents of varying lengths, and document length normalization is used so that documents of all lengths can be retrieved fairly. In Singhal et al. (2017), it is argued that a normalization scheme that retrieves documents of all lengths with chances proportional to their probability of relevance is better than a scheme whose retrieval chances deviate from that probability. Normalizing all text documents into unit vectors by their Euclidean length eliminates all information about the length of the original document, which hides some subtleties of longer documents (longer documents have higher TF values and contain more distinct terms). Pivoted document length normalization can be used to modify any normalization function by reducing the gap between the retrieval and relevance probabilities. Once trained on one collection, pivoted length normalization can successfully be used on other text collections, yielding a robust, collection-independent normalization technique. The authors applied the idea of pivoting to the well-known cosine normalization function, and the results show that the proposed scheme overcomes some shortcomings of the cosine function.

The firefly algorithm (FA) has been used to improve solution diversity by introducing a new weighting scheme (Mohammed et al. 2014b). The authors applied the FA to TD clustering using the Reuters-21578 text dataset. The proposed method identifies the TD with the highest light intensity and represents it as a centroid, supported by calculating the similarity using the cosine function. All documents that are similar to a cluster centroid are assigned to that cluster, and dissimilar documents are assigned to other clusters. The proposed FA-based weighting scheme is competitive in text mining. The results were evaluated by two standard measures in the text-mining field, namely, purity and F-measure.

In Lv and Zhai (2011), the authors identify a common deficiency of current text retrieval models: the term frequency (TF) normalization component, which normalizes by document length, is not properly lower-bounded; consequently, very long text documents tend to be overly penalized. To address this problem analytically, they propose two desirable formal constraints to capture the heuristic of lower-bounded TF normalization and use constraint analysis to test several representative retrieval functions. The results show that these retrieval functions satisfy the constraints only for a certain range of parameter values and/or for a particular set of query terms. Empirical results further show that retrieval performance tends to be poor when the parameter is outside this range or the query term is not in the particular set. To solve this problem, they propose a general method for introducing a sufficiently large lower bound for TF normalization, which can be shown analytically to fix the problem. The experimental results show that the proposed method, which incurs almost no additional computational cost, can be applied to other retrieval functions, such as Okapi BM25, language models, and the divergence-from-randomness approach, to improve average precision.

A new weighting scheme for TC, namely, the cluster-based term weighting scheme (CBT), has been proposed to enhance the clustering algorithm (Murugesan and Zhang 2011). The proposed scheme is based on two common factors, namely, TF and IDF. The authors sought a new scheme that assigns the weighting values according to the information obtained from the produced clusters. It identifies the terms (features)


that are close within each cluster and increases their weight scores according to their importance, based on their distance from the cluster centroid. The authors consider the proposed scheme beneficial for enhancing the clustering technique based on document contents. The experimental results were obtained using the k-mean clustering algorithm and compared with three other existing weighting schemes. The proposed scheme outperforms the existing weighting schemes and thus enhances the clustering algorithm.

Term weighting schemes are central to the study of text retrieval systems. In Paik (2013), a novel TF-IDF term weighting scheme is proposed that employs two different within-document TF normalizations to capture two different aspects of term saliency. One component of the TF is effective for short queries, while the other works better on long queries. The final weight is measured by taking a weighted combination of these components, which is determined on the basis of the length of the query. Experiments carried out on a large number of TREC news and web collections show that the proposed weighting scheme outperforms five other comparative models with remarkable consistency and significance. The experimental results also show that the proposed model achieves significantly better precision than the existing models.

3.5 Similarity Measures
The TC problem is formulated as an optimization problem that maximizes or minimizes an objective function (Jajoo 2008). Calculating the TD similarity is a fundamental issue in text mining (i.e., in TC and information retrieval) (Aggarwal and Zhai 2012). Similarity and distance are the two kinds of measures used to score the closeness between documents and the cluster centroids (De Vries 2014; Mohammed et al. 2016; Song et al. 2014a). A common similarity measure is the cosine similarity, and a common distance measure is the Euclidean distance (Forsati et al. 2015; Jaganathan and Jaiganesh 2013). These measures generally do not yield identical clustering accuracy (Forsati et al. 2013). Similarity and distance measures are widely used for estimating the degrees of variation and closeness of various arguments; many researchers have paid significant attention to this issue and have obtained many results (Liao et al. 2014). These measures can be roughly divided into two categories. The first category is term-based and includes Block Distance, Cosine Similarity, Dice's Coefficient, Euclidean Distance, Jaccard Similarity, Matching Coefficient, and Overlap Coefficient. The second category is character-based and includes Longest Common Subsequence (LCS), Damerau-Levenshtein, Jaro, Jaro-Winkler, Needleman-Wunsch, Smith-Waterman, and N-gram (Gomaa and Fahmy 2013).


3.5.1 Cosine Similarity Measure
To partition a large set of TDs into related groups, a suitable measure is the cosine similarity, which computes the similarity between a document and a cluster centroid (Tunali et al. 2016). Hence, the cosine similarity is an essential measure for TD or data cluster analysis. This measure is a good choice for calculating the similarity value in the clustering domain because it affects the performance of the underlying clustering algorithm. Cosine similarity is one of the most popular TD similarity measures because of its precision and sensitivity, and it has been successfully used in many studies (Gomaa and Fahmy 2013; Kadhim et al. 2014; Lin et al. 2014). It is defined as the cosine of the angle between two vectors (documents). To calculate the similarity value between two TDs with respect to the document terms, a measure has been proposed that takes three cases into consideration (Lin et al. 2014): the term appears in both documents, the term appears in only one document, or the term appears in neither document. In the first case, the similarity value increases with increasing similarity between the two involved terms, and the contribution of the variation is notable. In the second case, a fixed value is added to the similarity. In the last case, the term does not affect the similarity. The proposed similarity measure was extended to calculate the similarity between two sets of TDs, and its effectiveness was assessed on several text datasets for TC and text classification problems; the performance achieved by the proposed measure is much better than that obtained by other measures. Equation (3.1) is used to compute the similarity between two vectors, where $d_1$ is document 1, represented as a vector of term weights $d_1 = (w_{1,1}, w_{1,2}, w_{1,3}, \ldots, w_{1,t})$, and $c_2$ is the centroid of cluster 2, represented as a vector of term weights $c_2 = (w_{2,1}, w_{2,2}, w_{2,3}, \ldots, w_{2,t})$ (Fan et al. 2016; Zaw and Mon 2015).

$$Cos(d_1, c_2) = \frac{d_1 \cdot c_2}{\|d_1\|\,\|c_2\|} = \frac{\sum_{j=1}^{t} w(t_j, d_1) \times w(t_j, c_2)}{\sqrt{\sum_{j=1}^{t} w(t_j, d_1)^2}\,\sqrt{\sum_{j=1}^{t} w(t_j, c_2)^2}}, \qquad (3.1)$$

where $w(t_j, d_1)$ is the weight of term $j$ in document 1 and $w(t_j, c_2)$ is the weight of term $j$ in the centroid of cluster 2; $\sum_{j=1}^{t} w(t_j, d_1)^2$ is the sum of the squared weights of all terms $(j = 1, \ldots, t)$ of document 1, and $\sum_{j=1}^{t} w(t_j, c_2)^2$ is the corresponding sum for the centroid of cluster 2. This measure returns 1 if the document and the centroid coincide in direction and 0 if they share no terms.
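A minimal sketch of Eq. (3.1) on weighted term vectors:

```python
import numpy as np

def cosine_similarity(d, c, eps=1e-10):
    """Cosine similarity between a document vector and a centroid, Eq. (3.1)."""
    return float(np.dot(d, c) / (np.linalg.norm(d) * np.linalg.norm(c) + eps))

# Example: identical vectors give 1.0; vectors with no shared terms give 0.0.
d1 = np.array([0.5, 0.0, 1.2])
c2 = np.array([0.5, 0.0, 1.2])
print(cosine_similarity(d1, c2))   # -> 1.0 (up to rounding)
```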


3.5.2 Euclidean Distance Measure
The Euclidean distance is a common measure used in the TC domain to calculate the straight-line distance between two points (i.e., a document and a cluster centroid) in Euclidean space (Bharti and Singh 2015a; De Vries 2014; Forsati et al. 2013, 2015). In Qian et al. (2004), two commonly used measures, namely, the cosine angle distance (CAD) and the Euclidean distance (EUD), are compared for high-dimensional queries. CAD measures the similarity between two vectors, whereas EUD measures the distance between them. The experimental results revealed that the information retrieval results obtained using the Euclidean distance are similar to those obtained using the cosine angle distance when the dimension is high. In Singh and Sharma (2013), five general similarity and distance measures, namely, Cosine Similarity, Euclidean Measure, Mahalanobis Distance, Pearson Correlation Coefficient, and Jaccard Coefficient, were tested using the k-mean TD clustering algorithm. Experiments were conducted on seven text datasets, and the results were evaluated by two external measures, namely, purity and entropy. The Euclidean measure outperformed the other similarity and distance measures in the TC domain. Mathematically, the Euclidean distance is an ordinary straight-line metric used to calculate the distance between two points (vectors) in Euclidean space. The Euclidean measure is defined in Eq. (3.2):

$$D(d_i, c_j) = \left(\sum_{j=1}^{t} |w_{d_{ij}} - w_{c_{ij}}|^2\right)^{1/2}, \qquad (3.2)$$

where $D(d_i, c_j)$ represents the distance between document $i$ and cluster centroid $j$, $w_{d_{ij}}$ is the weight of term $j$ in document $i$, and $w_{c_{ij}}$ is the weight of term $j$ in the centroid of cluster $j$ (Forsati et al. 2013).
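Similarly, a minimal sketch of Eq. (3.2):

```python
import numpy as np

def euclidean_distance(d, c):
    """Euclidean distance between a document vector and a centroid, Eq. (3.2)."""
    return float(np.sqrt(np.sum((d - c) ** 2)))

d1 = np.array([0.5, 0.0, 1.2])
c2 = np.array([0.0, 0.3, 1.0])
print(euclidean_distance(d1, c2))
```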

3.6 Text Feature Selection Method
The FS method is an NP-hard optimization problem that aims to find an optimal subset of informative features to improve the multidimensional TC method while maintaining the necessary information (Lin et al. 2016; Sorzano et al. 2014); it eliminates uninformative features. The FS method is defined as an optimization problem with a multi-objective function that maximizes the algorithm accuracy while minimizing the number of features (Bharti and Singh 2016b). Many applications benefit from the FS method, such as TC (Bharti and Singh 2016b), text classification (Hong et al. 2015), and data mining (Lin et al. 2016).


The FS method is divided into two categories, namely, supervised and unsupervised learning. Supervised FS methods are performed with knowledge of the document class labels (Kadhim et al. 2014). Unsupervised FS is the most suitable technique and is compatible with the clustering technique; it can be performed without any prior knowledge of the document class labels. Unsupervised FS methods are further divided into three types, namely, document frequency (DF)-based FS, TF-based FS, and hybrid FS methods based on the combination of DF and TF (Wang et al. 2015). Depending on the search strategy used to find an informative subset of text features, the existing FS techniques fall into three categories: filter, wrapper, and hybrid. First, filter methods perform a statistical analysis of the TD dataset by assessing the relevance of text features in the selection of a discriminative subset of features, without considering the interaction with the learning algorithm; examples include DF, chi-square, term strength, mutual information, mean absolute difference, term variance, and absolute cosine (Bharti and Singh 2014b). Second, wrapper methods employ a search strategy to find a new subset of features and evaluate the obtained subset using the learning algorithm (Bharti and Singh 2016b). Third, hybrid methods combine different methods to obtain a new subset of informative features. Different combinations of DR methods have been introduced in the literature, e.g., FS or feature extraction methods, filter-filter methods, and filter-wrapper methods, to find a new informative subset of text features (Fodor 2002). A hybrid method takes the power of one method and reduces the drawbacks of the other (Bharti and Singh 2014b). Generally, the text FS technique is used to enhance the text analysis process (i.e., text clustering) in two steps. The first step involves eliminating uninformative text features, which degrade the performance of the underlying algorithm. The second step involves reducing the dimension of the feature space by creating a new subset of low-dimensional useful features, which in turn reduces the computational time and facilitates the handling of a huge amount of unorganized TDs (Lin et al. 2016).

3.7 Metaheuristics Algorithm for Text Feature Selection
In recent years, a number of nature-inspired metaheuristic algorithms have been proposed to solve many optimization problems and have been applied to many real-life applications. Metaheuristic algorithms have also been successfully applied to different unsupervised learning problems (i.e., text FS and TD clustering). To solve a given optimization problem, a user can simply choose a suitable metaheuristic optimization algorithm (Nanda and Panda 2014). Recent works on FS using metaheuristic algorithms are summarized in the following subsections.


3.7.1 Genetic Algorithm for the Feature Selection
The genetic algorithm (GA) is an evolutionary optimization method developed by John Holland (Holland 1975). GA simulates evolutionary behavior and is a powerful optimization algorithm for large-scale search spaces. Three main operations, namely, selection, crossover, and mutation, are involved in its process. TF-IDF can be used as an objective function to evaluate each text feature (Uğuz 2011; Abualigah et al. 2016b), and GA is an evolutionary algorithm (EA) that has been used for solving the FS problem (Tsai et al. 2013).

A new unsupervised FS method was developed based on GA (Shamsinejadbabki and Saraee 2012). This method used a modified term variance measure to evaluate groups of terms, and the GA was designed to find an optimal subset of features based on the new modified measure. An experiment was conducted on the Reuters-21578 dataset to investigate the performance and the computational time of the proposed algorithm, evaluated in terms of accuracy, precision, recall, and F-measure. The proposed method was favorable and improved the FS accuracy.

GA has also been used to select a new subset of features according to their importance, as calculated by the weighting scheme. New approaches were suggested to select a subset of optimal features for minimum computational time and maximum algorithm performance (Abualigah et al. 2016b). The authors proposed a new GA to select features for improving the TC technique, with the term weight (TF-IDF) used to capture the relationships among the document terms. Experimental evaluations performed on spam e-mail documents showed that the proposed algorithm enhances the performance of the TC algorithm.

3.7.2 Harmony Search for the Feature Selection
The harmony search (HS) algorithm is a metaheuristic optimization approach introduced in 2001 by Zong Woo Geem (Geem et al. 2001). HS imitates music improvisation, in which a set of musicians plays pitches on their instruments to produce a pleasing harmony. The HS algorithm is used as a stochastic search technique to obtain good-quality subsets of features that describe the entire dataset while maintaining the necessary information (Vergara and Estévez 2014).

A new FS method based on the HS algorithm was proposed to enhance the performance of email classification using combined DF and TF (Wang et al. 2015). The authors used two threshold stages to distinguish informative features from the most


informative features. An experiment was conducted on six text datasets, and DF combined with TF was superior to the other methods. A novel FS method based on a modified HS algorithm has also been proposed to select a new optimal subset of informative text features (Diao 2014). The algorithm is used to reduce the computational time and to minimize the number of selected features. Experimental results show that the modified HS algorithm enhanced the efficacy of the text FS technique in comparison with the original HS.

3.7.3 Particle Swarm Optimization for the Feature Selection
The particle swarm optimization (PSO) algorithm was introduced by Kennedy and Eberhart (Eberhart et al. 1995). PSO mimics the social behavior of a swarm, which refers to the set of candidate solutions, where each solution is a particle. This algorithm uses the global-best-solution concept to obtain the optimal solution (Zhang et al. 2014). The PSO algorithm is an effective technique for solving the FS problem in the TC domain (Abd-Alsabour 2014).

New models have been proposed to enhance text FS based on the PSO algorithm (Lu et al. 2015). The authors applied three models: the first used the original PSO; the second improved the PSO algorithm with an inertia weight to optimize the FS model; and the third added a new function to the original PSO algorithm. Experiments were conducted using CHI as the basic text FS method together with the three PSO models. The improved PSO model enhanced the performance of text FS and was the best among the proposed models.

A hybrid method has been applied to solve the FS problem in TC by combining binary PSO with opposition-based learning (Bharti and Singh 2016b). A dynamic inertia weight was used to control the particle movement, and chaotic strategies and mutation were used to improve the global search ability. The proposed method performed well in comparison with other methods in terms of cluster accuracy on three text datasets, namely, Reuters-21578, Classic4, and WebKB.

3.8 Dimension Reduction Method
DR is one of the prominent preprocessing steps; it attempts to reduce the feature dimension by eliminating useless features (Diao 2014). High-dimensional space is a common problem in the text-mining area because it increases the execution time and reduces the efficiency of the text analysis process. An efficient DR technique should do the following: (1) recognize relevant features and eliminate irrelevant features; (2) eliminate redundant features; (3) eliminate noisy features; (4) conserve the important information present in the initial feature space; and (5) significantly reduce the dimension of the feature space (Bharti and Singh 2014b; Cunningham 2008).


DF is a one of the powerful and simple FS methods for DR technique, and it is the most efficient method for handling a huge amount of TDs (Tang et al. 2005). The main difference between FS and DR is that the FS is a process to eliminate non-informative features from each document, but, DR is a process to eliminate nonuseful feature from all the documents together (Bharti and Singh 2015b; Lin et al. 2016). DF method works by determining the number of the document in which a term occurs to remove useless features according to a predetermined threshold (Nebu and Joseph 2016). DR technique based on DF is used to improve the TC. DF is applied to keep useful features, thereby facilitating the distinction between documents. These features are considered relatively useful in text analysis; hence, they should not be removed (Cunningham 2008; Ljp et al. 2007). The basic conjecture behind using DF as a criterion is that non-useful features either do not affect the intrinsic information about one cluster or they do not affect the performance of the underlying TC algorithm (Yang and Pedersen 1997). It may also enhance the performance of the clustering algorithm if these low-frequency features appear to be noise, irrelevant, and redundant features. This technique is typically used after text pre-processing steps, which eliminate some very high-frequent features (i.e., stop words) (Shafiei et al. 2007). DF can be basically defined as follows. For a TD collection D in matrix notation, Dn∗t , t is the number of text features, and n is the number of all documents. The D F value of feature t, D Ft , is determined as the number of documents in which t occurs at least once among the n documents. To reduce the dimensionality space of D from t to m (m < t) according to predetermine threshold value, we select the m dimensions (features) with the top m DF values. It is an efficient and effective technique to improve the performance of the TC algorithm and to reduce the computational time (Bharti and Singh 2014b; Shafiei et al. 2006; Tang et al. 2005). DF threshold is the simplest way to reduce the dimension space. It is easy to scale a huge collection to TDs with low computational complexity. This is considered an ad hoc approach for improving the efficiency of feature reduction (Yang and Pedersen 1997). A new combination of DF and TF is proposed to enhance the FS technique DCFS (Wang et al. 2015). It is designed to improve the performance of the TC method by preserving the essential information and removing the unnecessary information. The threshold is determined beforehand to apply to the document’s features. An experiment is conducted on seven text benchmark datasets. The proposed method (DC-FS) in this study was superior to the other comparative methods. Three new models in this study are applied to the DR technique to eliminate uninformative features and to reduce the dimension space (Bharti and Singh 2014b). The first model used FS and feature extraction (FE) methods. The second model combined FS and feature extraction methods. The third model utilized DR to eliminate irrelevant, redundant, and noisy features without losing substantial information. Experiments were conducted over three subset text datasets. The three proposed models improved the performance of the clustering algorithm according to F-score and total execution time. DR techniques are proposed to perform FS using TV and DF methods (Bharti and Singh 2015b). These two methods are used for assigning the features’ relevance


Then, a feature extraction technique, principal component analysis (PCA), is used to further reduce the dimension space without losing the intrinsic information. The proposed method improves the clustering accuracy in comparison with the comparative methods. Linear discriminant analysis (LDA) is another commonly used technique. LDA aims to maximize the ratio of between-class scatter to total data scatter in the projected space, and it requires the class label of each data point. However, labeled data are often limited while unlabeled data exist in huge quantities, so LDA is difficult to apply in such cases. A novel method named semi-supervised linear discriminant analysis (SLDA) is therefore proposed (Wang et al. 2016). It can use a limited number of labeled data points and a large number of unlabeled ones for training, so that LDA can be adapted to the situation where only a few labeled data are available. The authors developed an iterative algorithm that alternately calculates the class indicator and the projection, and the convergence of the algorithm was demonstrated experimentally. Experimental results on eight datasets show that the performance of SLDA is superior to that of traditional LDA and some state-of-the-art algorithms. In Li et al. (2008), a simple solution to multi-class text categorization is presented. Classification problems are first expressed as optimization problems via discriminant analysis, and text categorization is then cast as the problem of finding coordinate transformations that reflect the inherent similarity of the data. Whereas most prior approaches decompose a multi-class classification problem into several independent binary classification tasks, the proposed approach performs multi-class classification directly. Using generalized discriminant analysis (GDA), a coordinate transformation that reflects the inherent class structure, as indicated by the generalized singular values, is identified. Comprehensive experiments illustrate the efficiency and effectiveness of the proposed approach. In Hafez et al. (2015), a feature selection method based on hybridizing the Monkey Algorithm (MA) with the Krill Herd Algorithm (KHA), called MAKHA, is proposed. Datasets ordinarily include a large number of attributes, many of which are irrelevant or redundant. The MAKHA algorithm adaptively balances exploration (global search) and exploitation (local search) to find the optimal solution quickly; it can rapidly search the feature space for an optimal feature subset by minimizing a given fitness function that incorporates both the classification accuracy and the size of the reduction. The proposed method was tested on 18 datasets and showed an advantage over other search methods, such as the PSO and GA optimizers.
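As a rough illustration of the FS-then-FE pipeline reviewed above, the sketch below scores features by term variance, keeps the top scorers, and then projects the reduced matrix with PCA computed via SVD; all sizes and the toy data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 50)).astype(float)  # toy TF matrix: 100 docs x 50 terms

# Stage 1 (FS): rank features by term variance (TV) and keep the top 20.
tv = X.var(axis=0)
X_sel = X[:, np.argsort(tv)[::-1][:20]]

# Stage 2 (FE): PCA on the selected features via SVD of the centered matrix.
Xc = X_sel - X_sel.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:5].T            # project onto the first 5 principal components

print(X_pca.shape)               # (100, 5)
```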

3.9 Partitional Text Document Clustering

The task of partitional TD clustering is to divide a collection of documents into subsets (clusters) that are as homogeneous as possible (Liu and Xiong 2011). Homogeneity is achieved by choosing a proper similarity function and maximizing the similarity between each document and its cluster centroid. The keys to achieving this task are the similarity function and the method of selecting the cluster centroids. The most common partitional text clustering algorithms are k-mean and k-medoid (Aggarwal and Zhai 2012; Nanda and Panda 2014).

3.9.1 K-mean Text Clustering Algorithm

K-mean is a common clustering algorithm in the TC domain; it was introduced by James MacQueen and other scholars (MacQueen et al. 1967). K-mean TC uses the number of clusters K and the initial cluster centroids to identify the related documents in each group or cluster using a similarity measurement. The similarity values between each TD and the cluster centroids are used to iteratively update the cluster centroids until the termination criterion is met (Abualigah et al. 2016a). K-mean initially assigns random cluster centroids and partitions a set of TDs D = (d1, d2, d3, ..., dn) into a subset of K clusters. The algorithm uses the maximum similarity to assign each document to the most similar cluster centroid (Alghamdi et al. 2014; Forsati et al. 2013; Jensi and Jiji 2016; Maitra and Ramler 2012). The k-mean procedure is presented in Algorithm 1.

Algorithm 1 K-mean clustering algorithm (Abualigah et al. 2016a)
1: Input: A collection of text documents D and the number of clusters K.
2: Output: Assignment of D to K clusters.
3: Termination criteria
4: Randomly choose K documents as cluster centroids C = (c1, c2, ..., cK).
5: Initialize matrix A as zeros.
6: for all di in D do
7:   let j = argmax over k ∈ {1, ..., K} of Cos(di, ck).
8:   Assign di to cluster j: A[i][j] = 1.
9: end for
10: Update the cluster centroids.
11: End
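A minimal Python sketch of Algorithm 1, assuming the documents are given as TF-IDF row vectors and using cosine similarity for the assignment step (the toy data and K are assumptions):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors; guards against zero vectors.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def kmeans_text(D, K, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Step 4: choose K documents at random as initial centroids.
    C = D[rng.choice(len(D), K, replace=False)].copy()
    for _ in range(iters):
        # Steps 6-9: assign each document to its most similar centroid.
        labels = np.array([max(range(K), key=lambda k: cosine(d, C[k])) for d in D])
        # Step 10: recompute each non-empty centroid as the mean of its documents.
        for k in range(K):
            if np.any(labels == k):
                C[k] = D[labels == k].mean(axis=0)
    return labels, C

D = np.random.default_rng(1).random((12, 6))   # toy TF-IDF matrix: 12 docs x 6 terms
labels, centroids = kmeans_text(D, K=3)
print(labels)
```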

K-mean has been used to cluster large, high-dimensional, and sparse TDs, where a new step in the k-mean algorithm process was proposed to calculate the weight of features in each cluster so as to discriminate the important (informative) features (Wu et al. 2015). The experimental results showed that the proposed clustering algorithm outperformed other comparative methods. A new approach reduces the dataset iteratively to improve the traditional algorithm for clustering web documents; the authors applied fuzzy c-mean and k-mean with Vector Space Model (VSM) techniques separately (Roul et al. 2015). The experiments were conducted on the Classic3 and Classic4 text datasets of Cornell University. The proposed approach (i.e., c-mean with VSM) outperformed the traditional algorithm in enhancing web document clustering, and the c-mean algorithm is better than k-mean in terms of the F-measure.


3.9.2 K-medoid Text Clustering Algorithm

The processes of the k-medoid and the k-mean TD clustering algorithms are similar. The only difference is that the k-medoid clustering algorithm uses the document closest to the cluster center as the centroid, whereas the k-mean clustering algorithm uses the average of the documents (vectors) that belong to the cluster to represent the cluster centroid (Aggarwal and Zhai 2012; Liu and Xiong 2011; Nanda and Panda 2014). The k-medoid clustering algorithm can effectively improve the performance of the clustering technique (Aggarwal and Zhai 2012). The process of the k-medoid clustering algorithm is as follows. Select K random documents as the initial centroids of the K clusters. Assign each document to the most similar cluster according to the similarity between the documents and the cluster centroids, and update the cluster centers. Check the quality of the clusters: if it has increased, keep the replacement; otherwise, reject it. Repeat the above process until the quality of the clustering remains unchanged (Liu and Xiong 2011). The k-medoid clustering algorithm has two disadvantages. The first is that it needs a large number of iterations to reach convergence and is slow to execute, because each iteration requires the computation of similarity or distance measures. The second is that k-medoid clustering algorithms are not well suited to sparse text collections: in a large, non-uniformly distributed collection of documents, the texts share few words in common, and the similarity value between such document pairs is very small (Aggarwal and Zhai 2012).
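The swap-and-test procedure just described can be sketched as follows; this is a simplified, PAM-style illustration using total cosine similarity as the quality criterion, not a faithful reproduction of any reviewed implementation.

```python
import numpy as np

def total_similarity(D, medoids):
    # Quality of a clustering: sum over documents of the cosine similarity
    # to the most similar medoid (higher is better).
    sims = D @ D[medoids].T
    norms = np.linalg.norm(D, axis=1)
    sims /= np.outer(norms, norms[medoids]) + 1e-12
    return sims.max(axis=1).sum()

def kmedoid(D, K, seed=0):
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(len(D), K, replace=False))
    best = total_similarity(D, medoids)
    improved = True
    while improved:                    # repeat until quality stops changing
        improved = False
        for i in range(K):
            for cand in range(len(D)):  # try replacing medoid i with document cand
                if cand in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = cand
                q = total_similarity(D, trial)
                if q > best:            # keep the replacement only if quality rises
                    medoids, best, improved = trial, q, True
    return medoids

D = np.random.default_rng(2).random((15, 5))   # toy TF-IDF matrix
print(kmedoid(D, K=3))
```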

3.10 Meta-heuristics Algorithms for Text Document Clustering Technique

The recent works on partitional TD clustering using metaheuristic algorithms are summarized in the following subsections.

3.10.1 Genetic Algorithm

GA-based clustering is an evolutionary algorithm inspired by the process of natural selection. With respect to text clustering, the genetic population represents a set of solutions, each chromosome (solution) represents an assignment of all the documents in the dataset, each gene represents a document belonging to a specific cluster, and the fitness function is used to evaluate the solutions and judge which one is the best (Karaa et al. 2016). GA begins with a set of solutions (the population). Solutions from one population are used to generate a new, improved population, the aim being to find a population better than the old one. The solutions selected to generate new solutions (i.e., via crossover and mutation) are chosen according to their fitness.


A method based on GA is applied to TC using an ontology strategy and a thesaurus strategy (Song et al. 2009). Two hybrid methods were applied using various similarity measures, and the GA combined with the proposed similarity measures enhanced the performance of the TC method. A new technique for TC was proposed based on partitioning a dataset into subset groups and applying GA separately to each cluster rather than to the entire dataset (Akter and Chung 2013). GA was applied to the partitions separately to avoid the local minima, which is the main problem when using GA. The proposed GA achieved superior performance compared with the previously used approaches. Recently, a new approach has been developed to deal with text data structures, i.e., the MEDLINE abstract dataset, based on the combination of GA with VSM and an agglomerative algorithm (Karaa et al. 2016). Experiments were conducted on a subset of the MEDLINE dataset, which is used in real applications. The proposed method could be applied to any text dataset to enhance information retrieval.
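Under the encoding described above (one gene per document holding its cluster label), a GA for clustering might be sketched as follows; the fitness definition (mean cosine similarity to the assigned centroid), population size, and rates are illustrative assumptions rather than the settings of the reviewed papers.

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.random((20, 8))            # toy TF-IDF matrix: 20 docs x 8 terms
K, POP, GENS = 3, 30, 50

def fitness(chrom):
    # Mean cosine similarity of each document to its cluster centroid.
    total = 0.0
    for k in range(K):
        docs = D[chrom == k]
        if len(docs) == 0:
            continue
        c = docs.mean(axis=0)
        total += (docs @ c / (np.linalg.norm(docs, axis=1) * np.linalg.norm(c) + 1e-12)).sum()
    return total / len(D)

pop = rng.integers(0, K, size=(POP, len(D)))  # each chromosome assigns every document
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:POP // 2]]   # keep the fitter half
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = int(rng.integers(1, len(D)))               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mut = rng.random(len(D)) < 0.05                  # mutation: reassign some genes
        child[mut] = rng.integers(0, K, int(mut.sum()))
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```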

3.10.2 Harmony Search Algorithm

Harmony search (HS) is a music-based optimization algorithm. It was inspired by the observation that the aim of music is to search for a perfect state of harmony; harmony in music is analogous to optimality in an optimization method, and the search process in the HS optimization algorithm can be compared to a jazz musician's improvisation process. With respect to text clustering, the HS algorithm has a set of solutions (harmonies), each musician represents a document belonging to a specific cluster, a play represents an iteration, and a note serves as the fitness function used to evaluate the solutions and find the best harmony (global optimum) (Geem et al. 2001). HS begins with a set of solutions. In each iteration, the algorithm seeks a better solution based on the following steps: (i) initialize the problem and the HS parameters, (ii) initialize the harmony memory (HM), (iii) improvise a new solution, (iv) update the HM, and (v) check the stopping criterion. The HS algorithm is proposed as a novel approach to the data clustering problem (Moh’d Alia et al. 2011). The proposed algorithm proceeds in two stages. The first stage explores the search space using the HS algorithm to identify the optimal cluster centroids, which are evaluated using the c-mean objective function. In the second stage, the best cluster centroids are used as the initial cluster centroids for the c-mean clustering algorithm. The experiments were conducted using standard benchmark datasets, and the proposed HS algorithm reduced the difficulty of selecting the initial cluster centroids. HS is also used for the document clustering technique: in Devi et al. (2015), the authors proposed a concept factorization method to improve TD clustering. An experiment was conducted on a sample comprising 50 documents, and the proposed method outperformed the other methods.


A novel TD clustering algorithm based on the HS algorithm is used for document clustering with improved computational time (Forsati et al. 2013). The experimental results showed that the HS algorithm is comparable with other global search algorithms.
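A compact sketch of the five HS steps listed above, applied to the same label-vector encoding of a clustering solution; the HMS, HMCR, and PAR values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
D = rng.random((20, 8))                     # toy TF-IDF matrix
K, HMS, HMCR, PAR, ITERS = 3, 10, 0.9, 0.3, 200

def fitness(sol):
    # Mean cosine similarity of documents to their cluster centroids.
    total = 0.0
    for k in range(K):
        docs = D[sol == k]
        if len(docs):
            c = docs.mean(axis=0)
            total += (docs @ c / (np.linalg.norm(docs, axis=1) * np.linalg.norm(c) + 1e-12)).sum()
    return total / len(D)

# (i)-(ii) initialize the parameters and the harmony memory (HM).
HM = rng.integers(0, K, size=(HMS, len(D)))
for _ in range(ITERS):                      # (v) stopping criterion: fixed iterations
    # (iii) improvise a new harmony, one decision variable (document) at a time.
    new = np.empty(len(D), dtype=int)
    for i in range(len(D)):
        if rng.random() < HMCR:             # memory consideration
            new[i] = HM[rng.integers(HMS), i]
            if rng.random() < PAR:          # pitch adjustment: move to a random cluster
                new[i] = rng.integers(K)
        else:                               # random selection
            new[i] = rng.integers(K)
    # (iv) update HM: replace the worst harmony if the new one is better.
    scores = np.array([fitness(h) for h in HM])
    worst = int(np.argmin(scores))
    if fitness(new) > scores[worst]:
        HM[worst] = new

print(round(fitness(max(HM, key=fitness)), 3))
```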

3.10.3 Particle Swarm Optimization Algorithm

In computer science, particle swarm optimization (PSO) is a computational optimization method that optimizes a problem by iteratively trying to improve candidate solutions with respect to a given measure of quality. With respect to text clustering, PSO solves the TC problem with a population of candidate solutions, each encoding the arrangement of all the documents; each particle element represents a document belonging to a specific cluster. The particles move around the search space according to a simple mathematical formula over the particle's position and velocity, and the fitness function is used to evaluate the solutions and judge which one is the best (Eberhart et al. 1995). The PSO algorithm generates particles with random positions, and each candidate solution, called a particle, is evaluated by the fitness function. In PSO, a solution contains several single entities (features). Each solution determines its movement by combining aspects of its historical information according to its own current and best fitness. The next iteration takes place after all solutions have moved. Finally, the solutions, like a flock of birds collectively searching for food, will likely reach the optimal fitness value (Bharti and Singh 2016b). The PSO algorithm is inspired by social behaviors, such as swarming, flocking, and herding, and by natural phenomena in vertebrates. The PSO algorithm is applied to TC to generate a globally optimal solution (Cui et al. 2005). The authors used four text datasets and compared PSO, k-mean, and a hybridized PSO with k-mean. The hybridized PSO with k-mean improved the TC results in comparison with the PSO and k-mean algorithms. A new environmental factor for the PSO algorithm (EPSO) is proposed to improve TC performance (Song et al. 2014a). The work was done in two stages to utilize the environmental PSO to solve the TC problem. In the first stage, the evolutionary PSO factor is introduced as a pure search technology to reach multiple local optimal solutions. In the second stage, different types of information are considered to enhance the global search. Such a method can efficiently overcome the defect of PSO, i.e., being trapped in a local optimal solution. Random parts of datasets were used in the experiment, and the results were compared with those of other state-of-the-art methods; EPSO performs better than the comparative clustering algorithms in most cases. Two objective functions were also proposed to solve the clustering problem based on the ideas of data density and cohesion: a partitional clustering method is proposed based on a multi-objective optimization problem (Armano and Farmani 2016). The purpose of that study is to obtain well-separated, similar, and dense clusters.


These objective functions are the essence of the multi-objective PSO algorithm, which is designed for the automatic clustering of massive unlabeled datasets. Experimental results were compared with other comparative algorithms in the clustering domain. The results revealed that the proposed method achieved accurate clusters and outperformed the other comparative methods in most cases.
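For reference, the position and velocity update that drives the PSO variants reviewed in this subsection can be sketched as follows (a generic continuous PSO step with assumed coefficients; the reviewed papers add their own refinements):

```python
import numpy as np

rng = np.random.default_rng(5)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Classic PSO update: inertia + cognitive pull toward the particle's own
    # best position + social pull toward the swarm's best position.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Toy usage: 5 particles, each encoding 4 continuous decision variables.
x = rng.random((5, 4)); v = np.zeros((5, 4))
pbest = x.copy(); gbest = x[0]
x, v = pso_step(x, v, pbest, gbest)
print(x)
```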

3.10.4 Cuckoo Search Algorithm

In the domain of operations research, cuckoo search (CS) is a meta-heuristic optimization algorithm inspired by the obligate brood parasitism of several cuckoo species, which lay their eggs in the nests of other host birds. With respect to text clustering, some host birds can engage in direct conflict with the intruding cuckoos. For example, if a host bird (representing a solution that groups a set of documents into a subset of clusters) discovers that the eggs (each egg representing a document belonging to a specific cluster) are not its own, it will either throw these alien eggs away or simply abandon its nest and build a new nest elsewhere (Yang and Deb 2010). CS is a population-based algorithm that works on the reproduction of cuckoo birds (the population). Cuckoos normally lay their fertilized eggs in other cuckoos' nests in the hope of producing better offspring. The fitness of a solution is determined from the values of the objective function, and the main aim is to use new, better solutions to replace bad solutions in the nests. Cuckoo search is thus an optimization algorithm inspired by cuckoos laying their eggs in the nests of other host birds (Manikandan and Selvarajan 2014). The original CS algorithm has been applied to solve the text data clustering problem (Zaw and Mon 2015), where it outperformed the k-mean and PSO algorithms. An efficient method is introduced to improve the data clustering technique based on the cuckoo optimization algorithm (COA) and the fuzzy cuckoo optimization algorithm (FCOA) (Amiri and Mahmoudi 2016). The authors used COA to cluster a set of data objects into a subset of clusters, with a fuzzy logic technique used to obtain optimal results. The algorithm generates a random solution for the cuckoo population and calculates the cost function for each solution; finally, fuzzy logic is used to obtain the optimal solution. The performance of the proposed algorithms was evaluated and compared with other algorithms, showing that the proposed algorithm improves the performance of data clustering. The CS algorithm has also been proposed for the data clustering technique because of its easy implementation and stable convergence characteristics (Saida et al. 2014). The experiments were assessed using four different benchmark datasets from the UCI Machine Learning Repository. The cuckoo search algorithm improved the performance of data clustering by obtaining the best values in three cases compared with popular and recent algorithms.


3.10.5 Ant Colony Optimization Algorithm

In computer science and operations research, the ant colony optimization (ACO) algorithm is a probabilistic optimization technique for solving computational problems that can be reduced to finding good paths through graphs. ACO is one of the family of ant colony algorithms within swarm intelligence methods, and it constitutes a metaheuristic optimization process. With respect to text clustering, ACO aims to find an optimal path in a graph based on the behavior of ants (each ant representing a solution that groups a set of documents into a subset of clusters) seeking a path between their colony (representing a document belonging to a specific cluster) and a source of food (the cluster centroid) (Dorigo and Di Caro 1999). The ACO algorithm is a probabilistic technique used to find good paths; it mimics the behavior of ants searching between their colony and food. The ACO metaheuristic algorithm is applied to solve the TC problem (Machnik 2007). ACO is considered a flexible and scalable algorithm based on multi-agent cooperation. An experiment was conducted on a text dataset, and the algorithm showed good clustering quality, speed, and flexibility in determining the number of clusters. Two methods for ACO-based clustering were reviewed in Handl and Meyer (2007). The first method directly imitated the clustering behavior observed in real ant colonies. In the second method, the clustering process was formulated as an optimization problem, and ACO was used to find the optimal or near-optimal clusters. The ACO algorithm was also applied to improve the performance of the TC technique by assigning high term weights to the important features during the preprocessing mechanism (Rajeswari and GunaSekaran 2015). The proposed algorithm performed better in terms of accuracy and computation time.

3.10.6 Artificial Bee Colony Optimization Algorithm

The Artificial Bee Colony (ABC) optimization algorithm model consists of three groups of bees: employed bees, scouts, and onlookers. In this algorithm, there is only one employed bee for each food source. With respect to text clustering, the number of food sources around the hive is equal to the number of employed bees in the colony (the colony representing a solution that groups a set of documents into a subset of clusters). Employed bees (each bee representing a document belonging to a specific cluster) move to their food source (the cluster centroid), come back to the hive, and dance in its vicinity. An employed bee whose food source has been abandoned becomes a scout and begins to search for a different food source. Onlookers watch the dances of the employed bees and choose food sources depending on the dances (Karaboga and Basturk 2007).


ABC is a population-based algorithm consisting of three groups of bees: employed bees, onlookers, and scouts. The main steps of the ABC algorithm are as follows: initial food sources are produced for all employed bees (solutions); each employed bee goes to a food source and discovers a neighboring source, which is evaluated using the fitness function; each onlooker then chooses one of the sources according to the dances and goes to that source (Bharti and Singh 2014a). ABC is one of the most recently introduced metaheuristic optimization algorithms (Karaboga et al. 2014). It mimics the intelligent foraging behavior of honey bees and is considered a powerful optimization algorithm for reaching a globally optimal solution and partitioning a large set of TDs into groups. ABC was suggested for improving the text document clustering technique by selecting optimal text cluster centers (Bharti and Singh 2014a). A chaotic map model is used as a local search model to enhance the exploitation capability of ABC. Experiments were conducted on the Reuters-21578 and Classic4 text datasets, and the proposed ABC outperformed other competitive algorithms. The ABC algorithm is also applied to the selection of appropriate text cluster centers for creating TD clusters (Bharti and Singh 2015a). The authors incorporated two local search models, namely gradient search and chaotic local search, into the pure ABC to enhance its exploitation capability; the proposed algorithm is named chaotic gradient artificial bee colony. Its performance was tested on three different text datasets, namely, WebKB, Classic4, and Reuters-21578, and the results were compared with the global-best-guided ABC, a variant of the proposed method, memetic ABC, and the k-mean clustering algorithm. The proposed methods showed encouraging improvements in the quality of clusters and convergence speed.
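The employed/onlooker/scout cycle summarized above can be sketched roughly as follows; this bare-bones version folds the onlooker's probabilistic choice into the same greedy neighbor probe, and the limit, bounds, and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
SN, DIM, LIMIT = 10, 4, 20                # food sources, dimensions, scout trigger

def f(x):                                 # toy objective to minimize
    return np.sum(x ** 2)

X = rng.uniform(-5, 5, (SN, DIM))         # one food source per employed bee
trials = np.zeros(SN, dtype=int)

for _ in range(100):
    # Employed/onlooker bees: probe a neighbor of each source and keep it
    # greedily if it is better; otherwise count a failed trial.
    for i in range(SN):
        j = rng.integers(DIM)
        k = (i + 1 + rng.integers(SN - 1)) % SN      # a different source
        cand = X[i].copy()
        cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        if f(cand) < f(X[i]):
            X[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    # Scout phase: abandon exhausted sources and search new random ones.
    worn = trials >= LIMIT
    X[worn] = rng.uniform(-5, 5, (int(worn.sum()), DIM))
    trials[worn] = 0

print(X[np.argmin([f(x) for x in X])])
```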

3.10.7 Firefly Algorithm

The Firefly Algorithm (FA) is a population-based optimization technique based on the flashing patterns and behavior of fireflies. FA works under three idealized rules: fireflies are unisex, so one firefly (solution) will be attracted to other solutions regardless of their sex (with respect to text clustering, a firefly represents a document belonging to a specific cluster); attractiveness is proportional to brightness (the fitness function), and both decrease as the distance increases (the distance between each document and the cluster centroid is computed). Thus, for any two flashing fireflies, the one with the worse fitness will move toward the better one; if no solution is better than the selected one, it moves randomly. The fitness of a solution is defined by the landscape of the objective function (Yang and He 2013). A new clustering algorithm, termed Gravity Firefly Clustering (GF-CLUST), is presented for dynamic TD clustering using the firefly algorithm (Mohammed et al. 2016). The benefits of the proposed clustering algorithm include the ability to recognize the appropriate number of document clusters for a given text document collection, which is one of the main problems in the document clustering domain. It identifies documents that are sufficiently strong to be cluster centers.


It then produces document clusters based on cosine similarity. Experiments were conducted on various text document datasets to evaluate the efficiency and effectiveness of the proposed algorithm. The proposed algorithm outperformed existing clustering techniques, such as k-mean and PSO, in terms of purity, F-measure, and entropy. Moreover, the number of documents retrieved in each cluster by the proposed algorithm is close to the actual number of documents in each cluster.
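The attraction rule described at the start of this subsection is commonly written as x_i ← x_i + β0·exp(−γ·r²)·(x_j − x_i) + α·ε; the following is a minimal sketch of one such move (coefficients assumed for the example):

```python
import numpy as np

rng = np.random.default_rng(7)

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1):
    # Move the dimmer firefly xi toward the brighter firefly xj; the pull
    # decays with the squared distance between them, plus a random step.
    r2 = np.sum((xi - xj) ** 2)
    return xi + beta0 * np.exp(-gamma * r2) * (xj - xi) + alpha * rng.normal(size=xi.shape)

xi, xj = rng.random(4), rng.random(4)
print(firefly_move(xi, xj))
```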

3.11 Hybrid Techniques for Text Document Clustering

The HS algorithm is applied to the TC algorithm to find the optimal clusters (Forsati et al. 2013). HS is hybridized with the k-mean algorithm in three ways to obtain a better clustering technique, combining the explorative efficiency of HS with the strengths of the k-mean algorithm through its local search. The experimental results showed better clustering performance for the hybridized HS with k-mean compared with other algorithms. A fuzzy control is used in GA with a new hybrid semantic similarity measure for the TC method using the vector space model, and a new semantic space model for dimension reduction was introduced (Song et al. 2014b). The authors used GA to balance convergence to the optimal solution with the exploration of new solutions so as to reach a globally optimal solution. An experiment was conducted on text datasets; the fuzzy control using GA and the hybrid semantic similarity measure performed better in comparison with the other comparative algorithms. A new meta-heuristic method is proposed using a CS algorithm combined with k-mean for web document clustering, which reduces the time spent exploring web pages (Manikandan and Selvarajan 2014). This method aims to increase the quality of web search results by presenting them in groups. The proposed method improves the search engine clustering results compared with k-mean, c-mean, fuzzy PSO, and GA. A hybridized GA with the PSO algorithm is applied to enhance the TC technique (Song et al. 2015), where the authors applied GA to obtain a global solution as an initialization strategy for PSO and normalized the search space for updating the PSO positions to obtain a proper range. Experiments were conducted on text datasets, and the results showed that the hybridization method improved the TC method compared with other algorithms. A hybridized bee colony optimization (BCO) with the k-mean algorithm, namely improved bee colony optimization (IBCO), is applied to improve BCO and to enhance the effectiveness of the TC technique by introducing cloning and fairness concepts into the BCO algorithm (Forsati et al. 2015). These features resulted in strong exploration and exploitation, effectively guiding the search process and obtaining high-quality solutions. Experiments were conducted on text datasets, and the improved BCO could be successfully applied to the TC technique in comparison with other global search algorithms.


A hybridized HS with the k-mean algorithm is proposed to improve the TC method. It applies the cover factor and concept factorization to extract good features and find optimal clusters (Devi et al. 2015). The experiment was conducted on a subset of text datasets, and the results, evaluated by precision, recall, and F-measure, showed better performance when the hybridization and the cover factor were used together. A new PSO-based CS clustering algorithm is proposed to combine the strengths of CS and PSO, with the CS solutions based on the PSO solutions (Zaw and Mon 2015). An experiment was conducted on web document datasets, and the results showed that the proposed algorithm performed well in the TC area according to the F-measure values. An efficient hybrid optimization algorithm, Tabu-KM, is proposed to solve the data clustering problem (Saida et al. 2014). This algorithm combines the optimization characteristics of tabu search with the local search ability of the k-mean algorithm; it creates a tabu space to escape the trap of local optima and find optimal solutions. The proposed algorithm was tested on several standard datasets, and its performance was compared with popular algorithms in the TC domain. The experimental results showed that the robustness and efficiency of the proposed algorithm make it suitable for the data clustering problem. A new hybrid strategy based on the firefly algorithm (FA) and the k-mean clustering method is proposed to improve the data clustering technique (Hassanzadeh and Meybodi 2012). In this method, the authors first used FA to find optimal cluster centroids and then initialized the k-mean clustering algorithm with the FA cluster centers to refine them. This method was tested using five datasets, with the experimental results analyzed based on the optimization of the fitness function (intra-cluster distance). The proposed clustering algorithm achieved better results than the comparative algorithms, namely, PSO, hybrid PSO with k-mean (KPSO), and k-means. As one of the most successful clustering methods, the fuzzy C-means (FCM) algorithm is the theoretical basis of other fuzzy clustering analysis methods. However, FCM is basically a local search algorithm and may therefore sometimes fail to find the global solution. To overcome the disadvantages of the FCM algorithm, a krill herd (KH) algorithm with an elitism strategy, called KHE, is proposed to solve the text clustering problem (Li et al. 2015). The elitism strategy has a powerful ability to prevent the krill solutions from degrading. In addition, well-selected parameter values are employed in the KHE method instead of values originating from krill biology. Experiments were conducted on text datasets, and the results show that KHE is indeed a good alternative for solving general benchmark problems and fuzzy clustering analyses.


3.12 The Krill Herd Algorithm

This section presents an overview of the reviewed studies and their key conclusions. A new sensorless control scheme is designed based on the KH algorithm for a permanent magnet synchronous motor (PMSM) (Younesi and Tohidi 2015). The authors optimized the speed and torque parameters of the PI controllers to minimize the steady-state speed tracking error. They utilized a discrete-time model, which does not depend on the initial conditions of the integrators, and tested it under variable operating conditions. The simulation results demonstrated that the proposed KH algorithm performs satisfactorily under load disturbances and is robust against variations in machine parameters. The proportional integral derivative (PID) control problem is tackled to obtain optimal PID parameters using the KH algorithm (Alikhani et al. 2013). The plant error over time is captured by three cost functions, and the KH algorithm is utilized to find the optimal solutions to these cost functions by searching the PID parameter space for the global minimum and fine-tuning the controller effectively. The numerical results showed that this strategy solved the problem. The KH algorithm is adopted to tackle some issues in peer-to-peer (P2P) networks, where an n-grams technique that splits the query strings into substrings is employed for searching the nodes based on the query (Amudhavel et al. 2015b). The authors concluded that the reduction in the search process led to decreases in bandwidth usage, thereby reducing network congestion and traffic. Challenges in the Smart Phone Ad Hoc Network (SPAN), such as synchronization, bandwidth, and power conservation, are tackled with the KH algorithm (Amudhavel et al. 2015a). The intensification strategy of the algorithm is employed to resolve the issues in the SPAN; bandwidth and power consumption were reduced efficiently with the help of the intensification strength of the KH algorithm. The emergence of artificial neural networks (ANNs) as an important tool in artificial intelligence and optimization can hardly be overemphasized. The performance of the KH algorithm is studied for training ANNs, and its results are compared with other stochastic methods on the same problem instances drawn from the UCI machine learning repository (Kowalski and Łukasik 2015). The authors concluded that the KH algorithm produced promising results in terms of classification error (CE), sum of squared errors (SSE), and the time taken for training the ANN. In another development, the KH algorithm was used to improve the network structure of an ANN, in which the process was based on three components (i.e., movement induced by other krill, random diffusion, and foraging motion) along with a genetic operator (Lari and Abadeh 2014b). The authors showed that their method produced better performance, in terms of higher classification accuracy and lower mean squared error, compared with previous methods on the same UCI instances. A similar study that adapts the KH algorithm for ANN training was proposed by the same authors (Lari and Abadeh 2014a).
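The three motion components mentioned above are usually combined into a position update of the form dX_i/dt = N_i + F_i + D_i; the following is a schematic sketch only (the stand-in definitions of the induced and foraging terms are simplifications for illustration, not the full KH operators):

```python
import numpy as np

rng = np.random.default_rng(8)

def kh_step(X, food, Ct=0.5, w=0.4, Dmax=0.01):
    # Schematic krill update: movement induced by other krill (approximated
    # here by attraction to the swarm mean), foraging motion toward the food
    # position, and random physical diffusion.
    induced = w * (X.mean(axis=0) - X)      # stand-in for neighbor-induced motion
    foraging = w * (food - X)               # attraction toward the food (centroid)
    diffusion = Dmax * rng.uniform(-1, 1, X.shape)
    return X + Ct * (induced + foraging + diffusion)

X = rng.random((10, 4))                     # 10 krill positions in a 4-D space
food = X.mean(axis=0)                       # "food" taken as the swarm centroid
X = kh_step(X, food)
print(X)
```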


3.12.1 Modifications of Krill Herd Algorithm

3.12.1.1 Binary-Based Krill Herd Algorithm

A modification of the KH algorithm based on the binary concept is presented to tackle the feature selection problem (Rodrigues et al. 2014), in which the krill individuals are positioned according to binary coordinates. The proposed technique outperformed three other approaches when evaluated on several feature selection datasets.

3.12.1.2 Chaotic-Based Krill Herd Algorithm

The motivation to improve the performance of the KH algorithm has led some researchers in the domain to modify its components using concepts from chaos theory. The performance of the KH algorithm is improved using chaotic maps in a variant named CPKH to solve numerical optimization problems within a limited time (Wang et al. 2013). The authors integrated chaos sequences into the KH algorithm in order to enhance its global search capability; the method can accelerate the global convergence speed while maintaining the strong robustness of the classical KH algorithm. In another study (Wang et al. 2014b), various chaotic maps are utilized to change the three main movements of the KH algorithm during the search process. The modification with an appropriate chaotic map performs better than the original KH and obtains results comparable with other existing approaches.

3.12.1.3 Fuzzy-Based Krill Herd Algorithm

A fuzzy-based KH algorithm is proposed in which a fuzzy system is utilized to fine-tune the parameters during the search cycle, striking a balance between the exploration and exploitation capabilities while solving problems (Fattahi et al. 2014). The fuzzy system assigns suitable values to the variables that control the amount of local exploitation and global exploration in order to enhance the search capability of the algorithm. When the authors tested the proposed fuzzy-based KH algorithm on different benchmark functions, it showed higher performance than the others.

3.12.1.4 Discrete-Based Krill Herd Algorithm

The application of a discrete KH algorithm to graph-based network search and optimization problems was proposed in Sur and Shukla (2014), where the continuous nature of the algorithm was modified to cope with optimization problems over discrete variables. The performance of the KH algorithm is better in terms of decision making and path planning for graph-based networks and other discrete event-based optimization problems.


The flexible job-shop scheduling problem (FJSSP) is solved with a discrete KH method (Wang et al. 2016), where some heuristic strategies are incorporated in order to develop an effective solution approach. The approach is divided into two stages. In the first stage, a multilayer coding strategy is employed in the preprocessing phase, thereby enabling the KH method to deal with the FJSSP. Then, the proposed discrete KH method is utilized to find the best scheduling sequence. The authors also introduced an elitism strategy into their method to drive the krill swarm toward better solutions during the search. When the performance of the proposed discrete KH algorithm was evaluated using two FJSSP instances, the results clearly demonstrated that the approach outperformed some existing state-of-the-art algorithms.

3.12.1.5 Opposition-Based Krill Herd Algorithm

The concept of opposition-based learning is employed in a modification of the classical KH algorithm to tackle the optimal capacitor placement problem (Sultana and Roy 2016). The modification is aimed at improving the performance of the algorithm in terms of generating good results, as well as enhancing the speed of convergence. The authors employed 33-bus and 69-bus radial distribution networks to test the performance of their modified KH algorithm. The proposed technique achieved good convergence characteristics and obtained better-quality results compared with the classical KH algorithm and other existing nature-inspired techniques from the literature. A similar strategy was implemented to tackle the same problem in another study (Sultana and Roy 2015).

3.12.1.6 Other Modifications

A modification of the KH algorithm is presented using a linearly decreasing step to strike a balance between exploration and exploitation in solving optimization problems (Li et al. 2014). When the authors verified the effectiveness of their improved KH algorithm on 20 benchmark functions, they found that their modified version performs better than the original KH algorithm. The KH algorithm is also modified to tackle global optimization functions (Guo et al. 2014). Better solutions were generated based on the exchange of information between the top krill during the motion calculation process. The authors utilized a new Lévy flight distribution and an elitism scheme to update the motion calculation of the KH algorithm and accelerate the global convergence speed, while preserving the robustness of the basic KH algorithm. When several standard benchmark functions were employed to verify the efficiency of the method, the proposed algorithm showed superior performance compared with the original KH algorithm and was highly competitive with other robust population-based methods.


3.12.2 Hybridizations of Krill Herd Algorithm

This section reviews the hybridization of the KH algorithm with other operators in order to improve its performance when utilized for complex optimization problems.

3.12.2.1 Hybridization with Local Search-Based Algorithm

Population-based approaches such as the KH algorithm are normally strong in scanning multiple regions of the search space at the same time; however, they are not as efficient in navigating each region deeply. In contrast, a local search-based algorithm is very efficient in deeply navigating a single region of the search space but cannot scan the whole space. Therefore, hybridizing local search within a population-based search algorithm is very promising for complementing the advantages of both types in a single optimization algorithm (Blum and Roli 2003). The main aim of this type of hybridization is to strike a balance between wide-ranging exploration and nearby exploitation of the problem search space.

3.12.2.2 Hybridization with Population-Based Algorithm

This section summarizes the hybridization of the KH algorithm with operators of other population-based algorithms. A hybrid algorithm termed the biogeography-based krill herd (BBKH) algorithm is proposed to solve complex optimization problems (Wang et al. 2014). The authors improved the performance of the KH algorithm by introducing a new krill migration (KM) operator during the update process to tackle the problems efficiently. The KM operator enhances the exploitation capability by allowing the krill to cluster around the best solutions in the later stages of the search. As shown in the experimental results, the performance of the novel BBKH approach is better than that of the basic KH and other optimization algorithms. The performance of the KH algorithm is also improved with a Lévy-flight mechanism for tackling optimization tasks within a limited computing time (Wang et al. 2013). The authors integrated a new local Lévy-flight (LLF) operator into the krill updating process to improve its efficiency and reliability in solving global numerical optimization problems. The LLF operator is used to enhance exploitation, allowing individuals to carefully exploit the search space. In addition, an elitism scheme is applied to maintain the best krill during the updating process. The performance of this LKH version was tested on 14 standard benchmark functions, showing that the algorithm is superior to the standard KH algorithm and highly competitive with other existing population-based methods.


3.12.3 Multi-objective Krill Herd Algorithm

The literature has shown the success of the KH algorithm as a single-objective optimization algorithm for problems with continuous search spaces. Researchers have thus extended its usage to multi-objective settings. A multi-objective binary KH algorithm was developed for classification problems, in which the classical KH algorithm was converted into a binary algorithm (Mohammadi et al. 2014a). The breast cancer dataset was employed by the authors to test the performance of their methods, and the accuracy achieved by their algorithm was higher than that achieved by existing algorithms. A multi-objective KH algorithm and a modified KH approach with beta-distribution-based tuning of the inertia weight (MOKH) were developed for electromagnetic optimization (Ayala et al. 2012). A similar study was evaluated on a brushless direct current (DC) wheel motor benchmark (Brisset and Brochet 2005); it was found that the MOKH algorithms showed promising performance on a multi-objective constrained brushless DC motor design problem.

3.13 Critical Analysis

This study is concerned with the TDCP. A wide variety of approximation methods have been investigated, and researchers normally dwell on two main problems, namely, TC and FS. In the literature, these problems still exhibit several weaknesses, including uninformative features, high-dimensional spaces, term weighting schemes, getting trapped in local search, weak exploration search, premature convergence, the balance between local and global search, and others (Abd-Alsabour 2014; Bharti and Singh 2014a, 2016a; Song et al. 2014a; Wang et al. 2015). The following points illustrate the main challenges and gaps faced by this study in the TD clustering domain: First, any optimization problem, including the TFSP, which is known to be a highly demanding constrained combinatorial optimization problem, is difficult to address using classical techniques (Shah and Patel 2016). Consequently, the complexity of adapting an optimization algorithm to the TFSP requires considerable effort. Building an optimization algorithm for the TFSP is a time-consuming process that requires knowledge of the problem-specific domain together with an exceptional ability to choose the right operators for the optimization method to deal with this type of problem (Bharti and Singh 2014b, 2016b; Lu et al. 2015). Hence, several weaknesses in the FS domain include the following: (i) Most researchers have focused on supervised FS methods to eliminate the uninformative features; these methods deal with documents classified under a certain class label. (ii) Metaheuristic algorithms have been successfully applied to solve several optimization problems. Thus, the unsupervised FS problem should be modeled as an optimization problem in a precise manner, so that the problem can be solved easily and effectively. An overview of the text feature selection algorithms is shown in Table 3.1. This table lists each feature selection method by name, the evaluation measures used to evaluate its performance, the datasets used, and the corresponding reference.

Table 3.1 Overview of the feature selection algorithms

Method: (GA) GA+TF-IDF. Measures: precision, recall, and F-measure. Datasets: Reuters-21578 and Classic3. Authors: Uğuz (2011).
Description: In this research, FS and feature extraction are used to enhance the performance of text categorization: (i) the information gain method is used to rank each term; (ii) all terms are then processed using GA, PCA, and FE methods. The proposed algorithm is able to produce high categorization effectiveness as measured by precision, recall, and F-measure.

Method: (GA) GA-TC. Measures: accuracy and F-measure. Datasets: Reuters-21578. Authors: Shamsinejadbabki and Saraee (2012).
Description: A new unsupervised FS method is introduced to evaluate all terms in each group. In addition, a new modified TV method is used for evaluating groups of terms, and the most valuable groups of terms are found using GA. The proposed algorithm is very promising and obtained better average accuracy and F1-measure than the conventional term variance method.

Method: (GA) FSGATC. Measures: F-measure and accuracy. Datasets: Reuters-21578 and 20Newsgroups. Authors: Abualigah et al. (2016b).
Description: The UFSP is solved using GA, namely, FSGATC. This method creates a new subset of informative features in order to facilitate the clustering process. The proposed algorithm obtains better results in comparison with k-mean clustering without feature selection.

Method: (HS) HS. Measures: accuracy. Datasets: real-world problems. Authors: Diao (2014).
Description: A novel FS is presented using HS to find a new subset of features. This method generates and reduces the features by classifier ensembles, combining FS with an adaptive classifier ensemble. The proposed algorithm obtains better results than leading methodologies in their respective areas.

Method: (HS) FSHSTC. Measures: F-measure and accuracy. Datasets: Reuters-21578 and 20Newsgroups. Authors: Abualigah et al. (2016c).
Description: The UFSP is solved using HS, namely, FSHSTC. This method creates a new subset of informative features in order to facilitate the clustering process. The proposed algorithm obtains better results in comparison with GA and with k-mean clustering without feature selection.

Method: (PSO) MBPSO. Measures: sensitivity, specificity, and accuracy. Datasets: emails. Authors: Zhang et al. (2014).
Description: A novel spam detection method is proposed. This method focuses on reducing the false positive error through four steps: (i) FS based on the wrapper approach is used to extract crucial features; (ii) a decision tree is used as the classifier model; (iii) a cost matrix is used to give weights; and (iv) binary PSO with a mutation operator is applied as the search strategy. The proposed algorithm can reduce the false positive error without compromising the accuracy measures.

Method: (PSO) H-FSPSOTC. Measures: accuracy, precision, recall, and F-measure. Datasets: Reuters-21578 and 20Newsgroups. Authors: Abualigah and Khader (2017).
Description: In this research, the feature selection problem is solved using a hybrid of the PSO algorithm with genetic operators. K-means text clustering is used to assess the effectiveness of the feature subsets. The proposed algorithm obtained better results in comparison with the original GA, HS, and PSO.

Method: (PSO) PSO model. Measures: accuracy and MACf1. Datasets: KNN. Authors: Lu et al. (2015).
Description: In this research, three PSO models based on the standard PSO algorithm are proposed to increase the effectiveness of text FS; two of these models are based on the asynchronously improved PSO model with both a functional constriction factor and a functional inertia weight. The proposed models obtained better results than the compared models, both in the effect on text classification and across different dimensions.

The majority of researchers focus on the development of unsupervised feature selection using meta-heuristic optimization algorithms or dimension reduction techniques. Second, all of the current weighting schemes used in text FS and TC do not consider the number of features in each document (De Vries 2014; Mohammed et al. 2014b; Prakash et al. 2014). In the literature on the TFSP, considerable effort has been exerted to further develop the current weighting scheme, which plays a major role in evaluating text features for improving the FS technique. The text FS method needs a weighting scheme that considers the weight (score) of the informative text features in each document. Such a weighting scheme would facilitate the process of selecting the informative features in each document by distinguishing between the features through their weights (scores). Third, the common DR techniques apply a static rule to reduce the dimensional space of the documents. This static rule is unsuitable for all types of documents because each document has different contents. This can be improved by providing a dynamic TF value for each feature that is compatible with the size of its effect on the document, in order to determine the useless features. The dynamic DF value for each feature should likewise cover all documents, compatible with the size of its effect on these documents in partnership with its TF value. A smaller dimensional space results in better TC performance (obtaining accurate clusters) and less computation time (Bharti and Singh 2015b; van der MLJP and van den HH 2009; Yao et al. 2012). Fourth, metaheuristic algorithms have been successfully applied to improve the TC technique. TC algorithms in the literature still face several problems, such as convergence speed, premature convergence, and the local exploitation and global exploration aspects of the algorithms; important problems are still being addressed by metaheuristic algorithms in TC (Bharti and Singh 2015a; Devi et al. 2015; Forsati et al. 2015; Song et al. 2015; Zaw and Mon 2015). Improvements to evolutionary algorithm operators cannot produce universal improvements over the existing improved operators; this condition is justified by the "no free lunch" theorem (Wolpert and Macready 1997). The behavior of several metaheuristic optimization algorithms is investigated to identify the algorithm whose behavior is most compatible with the procedures of the TC technique. The KH is a suitable algorithm for the TC technique based on the similarities between the behavior of the KHA and the behavior of the TC technique; moreover, the KH algorithm has obtained better results in solving many problems in comparison with other common algorithms published in the literature. The compatibility between the KHA and document clustering involves searching for the closest food (centroid) (Bolaji et al. 2016). Density is one of the main factors that influence the success of all the algorithms used to achieve coherent and similar groups. If the documents in the same cluster are relevant, then the density is high, and vice versa; if the krill individuals are close to the food, then the density is high, and vice versa. Thus, the behavior of krill individuals is exactly the same as that of TDs (both form a swarm). An overview of the text clustering algorithms is shown in Table 3.2. This table lists each text clustering method by name, the evaluation measures used to evaluate its performance, the datasets used, and the corresponding reference.

Measures

Similarity

F-measure

F-measure

Precision, recall, F-measure

Method

(GA) GA-LSI

(GA) EA-TC

(GA) EGA

(GA) Fuzzy-GA

20Newsgroups, Reuters21578

MEDLINA

Reuters21578

Text datasets

Datasets

Table 3.2 Overview of the text clustering algorithms Authors

Song et al. (2014b)

Karaa et al. (2016)

Akter and Chung (2013)

Song et al. (2009)

(continued)

In this research, GA is proposed for solving the TC problem using an ontology method to gain the advantage of thesaurus-based and corpus-based ontology. Moreover, a corpus-based ontology is proposed by using transformed latent semantic indexing (LSI). The proposed algorithm enhances the performance of TC technique in comparison with original GA and k-means In this research, GA is proposed for solving TC problem using partitions of datasets separately. Finally, they apply another version of the GA to overcome the earlier one. The proposed algorithm got better performance in comparison with the previous approaches In this research, GA and agglomerative algorithm are combined with the VSM as an extension of an evolutionary algorithm. This new approach is proposed for clustering MEDLINE abstracts datasets. The proposed combined algorithm obtained better results in comparison with the other algorithm A fuzzy control is added to the basic GA with a novel hybrid semantic similarity measure for solving TC problem. Since the common clustering algorithms have been used VSM to represent the document, they use the semantic similarity measures to solve the problem of ignored conceptual relationships between related terms. The proposed algorithm demonstrated that it got better performance in comparison with conventional GA

Description and Results

50 3 Literature Review

Precision, recall, F-measure

Precision, recall, F-measure

(HS) FCM

(HS) HSM

(HS) HSCLUST

(PSO) PSO-TC ADDC

Measures

Accuracy

Method

Table 3.2 (continued)

Datasets

TREC

News, politics, wap, massage, dmoz

Text datasets

Iris, bupa, glass, diabets

Description and Results

(continued)

A new method based on HS is proposed to addressing TC problem. This method consists of two stages. (i), the HS is used to explore the search space of the given dataset and (ii) the best cluster centers are used as the initial cluster centers for the c-means algorithm. The proposed algorithm minimized the difficulty of determining an initialization for the c-means clustering algorithms and got better results Devi et al. (2015) In this research, hybridization of HS algorithm with K-means is proposed, namely, HSM, to get better results. Generally, in conventional clustering methods, the documents are clustered based on TF-IDF. The proposed algorithm obtained better results by hybridizing K-means with HSM Forsati et al. (2013) HS algorithm is proposed to solve the TC problem by modeling the clustering problem as an optimization problem. First, they proposed the original HS to find near-optimal clusters. Second, HS is combined with the K-means algorithm to achieve better clustering. The proposed algorithm can find better clusters and the quality of clusters is comparable in terms of F-measure, Entropy, Purity, and ADDC Cui et al. (2005) PSO algorithm is presented for TC problem. They used the local search of the K-means algorithm and combined with the global search of PSO clustering algorithm. The proposed algorithm obtained better results through generating more compact clustering results than the K-means algorithm

Moh’d Alia et al. (2011)

Authors

3.13 Critical Analysis 51

Datasets

Data clustering

(PSO) PSO-CA Accuracy

Reuters21578, 20Newsgroups

Data dataset

(PSO) EPSO

(PSO) MCPSO Accuracy

Measures

F-measure

Method

Table 3.2 (continued)

Zaw and Mon (2015)

Armano and Farmani (2016)

Song et al. (2014a)

Authors

(continued)

PSO algorithm is proposed for TC (EPSO). Its process is divided into two stages. (i) the environmental factor is imported in this method as a refined search technology to obtain the multi-local optimums, and (ii) the manifold information is considered to improve its global search capacity. The proposed algorithm performs better than other comparative clustering algorithms in most cases In this research, partitional clustering is defined as a multiobjective optimization problem to obtain well-separated clusters. So, two objective functions are proposed based on the concepts of data connectivity and cohesion to be an efficient multiobjective. The proposed algorithm can obtain the optimal number of clusters and outperforms the other methods In this research, a hybrid algorithm is presented based on combining PSO and CS algorithms to obtain the strengths of them in solving the TC problem. The solutions of new cuckoos are inherited from the solutions of PSO. The proposed algorithm performed well in web document clustering area

Description and Results

52 3 Literature Review

Table 3.2 (continued)

Mohammed et al. (2016), GF-CLUST (FA); datasets: 20Newsgroups, Reuters21578, TREC; measures: purity, entropy, F-measure: A new clustering algorithm, termed Gravity Firefly Clustering (GF-CLUST), is utilized for dynamic TC. GF-CLUST uses the FA, and one of its features is identifying the appropriate number of clusters, which is a challenging problem in TC. The proposed algorithm outperforms existing clustering techniques, such as K-means, PSO, and pGSCM.

Senthilnath et al. (2011), FA; datasets: balance, cancer, iris, wine, etc.; measure: error rate (CEP): The FA is used for clustering; it is applied to benchmark data problems, and the performance of the FA is compared with other methods in the literature. The proposed algorithm can be used efficiently for clustering in comparison with other algorithms.

Hassanzadeh and Meybodi (2012), KFA (FA with k-means); datasets: iris, wdpc, sonar, glass, wine; measure: error rate: A new hybrid approach using the FA and k-means is presented for the data clustering problem. The FA is used to find the cluster centroids, and the k-means algorithm carries out the clustering process. The proposed algorithm obtains better results in comparison with the original algorithms.


The table lists the text clustering methods with their names, the evaluation measures used to assess their performance, the datasets used, and the references. The majority of researchers focus on developing unsupervised text document clustering using meta-heuristic optimization algorithms (i.e., clustering technologies, basic algorithms, modified algorithms, and hybrid algorithms). Fifth, in the literature on objective functions, a difference between the values of the similarity and distance measures for TC was observed (Forsati et al. 2013, 2015; Ghanem and Alhanjouri 2014). An experiment was conducted to compare the values of these measures using the same document and the same cluster centroid (Abualigah et al. 2016a). The results show that the two measures disagree on the same case: the similarity value between the document and the cluster centroid is 0.9, while the distance in the same case is 0.5. The similarity measure indicates that the document is very close to the centroid (i.e., 0.9), whereas the distance measure indicates that it is only moderately close (i.e., 0.5). This inconsistency can be addressed by combining both measures in a multi-objective function.
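To make the inconsistency concrete, the following minimal Python sketch (with hypothetical vectors; the 0.9 and 0.5 values above come from the cited experiment, not from this snippet) computes the cosine similarity and the Euclidean distance for the same document-centroid pair and shows that the two measures can suggest different degrees of closeness:

import numpy as np

# Hypothetical document and centroid vectors (illustration only).
doc = np.array([0.9, 0.1, 0.4])
centroid = np.array([0.7, 0.2, 0.7])

# Cosine similarity: a high value suggests the pair is very similar.
cos_sim = doc @ centroid / (np.linalg.norm(doc) * np.linalg.norm(centroid))

# Euclidean distance: a non-negligible value suggests the pair is less close.
euc_dist = np.linalg.norm(doc - centroid)

print(f"cosine similarity: {cos_sim:.3f}")    # ~0.930
print(f"euclidean distance: {euc_dist:.3f}")  # ~0.374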

3.14 Summary

This chapter presents a comprehensive review of TC, FS, and the KH algorithm. In particular, the algorithms and common techniques that have been used to improve the performance of the TDCP and TFSP have been discussed in detail. The KH algorithm is studied in terms of its areas of application, modifications, and hybridizations for different formulations of combinatorial optimization problems. Conclusively, it can be seen from these studies that many interesting research directions remain to be explored.

References

Abd-Alsabour, N. (2014). A review on evolutionary feature selection. In 2014 European Modelling Symposium (EMS) (pp. 20–26). Abualigah, L. M., & Khader, A. T. (2017). Unsupervised text feature selection technique based on hybrid particle swarm optimization algorithm with genetic operators for the text clustering. The Journal of Supercomputing, 1–23. Abualigah, L. M., Khader, A. T., & Al-Betar, M. A. (2016a, July). Multi-objectives based text clustering technique using k-mean algorithm, 1–6. https://doi.org/10.1109/CSIT.2016.7549464 Abualigah, L. M., Khader, A. T., & Al-Betar, M. A. (2016b, July). Unsupervised feature selection technique based on genetic algorithm for improving the text clustering. In 7th International Conference on Computer Science and Information Technology (CSIT) (pp. 1–6). https://doi.org/10.1109/CSIT.2016.7549453 Abualigah, L. M., Khader, A. T., & Al-Betar, M. A. (2016c, July). Unsupervised feature selection technique based on harmony search algorithm for improving the text clustering. In 7th International Conference on Computer Science and Information Technology (CSIT) (pp. 1–6). https://doi.org/10.1109/CSIT.2016.7549456


Agarwal, P., & Mehta, S. (2015). Comparative analysis of nature inspired algorithms on data clustering. In 2015 IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN) (pp. 119–124). Aggarwal, C. C., & Zhai, C. (2012). A survey of text clustering algorithms. In Mining text data (pp. 77–128). Berlin: Springer. Akter, R., & Chung, Y. (2013). An evolutionary approach for document clustering. IERI Procedia, 4, 370–375. Alghamdi, H. M., Selamat, A., & Karim, N. S. A. (2014). Improved text clustering using k-mean bayesian vectoriser. Journal of Information & Knowledge Management, 13(03), 1450026. Alikhani, A., Suratgar, A. A., Nouri, K., Nouredanesh, M., & Salimi, S. (2013). Optimal PID tuning based on krill herd optimization algorithm. In 2013 3rd International Conference on Control, Instrumentation and Automation (ICCIA) (pp. 11–15). Amiri, E., & Mahmoudi, S. (2016). Efficient protocol for data clustering by fuzzy cuckoo optimization algorithm. Applied Soft Computing, 41, 15–21. Amudhavel, J., Kumarakrishnan, S., Gomathy, H., Jayabharathi, A., Malarvizhi, M., & Kumar, K. P. (2015a). An scalable bandwidth reduction and optimization in smart phone ad hoc network (span) using krill herd algorithm. In Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015) (p. 26). Amudhavel, J., Sathian, D., Raghav, R., Pasupathi, L., Baskaran, R., & Dhavachelvan, P. (2015b). A fault tolerant distributed self organization in peer to peer (p2p) using krill herd optimization. In Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology (ICARCSET 2015) (p. 23). Armano, G., & Farmani, M. R. (2016). Multiobjective clustering analysis using particle swarm optimization. Expert Systems with Applications, 55, 184–193. Ayala, H. V. H., Segundo, E. H. V., Mariani, V. C., & dos Santos Coelho, L. (2012). Multiobjective Krill Herd algorithm for electromagnetic optimization. Evolutionary Computation, 6(2), 182– 197. Bharti, K. K., & Singh, P. (2014a). Chaotic artificial bee colony for text clustering. In 2014 Fourth International Conference of Emerging Applications of Information Technology (EAIT) (pp. 337– 343). Bharti, K. K., & Singh, P. K. (2014b). A three-stage unsupervised dimension reduction method for text clustering. Journal of Computational Science, 5(2), 156–169. Bharti, K. K., & Singh, P. K. (2015a). Chaotic gradient artificial bee colony for text clustering. Soft Computing, 25, 1–14. Bharti, K. K., & Singh, P. K. (2015b). Hybrid dimension reduction by integrating feature selection with feature extraction method for text clustering. Expert Systems with Applications, 42(6), 3105– 3114. Bharti, K. K., & Singh, P. K. (2016a). Chaotic gradient artificial bee colony for text clustering. Soft Computing, 20(3), 1113–1126. Bharti, K. K., & Singh, P. K. (2016b). Opposition chaotic fitness mutation based adaptive inertia weight BPSO for feature selection in text clustering. Applied Soft Computing, 43, 20–34. Blum, C., & Roli, A. (2003). Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys, 35(3), 268–308. Bolaji, A. L., Al-Betar, M. A., Awadallah, M. A., Khader, A. T., & Abualigah, L. M. (2016). A comprehensive review: krill herd algorithm (KH) and its applications. Applied Soft Computing, 49, 437–446. Brisset, S., & Brochet, P. (2005). 
Analytical model for the optimal design of a brushless DC wheel motor. COMPEL-The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, 24(3), 829–848. Cui, X., Potok, T. E., & Palathingal, P. (2005). Document clustering using particle swarm optimization. In Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005 (pp. 185–191). Cunningham, P. (2008). Dimension reduction. Machine learning techniques for multimedia (pp. 91–112). Berlin: Springer.


De Vries, C. M. (2014). Document clustering algorithms, representations and evaluation for information retrieval. Deepa, M., Revathy, P., & Student, P. (2012). Validation of document clustering based on purity and entropy measures. International Journal of Advanced Research in Computer and Communication Engineering, 1(3), 147–152. Devi, S. S., Shanmugam, A., & Prabha, E. D. (2015). A proficient method for text clustering using harmony search method. Diao, R. (2014). Feature selection with harmony search and its applications (Unpublished doctoral dissertation). Aberystwyth University. Dorigo, M., & Di Caro, G. (1999). Ant colony optimization: a new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation, 1999. CEC 99 (Vol. 2, pp. 1470–1477). Eberhart, R. C., Kennedy, J., et al. (1995). A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science (Vol. 1, pp. 39–43). Fan, Z., Chen, S., Zha, L., & Yang, J. (2016). A text clustering approach of Chinese news based on neural network language model. International Journal of Parallel Programming, 44(1), 198–206. Fattahi, E., Bidar, M., & Kanan, H. R. (2014). Fuzzy krill herd optimization algorithm. In 2014 First International Conference on Networks & Soft Computing (ICNSC) (pp. 423–426). Fodor, I. K. (2002). A survey of dimension reduction techniques. Technical Report UCRL-ID148494, Lawrence Livermore National Laboratory. Forsati, R., Mahdavi, M., Shamsfard, M., & Meybodi, M. R. (2013). Efficient stochastic algorithms for document clustering. Information Sciences, 220, 269–291. Forsati, R., Keikha, A., & Shamsfard, M. (2015). An improved bee colony optimization algorithm with an application to document clustering. Neurocomputing, 159, 9–26. Geem, Z. W., Kim, J. H., & Loganathan, G. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76(2), 60–68. Ghanem, O., & Alhanjouri, M. (2014). Evaluating the effect of preprocessing in Arabic documents clustering (Unpublished doctoral dissertation). Master’s thesis, Computer Engineering Department, Islamic University of Gaza, Palestine. Gomaa, W. H., & Fahmy, A. A. (2013). A survey of text similarity approaches. International Journal of Computer Applications, 68(13), 0975–8887. Guo, L., Wang, G.-G., Gandomi, A. H., Alavi, A. H., & Duan, H. (2014). A new improved krill herd algorithm for global numerical optimization. Neurocomputing, 138, 392–402. Hafez, A. I., Hassanien, A. E., Zawbaa, H. M., & Emary, E. (2015). Hybrid monkey algorithm with krill herd algorithm optimization for feature selection. In 2015 11th International Computer Engineering Conference (ICENCO) (pp. 273–277). Handl, J., & Meyer, B. (2007). Ant-based and swarm-based clustering. Swarm Intelligence, 1(2), 95–113. Hassanzadeh, T., & Meybodi, M. R. (2012). A new hybrid approach for data clustering using firefly algorithm and k-means. In 2012 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP) (pp. 007–011). Holland, J. H. (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor. Hong, S.-S., Lee, W., & Han, M.-M. (2015). The feature selection method based on genetic algorithm for efficient of text clustering and text classification. International Journal of Advances in Soft Computing & Its Applications, 7(1), 22–40. Jaganathan, P., & Jaiganesh, S. (2013). 
An improved k-means algorithm combined with particle swarm optimization approach for efficient web document clustering. In 2013 International Conference on Green Computing, Communication and Conservation of Energy (ICGCE) (pp. 772–776). Jajoo, P. (2008). Document clustering (Unpublished doctoral dissertation). Indian Institute of Technology Kharagpur.


Jensi, R., & Jiji, G. W. (2016). An improved krill herd algorithm with global exploration capability for solving numerical function optimization problems and its application to data clustering. Applied Soft Computing, 46, 230–245. Kadhim, A. I., Cheah, Y., Ahamed, N. H., Salman, L. A., et al. (2014). Feature extraction for cooccurrence-based cosine similarity score of text documents. In 2014 IEEE Student Conference on Research and Development (SCOReD) (pp. 1–4). Karaa, W. B. A., Ashour, A. S., Sassi, D. B., Roy, P., Kausar, N., & Dey, N. (2016). Medline text mining: An enhancement genetic algorithm based approach for document clustering. Applications of Intelligent Optimization in Biology and Medicine (pp. 267–287). Berlin: Springer. Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 39(3), 459–471. Karaboga, D., Gorkemli, B., Ozturk, C., & Karaboga, N. (2014). A comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artificial Intelligence Review, 42(1), 21–57. Kowalski, P. A., & Łukasik, S. (2015). Training neural networks with krill herd algorithm. Neural Processing Letters, 1–13. Lari, N. S., & Abadeh, M. S. (2014a). A new approach to find optimum architecture of ANN and tuning its weights using krill-herd algorithm. In 2014 International Congress on Technology, Communication and Knowledge (ICTCK) (pp. 1–7). Lari, N. S., & Abadeh, M. S. (2014b). Training artificial neural network by krill-herd algorithm. In 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference (ITAIC) (pp. 63–67). Li, Y., Luo, C., & Chung, S. M. (2008). Text clustering with feature selection by using statistical data. IEEE Transactions on Knowledge and Data Engineering, 20(5), 641–652. Li, J., Tang, Y., Hua, C., & Guan, X. (2014). An improved Krill Herd algorithm: krill herd with linear decreasing step. Applied Mathematics and Computation, 234, 356–367. Li, Z.-Y., Yi, J.-H., & Wang, G.-G. (2015). A new swarm intelligence approach for clustering based on krill herd with elitism strategy. Algorithms, 8(4), 951–964. Liao, H., Xu, Z., & Zeng, X.-J. (2014). Distance and similarity measures for hesitant fuzzy linguistic term sets and their application in multi-criteria decision making. Information Sciences, 271, 125–142. Lin, Y.-S., Jiang, J.-Y., & Lee, S.-J. (2014). A similarity measure for text classification and clustering. IEEE Transactions on Knowledge and Data Engineering, 26(7), 1575–1590. Lin, K.-C., Zhang, K.-Y., Huang, Y.-H., Hung, J. C., & Yen, N. (2016). Feature selection based on an improved cat swarm optimization algorithm for big data classification. The Journal of Supercomputing, 72(8), 1–12. Liu, F., & Xiong, L. (2011). Survey on text clustering algorithm. In 2011 IEEE 2nd International Conference on Software Engineering and Service Science (pp. 901–904). van der Maaten, L. J. P., Postma, E. O., & van den Herik, H. J. (2007). Dimensionality reduction: A comparative review (Technical Report). Lu, Y., Liang, M., Ye, Z., & Cao, L. (2015). Improved particle swarm optimization algorithm and its application in text feature selection. Applied Soft Computing, 35, 629–636. Lv, Y., & Zhai, C. (2011). Lower-bounding term frequency normalization. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (pp. 7–16). Machnik, Ł. (2007). A document clustering method based on ant algorithms. Task Quarterly, 11(1–2), 87–102.
MacQueen, J., et al. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (Vol. 1, pp. 281–297). Maitra, R., & Ramler, I. P. (2012). A k-mean-directions algorithm for fast clustering of data on the sphere. Journal of Computational and Graphical Statistics, 19(2), 377–396. Manikandan, P., & Selvarajan, S. (2014). Data clustering using cuckoo search algorithm (CSA). In Proceedings of the Second International Conference on Soft Computing for Problem Solving (SocProS 2012), December 28–30, 2012 (pp. 1275–1283).


Moayedikia, A., Jensen, R., Wiil, U. K., & Forsati, R. (2015). Weighted bee colony algorithm for discrete optimization problems with application to feature selection. Engineering Applications of Artificial Intelligence, 44, 153–167. Mohammadi, A., Abadeh, M. S., & Keshavarz, H. (2014a). Breast cancer detection using a multiobjective binary Krill Herd algorithm. In 2014 21th Iranian Conference on Biomedical Engineering (ICBME) (pp. 128–133). Mohammed, A. J., Yusof, Y., & Husni, H. (2014b). Weight-based firefly algorithm for document clustering. In Proceedings of the First International Conference on Advanced Data and Information Engineering (DaEng-2013) (pp. 259–266). Mohammed, A. J., Yusof, Y., & Husni, H. (2016). GF-CLUST: A nature-inspired algorithm for automatic text clustering. Journal of Information & Communication Technology, 15(1). Moh’d Alia, O., Al-Betar, M. A., Mandava, R., & Khader, A. T. (2011). Data clustering using harmony search algorithm. In International Conference on Swarm, Evolutionary, and Memetic Computing (pp. 79–88). Murugesan, A. K., & Zhang, B. J. (2011). A new term weighting scheme for document clustering. In 7th International Conference Data Min. (DMIN 2011-WORLDCOMP 2011), Las Vegas, Nevada, USA. Nanda, S. J., & Panda, G. (2014). A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm and Evolutionary Computation, 16, 1–18. Nebu, C. M., & Joseph, S. (2016). A hybrid dimension reduction technique for document clustering. Innovations in bio-inspired computing and applications (pp. 403–416). Berlin: Springer. Paik, J. H. (2013). A novel TF-IDF weighting scheme for effective ranking. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 343–352). Prakash, B., Hanumanthappa, M., & Mamatha, M. (2014). Cluster based term weighting model for web document clustering. In Proceedings of the Third International Conference on Soft Computing for Problem Solving (pp. 815–822). Qian, G., Sural, S., Gu, Y., & Pramanik, S. (2004). Similarity between Euclidean and cosine angle distance for nearest neighbor queries. In Proceedings of the 2004 ACM Symposium on Applied Computing (pp. 1232–1237). Rajeswari, M. R., & GunaSekaran, G. (2015). Improved ant colony optimization towards robust ensemble co-clustering algorithm (IACO-RECCA) for enzyme clustering. Lateral, 4(4). Rodrigues, D., Pereira, L. A., Papa, J. P., & Weber, S. A. (2014). A binary krill herd approach for feature selection. In 2014 22nd International Conference on Pattern Recognition (ICPR) (pp. 1407–1412). Roul, R. K., Varshneya, S., Kalra, A., & Sahay, S. K. (2015). A novel modified apriori approach for web document clustering. Computational intelligence in data mining-volume 3 (Vol. 3, pp. 159–171). Berlin: Springer. Saida, I. B., Nadjet, K., & Omar, B. (2014). A new algorithm for data clustering based on cuckoo search optimization. Genetic and evolutionary computing (pp. 55–64). Berlin: Springer. Senthilnath, J., Omkar, S., & Mani, V. (2011). Clustering using firefly algorithm: Performance study. Swarm and Evolutionary Computation, 1(3), 164–171. Shafiei, M., Wang, S., Zhang, R., Milios, E., Tang, B., Tougas, J., et al. (2006). A systematic study of document representation and dimension reduction for text clustering. Shafiei, M., Wang, S., Zhang, R., Milios, E., Tang, B., Tougas, J., & Spiteri, R. (2007). Document representation and dimension reduction for text clustering. 
In 2007 IEEE 23rd International Conference on Data Engineering Workshop (pp. 770–779). Shah, F. P., & Patel, V. (2016). A review on feature selection and feature extraction for text classification. In International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET) (pp. 2264–2268). Shah, N., & Mahajan, S. (2012). Document clustering: A detailed review. Int’l Journal of Applied Information Systems, 4(5), 30–38.


Shamsinejadbabki, P., & Saraee, M. (2012). A new unsupervised feature selection method for text clustering based on genetic algorithms. Journal of Intelligent Information Systems, 38(3), 669–684. Singh, P., & Sharma, M. (2013). Text document clustering and similarity measures. Department of Computer Science & Engineering. Singhal, A., Buckley, C., & Mitra, M. (2017). Pivoted document length normalization. In ACM SIGIR Forum (Vol. 51, pp. 176–184). Song, W., Li, C. H., & Park, S. C. (2009). Genetic algorithm for text clustering using ontology and evaluating the validity of various semantic similarity measures. Expert Systems with Applications, 36(5), 9095–9104. Song, W., Ma, W., & Qiao, Y. (2014a). Particle swarm optimization algorithm with environmental factors for clustering analysis. Soft Computing, 1–11. Song, W., Liang, J. Z., & Park, S. C. (2014b). Fuzzy control GA with a novel hybrid semantic similarity strategy for text clustering. Information Sciences, 273, 156–170. Song, W., Qiao, Y., Park, S. C., & Qian, X. (2015). A hybrid evolutionary computation approach with its application for optimizing text document clustering. Expert Systems with Applications, 42(5), 2517–2524. Sorzano, C. O. S., Vargas, J., & Montano, A. P. (2014). A survey of dimensionality reduction techniques. arXiv:1403.2877. Sultana, S., & Roy, P. K. (2015). Oppositional Krill Herd algorithm for optimal location of distributed generator in radial distribution system. International Journal of Electrical Power & Energy Systems, 73, 182–191. Sultana, S., & Roy, P. K. (2016). Oppositional Krill Herd algorithm for optimal location of capacitor with reconfiguration in radial distribution system. International Journal of Electrical Power & Energy Systems, 74, 78–90. Sur, C., & Shukla, A. (2014). Discrete krill herd algorithm: A bio-inspired metaheuristic for graph-based network route optimization. Distributed computing and internet technology (pp. 152–163). Berlin: Springer. Tang, B., Shepherd, M., Milios, E., & Heywood, M. I. (2005). Comparing and combining dimension reduction techniques for efficient text clustering. In Proceedings of the SIAM International Workshop on Feature Selection for Data Mining (pp. 17–26). Tsai, C.-F., Eberle, W., & Chu, C.-Y. (2013). Genetic algorithms in feature and instance selection. Knowledge-Based Systems, 39, 240–247. Tunali, V., Bilgin, T., & Camurcu, A. (2016). An improved clustering algorithm for text mining: Multi-cluster spherical k-means. International Arab Journal of Information Technology (IAJIT), 13(1), 12–19. Uğuz, H. (2011). A two-stage feature selection method for text categorization by using information gain, principal component analysis and genetic algorithm. Knowledge-Based Systems, 24(7), 1024–1032. van der Maaten, L. J. P., Postma, E. O., & van den Herik, H. J. (2009). Dimensionality reduction: A comparative review (Technical Report 2009-005). Tilburg, Netherlands: Tilburg Centre for Creative Computing, Tilburg University. Vergara, J. R., & Estévez, P. A. (2014). A review of feature selection methods based on mutual information. Neural Computing and Applications, 24(1), 175–186. Wang, G.-G., Deb, S., & Thampi, S. M. (2016). A discrete krill herd method with multilayer coding strategy for flexible job-shop scheduling problem. Intelligent systems technologies and applications (pp. 201–215). Berlin: Springer. Wang, G., Guo, L., Gandomi, A. H., Cao, L., Alavi, A. H., Duan, H., et al. (2013). Lévy-flight krill herd algorithm.
Mathematical Problems in Engineering, Article ID 682073, 14 pp. https://doi.org/10.1155/2013/682073 Wang, G.-G., Gandomi, A. H., & Alavi, A. H. (2013). A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes, 42(6), 962–978.


Wang, G.-G., Gandomi, A. H., & Alavi, A. H. (2014a). An effective krill herd algorithm with migration operator in biogeography-based optimization. Applied Mathematical Modelling, 38(9), 2454–2462. Wang, G.-G., Guo, L., Gandomi, A. H., Hao, G.-S., & Wang, H. (2014b). Chaotic krill herd algorithm. Information Sciences, 274, 17–34. Wang, Y., Liu, Y., Feng, L., & Zhu, X. (2015). Novel feature selection method based on harmony search for email classification. Knowledge-Based Systems, 73, 311–323. Wang, S., Lu, J., Gu, X., Du, H., & Yang, J. (2016). Semi-supervised linear discriminant analysis for dimension reduction and classification. Pattern Recognition, 57, 179–189. Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82. Wu, G., Lin, H., Fu, E., & Wang, L. (2015, October). An improved k-means algorithm for document clustering. In 2015 International Conference on Computer Science and Mechanical Automation (CSMA) (pp. 65–69). https://doi.org/10.1109/CSMA.2015.20 Yang, Y., & Pedersen, J. O. (1997). A comparative study on feature selection in text categorization. In Icml (Vol. 97, pp. 412–420). Yang, X.-S., & Deb, S. (2010). Engineering optimisation by cuckoo search. International Journal of Mathematical Modelling and Numerical Optimisation, 1(4), 330–343. Yang, X.-S., & He, X. (2013). Firefly algorithm: Recent advances and applications. International Journal of Swarm Intelligence, 1(1), 36–50. Yao, F., Coquery, J., & Lê Cao, K.-A. (2012b). Independent principal component analysis for biologically meaningful dimension reduction of large biological data sets. BMC Bioinformatics, 13(1), 1. Younesi, A., & Tohidi, S. (2015). Design of a sensorless controller for PMSM using krill herd algorithm. In 2015 6th Power Electronics, Drives Systems & Technologies Conference (PEDSTC) (pp. 418–423). Zaw, M. M., & Mon, E. E. (2015). Web document clustering by using PSO-based cuckoo search clustering algorithm. Recent advances in swarm intelligence and evolutionary computation (pp. 263–281). Berlin: Springer. Zhang, Y., Wang, S., Phillips, P., & Ji, G. (2014). Binary PSO with mutation operator for feature selection using decision tree applied to spam detection. Knowledge-Based Systems, 64, 22–31.

Chapter 4

Proposed Methodology

4.1 Introduction

This chapter presents and summarizes the proposed method that was used to achieve the research objectives of the present study, including (i) a new weight scheme, that is, length feature weight (LFW), in Sect. 4.4.1; (ii) three models for the TFSP to find the best algorithm for the FS problem in Sect. 4.5; (iii) a dynamic DR technique in Sect. 4.6; (iv) three models of the KHA for the TDCP in Sect. 4.8; (v) a new multi-objective function for enhancing the clustering decision of the local search algorithm in Sect. 4.8; (vi) experiments and results in Sect. 4.9; and (vii) the conclusion in Sect. 4.10.

4.2 Research Methodology Outline

Text preprocessing steps (i.e., tokenization, stop word removal, and stemming) were used to prepare the TDs by converting the content of each document from raw text into structured data. After the text features are extracted, the proposed methods proceed in two stages. (i) In the first stage, a new term weighting scheme is used to obtain an accurate weight score for the features within each document. Then, the text FS method is modeled and formulated as an optimization problem to improve the selection mechanism by adapting the basic GA, HS, and PSO to the FS problem. A new DR technique is applied to produce a new subset of useful features for improving the performance of TD clustering. After a new subset of low-dimensional informative features is obtained, the TD clustering technique is modeled and formulated as an optimization problem. (ii) In the second stage, the BKHA is adapted for the TDCP. The MKHA is adapted for the TDCP to improve the exploration capability of the BKHA. An enhanced hybrid, the HKHA, which combines the KHA with the k-mean algorithm for the TDCP, is used to enhance the exploitation capability.

Fig. 4.1 The research methodology stages

Finally, a multi-objective function is adapted for the hybrid TD clustering algorithm to enhance the local search decision. The remainder of this chapter provides a comprehensive explanation of the proposed work. The research methodology is outlined in more detail in Fig. 4.1.

4.3 Text Pre-processing

The proposed system is based on the text FS and TD clustering methods and thereby needs preprocessing for TD representation. The preprocessing is divided into four main steps, namely, (1) tokenization, (2) stop word removal, (3) stemming, and (4) text document representation (Bharti and Singh 2015b; Zaw and Mon 2015).

4.3.1 Tokenization

Tokenization is the task of cutting a document up into pieces (words), called tokens, possibly discarding certain characters, such as punctuation, in the process. These tokens are usually referred to as terms or words, but it is important to make a type/token distinction. A token is an instance of a sequence of characters in a document that is grouped together as a useful semantic unit. A type is the class of all tokens containing the same character sequence. A term is a type that is included in the information retrieval system's dictionary (Zhong et al. 2012).

4.3.2 Stop Words Removal

Stop words are common words, such as "in", "an", "that", and "some", as well as other frequently used, small functional words in the TD. These words need to be removed from text documents because they normally have high frequency, which decreases the performance of the TC technique. The stop-word list used here (see footnote 1) contains a total of 571 words (Bharti and Singh 2016).

4.3.3 Stemming

Stemming is the method of reducing inflected words to their word stem (root). Stemming is not the same as finding the morphological root; it usually suffices to map related words to the same stem, even if this stem is not itself a valid root.

1 http://www.unine.ch/Info/clef/.


The Porter stemmer (see footnote 2) is the most common stemming method used in text mining (Bharti and Singh 2015b; Zhong et al. 2012). All text preprocessing steps follow the Python NLTK demos for natural language text processing (see footnote 3).
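A minimal Python sketch of these three preprocessing steps using NLTK follows; as an assumption, NLTK's built-in English stop list and tokenizer stand in here for the 571-word list cited above:

import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize  # requires nltk.download('punkt')

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))  # requires nltk.download('stopwords')

def preprocess(document):
    tokens = word_tokenize(document.lower())                            # (1) tokenization
    tokens = [t for t in tokens
              if t not in stop_words and t not in string.punctuation]   # (2) stop word removal
    return [stemmer.stem(t) for t in tokens]                            # (3) stemming

print(preprocess("The krill herd algorithm clusters text documents."))
# -> ['krill', 'herd', 'algorithm', 'cluster', 'text', 'document']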

4.3.4 Text Document Representation

The vector space model (VSM) is a successful model used to represent TDs in a standard format (Salton et al. 1975); it was introduced in the early 1970s. Each document is represented as a vector of term weights to facilitate similarity calculation. Each term in the collection represents a dimension, and the weighted values improve the quality of the text analysis algorithm and reduce the time cost (De Vries 2014). The VSM is used in many text mining domains, such as information retrieval (Abualigah and Hanandeh 2015), text classification (Hong et al. 2015), and TC (De Vries 2014). Term weighting in the VSM displays the TDs in a standard format (Mahdavi and Abolhassani 2009), as shown in Eq. (4.1), and each document is represented as a vector, as shown in Eq. (4.2) (De Vries 2014; Forsati et al. 2013; Ghanem and Alhanjouri 2014). Equation (4.1) represents n documents and t terms in a common standard format using the VSM, as follows:

\[
VSM = \begin{bmatrix}
w_{1,1} & w_{1,2} & \cdots & w_{1,(t-1)} & w_{1,t} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
w_{(n-1),1} & w_{(n-1),2} & \cdots & w_{(n-1),(t-1)} & w_{(n-1),t} \\
w_{n,1} & w_{n,2} & \cdots & w_{n,(t-1)} & w_{n,t}
\end{bmatrix} \quad (4.1)
\]

\[
d_i = (w_{i,1}, w_{i,2}, \ldots, w_{i,j}, \ldots, w_{i,t}) \quad (4.2)
\]

4.4 Term Weighting Scheme

Term weighting is an important numerical statistic that assigns weights to document words (terms or features) for TD clustering processes according to the term frequency (Shah and Mahajan 2012). A popular term-weighting scheme used in text mining is the term frequency-inverse document frequency (TF-IDF) (Bharti and Singh 2016; Forsati et al. 2013; Moayedikia et al. 2015; Mohammed et al. 2014).

2 Porter stemmer: http://tartarus.org/martin/PorterStemmer/.
3 http://text-processing.com/demo/.

TF-IDF is a common weighting scheme used to calculate term weights in text mining for document representation; each document is represented as a vector of term weights (Wang et al. 2012). The term weight is calculated by Eq. (4.3):

\[
w_{i,j} = tf(i,j) \times \log(n / df(j)) \quad (4.3)
\]

where w_{i,j} represents the weight of term j in document i, tf(i,j) is the number of occurrences of term j in document i, n is the number of all documents in the dataset, and df(j) is the number of documents that contain term j (Bharti and Singh 2014).
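A small sketch of Eq. (4.3) follows, assuming each document is already a list of preprocessed tokens; the base-10 logarithm is chosen because it reproduces the TF-IDF values in Table 4.2 below:

import math
from collections import Counter

def tf_idf(docs):
    n = len(docs)
    df = Counter()                     # df(j): number of documents containing term j
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)              # tf(i, j): occurrences of term j in document i
        weights.append({t: tf[t] * math.log10(n / df[t]) for t in tf})
    return weights

docs = [["krill", "herd"], ["krill", "cluster"], ["text", "cluster"]]
print(tf_idf(docs)[0])  # {'krill': 0.176..., 'herd': 0.477...}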

4.4.1 The Proposed Weighting Scheme In text mining domains, the term weighting schemes are used to assign an appropriate weight score for the terms (features) of the documents to improve term classification (discrimination). A weighting scheme, that is, LFW, is proposed to obtain a better term weight (feature score) to facilitate and improve the FS process by distinguishing between informative and uninformative text features more efficiently. In the literature on term weighting schemes, TF-IDF is the most common weight scheme. This study focuses on improving the weakness of the current TF-IDF that affects the assessment of the terms weight of each document. Three main factors have been developed to improve the effectiveness of the weighting scheme for the unsupervised text FS technique, as follows: • First, TF value (tf (i, j)) is taken from the common weighing scheme (TF-IDf) without any modification. This factor is used to distinguish further between useful and useless features by considering the number of times each term occurs in each document instead of the binary factor (i.e., appear or does not appear in each document). • Second, the document frequency df value of each term is ignored and focused on the inverse document frequency (IDF) of each term. Also, the DF (df ) value of each term is not considered a major factor that magnifies the term weight in TF-IDF. Thus, a variable (DF), is added to exactly determine how many times the term appears in all of the documents. This variable affects the importance of a term by increasing its weight value (score) if it appears in many documents. This factor was used to make a normalized value for the weight score and it was not used to determine the weighting score thereby finding how many times each term appears in all of the documents. • Third, the number of terms in the current document (a) has not been added into the common weight schemes so far. One of the major objectives in adding this


variable to the LFW is to facilitate unsupervised FS. In one situation, a document with a large number of features will contain many uninformative features. In another, a document with a small number of features will contain few uninformative features. LFW therefore increases the weight of a term that appears in documents containing a limited number of terms.
• Fourth, the maximum TF (maxtf(i)) is an important factor that plays an essential role in assigning a better term weight (score). This factor is added to the LFW to moderate the importance of terms and to distinguish among a document's terms. Generally, if a document contains a large number of terms, then the importance of each term is smaller than in a document containing a small number of terms. The maximum TF increases the term weight in cases where the maximum TF is small in comparison with the other documents. LFW is formulated as Eq. (4.4):

\[
LFW_{i,j} = \frac{tf(i,j) \times df(j) / a_i}{maxtf(i)} \times \log\left(\frac{n}{df(j)}\right) \quad (4.4)
\]

where LFW_{i,j} is the weighting value of term j in document i, tf(i,j) is the TF of term j in document i, df(j) is the number of documents that contain feature j, a_i is the number of newly selected features in document i, maxtf(i) is the maximum TF in document i, and n is the number of all documents in D.
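Under the same assumptions as the TF-IDF sketch above, Eq. (4.4) can be sketched as follows; note that a_i is taken here as the number of distinct terms in document i, which is one plausible reading of "the number of selected features":

import math
from collections import Counter

def lfw(docs):
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        a_i = len(tf)                  # a_i: number of features in document i (assumption)
        max_tf = max(tf.values())      # maxtf(i)
        weights.append({t: (tf[t] * df[t] / a_i) / max_tf * math.log10(n / df[t])
                        for t in tf})
    return weights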

4.4.2 Illustrative Example

Tables 4.1, 4.2 and 4.3 present a toy example showing the advantage of LFW compared with the classic scheme (TF-IDF). The example uses eight documents with ten terms (features). The term frequencies of the eight documents are given in Table 4.1; the term weights computed using TF-IDF are shown in Table 4.2, and those computed using LFW by Eq. (4.4) are shown in Table 4.3. It is clear from this example that the proposed weighting scheme (LFW) distinguishes between a document's features more effectively than the classic term weighting scheme (TF-IDF). Consider the tenth feature in the second document: TF-IDF gives it a high weight (i.e., 1.806), meaning that the document frequency of the feature receives exaggerated consideration. LFW, in contrast, achieves the goal of the first factor by giving a low weight (i.e., 0.090) to a feature that appears in only a few documents. Consider the first feature in documents two, four, and five: TF-IDF gives the same weight in all three documents (i.e., 0.602, 0.602, and 0.602), meaning that the number of features at the document level is not taken into account. LFW achieves the goal of the second factor by giving different values (i.e., 0.120, 0.177, and 0.133) for the same feature, based on the number of features in each document.

Table 4.1 Terms frequency

      Term:  1  2  3  4  5  6  7  8  9  10
Doc1         5  0  0  0  0  0  0  0  0  0
Doc2         2  2  2  2  2  2  2  2  2  2
Doc3         0  0  2  0  0  0  0  0  0  0
Doc4         2  2  2  2  2  2  2  2  5  0
Doc5         2  2  2  2  2  2  2  2  2  0
Doc6         0  0  2  0  0  0  0  0  0  0
Doc7         0  0  2  0  0  0  0  0  0  0
Doc8         0  0  2  0  0  0  0  0  0  0

Table 4.2 Terms weight using TF-IDF

      Term:  1      2      3      4      5      6      7      8      9      10
Doc1         1.505  0      0      0      0      0      0      0      0      0
Doc2         0.602  0.850  0.114  0.850  0.850  0.850  0.850  0.850  0.850  1.806
Doc3         0      0      0.114  0      0      0      0      0      0      0
Doc4         0.602  0.850  0.114  0.850  0.850  0.850  0.850  0.850  2.125  0
Doc5         0.602  0.850  0.114  0.850  0.850  0.850  0.850  0.850  0.850  0
Doc6         0      0      0.114  0      0      0      0      0      0      0
Doc7         0      0      0.114  0      0      0      0      0      0      0
Doc8         0      0      0.057  0      0      0      0      0      0      0

Table 4.3 Terms weight using LFW

      Term:  1      2      3      4      5      6      7      8      9      10
Doc1         0.240  0      0      0      0      0      0      0      0      0
Doc2         0.120  0.127  0.039  0.127  0.127  0.127  0.127  0.127  0.127  0.090
Doc3         0      0      0.339  0      0      0      0      0      0      0
Doc4         0.117  0.056  0.017  0.056  0.056  0.056  0.056  0.056  0.141  0
Doc5         0.133  0.141  0.044  0.141  0.141  0.141  0.141  0.141  0.141  0
Doc6         0      0      0.339  0      0      0      0      0      0      0
Doc7         0      0      0.399  0      0      0      0      0      0      0
Doc8         0      0      0.008  0      0      0      0      0      0      0

In this case, the first feature in the second document received the lowest weight among all documents because of the high number of features in that document. Consider the fifth feature in documents four and five: TF-IDF gives the same weight in both (i.e., 0.850 and 0.850), meaning that the maximum term frequency at the document level is not taken into account. LFW achieves the goal of the third factor by giving different values (i.e., 0.056 and 0.141) for the same feature based on the maximum term frequency of each document. In this case, the fifth feature in the fourth document received a lower weight than in the fifth document because the maximum term frequency of document four is high (i.e., 5). It is worthwhile to decrease the weights of terms in document four because its larger maximum frequency reduces the importance of every individual feature.

4.5 Text Feature Selection Problem

The TFSP is formulated as an optimization problem to find an optimal subset of informative text features and thereby improve the TD clustering technique by creating a new subset with fewer dimensions and more informative text features.

4.5.1 Text Feature Selection Descriptions and Formulations

Given that D is a set of TDs, D = {d_1, d_2, ..., d_i, ..., d_n}, where d_1 is document number 1 and n is the number of all documents in the given set. d_i is document number i, which contains a set of text features f_i. f_i represents a vector of terms (features), f_i = {f_{i,1}, f_{i,2}, ..., f_{i,j}, ..., f_{i,t}}, where t is the number of all unique text features in D. Let sf_i be a new subset of text features, sf_i = {sf_{i,1}, sf_{i,2}, ..., sf_{i,j}, ..., sf_{i,m}}, derived from f_i by the FS technique, where m is the new number of all unique features in D1 (Bharti and Singh 2016; Cole 1998; Zhao and Wang 2010a, b). If sf_{i,j} = 1, then the jth text feature is selected as an informative text feature in document i; if sf_{i,j} = 0, then the jth text feature is not selected. Finally, the result of this stage is a new subset of documents with more informative text features, called D1.

4.5.2 Representation of Feature Selection Solution

The text FS method is applied using metaheuristic algorithms, which begin with random initial solutions and enhance the population toward a global optimal


solution (Bharti and Singh 2016; Cobos et al. 2010; Inbarani et al. 2015). Each unique term (feature) in a document is considered a dimension of the search space. The population of any algorithm comprises a set of solutions represented as a matrix A of size S × t, as formulated in Eq. (4.5), where S indicates the population size and t is the number of features in each solution (Zhao and Wang 2010b).

\[
A = \begin{bmatrix}
f_{1,1} & f_{1,2} & \cdots & f_{1,t} \\
f_{2,1} & f_{2,2} & \cdots & f_{2,t} \\
\vdots & \vdots & \ddots & \vdots \\
f_{S,1} & f_{S,2} & \cdots & f_{S,t}
\end{bmatrix} \quad (4.5)
\]

Each row represents a single solution for the FS technique. If the value f_{1,2} equals 1, then feature number 2 in solution number 1 is selected as an informative feature; if the value f_{4,3} equals 0, then feature number 3 in solution number 4 is not selected. Note that the feature selection algorithm can manipulate only these two values. Otherwise, the value equals −1, which indicates that the feature does not appear in the document; the feature selection algorithm cannot manipulate such a position.
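A brief sketch of this encoding follows; the boolean mask `present`, marking which features occur in the underlying document, is an assumed input:

import random

def init_population(present):
    # -1: feature absent from the document (cannot be manipulated);
    # 1/0: present feature randomly selected / not selected.
    return [[random.randint(0, 1) if p else -1 for p in row] for row in present]

present = [[True, True, False, True],
           [False, True, True, True]]
A = init_population(present)  # e.g. [[1, 0, -1, 1], [-1, 1, 1, 0]]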

4.5.3 Fitness Function

The fitness function (FF) is used to evaluate the FS solutions, as shown in Eq. (4.6). The mean absolute difference (MAD) is applied to the term weights as an objective function that assigns a relevance score to the text features (Abualigah and Khader 2017; Bharti and Singh 2014). It is a simplified form of term variance that compares each feature value (i.e., x_{i,j}) with the mean of the document's features (i.e., \bar{x}_i):

\[
MAD(X_i) = \frac{1}{a_i} \sum_{j=1}^{t} |x_{i,j} - \bar{x}_i| \quad (4.6)
\]

where MAD(X_i) represents the FF of solution X_i, x_{i,j} is the term weight value of feature j in document i, a_i is the number of selected features in document i, t is the number of unique features, and \bar{x}_i is the mean of solution i, computed using Eq. (4.7).

\[
\bar{x}_i = \frac{1}{a_i} \sum_{j=1}^{t} x_{i,j} \quad (4.7)
\]

where Eq. (4.7) computes the mean of solution i over j = 1 to t, and t refers to the number of all features.
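A sketch of Eqs. (4.6) and (4.7) follows, assuming `weights` holds the term weights (e.g., LFW scores) of one document and `solution` is its −1/0/1 selection vector; as an assumption, only selected features (value 1) contribute to the sums:

def mad_fitness(solution, weights):
    selected = [w for s, w in zip(solution, weights) if s == 1]
    a_i = len(selected)                                  # number of selected features
    if a_i == 0:
        return 0.0
    mean = sum(selected) / a_i                           # Eq. (4.7)
    return sum(abs(w - mean) for w in selected) / a_i    # Eq. (4.6)

print(mad_fitness([1, 1, -1, 0, 1], [0.24, 0.12, 0.0, 0.05, 0.34]))  # ~0.0756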


Table 4.4 The text feature selection problem and optimization terms in the genetic algorithm context

GA terms                        Optimization          TFSP
Population                      Candidate solutions   TFSP solutions
Generation                      Iteration             Selection
Period of an organism's life    Objective function    Fitness function formalized by Eq. (4.6)
Chromosome                      Solution vector       A feature selection solution (a new subset of text features)
Gene                            Decision variable     Feature
Optimal chromosome              Optimal solution      An optimal subset of informative text features

4.5.4 Metaheuristic Algorithms for Text Feature Selection Problem

This section presents the metaheuristic algorithms, namely, GA, HS, and PSO, used to solve the TFSP. These three optimization methods are proposed to investigate the unsupervised TFSP and have been successfully applied to other FS problems. Such algorithms can avoid being trapped in local optima and achieve improved solutions by efficiently searching the available search space. Among them, the best-performing algorithm is chosen for the TFSP, and its results are used in the subsequent step (the DR technique).

4.5.4.1 Genetic Algorithm: Procedure

The relationships and equivalences that link the TFSP and the GA context are mapped in Table 4.4. In the GA, each individual is a chromosome consisting of a set of genes, and the set of chromosomes is stored in a population. Three main operators, namely, selection, crossover, and mutation, are applied during evolution. TF-IDF and LFW are used as objective functions to evaluate each text feature (Abualigah et al. 2016b; Uğuz 2011). The main task of the GA is to eliminate uninformative text features. The pseudo-code of the GA is shown in Algorithm 2.


Algorithm 2 Genetic Algorithm
1: Input: Generate the initial population randomly.
2: Output: Optimal chromosome and its fitness value.
3: Algorithm
4: Initialize the population and the parameters of the genetic algorithm (Cr, Mu, Imax, etc.).
5: Evaluate all chromosomes using the fitness function by Eq. (4.6).
6: while Termination criteria do
7:   Selection operator.
8:   if rand < Cr then
9:     Crossover operator.
10:  end if
11:  if rand < Mu then
12:    Mutation operator.
13:  end if
14:  Evaluate the offspring chromosomes.
15:  Replace the worst chromosome with the best chromosome.
16: end while
17: Return a new subset of informative features D1.

Selection methods: Roulette wheel selection, tournament selection, ranking selection, and random selection are standard methods used in the GA. The random selection method was used in this study to select two random chromosomes on which to apply the genetic operators (Abualigah et al. 2016b).

Crossover operator: Crossover operates on a pair of randomly selected parent chromosomes to generate offspring chromosomes by swapping segments of the two chromosomes, thereby enhancing the current candidate solutions. The patterns of the crossover operator are one-point, two-point, and uniform crossover; this study used two-point crossover. This operator is applied according to a probability parameter Cr, where Cr ∈ [0, 1].

Mutation operator: This process flips predetermined genes based on patterns such as one-point or uniform flips to generate improved chromosomes. Uniform two-point mutation is used in this study. This operator is applied with a probability parameter Mu, where Mu ∈ [0, 1] (Abualigah et al. 2016b).

Replacement: When new offspring solutions are produced using the crossover and mutation operators, they must be compared with the population. This approach examines each produced solution and replaces the worst solution in the population with the offspring if the offspring has a better fitness value; otherwise, the offspring is disregarded (Abualigah et al. 2016b).
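A compact Python sketch of Algorithm 2 with the operators above (random parent selection, two-point crossover, two-point mutation, worst-solution replacement) follows; it reuses the mad_fitness sketch from Sect. 4.5.3 and assumes all solutions in `pop` share the same −1 mask, so crossover never moves a −1 into a valid position:

import random

def ga_feature_selection(pop, weights, cr=0.8, mu=0.1, iterations=100):
    def fit(s):
        return mad_fitness(s, weights)

    def mutate(sol):
        valid = [j for j, v in enumerate(sol) if v != -1]
        for j in random.sample(valid, min(2, len(valid))):  # two-point mutation
            if random.random() < mu:
                sol[j] = 1 - sol[j]                          # flip 0 <-> 1

    for _ in range(iterations):
        p1, p2 = random.sample(pop, 2)                       # random selection
        c1, c2 = p1[:], p2[:]
        if random.random() < cr:                             # two-point crossover
            a, b = sorted(random.sample(range(len(c1)), 2))
            c1[a:b], c2[a:b] = c2[a:b], c1[a:b]
        mutate(c1)
        mutate(c2)
        for child in (c1, c2):                               # replacement
            worst = min(range(len(pop)), key=lambda k: fit(pop[k]))
            if fit(child) > fit(pop[worst]):
                pop[worst] = child
    return max(pop, key=fit)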

4.5.4.2 Harmony Search Algorithm: Procedure

The relationships and equivalences are mapped to determine the link between the TFSP and the HS context, as shown in Table 4.5. The HS algorithm has been successfully applied to several complex optimization problems, such as TC (Forsati et al. 2013). The pseudo-code of the HS is shown in Algorithm 3.


Table 4.5 The text feature selection problem and optimization terms in the harmony search algorithm context

HS terms                   Optimization          TFSP
Harmony memory solutions   Candidate solutions   TFSP solutions
Improvisation              Iteration             Selection
Audio-aesthetic standard   Objective function    Fitness function formalized by Eq. (4.6)
Harmony                    Solution vector       A feature selection solution (a new subset of text features)
Musician                   Decision variable     Feature
Pleasing harmony           Optimal solution      An optimal subset of informative text features

Algorithm 3 Harmony search algorithm
1: Input: Generate the initial harmonies randomly.
2: Output: Optimal harmony and its fitness value.
3: Algorithm
4: Initialize the parameters of the harmony search (HMCR, PAR, etc.).
5: Initialize the harmony memory (HM).
6: Evaluate all harmonies using the fitness function by Eq. (4.6).
7: while Termination criteria do
8:   new solution = φ.
9:   if rand < HMCR then
10:    Memory consideration.
11:    if rand < PAR then
12:      Pitch adjustment.
13:    end if
14:  else
15:    Random consideration.
16:  end if
17:  Evaluate the fitness function of the new solution.
18:  Replace the worst harmony in HM with the new solution.
19: end while
20: Return a new subset of informative features D1.

Initialize the feature selection problem and HS parameters: The FS problem is defined as an optimization problem by specifying the FF to be maximized, f(X_i), where f(X_i) is the FF value of solution i and x_{i,j} is the jth feature in solution i. The control parameters of the HS algorithm are also initialized in this step: (i) the number of solutions (S), analogous to the population size in the GA; (ii) the harmony memory consideration rate (HMCR), used in the generation process to define the rate of exploiting the harmony memory (HM) solutions; (iii) the pitch adjusting rate (PAR), used in the generation process to define the probability of adjusting features to their neighboring features; and


(iv) the maximum number of generations (Imax), corresponding to the number of iterations (Abualigah et al. 2016c; Al-Betar et al. 2015).

Initialize the harmony memory: HM is a matrix filled by generating S random solutions. The random initialization of each entry of an HM solution is given by Eq. (4.8):

\[
x_{i,j} = \mathrm{rand} \bmod 2 \quad (4.8)
\]

The HM is initially filled with the value −1 for the features that do not appear in the original document and with 0 or 1 for the features that do appear (see Eq. (4.5)). rand generates a random integer, i = 1, 2, ..., S and j = 1, 2, ..., t, where t is the number of terms in each document. Notably, term j in document i (i.e., x_{i,j}) takes the value 0 if it is not selected as an informative feature, 1 if it is selected, and −1 if it does not exist in the document.

Improvise a new solution: In this step, a new solution, as in Eq. (4.9), is generated using three operators, namely, memory consideration, pitch adjustment, and random selection (Abualigah et al. 2016c). The HS algorithm dynamically generates a new solution, as indicated in Eq. (4.10), through the improvisation process to improve the HM solutions. The pseudo-code of the improvisation process in the binary HS algorithm for text FS is provided in Algorithm 4.

\[
X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,j}, \ldots, x_{i,t}) \quad (4.9)
\]

Algorithm 4 Improvise a new solution
1: Input: Harmony memory HM solutions.
2: Output: A new solution (see Eq. (4.9)).
3: for each j ∈ [1, t] do
4:   if rand[0, 1] ≤ HMCR then
5:     x_{i,j} = HM[i][j], where j ∈ U(1, 2, ..., t)
6:     if rand(0, 1) ≤ PAR then
7:       x_{i,j} = x_{i,j} ± rand × bw(i), where rand ∈ [0, 1]
8:     end if
9:   else
10:    x_{i,j} = LB_i + rand × (UB_i − LB_i)
11:  end if
12: end for

\[
x_{new,j} \leftarrow \begin{cases}
x_{new,j} \in \{x_{1,j}, x_{2,j}, \ldots, x_{S,j}\} & \text{if } rand < HMCR \\
x_{new,j} \in \{0, 1\} & \text{otherwise}
\end{cases} \quad (4.10)
\]


Table 4.6 The text feature selection problem and optimization terms in the particle swarm optimization algorithm context

PSO terms          Optimization          TFSP
Swarm              Candidate solutions   TFSP solutions
Iteratively        Iteration             Selection
Cost function      Objective function    Fitness function formalized by Eq. (4.6)
Particle           Solution vector       A feature selection solution (a new subset of text features)
Position           Decision variable     Feature
Optimal particle   Optimal solution      An optimal subset of informative text features

where x_{new,j} is the jth feature of the new harmony, and each variable is chosen according to Eq. (4.10). The candidate value of each feature is chosen from the existing values in the current HM with probability HMCR (Abualigah et al. 2016c).

Update HM: The newly generated solution is evaluated using the FF value formulated in Eq. (4.6). If the FF value of the new solution is better than that of the worst solution in the HM, the new solution replaces the worst solution in the HM (Forsati et al. 2013).

Check the stopping criterion: When the maximum number of iterations is reached, the HS algorithm terminates. Otherwise, Steps 3 and 4 are repeated to improvise a new harmony.
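A binary HS sketch of the improvisation and HM update described above follows; as simplifying assumptions, pitch adjustment is folded into random consideration for the binary case, mad_fitness from Sect. 4.5.3 is reused, and the harmonies in `hm` share the same −1 mask:

import random

def hs_feature_selection(hm, weights, hmcr=0.9, iterations=100):
    def fit(s):
        return mad_fitness(s, weights)

    t = len(hm[0])
    for _ in range(iterations):
        new = []
        for j in range(t):
            if hm[0][j] == -1:
                new.append(-1)                     # absent feature: untouchable
            elif random.random() < hmcr:
                new.append(random.choice(hm)[j])   # memory consideration
            else:
                new.append(random.randint(0, 1))   # random consideration
        worst = min(range(len(hm)), key=lambda k: fit(hm[k]))
        if fit(new) > fit(hm[worst]):
            hm[worst] = new                        # update HM
    return max(hm, key=fit)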

4.5.4.3 Particle Swarm Optimization: Procedure

The relationships and equivalences are mapped to determine the link between the TFSP and the PSO context, as shown in Table 4.6. The PSO algorithm uses the global best solution concept to obtain the optimal solution. In each iteration, the global best solution is recorded and updated (Bharti and Singh 2016; Zhang et al. 2014). The pseudo-code of the PSO is shown in Algorithm 5. The PSO algorithm generates particles with random positions according to Eq. (4.8). Each candidate solution, called particle, is evaluated by the FF, as formulated in Eq. (4.6). In the PSO, the solutions contain several single entities (features). The PSO is placed in the search space of the FS problem and evaluates the FF at its current location. Each solution determines its movement by combining aspects of the historical information according to its own current and best fitness. The subsequent iteration selects a location after all solutions have moved. Finally, the solutions, which are similar to a flock of birds collectively searching for food, will likely reach the optimal fitness value (Bharti and Singh 2016).


Algorithm 5 Particle swarm optimization
1: Input: Generate the initial particles randomly.
2: Output: Optimal particle and its fitness value.
3: Algorithm
4: Initialize the swarm and the parameters of the particle swarm optimization (c1, c2, etc.).
5: Evaluate all particles using the fitness function by Eq. (4.6).
6: while Termination criteria do
7:   Update the velocity using Eq. (4.12).
8:   Update each position using Eq. (4.11).
9:   Evaluate the fitness function.
10:  Replace the worst particle with the best particle.
11:  Update LB and GB.
12: end while
13: Return a new subset of informative features D1.

The PSO works based on two main factors, namely, particle position (Eq. (4.11)) and velocity (Eq. (4.12)), which are used to update each particle's position. The velocity of each particle is updated according to the particle movement effect, and each particle attempts to move to the optimal position (Zhang et al. 2014).

\[
x_{i,j} = x_{i,j} + v_{i,j} \quad (4.11)
\]

where

\[
v_{i,j} = w \, v_{i,j} + c_1 \, rand_1 \, (LB^I - x_{i,j}) + c_2 \, rand_2 \, (GB^I - x_{i,j}) \quad (4.12)
\]

The value of the inertia weight often changes with the iteration within the range [0, 1]. LB^I is the current best local solution at iteration I, and GB^I is the current best global solution at iteration I. rand_1 and rand_2 are random numbers in the range [0, 1], and c_1 and c_2 are usually two constants. The inertia weight is determined by Eq. (4.13).

\[
w = (w_{max} - w_{min}) \left( \frac{I_{max} - I}{I_{max}} \right) + w_{min} \quad (4.13)
\]

where wmax and wmin are the largest and smallest inertia weights, respectively. The values of these weights are constants in the range of (0.5–0.9). The proposed algorithms deal with binary optimization problems (Bharti and Singh 2016). Thus, the algorithms are modified to update the positions of the solutions using the discrete value for each dimension. Equation (4.14) represents the sigmoid function used to determine the probability of the ith position, and Eq. (4.15) is used to update the new position. The sigmoid function values of the updating process are presented in Fig. 4.2.

\[
s_{i,j} = \begin{cases}
1 & \text{if } rand < \dfrac{1}{1 + \exp(-v_{i,j})} \\
0 & \text{otherwise}
\end{cases} \quad (4.14)
\]

Fig. 4.2 Sigmoid function used in the binary PSO algorithm

where rand is a random number in [0, 1], x_{i,j} represents the value of position j, and v_{i,j} denotes the velocity of particle i at position j, j = 1, 2, ..., t.

\[
x_{i,j} = \begin{cases}
1 & \text{if } rand < s_{i,j} \\
0 & \text{otherwise}
\end{cases} \quad (4.15)
\]
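A sketch of one binary PSO update step following Eqs. (4.12), (4.14), and (4.15) follows; `positions`, `velocities`, `local_best`, and `global_best` are assumed data structures, and −1 (absent-feature) entries are left untouched:

import math
import random

def bpso_step(positions, velocities, local_best, global_best, w=0.7, c1=2.0, c2=2.0):
    for i in range(len(positions)):
        x, v = positions[i], velocities[i]
        for j in range(len(x)):
            if x[j] == -1:
                continue                                  # absent feature: untouchable
            v[j] = (w * v[j]
                    + c1 * random.random() * (local_best[i][j] - x[j])
                    + c2 * random.random() * (global_best[j] - x[j]))  # Eq. (4.12)
            s = 1.0 / (1.0 + math.exp(-v[j]))             # sigmoid, Eq. (4.14)
            x[j] = 1 if random.random() < s else 0        # Eq. (4.15)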

4.6 Proposed Detailed Dimension Reduction Technique

This section illustrates the proposed DR technique, which is based on an improved DF method and is employed to reduce the dimensionality of the feature space by eliminating useless text features. After the informative subset of features is produced by the FS method in the previous step, the proposed DR technique is triggered to reduce the dimensional space corresponding to the informative features. DR is a major preprocessing technique in text mining. An enhanced detailed dimension reduction (DDR) technique is proposed to obtain a subset of more useful features (terms) and thereby improve the performance of the TC algorithm: it removes non-useful features from the informative features produced by the feature selection method so that only a small number of useful informative features remains. The proposed method introduces new variables that refine the original ones (i.e., whether the terms appear in each document, the importance of each term over all documents when the document contains several or fewer useless terms, and the value of the maximum TF of the document).

DF is commonly used in DR (Bharti and Singh 2015b; Nebu and Joseph 2016). DF is based only on the number of documents in which the term appears. The current reduction technique can be improved by increasing the precision with which useless features are eliminated. Thus, the DDR technique in Eq. (4.16) builds on the four factors described below.

DDF_j = ( Σ_{i=1}^{n} DTF_{i,j} ) / sumDTF_j    (4.16)

where DDF_j is the detailed document frequency of term j, DTF_{i,j} is the detailed term frequency of term j in document i, n is the number of all documents, and sumDTF_j is the number of documents that contain term j.

DTF(i, j) = ( tf(i, j) ∗ ADF_j ∗ a_i ) / MNF_i    (4.17)

where ADF_j is the average frequency of term j over all documents, a_i is the number of terms in document i, and MNF_i is the maximum TF in document i. Notably, the DTF values are used to update the current TF. The DDF is computed using Eq. (4.16), which takes the summation of the term's DTF values instead of the simple appearance count used by the original DF. The main aim of this equation is to assign each term a score at the level of all documents, which is then used to decide whether the term is useful or not (see Algorithm 6). The factors of Eq. (4.17) are as follows, with a small computational sketch after this list:

• The first factor is the TF value (tf(i, j)). This factor further distinguishes between useful and useless features by considering the number of times each term occurs in each document instead of the binary indicator (i.e., whether the term appears in each document or not).
• The second factor is the average frequency of term j over all documents (ADF). This factor helps the proposed technique measure the importance of each term across all documents; that is, ADF captures the effect and importance of each term over all documents.
• The third factor is the number of terms in the document (i.e., a_i represents the number of terms that appear in document i). This factor effectively tunes the first factor (TF) when the document contains several/fewer useless terms, so that the value better reflects the DF. In other words, if the document contains only a few terms, then these terms dominate the representation of the document's contents.
• The fourth factor is the maximum TF at the level of each document, called MNF, which maps the DTF to a high value when the maximum frequency at the level of the document is low. In other words, when the value of the maximum TF of the document is low, the DTF value will be high. This factor accounts for the behavior of each document according to the value of its maximum TF.

The rational justification for Eq. (4.17) is as follows: the first factor (tf) is added to further distinguish between the features by considering the number of times each term occurs in each document, instead of merely whether it appears or not.
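To make Eqs. (4.16) and (4.17) concrete, here is a sketch that computes the DDF scores from a raw term-frequency matrix `tf` (documents in rows, terms in columns). The array names are illustrative; ADF_j is read here as the mean frequency of term j over all documents, every document is assumed to contain at least one term (so MNF_i > 0), and every term to appear in at least one document (so sumDTF_j > 0):

```python
import numpy as np

def detailed_document_frequency(tf):
    """DDF score per term, Eqs. (4.16)-(4.17); tf has shape (n docs, m terms)."""
    a = (tf > 0).sum(axis=1, keepdims=True)     # a_i: number of terms in document i
    mnf = tf.max(axis=1, keepdims=True)         # MNF_i: maximum TF in document i
    adf = tf.mean(axis=0, keepdims=True)        # ADF_j: average frequency of term j
    dtf = tf * adf * a / mnf                    # DTF_{i,j}, Eq. (4.17); 0 where tf = 0
    sum_dtf = (tf > 0).sum(axis=0)              # sumDTF_j: documents containing term j
    return dtf.sum(axis=0) / sum_dtf            # DDF_j, Eq. (4.16)
```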

Algorithm 6 The proposed detailed dimension reduction (DDR) technique
1: Input: A collection of text documents D^1_{n×m}, and adjust the threshold value.
2: Output: A new subset of useful features with low dimension D^2_{n×v}.
3: Algorithm
4: Find the number of text features in each document (a_i).
5: Find the maximum feature frequency in each document (MNF_i).
6: Find the average frequency of feature j over all documents (ADF_j).
7: for j = 1 to m do
8:   for i = 1 to n do
9:     DTF_{i,j} = (tf(i, j) ∗ ADF_j ∗ a_i) / MNF_i
10:   end for
11: end for
12: for j = 1 to m do
13:   for i = 1 to n do
14:     if term j appears in document i then
15:       Update DDF_j = DDF_j + DTF_{i,j}
16:       Count the documents that contain term j in sumDTF_j
17:     end if
18:   end for
19:   DDF_j = DDF_j / sumDTF_j
20: end for
21: The new dimension size v = m
22: for j = 1 to m do
23:   if DDF_j is below the threshold then
24:     Remove feature j and set v = v − 1
25:   end if
26: end for
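The final loop of Algorithm 6 then reduces to a threshold filter over the DDF scores. A hypothetical usage, reusing `detailed_document_frequency` from the sketch above with a median threshold (an illustrative choice; the algorithm treats the threshold as a tunable input, and the comparison direction follows the reading that low-scoring terms are the useless ones):

```python
import numpy as np

tf = np.array([[2.0, 0.0, 1.0],
               [1.0, 1.0, 0.0],
               [3.0, 0.0, 1.0]])                # toy D^1: 3 documents, 3 terms
ddf = detailed_document_frequency(tf)           # score each term
threshold = np.median(ddf)                      # illustrative threshold value
keep = ddf >= threshold                         # useful terms survive
tf_reduced = tf[:, keep]                        # D^2 with new dimension v = keep.sum()
print(tf_reduced.shape[1], "features kept of", tf.shape[1])
```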
