Applications of Data Management and Analysis

This book addresses and examines the impacts of applications and services for data management and analysis, such as infrastructure, platforms, software, and business processes, on both academia and industry. The chapters cover effective approaches in dealing with the inherent complexity and increasing demands of big data management from an applications perspective. Various case studies included have been reported by data analysis experts who work closely with their clients in such fields as education, banking, and telecommunications. Understanding how data management has been adapted to these applications will help students, instructors and professionals in the field. Application areas also include the fields of social network analysis, bioinformatics, and the oil and gas industries.




Lecture Notes in Social Networks

Mohammad Moshirpour • Behrouz H. Far • Reda Alhajj, Editors

Applications of Data Management and Analysis Case Studies in Social Networks and Beyond

Lecture Notes in Social Networks Series editors Reda Alhajj, University of Calgary, Calgary, AB, Canada Uwe Glässer, Simon Fraser University, Burnaby, BC, Canada Huan Liu, Arizona State University, Tempe, AZ, USA Rafael Wittek, University of Groningen, Groningen, The Netherlands Daniel Zeng, University of Arizona, Tucson, AZ, USA Advisory Board Charu C. Aggarwal, Yorktown Heights, NY, USA Patricia L. Brantingham, Simon Fraser University, Burnaby, BC, Canada Thilo Gross, University of Bristol, Bristol, UK Jiawei Han, University of Illinois at Urbana-Champaign, Urbana, IL, USA Raúl Manásevich, University of Chile, Santiago, Chile Anthony J. Masys, University of Leicester, Ottawa, ON, Canada Carlo Morselli, School of Criminology, Montreal, QC, Canada

More information about this series at http://www.springer.com/series/8768

Mohammad Moshirpour • Behrouz H. Far • Reda Alhajj, Editors

Applications of Data Management and Analysis Case Studies in Social Networks and Beyond


Editors

Mohammad Moshirpour, Department of Electrical & Computer Engineering, University of Calgary, Calgary, AB, Canada

Behrouz H. Far, Department of Electrical & Computer Engineering, University of Calgary, Calgary, AB, Canada

Reda Alhajj, Department of Computer Science, University of Calgary, Calgary, AB, Canada

ISSN 2190-5428 ISSN 2190-5436 (electronic) Lecture Notes in Social Networks ISBN 978-3-319-95809-5 ISBN 978-3-319-95810-1 (eBook) https://doi.org/10.1007/978-3-319-95810-1 Library of Congress Control Number: 2018954658 © Springer Nature Switzerland AG 2018 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The quantity of data in various science and engineering domains is increasing at a phenomenal rate, in structured and semi-structured formats. The data is characterized by its complexity, volume, high dimensionality, and velocity. Together with this growth, a large number of data science-related tools and techniques have become available for analyzing data and extracting useful, actionable and reusable knowledge. Data science is an interdisciplinary field. Theories, techniques, and tools are drawn from various fields within mathematics, statistics, information science, software engineering, signal processing, probability models, machine learning, statistical learning, data mining, database systems, data engineering, pattern recognition, visualization, predictive analytics, uncertainty modeling, data warehousing, data compression, artificial intelligence, and high-performance computing. Practitioners in each domain need to know the characteristics of their data sets, select appropriate data science tools and techniques, and fit them to their own problems. The goal of this volume is to provide practical examples of data science tools and techniques fitted to solve certain science and engineering problems. In this volume, the contributing authors provide examples and solutions in various engineering, business, medicine, bioinformatics, geomatics, and environmental science domains. This will help professionals and practitioners to understand the benefits of data science in their domain and to see where a particular theory, technique, or tool would be applicable and useful.

Calgary, AB, Canada    Mohammad Moshirpour
Calgary, AB, Canada    Behrouz H. Far
Calgary, AB, Canada    Reda Alhajj


Contents

Predicting Implicit Negative Relations in Online Social Networks (Animesh Gupta, Reda Alhajj, and Jon Rokne) ..... 1
Automobile Insurance Fraud Detection Using Social Network Analysis (Arezo Bodaghi and Babak Teimourpour) ..... 11
Improving Circular Layout Algorithm for Social Network Visualization Using Genetic Algorithm (Babak Teimourpour and Bahram Asgharpour) ..... 17
Live Twitter Sentiment Analysis (Dayne Sorvisto, Patrick Cloutier, Kevin Magnusson, Taufik Al-Sarraj, Kostya Dyskin, and Giri Berenstein) ..... 29
Artificial Neural Network Modeling and Forecasting of Oil Reservoir Performance (Ehsan Amirian, Eugene Fedutenko, Chaodong Yang, Zhangxin Chen, and Long Nghiem) ..... 43
A Sliding-Window Algorithm Implementation in MapReduce (Emad A. Mohammed, Christopher T. Naugler, and Behrouz H. Far) ..... 69
A Fuzzy Dynamic Model for Customer Churn Prediction in Retail Banking Industry (Fatemeh Safinejad, Elham Akhond Zadeh Noughabi, and Behrouz H. Far) ..... 85
Temporal Dependency Between Evolution of Features and Dynamic Social Networks (Kashfia Sailunaz, Jon Rokne, and Reda Alhajj) ..... 103
Recommender System for Product Avoidance (Manmeet Dhaliwal, Jon Rokne, and Reda Alhajj) ..... 117
A New 3D Value Model for Customer Segmentation: Complex Network Approach (Mohammad Saeedi and Amir Albadvi) ..... 129
Finding Influential Factors for Different Types of Cancer: A Data Mining Approach (Munima Jahan, Elham Akhond Zadeh Noughabi, Behrouz H. Far, and Reda Alhajj) ..... 147
Enhanced Load Balancer with Multilayer Processing Architecture for Heavy Load Over Cloud Network (Navdeep Singh Randhawa, Mandeep Dhami, and Parminder Singh) ..... 169
Market Basket Analysis Using Community Detection Approach: A Real Case (Sepideh Faridizadeh, Neda Abdolvand, and Saeedeh Rajaee Harandi) ..... 177
Predicting Future with Social Media Based on Sentiment and Quantitative Analysis (Sahil Sharma, Jon Rokne, and Reda Alhajj) ..... 199
Index ..... 211

Predicting Implicit Negative Relations in Online Social Networks
Animesh Gupta, Reda Alhajj, and Jon Rokne

Introduction

Social network analysis provides a plethora of information on relationships between the users of a social network. A social network contains both positive and negative links. The link prediction problem [1, 2] can be used to study the latent relationship between people because it is always hard to find out explicitly what other people think [3]. Although positive link prediction is quite common in social network analysis, there is a scarcity of research on negative link prediction. One of the reasons for this is the lack of datasets available to carry out the analysis for negative link prediction. This is because many social networking firms such as Facebook, Twitter, and LinkedIn consider it pointless to collect negative link information, and therefore they do not even allow users to show dislike toward a post or a comment. But there are a few websites which allow users to express dislike toward other users. Two such websites are Epinions and Slashdot. We therefore use the datasets from these websites to carry out the prediction of negative links in social networks. Slashdot is a technology news website. It features news stories on science and technology that are submitted and evaluated by site users [4], and it allows users to express both positive and negative links toward other users [5]. The site makes use of a user-based moderation system in which the moderators are selected randomly. Moderation applies either −1 or +1 to the current rating, based on whether the

A. Gupta () · R. Alhajj · J. Rokne Department of Computer Science, University of Calgary, Calgary, AB, Canada e-mail: [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2018 M. Moshirpour et al. (eds.), Applications of Data Management and Analysis, Lecture Notes in Social Networks, https://doi.org/10.1007/978-3-319-95810-1_1


Fig. 1 Summary of the dataset

            Nodes     Edges     Positive edges   Negative edges
Epinions    131,828   841,372   85%              15%
Slashdot    82,144    549,202   77.4%            22.6%

Fig. 2 Missing links in a social network

comment is perceived as either “normal,” “offtopic,” “insightful,” “redundant,” “interesting,” or “troll” (among others) [4]. The dataset chosen for this work is a signed dataset from February 21, 2009. Epinions is a product review website established in 1999 where users can express approval or discontent toward the reviews posted by other users. This helps them decide whether or not to buy a product by reading other people’s reviews of the product they are interested in. The members can also choose to either trust or distrust other members of the website [6–8]. All the trust relationships interact and form the Web of Trust, which is then combined with review ratings to determine which reviews are shown to the user [9, 10]. A summary of the Epinions and Slashdot datasets is given in Fig. 1. We have used the Slashdot and Epinions datasets to create a logistic regression classifier based on the given information about the already existing links (positive or negative) between users. The objective of this work is to predict the links between those users who do not have any link between them but may have positive or negative links with other users of the network (Fig. 2).

Related Work

In [11], the author predicts both positive and negative links in online social networks by using a set of 23 features. The first class of features is based on the degree of a node. In the second class of features, each triad involving an edge (u,v) is considered. For each pair (u,v), there also exists a node “w” which completes the triad and either has an edge coming from or going into u and similarly has an edge


coming from or going into v. Since the edge can be in either direction and can be a positive or a negative edge, the total number of possible combinations is 16, and these combinations translate into 16 different feature sets. It is believed that each of these 16 triad sets can provide different evidence about the sign of the edge between u and v. The datasets used for this work are Epinions, Slashdot, and Wikipedia. A balanced dataset is then created to match the number of negative links to the number of positive links because positive links usually outnumber negative links. A logistic regression classifier is used to combine the evidence from these individual features into an edge sign prediction. The predictive accuracy of detecting a link is about 85% for the degree feature set and the triad feature set when considered independently, whereas it jumps to about 90% when both feature sets are considered together. In [12], only positive links and content-centric interactions are used to predict a negative link in a social network. This is a novel technique, as positive links are abundant in a social network. They propose an algorithm NeLP (negative link prediction) which can exploit the positive links and content-centric interactions to predict negative links [12]. This approach is based on the idea that although explicit information is not available in most cases for social networks, a combination of positive links and content-centric interaction can help to detect implicit negative links. Jiliang Tang et al. also use the Epinions and Slashdot datasets for NeLP. The F1-measure achieved is 0.32 and 0.31 when the training is done on the Epinions and Slashdot datasets, respectively. The precision achieved by the negative link prediction algorithm is 0.2861 for the Epinions dataset and 0.2139 for the Slashdot dataset. Compared to other baseline approaches, this framework achieves impressive performance improvement. Positive, implicit, and negative information of all users in a network is used by Min-Hee Jang et al. [13] and Cheng et al. [14]. Based on belief propagation, the trust relationship of a user is determined. They use a metric called belief score and calculate it for every user in the network based on that user’s interaction with his neighbors. Every user, which is a node in the system, is assigned one of two values: trustable or distrustful. This method of trust prediction achieves accuracy higher by up to 10.1% and 20.6% compared to ITD and A BIT_L (similar trust prediction algorithms), respectively. The trustworthiness of users based on local trust metrics is studied in [10] and used to determine whether a controversial person is trustworthy or not. The same problem is also tackled in [15] using the trust antecedent framework. Also, Yang et al. [16] show that it is possible to infer signed social ties with good accuracy solely based on users’ decision-making behavior (or using only a small fraction of supervision information) via unsupervised and semi-supervised algorithms. A survey of link prediction in complex networks is given by Zhou et al. in [17].


Methodology

Dataset Description

The Slashdot signed social network dataset from February 21, 2009, has 82,144 nodes and 549,202 edges. Since this is a signed dataset, it contains information about the nature of the relationship between two users. A positive relationship is denoted by a “+1” and a negative relationship by a “−1.” About 77.4% of the edges in this dataset are positive and only 22.6% are negative. Slashdot uses Semantic Web applications [18] and Web spam detection to control deviant behavior and develop user rating mechanisms. Similarly, the signed dataset of Epinions is used as well. Since Epinions is a product review website, “+1” indicates a helpful review about a product by another user, and “−1” indicates a non-trustable review by a user (Fig. 3).

Fig. 3 Distribution of signed edges


Formulation in R

Loading the Data

The datasets are taken from the Stanford Network Analysis Project [4]. The extracted text file is loaded into a data matrix in R. Within the data, negative link information between two nodes is denoted by −1, and positive link information is denoted by +1. Since we plan to apply logistic regression to our model, the classifier takes as input only values which are either 0 or 1. To make the values consistent with the input/output of the classifier, all the values in the data which are −1 are converted to 0. All the negative links between the nodes in our dataset are now represented by 0, and the positive links are represented by +1. We do this for both datasets and create two data matrices which are transformed in the subsequent steps to load the features and train the model.

Transforming the Data by Loading Features

It is important to choose the right set of features to base our machine learning model on. Common neighbors can reveal a lot of information about a user of a social network. Information about the number of friends and enemies can be used to predict his/her relationship with other users in the network. A user who is listed as a foe by a majority of the other users has a high probability of being disliked by a random user in the network with whom he may not have any relation earlier. Based on the assumption that relationships with neighbors are a good basis for selecting the feature set for the classifier, we choose the following four feature sets to be used for the model:
• udoutpos—Number of outgoing positive edges from “u”
• udoutneg—Number of outgoing negative edges from “u”
• vdinpos—Number of incoming positive edges into “v”
• vdoutneg—Number of incoming negative edges into “v”

We want to predict the relationship between the pair of users (u,v). Since we have individual information about the relationship of u and v with their respective neighbors, we use that to create a list of feature sets. This is implemented by taking all the pairs of users in the dataset and counting the total number of +1’s or 0’s for each pair. A new column is then added to the data matrix which is populated with the total number of positive or negative links for every pair.
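The counting step described above can be illustrated with a short sketch. The chapter's own pipeline is written in R; the following Python fragment is only a rough equivalent under our own assumptions (the toy edge list and variable names are ours), showing how the four neighbor-based counts could be accumulated from a signed edge list in which negative links have already been recoded to 0.

    from collections import defaultdict

    # Hypothetical signed edge list: (u, v, sign) with sign recoded to 1 (positive) or 0 (negative)
    edges = [("a", "b", 1), ("a", "c", 0), ("d", "b", 0), ("c", "b", 1)]

    out_pos = defaultdict(int)  # udoutpos: outgoing positive edges of u
    out_neg = defaultdict(int)  # udoutneg: outgoing negative edges of u
    in_pos = defaultdict(int)   # vdinpos: incoming positive edges of v
    in_neg = defaultdict(int)   # incoming negative edges of v

    for u, v, sign in edges:
        if sign == 1:
            out_pos[u] += 1
            in_pos[v] += 1
        else:
            out_neg[u] += 1
            in_neg[v] += 1

    def features(u, v):
        # Feature vector for a candidate pair (u, v) whose sign we want to predict
        return [out_pos[u], out_neg[u], in_pos[v], in_neg[v]]

    print(features("a", "b"))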

Logistic Regression

In logistic regression, the dependent variable is dichotomous, and there are one or more independent variables that determine an outcome. The dependent variable in our case is the edge sign between u and v. Since the edge sign could either be 0


indicating a negative relationship or +1 indicating a positive relationship, we use the logistic model. The logistic function is

P(+ \mid x) = \frac{1}{1 + e^{-(b_0 + \sum_{i=1}^{n} b_i x_i)}}

where the feature vector is denoted by x and b_0, b_1, ..., b_n are the weights or coefficients which are determined by the classifier based on the training dataset. A graph of the standard logistic function is shown in Fig. 4.

Fig. 4 Standard logistic function

Splitting the Data into Training and Testing Data

We follow the 90:10 split to divide the dataset into training and testing datasets. The first 90% of the data records are used for training the classifier, and the remaining 10% are used for testing our model. Under the 90:10 split, 757,233 data entries from Epinions are used for training the model, and the remaining 84,137 are used to test our model. Similarly, 494,280 data entries from Slashdot are used for training the model, and 54,920 data entries are used for testing.

Fitting a Logistic Regression Model Using Training Data

To fit a linear model to predict a categorical outcome, we make use of generalized linear models (GLMs). GLMs are an extension of linear regression models with which we can allow the dependent variable to be non-normal. The function glm (generalized linear model) in R is used to fit the logistic regression to our training dataset. glm in R fits generalized linear models, specified by giving a symbolic description of the linear predictor and a description


of the error distribution. The feature sets defined earlier are passed as the first set of parameters. The training data and the glm family information are passed as the second and the third parameters, respectively. The parameter “family information” is specified as binomial because the results we are trying to predict are binomial and can take only two values, 0 for negative relationship and +1 for positive relationship between users.

Using the Fitted Model to Do Predictions for the Test Data

The results returned by glm are stored in a variable called “linkpredictor.” This is then used to predict the signs of the unknown links between nodes. The function “predict” in R is used to obtain predicted values based on linear model objects. We pass three parameters to this function, as follows:
• linkpredictor
• Testing data
• Type
The type of prediction is chosen as response.
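The chapter performs this fit-and-predict workflow with R's glm and predict functions. Purely as an illustration, the same workflow can be sketched in Python with scikit-learn; the toy arrays and the 90:10 split below are assumptions that mirror the description above, and the predicted probabilities play the role of predict(..., type = "response") in R.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # X: rows of [udoutpos, udoutneg, vdinpos, vdinneg], y: edge signs (0 or 1)
    X = np.array([[3, 0, 5, 1], [0, 4, 1, 6], [2, 1, 3, 0], [1, 5, 0, 7]])
    y = np.array([1, 0, 1, 0])

    split = int(0.9 * len(X))  # first 90% for training, remaining 10% for testing
    clf = LogisticRegression().fit(X[:split], y[:split])

    # Predicted probability of a positive edge for the held-out pairs
    print(clf.predict_proba(X[split:])[:, 1])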

Results and Discussion

The logistic regression classifier converges successfully under the given feature set for the training data. The standard error and z-statistic values for features 1 and 2 and features 3 and 4 are as follows:

                   Estimate     Standard error   Z-value   Pr(>|z|)
Intercept          −2.469888    0.378926         −6.518    7.12 × 10^−11
Feature 1 and 2     0.017103    0.001817          9.410
Feature 3 and 4     2.286318    0.328871          6.952


Fig. 2 Example for detection of cycles

Table 1 Information about the constructed network
                 Numbers of nodes   Numbers of edges
Total network    20,015             37,472

Table 2 Number of cycles
Algorithm    Running time (n > 5)   Numbers of cycles
With BFS     3:32.05                458

Figure 3 demonstrates an example of a detected group in the network of collisions, which might represent organized fraud by drivers involved in accidents; this could be investigated by experts. These cycles include different numbers of nodes. Since several types of group analysis can be performed within the social network analysis approach, such as community, cluster, connected components, and bi-connected components, we show how effective detection of cycles is compared to detection of communities for revealing organized groups of fraudsters. A community detection algorithm, leading eigenvector, has been used for this comparison. Table 3 includes the comparison results. According to the table, the number of detected cycles is far smaller than the number of communities found, while the likelihood of fraud is higher for the cycles; this difference means a smaller number of entities must be investigated, which reduces the time and cost of investigation.


Fig. 3 An example of detected fraud bands with six nodes

Table 3 Comparison of detected fraudulent groups’ numbers using various methods
Algorithm              Total number
Leading eigenvector    3644
Cycle detection        458

Summary and Conclusion

Fraud is rising rapidly with the growth of modern technology. Though prevention technologies are the best way to decrease fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. The insurance industry is concerned with the identification of fraudulent behavior. The number of automobile claims involving suspicious activity is high, and companies are interested in this subject. This paper proposed a new theoretical method for the detection of organized fraud in automobile insurance. Insurance companies can detect the presence of organized fraud rings using social network analysis and analyze the relationships between different entities. As cycles play key roles in network analysis, we concentrated on this structural characteristic and found cycles through a cycle detection algorithm using a BFS tree. The final results were compared to the results extracted from a community detection algorithm (leading eigenvector) in order to show the efficiency of cycle analysis for detecting organized groups of perpetrators in the network of collisions. Real data has been used for evaluating the system. Finally, we note that detection of cycles is cost- and time-effective for both companies and experts.


Improving Circular Layout Algorithm for Social Network Visualization Using Genetic Algorithm
Babak Teimourpour and Bahram Asgharpour

Introduction

Various aspects of social networks have been studied in various scientific fields. An important aspect of social network analysis is the visualization of the data related to communication in social networks. Graphs and related concepts are used to represent social networks [1]. Graph drawing is a graphical representation of the layout of nodes and edges on the screen. Graphs are used in visual analytics of big data networks such as social, biological, traffic, and security networks. Graph drawing has been intensively researched to enhance aesthetic features. It becomes trickier to analyze the data generated by social networks due to their complexity, which hides the underlying patterns [2]. The graph layout can have a great impact on better understanding and more efficient graph drawing [3]. An important factor in network visualization is the number of intersections among the edges, called the edge crossing number. The crossing number of a graph is the least possible number of edge crossings of a plane drawing of the graph. A crossing means a point that is not a vertex where edges meet. In Fig. 1, the red arrows show where one edge crossing occurs in both cases. The drawing of a graph is made so that no three edges cross at a single point. Fewer edge crossings improve the aesthetics and readability of the visualization. Computing the edge crossing number is an NP-complete problem, and hence there is not likely to be any efficient way to design an optimal embedding [4]. One layout algorithm of high importance, due to its aesthetics and readability, is the circular layout algorithm. In circular graph layout, the vertices of a graph are constrained to distinct positions along

B. Teimourpour () · B. Asgharpour Department of IT Engineering, School of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran e-mail: [email protected]; [email protected] © Springer Nature Switzerland AG 2018 M. Moshirpour et al. (eds.), Applications of Data Management and Analysis, Lecture Notes in Social Networks, https://doi.org/10.1007/978-3-319-95810-1_3


Fig. 1 Edge crossing representation

Fig. 2 A graph with arbitrary coordinates for the nodes and a circular drawing of the same graph as produced by an implementation of circular algorithm. Figure taken from [9, 10]

the perimeter of a circle, and an important objective is to minimize the number of edge crossings in such layouts; circular crossing minimization is NP-hard [5, 6]. Different versions of circular layouts have been offered by researchers for different purposes, such as commercial and academic uses [7, 8]. In all cases there is a fundamental problem related to minimizing the number of edge crossings. In this paper we try to reduce the number of edge crossings in a circular layout. Identifying the exact positions and the number of edge crossings is an important first step toward reducing them. After producing the initial layout, we use a genetic algorithm to reduce the crossings. Using the mutation and crossover operators, new locations are assigned to nodes, and nodes whose edges have the greatest impact on the number of crossings have a greater chance of displacement (Fig. 2).


Initial Circular Layout

The igraph software package provides handy tools for researchers in network science. It is an open-source portable library capable of handling huge graphs with millions of vertices and edges, and it is also suitable for grid computing. It contains routines for creating, manipulating, and visualizing networks [11]. Circular layout is one of the graph drawing layouts implemented in the python-igraph package. A circular drawing of a graph, as shown in Fig. 2, is a visualization of a graph with the following characteristics [12]:
• The graph is partitioned into clusters.
• The nodes of each cluster are placed onto the circumference of an embedding circle.
• Each edge is drawn as a straight line.
In this paper we used the igraph package to create the initial layout and improve it. Throughout this paper, we assume G = (V, E) is a simple undirected graph with n = |V| vertices and m = |E| edges.
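As a minimal sketch of this step (the graph size and variable names below are arbitrary choices, not the authors' exact code), a random graph can be generated and given an initial circular layout with python-igraph as follows:

    from igraph import Graph

    n, m = 100, 150                  # desired numbers of nodes and edges
    g = Graph.Erdos_Renyi(n=n, m=m)  # random simple undirected graph

    layout = g.layout_circle()       # initial circular layout
    coords = layout.coords           # (x, y) position of every vertex
    edges = g.get_edgelist()         # list of (u, v) vertex-index pairs

    print(coords[:3], edges[:3])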

Improvement

In this paper, we have used a genetic algorithm to reduce the edge crossing number. Genetic algorithms are a family of computational models inspired by evolution. These algorithms encode a potential solution to a specific problem on a simple chromosome-like data structure, apply recombination operators to these structures, and use Darwin’s natural selection principles to find the optimal formula for predicting or matching patterns. In artificial intelligence, a genetic algorithm is a programming technique that uses genetic evolution as a problem-solving model [13]. We produced random networks of given sizes. At each run, m is given as the number of edges and n as the number of nodes to produce the desired graph. The nodes of the generated graph are positioned using the circular layout algorithm described in section “Initial Circular Layout.”

Genetic Algorithm

Our genetic algorithm, defined in Fig. 3, requires the following definitions:
• Chromosome representation
• Collection of the first population
• Fitness function definition
• Selection operators
• Mutation and crossover


Fig. 3 Genetic algorithm flowchart

[Flowchart: Start → Initialization → Initial population → Selection → Crossover → Mutation → New population; the old population is replaced and the loop repeats until the stopping condition is met, then End]

Chromosomes: The positions of the two nodes of an edge, along with the number of crossings that edge causes, are stored in a list. Each of these lists is considered a chromosome. For example, in a list like [A, B, C], A and B are node positions produced by the circular layout algorithm, and C is the number of crossings this edge causes.

Initial population: A list of chromosomes is used to initialize the initial population. To test different graph sizes and get the results, parameters n and m are given multiple times as input to the igraph package to create a graph and get the initial circular layout as a list. This list gives the position of every vertex on the two-dimensional plane and is used as the chromosome list.

Evaluation: After each round of testing, we delete the two worst solutions and breed two new ones from the best matching solutions. To achieve this goal, we need to define the fitness function. The fitness function is the edge crossing number at each stage, and the goal is to minimize it. Given that there is no predetermined target for edge crossing minimization, we use a time limit to finish the genetic algorithm. In the following sections, we explain the function used to count the edge crossing number at each step.

Selection operator: In this research, the roulette wheel method has been used to select parents for applying the crossover and mutation operators. In this method, in the first step, a selection probability is attributed to each chromosome. The probability of each chromosome shows the chance of choosing that chromosome as a parent for transfer to a genetic pool. In the second step, using the roulette wheel and considering the probabilities of the chromosomes, two of them are selected for the application of the crossover and mutation operators [14].


Fig. 4 The roulette wheel, in order to select the proper chromosomes to apply the operators

In Fig. 4, the roulette wheel is considered for five chromosomes. Of these five chromosomes, two are selected and directed to the genetic operators to produce new chromosomes. The probability of winning for each parent in the roulette wheel is obtained through the relationship

P = (edge crossing(E) / all edge crossings) × 100.

In this equation, edge crossing(E) is the number of edge crossings due to the presence of the edge E, and all edge crossings is the total number of edge crossings in the applied circular layout. The selection of edges with fewer intersections is also possible in the roulette wheel, which leads to extensive changes and helps improve the output. The larger the P-value of a parent, the greater its chance of displacement, and the greater its impact on the unreadability of the graph drawing.

Crossover operator: Operators guide the algorithm toward a solution to a given problem and must work in conjunction with one another in order for the algorithm to be successful. Crossover is the process of taking more than one parent solution (chromosome) and producing a child solution from them. By recombining portions of good solutions, the genetic algorithm is more likely to create a better solution [15]. Various crossover techniques have been built to get the optimum solution as early as possible in a minimum number of generations. The choice of crossover operator has a large impact on the performance of the GA. In this research we used a single-point crossover method due to the chromosome structure. This crossover uses single-point fragmentation of the parents and then combines the parents at the crossover point to create the offspring or child (Fig. 5).


Fig. 5 Single point crossover method

After the crossover operation, the algorithm evaluates the newly generated chromosome and adds its edge crossing number to the list. Two offspring are created by combining the parents at the crossover point. A simple example is shown below, which performs one-point crossover on two parents:

Parent 1: [A, B] with coordinates (Ax, Ay), (Bx, By)
Parent 2: [C, D] with coordinates (Cx, Cy), (Dx, Dy)
Offspring 1: The coordinates of point A are exchanged with the coordinates of point C.
Offspring 2: The coordinates of point B are exchanged with the coordinates of point D.

In the above example, the (x, y) position of each vertex is exchanged with another vertex position according to the one-point crossover rule, and then the newly generated circular layout goes to edge crossing number evaluation.

Mutation operator: Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution [16]. Given our research hypothesis, we need to use a circular layout algorithm. Therefore, conventional mutation methods cannot be used to produce a new gene, as this would change and interrupt the circular shape of the layout. To generate the coordinates for our new gene, we use the circle formula (x^2 + y^2 = r^2) and proceed with it. Mutation occurs during evolution according to a definable mutation probability. This probability should be set low; if it is set too high, the search turns into a primitive random search. Therefore, the probability of selecting the mutation operator is considerably lower than that of the crossover operator.
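A minimal Python sketch of the three operators described above is given below. The weighting scheme, the unit radius, and the variable names are our own assumptions rather than the authors' implementation; chromosomes follow the [u, v, crossings] form introduced earlier, and coords maps each vertex to its (x, y) position on the embedding circle.

    import math
    import random

    def select_parents(chromosomes):
        # Roulette wheel: edges causing more crossings get a higher probability,
        # but edges with no crossings keep a small chance of being selected
        weights = [c[2] + 1 for c in chromosomes]
        return random.choices(chromosomes, weights=weights, k=2)

    def crossover(parent1, parent2, coords):
        # Single-point crossover: offspring 1 exchanges the coordinates of the
        # first endpoints, offspring 2 exchanges those of the second endpoints
        (a, b, _), (c, d, _) = parent1, parent2
        child1 = dict(coords)
        child1[a], child1[c] = coords[c], coords[a]
        child2 = dict(coords)
        child2[b], child2[d] = coords[d], coords[b]
        return child1, child2

    def mutate(coords, vertex, radius=1.0, probability=0.05):
        # Move a vertex to a new random position on the circle x^2 + y^2 = r^2,
        # so the layout stays circular; the mutation probability is kept low
        if random.random() < probability:
            angle = random.uniform(0, 2 * math.pi)
            coords[vertex] = (radius * math.cos(angle), radius * math.sin(angle))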

Edge Crossing Detection

To find the edge crossing number, we were inspired by the solution for finding the intersection of two line segments. The solution involves determining whether three points are listed in counterclockwise order.


Fig. 6 Different possible orientations of (a, b, c)

For three points, we have three different possibilities: the points are listed in the array in clockwise order, counterclockwise order, or collinear order (Fig. 6). Two segments (p1, q1) and (p2, q2) intersect if and only if both of the following conditions are verified:
• (p1, q1, p2) and (p1, q1, q2) have different orientations.
• (p2, q2, p1) and (p2, q2, q1) have different orientations.
These rules are illustrated in Fig. 7. Suppose that the first line consists of four integers, x1, y1, x2, y2, where p1 = (x1, y1) and q1 = (x2, y2) are the end points of the first line segment. The second line consists of four integers, x3, y3, x4, y4, where p2 = (x3, y3) and q2 = (x4, y4) are the end points of the second line segment. At first, three points (p1, q1, q2) are selected to be tested. Suppose that the first point is furthest to the left, so x1 < x2 and x1 < x4. Then the three points are in counterclockwise order if and only if the slope of the line p1q1 is less than the slope of the line p1q2. Therefore, they are in counterclockwise order if and only if (y2 − y1)/(x2 − x1) < (y4 − y1)/(x4 − x1). Since both denominators are positive, we can rewrite this inequality as (y4 − y1)(x2 − x1) > (y2 − y1)(x4 − x1). This final inequality is correct even if p1 is not the leftmost point. If the inequality is reversed, then the points are in clockwise order [17]. We can write this inequality as a Python function as below:

    def orientation(A, B, C):
        return (C.y - A.y) * (B.x - A.x) > (B.y - A.y) * (C.x - A.x)

In Fig. 7, the lines p1q1 and p2q2 intersect if and only if points p1 and q1 are separated by segment p2q2 and points p2 and q2 are separated by segment p1q1. If points p1 and q1 are separated by segment p2q2, then p1p2q2 and q1p2q2 should have opposite orientations, meaning either p1p2q2 or q1p2q2 is counterclockwise but not both. Therefore, calculating whether two line segments p1q1 and p2q2 intersect is again a Python function:

    def intersect(p1, q1, p2, q2):
        return orientation(p1, p2, q2) != orientation(q1, p2, q2) and orientation(p1, q1, p2) != orientation(p1, q1, q2)

Fig. 7 Two-segment intersection rules
Example 1: Orientations of (p1, q1, p2) and (p1, q1, q2) are different. Orientations of (p2, q2, p1) and (p2, q2, q1) are also different.
Example 2: Orientations of (p1, q1, p2) and (p1, q1, q2) are different. Orientations of (p2, q2, p1) and (p2, q2, q1) are also different.
Example 3: Orientations of (p1, q1, p2) and (p1, q1, q2) are different. Orientations of (p2, q2, p1) and (p2, q2, q1) are the same.
Example 4: Orientations of (p1, q1, p2) and (p1, q1, q2) are different. Orientations of (p2, q2, p1) and (p2, q2, q1) are the same.

If the intersect function returns true, there is an intersection and we count it.
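The fitness value used by the genetic algorithm is the total number of such intersections. A minimal sketch of this count, reusing the orientation and intersect functions above, is given below; the Point helper and the all-pairs loop are our own illustration under those assumptions, not necessarily the authors' exact implementation.

    from collections import namedtuple
    from itertools import combinations

    Point = namedtuple("Point", ["x", "y"])

    def count_crossings(coords, edges):
        # coords maps each vertex to a Point; edges is a list of (u, v) pairs
        total = 0
        for (a, b), (c, d) in combinations(edges, 2):
            if len({a, b, c, d}) < 4:
                continue  # edges sharing an endpoint are not counted as crossings
            if intersect(coords[a], coords[b], coords[c], coords[d]):
                total += 1
        return total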

Results

Since one of the goals of visualization and graph drawing algorithms is to reduce the number of edge crossings, the output of the circular layout algorithm already has fewer edge crossings than a random or accidental spread of the nodes. In this paper, a genetic algorithm has been used to reduce the number of edge crossings in the circular layout further. We tested this process on many graphs: for example, the reduction of the edge crossings in a graph with 100 nodes and 150 edges can be seen in Fig. 8, for a graph with 100 nodes and 200 edges in Fig. 9, for a graph with 100 nodes and 500 edges in Fig. 10, and for a graph with 100 nodes and 600 edges in Fig. 11. The number of edge crossings is reduced considerably after deploying the genetic algorithm on graphs drawn with the circular layout, but a closer look at the charts shows that, as the number of edges increases for a constant number of nodes, more time is needed to significantly reduce the edge crossing number.


Fig. 8 Plot of edge crossing number in a graph with 100 nodes and 150 edges based on runtime using genetic algorithm

Fig. 9 Plot of edge crossing number reduction in a graph with 100 nodes and 200 edges based on runtime using genetic algorithm

A comparison of the number of edge crossings in the circular layout algorithm with the number of edge crossings after the use of the genetic algorithm shows a considerable reduction (Fig. 12). The reported data were computed within a limited time; allocating more time to the genetic algorithm would lead to better figures and statistics.


Fig. 10 Plot of edge crossing number reduction in a graph with 100 nodes and 500 edges based on runtime using genetic algorithm

Fig. 11 Plot of edge crossing number reduction in a graph with 100 nodes and 600 edges based on runtime using genetic algorithm


Fig. 12 A comparison of the number of edge crossings before and after the use of the genetic algorithm in graphs with different sizes visualized with circular layout algorithm

Conclusion

In this paper, we have studied edge crossing reduction in social network visualization. We used a genetic algorithm to reduce the number of edge crossings in the circular layout, one of the standard graph drawing layouts; the output of the tests performed in this study indicates a drastic reduction in the number of edge crossings in the graphs visualized by the circular layout.

References

1. Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications (1st ed.). Cambridge, New York: Cambridge University Press.
2. Chonbodeechalermroong, A., & Hewett, R. (2017). Towards visualizing big data with large-scale edge constraint graph drawing. Big Data Research, 10(Supplement C), 21–32.


3. Herman, I., Melançon, G., & Marshall, M. S. (2000). Graph visualization and navigation in information visualization: A survey. IEEE Transactions on Visualization and Computer Graphics, 6(1), 24–43.
4. Garey, M., & Johnson, D. (1983). Crossing number is NP-complete. SIAM Journal on Algebraic Discrete Methods, 4(3), 312–316.
5. Masuda, S., Kashiwabara, T., Nakajima, K., & Fujisawa, K. On the NP-completeness of a computer network layout problem [Online]. Retrieved October 13, 2017, from https://www.researchgate.net/publication/247043450_On_the_NP-completeness_of_a_computer_network_layout_problem.
6. Bachmaier, C., Buchner, H., Forster, M., & Hong, S.-H. (2010). Crossing minimization in extended level drawings of graphs. Discrete Applied Mathematics, 158(3), 159–179.
7. Brandenburg, F. J. (1997). Graph clustering I: Cycles of cliques. In G. DiBattista (Ed.), Graph drawing. GD 1997. Lecture notes in computer science (Vol. 1353). Berlin: Springer.
8. Doğrusöz, U., Madden, B., & Madden, P. (1997). Circular layout in the Graph Layout toolkit. In S. North (Ed.), Graph drawing. GD 1996. Lecture notes in computer science (Vol. 1190). Berlin: Springer.
9. Six, J. M., & Tollis, I. G. (2006). A framework and algorithms for circular drawings of graphs. Journal of Discrete Algorithms, 4(1), 25–50.
10. Six, J. M., & Tollis, I. G. (1999). Circular drawings of biconnected graphs. In Selected papers from the International Workshop on Algorithm Engineering and Experimentation (pp. 57–73). London, UK.
11. Csardi, G., & Nepusz, T. (2006). The igraph software package for complex network research. InterJournal, Complex Systems, 1695, 1–9.
12. Tamassia, R. (Ed.). (2013). Handbook of graph drawing and visualization (1st ed.). New York: Chapman and Hall/CRC.
13. Kokosiński, Z., Kołodziej, M., & Kwarciany, M. (2004). Parallel genetic algorithm for graph coloring problem. In International Conference on Computational Science—ICCS 2004. LNCS 3036 (pp. 215–222).
14. Lipowski, A., & Lipowska, D. (2012). Roulette-wheel selection via stochastic acceptance. Physica A: Statistical Mechanics and its Applications, 391(6), 2193–2196.
15. Mitchell, M. (1998). An introduction to genetic algorithms. Cambridge, MA: MIT Press.
16. Ronco, C. C. D., & Benini, E. (2014). A simplex-crossover-based multi-objective evolutionary algorithm. In IAENG Transactions on Engineering Technologies (pp. 583–598). The Netherlands: Springer.
17. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms (3rd ed.). Cambridge, MA: The MIT Press.

Live Twitter Sentiment Analysis
Big Data Mining with Twitter’s Public Streaming API
Dayne Sorvisto, Patrick Cloutier, Kevin Magnusson, Taufik Al-Sarraj, Kostya Dyskin, and Giri Berenstein

Introduction

Sentiment analysis is the application of natural language processing, text analysis and computational linguistics to automate the classification of the emotional state of subjective text. It has uses in healthcare, business and epidemiology. It utilizes a variety of big data technologies and concepts including machine learning, natural language processing and cluster computing frameworks tuned for big data processing. Sentiment analysis is interesting and widely studied in both industry and academia. With the rise of social media and micro-blogging platforms like Twitter1, sentiment analysis has found many applications. An interesting application of sentiment analysis is in healthcare where epidemiologists have used sentiment analysis to predict and control the spread of disease [1]. Exploiting social networks for healthcare purposes has been called infodemiology [1]. Twitter in particular has been used to detect flu trends, examine disease transmission in social contexts and improve healthcare delivery [1]. At the present time, there are a number of sentiment analysis services on the market. Key players in this area include Microsoft, IBM, Hewlett Packard Enterprise, SAP and Oracle. Hewlett Packard offers machine learning as a service

1 https://twitter.com

D. Sorvisto () · P. Cloutier · K. Magnusson · T. Al-Sarraj · K. Dyskin · G. Berenstein Business Intelligence Team: Big Data and Analytics, Groundswell Group Inc., Calgary, AB, Canada e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected] © Springer Nature Switzerland AG 2018 M. Moshirpour et al. (eds.), Applications of Data Management and Analysis, Lecture Notes in Social Networks, https://doi.org/10.1007/978-3-319-95810-1_4


in the cloud with its Haven OnDemand—a set of black box APIs which includes sentiment analysis. The APIs also include image processing, face and optical character (handwriting) recognition. Each of these major players has a sentiment analysis offering, including SAP2 which offers an in-memory sentiment analysis module for their in-memory database HANA. This offering is part of their BusinessObjects 4.0 platform which is linked to their acquisition of Inxight Software, a natural language processing company. Oracle3 has a similar offering called Oracle Data Miner which allows you to build a “Sentiment Analysis Model” for Oracle 11.2g and 12c databases. The tool is part of Oracle’s SQL Developer integrated development environment and comes with generic classification algorithms for sentiment analysis including support vector machines. IBM4 has multiple products for text analytics and sentiment analysis including AlchemyAPI, Watson and Tone Analyzer, all of which have strict limits and privacy issues. Maluuba5, a Canadian start-up, uses artificial intelligence and natural language processing to understand human speech in its product VoiceMate. Maluuba was purchased by Microsoft in early 2017. Sentiment analysis comes in different forms: cloud services, open-source toolkits and general-purpose machine learning tools. Some of these tools are GUI based, such as Microsoft’s cloud-based MLStudio or RapidMiner, which enable the user to design custom workflows. IBM recently released its Tone Analyzer service, available on BlueMix. Tone Analyzer is similar to AlchemyAPI and is part of IBM Watson. The service uses linguistic analysis to detect different types of tones and route call centre customers based on their tone. The service processes text data and feeds it to an ensemble machine learning algorithm and outputs an emotion graph, predicted language styles and social tendencies. According to IBM, Tone Analyzer is able to predict customer satisfaction with 67% accuracy [2]. This might seem low, but considering 20% of human readers will disagree on the sentiment of sentences [2], 80% is considered an ideal target. It is important to note that none of these out-of-the-box solutions provide plug-and-play levels of service. Each requires tuning, set-up and extensive training. Additionally, if we want to use these tools on Twitter data, the model should be trained on Twitter data. One of the motivations for this project was the observation that these solutions are too restricted to provide a versatile and robust solution for data-mining live Twitter streams. Another potential weakness of current solutions is the inability to handle customer data privately as many of these solutions are only available in the public

2 https://www.sap.com/index.html
3 https://www.oracle.com/index.html
4 https://www.ibm.com/ca-en
5 http://www.maluuba.com


cloud—this is a contentious issue in big data. Customer privacy is especially a concern for sentiment analysis applications that use customer data because it is not always possible to anonymize data prior to sending it to a web service in the public cloud without affecting the overall accuracy of the results. Our market research led us to consider other solutions that are robust and extensible. For example, it would be nice to have an on-premises solution where we could control the code and choose how data is stored and processed. Furthermore, since many of the solutions we evaluated only reported accuracy, it would be ideal to be able to track key metrics such as precision, recall and false-positive rate to get a more accurate picture of performance. In our paper, we focus on building an on-premises solution capable of handling live data streams. Our paper will be divided into the following sections: Related Work, Method, Results and Conclusion. Along these lines, we give a brief survey of the current research in the area of Twitter sentiment analysis and point out some of the areas we improve on including testing our method on live Twitter data.

Related Work

Computing the sentiment of Tweets using ontology-based methods has been recently studied [3–5]. The technique uses predefined relationships between topics to distinguish between sentences that could be positive or negative. The idea is to extend the machine learning approach and assign a grade to each distinct notion in a Tweet [6]. This is very important for opinion mining but not necessary for a robust sentiment analyser that is competitive with top solutions. Other authors [3–5] have devised similar methods to the ones in this paper to build sentiment analysers using the machine learning approach. The authors of “Semantic Sentiment Analysis of Twitter” have tested their algorithm against the AlchemyAPI and a few other open-source solutions such as Open Calais6 (a Reuters product), a web service that identifies entities in unstructured data. The weaknesses in this approach are twofold:
1. The algorithm is compared against static data sets such as the Stanford Twitter Sentiment Corpus (STS), the Health Care Reform data set and the Obama-McCain debate data set. This introduces some bias into the method as these sets are well studied.
2. Since the time that paper was written, the number of solutions available to do sentiment analysis on Twitter data has increased drastically, so a new survey of available services is required. In addition, the AlchemyAPI benchmarked in [3] has been improved significantly after its acquisition by IBM in March 2015.

6 http://www.opencalais.com/


We propose to solve these two issues by testing our algorithm against live data and by using current solutions in 2017, which in addition to the AlchemyAPI include Python’s TextBlob library and several other players, both open source and proprietary; this is novel given the state of the current research.

Method

Data Ingestion

Data ingestion is gathering as much data as we could from Twitter. The Twitter firehose provides the highest throughput but also has a cost associated with it, with reduced rates for academic use. As a low-cost option, Twitter provides a free public streaming API that gives access to 1–10% of tweets matching the provided filters. Twitter has made significant efforts in recent years to limit the amount and type of data developers can access. A huge challenge in big data mining is storing the data after it is streamed. For this we created an external Hive table and stored the data in the Hadoop file system, including username, status, date, longitude, latitude, location and screen name.
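A minimal sketch of this ingestion step is shown below. It assumes the older (pre-v4) tweepy Stream/StreamListener interface, placeholder credentials, and a hypothetical tracked hashtag; for simplicity it appends JSON lines to a local file rather than writing into the external Hive table on HDFS used in the actual pipeline.

    import json
    import tweepy

    class TweetCollector(tweepy.StreamListener):
        def on_status(self, status):
            # Keep only the fields stored in the Hive table
            record = {
                "username": status.user.name,
                "status": status.text,
                "date": str(status.created_at),
                "coordinates": status.coordinates,  # longitude/latitude when present
                "location": status.user.location,
                "screen_name": status.user.screen_name,
            }
            with open("tweets.jsonl", "a") as out:
                out.write(json.dumps(record) + "\n")

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    stream = tweepy.Stream(auth=auth, listener=TweetCollector())
    stream.filter(track=["#example"])  # public streaming API with a keyword filter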

Data Preparation and Analysis

The most important aspect of the data preparation step was to remove all unnecessary characters from our data. Twitter uses UTF-8, but emoticon characters use the Supplementary Multilingual Plane. This is important for properly storing the data. Error distance can be used to deal with the problem of misspelled words in tweets. Corrections can be applied to each tweet before the feature extraction. This can improve the accuracy of the algorithm. It is possible to collect the most common words occurring in a specific hashtag. Polarity scores for these words could then be computed. Another issue that has to be considered is that training data will likely be biased towards positive, neutral or negative tweets depending on the hashtag. This can be solved in one of two ways: undersampling positive tweets or using a weighted support vector machine. Since the latter is not currently available with Spark ML, we created a more balanced training data set by using two linear classifiers: one for objective/neutral tweets (from TextBlob) and the other for positive and negative tweets, and then undersampling from the set of positive tweets. It is possible to sample more tweets (negative or positive) by filtering on similar hashtags that contain the opposite sentiment to the original hashtag but address the same topic.
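As an illustration of the cleaning step (the exact character classes removed by the authors are not specified, so the regular expressions below are assumptions), a simple Python cleaner might strip URLs and user mentions while keeping emoticons and ensuring the text remains valid UTF-8 for storage:

    import re

    def clean_tweet(text):
        # Remove URLs and user mentions, which carry little sentiment
        text = re.sub(r"https?://\S+", "", text)
        text = re.sub(r"@\w+", "", text)
        # Emoticons live in the Supplementary Multilingual Plane; keep them,
        # but make sure the text is valid UTF-8 before storing it
        text = text.encode("utf-8", errors="ignore").decode("utf-8")
        return re.sub(r"\s+", " ", text).strip()

    print(clean_tweet("Loving this 😀 https://t.co/abc @someone"))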


Corpus Building (Bootstrapping)

We took an iterative approach, beginning with the words from the MPQA lexicon; we trained our linear classifier using only English words, with no slang. As expected, the initial results were not very accurate, only slightly better than random chance. After several iterations, however, we were able to achieve 60% accuracy, which formed the basis for our bootstrapped data. We chose a simple rule to decide polarity: the polarity of a token was the number of positive tweets the token occurred in divided by the total number of tweets it occurred in. With further research, we could use our algorithm to extend the MPQA subjectivity lexicon significantly for use with Twitter.
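A small sketch of this bootstrapping rule, assuming tweets have already been tokenized and labeled by the initial classifier (the data layout is hypothetical):

```python
from collections import Counter

def token_polarities(labeled_tweets):
    """labeled_tweets: iterable of (tokens, label) pairs with label in {'pos', 'neg'}.
    Polarity of a token = (# positive tweets containing it) / (# tweets containing it)."""
    pos_counts, total_counts = Counter(), Counter()
    for tokens, label in labeled_tweets:
        for tok in set(tokens):            # count each token once per tweet
            total_counts[tok] += 1
            if label == "pos":
                pos_counts[tok] += 1
    return {tok: pos_counts[tok] / total_counts[tok] for tok in total_counts}
```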

Model Building

Feature engineering is the process of encoding domain knowledge in a form that machine learning algorithms can process. In our case, we used the previous work available in the MPQA lexicon, made available under the GNU General Public License. One of the challenges we initially faced was how to handle slang words and misspellings commonly found in Tweets; this is a weakness of many sentiment analysis tools. We wanted our algorithm to be smart enough to recognize the overall context of the tweet, and we decided the best way to achieve this was by bootstrapping our own subjectivity lexicon for Twitter. Our feature extraction step was accelerated by the use of our tuning dashboard. This enabled an exploratory process, as we could visually see how a change in the algorithm affected the type I and type II errors7 and could quickly drill down into the data for more details. Features included the number of smiley faces in the tweet, the number of sad faces, the total sum of polarities of all words in the tweet and the numbers of positive and negative words individually. Interestingly, the number of positive words had more of an effect on accuracy than the number of negative words. Bi-grams (pairs of tokens) and parts-of-speech tagging could be used to account for the effect of surrounding tokens in a tweet, increasing the likelihood that the algorithm will learn the context of the whole sentence instead of individual tokens8. It is vital to normalize the features by the length of the tweet, because a long tweet generally contains more positive or negative words. Many state-of-the-art sentiment analysis algorithms use a continuous "bag-of-words" model in combination with manual feature extraction and a dimensionality reduction step. (The aggregate features used here are sketched in code after the footnotes below.)

7 Type I error: the incorrect rejection of a true null hypothesis (false positive). Type II error: incorrectly accepting a false null hypothesis (false negative).
8 We did not have to do low-level natural language processing like parts-of-speech tagging, as TextBlob's classifier does this already.
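A sketch of the length-normalized aggregate features described above, under the assumption that a token-level polarity dictionary (such as the bootstrapped one) maps each word to a score in [0, 1]; the emoticon lists and the 0.5 positive/negative threshold are illustrative choices.

```python
def extract_features(tokens, polarity,
                     smiles=(":)", ":-)", ":D"), frowns=(":(", ":-(")):
    """Build length-normalized aggregate features from a tokenized tweet."""
    n = max(len(tokens), 1)
    scores = [polarity[t] for t in tokens if t in polarity]
    return [
        sum(1 for t in tokens if t in smiles) / n,        # smiley faces
        sum(1 for t in tokens if t in frowns) / n,        # sad faces
        sum(scores) / n,                                  # total polarity of all words
        sum(1 for s in scores if s > 0.5) / n,            # positive-word count
        sum(1 for s in scores if s < 0.5) / n,            # negative-word count
    ]
```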


We experimented with Word2Vec, a tool originally developed at Google that ships with Spark ML and computes vector representations of dictionary words. We did not find any improvement to the algorithm, but the memory footprint increased dramatically. One reason for using such a model is to increase the number of dimensions, which makes it easier for the linear classifier to separate the data; this is desirable because in the higher-dimensional space the tweet data becomes closer to linearly separable. On the other hand, this increases the likelihood of overfitting, the so-called curse of dimensionality. Our solution was to use a smaller number of aggregate features, which we found improved the accuracy of our algorithm; further research into this area is required. In terms of design, the code that performs the feature extraction step can be written separately from the streaming part of the code in Python. This is possible because of how Spark ships code to tasks on the DataNodes, via closures, and it is a big advantage of using Spark: we were able to keep the feature extraction logic separate from the streaming and model logic in our code.
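For reference, the Spark ML Word2Vec experiment looked roughly like the following sketch; the DataFrame name, vector size and minimum count are assumptions for illustration.

```python
from pyspark.ml.feature import Word2Vec

# `tokenized` is an assumed DataFrame with a column "tokens" holding tokenized tweets.
w2v = Word2Vec(vectorSize=100, minCount=2, inputCol="tokens", outputCol="w2v")
model = w2v.fit(tokenized)
vectors = model.transform(tokenized)   # adds a dense "w2v" column per tweet
```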

The Hashtag Problem

Hashtags are an important consideration in any Twitter-based sentiment analysis algorithm. Hashtags can be composed of one word or of several concatenated words. A one-word hashtag makes it easy to extract the root word and use it as a filter; however, #BIDMACalgary is a hashtag built from two words, which may not be found in the English dictionary and may skew search results. Our solution was to use a Python script to create a list of hashtags. One ramification of multi-word hashtags is a reduction in the quality of sentiment analysis of Twitter data: tweets are limited to 140 characters, so people often use abbreviations or concatenate words to stay under the character limit.
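One simple heuristic, splitting on capitalization, is sketched below; it is an illustrative assumption rather than the exact script we used, and it will not recover hashtags written entirely in lower case.

```python
import re

def split_hashtag(tag):
    """Split a CamelCase hashtag such as '#BIDMACalgary' into candidate words."""
    tag = tag.lstrip("#")
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]+|[a-z]+|\d+", tag)

print(split_hashtag("#BIDMACalgary"))   # ['BIDMA', 'Calgary'] -- heuristic, not perfect
```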

Testing, Tuning and Security

Testing and tuning are integral parts of machine learning. We evaluated a number of machine learning models, including naïve Bayes and support vector machines, using standard k-fold cross-validation in Spark before selecting a support vector machine. Testing security policies is another important and often overlooked aspect of big data applications. In our case we were not using real customer data, but in a production use case an administrator would use data governance and security tools to impose access control policies on data written to Hadoop. We filtered the Twitter streaming data on specific keywords related to a retail client, used either as a hashtag or in the tweet text itself. Our results are displayed in Figs. 1 and 2.


Fig. 1 Number of Tweets for positive, negative and neutral (blank) for hashtag #Retail_name

Fig. 2 Word cloud for hashtag #Retail_name—used in targeted sentiment analysis

Future Work and Observations

In computational linguistics, there are four types of grammars: regular, context-free, context-sensitive and contextual. Tweets are likely context-sensitive, as they are most similar to natural language, but there are differences. We propose a model for targeted sentiment analysis of tweets based on the observation that concatenated words are common in tweets and that many valid tweets can be "pumped" by repeating hashtags and keywords (up to the character limit). Furthermore, tweets differ from most written text in that some words on Twitter do not appear in a dictionary or are concatenations of multiple dictionary words; thus a different approach to sentiment analysis may be valuable. We describe a theoretical model as follows. Let U be the set of all Unicode characters that make sense for English speakers. The language U* generated by the alphabet U contains, as a strict subset, the set


of valid Tweets, say T. We seek an approximation of T that is computable. Since we are only interested in a finite list of hashtags (keywords), say h1, h2, ..., hn, we define the language T ⊆ U* generated by these hashtags with the concatenation operator, that is, T = ⟨h1, h2, ..., hn⟩ ⊆ U*, where ⊆ is the set-theoretic subset operator. The outcome is also a context-free language, as the product of context-free languages is itself context-free, and it can therefore be accepted as input by some pushdown automaton. In the case of sentiment analysis, T can be further refined by defining a grammar G consisting of a finite set of relations on strings based on stemming, synonyms and domain knowledge. For example, one trivial rule could be A → B, where A is a misspelling of an English word and B is the proper spelling. This could provide a foundation for studying sentiment analysis of Twitter in terms of formal languages and learning automata, in contrast with the usual approach of treating Twitter sentiment analysis as a subset of natural language sentiment analysis.

The first step in building our solution was translating the tweets we streamed into a numerical representation that the support vector machines could process. Deciding which features of the data are important (e.g. length of tweet, number of "positive" words, etc.) is called feature extraction; it also reduces the dimensionality of the data we want to analyse, making it more efficient to process than a text file full of tweets. We used two approaches to improve the accuracy of our algorithm. The first was collecting as much data as possible from Twitter so we could better train our model; we noticed a performance increase from removing @ symbols, brackets and URLs appearing in the tweet. The second was modifying how we represent the data to the machine learning algorithm (the "feature extraction" step mentioned previously). All tweets were in English. There were five principal features that correlated with the sentiment of tweets, and bigrams (pairs of words that occur beside each other in the tweet) are also important for high accuracy. Features included the count of so-called "positive" tokens in the tweet based on a precomputed database of scored words (each word is rated between 0 and 1, where 1 means very positive and 0 very negative). It is also critical that the same feature extraction process be applied in exactly the same way to both the training data sets and the live data, so that the algorithm can learn the features and then apply what it has learned [7]. Determining which words in a tweet are positive depends on context. For example, if a customer tweets "I just bought a pair of basketball shoes from #RetailStore wicked deal", then clearly the customer is expressing a positive sentiment, but the slang word "wicked" by itself would be interpreted as negative and would skew the outcome of the algorithm. Other features we chose to represent our tweets include the last "positive" token in the tweet, the largest positive polarity of a substring occurring in a hashtag and the number of negative words, as well as more sophisticated measures like the number of smiley faces in the tweet.

The importance of having a neutral class cannot be overestimated. Although this changes the classifier from binary to multiclass, only binary support vector machines are supported in Spark ML at this time. One weakness of the


Spark support vector machine implementation is the small number of configuration parameters it provides compared to other tools like LIBSVM, a library for support vector machines. Our solution to this problem was to use a second classifier to filter neutral tweets; these are fact-based tweets that occur with very high frequency in live Twitter feeds. Using this second classifier to filter out neutral tweets and then applying our first classifier to predict positive or negative sentiment was key to our high accuracy. Initial results with a one-vs.-one approach using three support vector machines gave an average of 55% agreement with TextBlob, motivating our decision to use TextBlob's objectivity classifier for neutral tweets. We also recommend using the L1 norm, as it is used in many production use cases including IBM sentiment analysis technologies [2]. By default Spark uses the L2 norm, but the L1 norm is associated with sparser models, which can reduce the chance of overfitting [2]. As previously mentioned, Spark's machine learning libraries, Spark ML and the older Spark MLlib, provide few configuration options, such as the ability to change kernels. Spark's support vector machine implementation nevertheless fits our use case, offering a binary linear kernel with hinge loss. For real-world data it is also important to use a soft-margin SVM, otherwise outliers can have greater influence on the decision boundary, which would be detrimental to sentiment analysis. In Spark, this is configured using a low value for the regularization option referred to in the Spark documentation [8]. We used a 4-node cluster: 2 NameNodes (one active and one standby) and 2 DataNodes with 8 gigabytes of RAM per node. We did not use any special Hadoop services, other than having to manually install the NLTK package (a dependency of TextBlob) on each server in the cluster, since it is too big to serialize. We used approximately 100,000 tweets for training. We also recommend separate Spark Streaming receivers for each hashtag and a window above 500 ms. The dependencies used include Hortonworks Data Platform 2.4, NLTK installed on each node (required for TextBlob), Spark running in cluster mode with 2 NameNodes and 2 DataNodes, and Apache Ranger configured through Ambari with appropriate data access and authentication rules in place.
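A minimal sketch of training such a soft-margin linear SVM with Spark MLlib is shown below; the RDD `training` of (label, feature-vector) pairs is assumed to come from the feature extraction step, and the specific regularization value is illustrative.

```python
from pyspark.mllib.classification import SVMWithSGD
from pyspark.mllib.regression import LabeledPoint

# `training` is an assumed RDD of (label, [features]) pairs.
points = training.map(lambda lf: LabeledPoint(lf[0], lf[1]))

# regType="l1" selects the sparser L1 penalty; a low regParam keeps the margin soft.
model = SVMWithSGD.train(points, iterations=100, regParam=0.01, regType="l1")
```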

Results

Sentiment analysis, as it applies to social media, consists of several steps: web scraping (data ingestion), data preparation, model selection, model building and deployment. We benchmarked our results against Python's TextBlob sentiment

Table 1 Results are relative to TextBlob

Algorithm           | Average F1 score (%)
Our results         | 91
Open University [3] | 75.95
NSERC [7]           | 69.02

analysis API and calculated precision9, recall, F110 and the false-positive rate. We achieved an F1 score of 0.91, based on a precision of 93% and a recall of 89% over our entire data set. The resulting false-positive rate was 35%, which could be significantly reduced with further research. Our overall agreement rate with TextBlob was 86% for negative tweets, 77.91% for positive tweets and 96% for neutral tweets, with noticeable improvements when drilling down into our data. A comparison of our algorithm with previous results is given in Table 1, although a direct comparison is not possible, since those studies did not test on live Twitter data and our sentiment analysis training set was less general and more targeted to retail, which may explain the discrepancy. For example, [3], which employed the more general ontology method, precomputed over 15,000 entities and 29 concepts, whereas we used targeted sentiment analysis11 (Table 1).

An actual example of an ambiguous tweet is the sentence "Sold 400,000 pairs nmds single day". This tweet contains no hashtags, a misspelling and very little context. TextBlob labeled the sentence negative, whereas our algorithm labeled it positive.

One of the challenges we faced was finding labeled Twitter data. Although the Multi-Perspective Question Answering (MPQA) Subjectivity Lexicon provides a starting point for the feature extraction process, we decided to bootstrap our own labeled data specifically for Twitter; in other words, our methods are considered semi-supervised machine learning. The first step was to create a Spark Streaming instance. We used the Twitter firehose by registering our app on the Twitter developer portal. The steps (which were implemented as software modules) are as follows:

Step 1: Data collection
Step 2: Data cleansing
Step 3: Feature extraction
Step 4: Training/tuning
Step 5: Sentiment generation

PySpark provides first-class support for Python in Spark. Many of the most powerful tools for doing natural language processing, including gensim, spaCy and NLTK (the natural language toolkit), are written in Python.

9 The formula for precision is tp/(tp + fp), where tp is the number of true positives and fp is the number of false positives.
10 Statistics used in binary classification tasks.
11 Results were based on both SMS messages and Tweets.
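The five steps above can be strung together in a small Spark Streaming driver. The following is only a sketch: it assumes a separate collector process forwards tweet text to a local TCP socket (the Python API has no built-in Twitter receiver), reuses the hypothetical clean_tweet, extract_features and polarity objects sketched earlier, and stands in a trivial decision rule for the trained classifier.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def classify(features):
    # Placeholder rule; in the full system this is the trained SVM's predict().
    return "positive" if features[2] > 0.5 else "negative"

sc = SparkContext(appName="LiveTwitterSentiment")
ssc = StreamingContext(sc, batchDuration=1)                   # Step 1: data collection

tweets = ssc.socketTextStream("localhost", 9009)              # assumed collector socket
cleaned = tweets.map(clean_tweet)                             # Step 2: data cleansing
features = cleaned.map(lambda t: extract_features(t.split(), polarity))  # Step 3: feature extraction
sentiments = features.map(classify)                           # Steps 4-5: trained model scores each tweet
sentiments.pprint()

ssc.start()
ssc.awaitTermination()
```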


Fig. 3 Interactive confusion matrix dashboard

In order to control the false-positive rate, we created our own data tuning dashboard in Tableau (refer to Fig. 3) based on a confusion matrix with drill-down capabilities. This allowed us to visually compare our algorithm against TextBlob; we could then drill down into the data and inspect discrepancies between TextBlob and our algorithm. It is important to note that although we used TextBlob to bootstrap labeled data, our final implementation and results are modifications of TextBlob. The performance of our algorithm was on par with top solutions on the market, with improved performance on Twitter data. Our algorithm agreed with TextBlob 86% of the time on predicting negative sentiment of tweets and 96% of the time on neutral tweets, as expected, since we used TextBlob's own objectivity classifier. We noticed an improvement on false-positive tweets (tweets that appear positive but are actually expressing negative sentiment). Spelling correction may also play a part, as many tweets contain spelling errors. This is significant because many incorrect classifications result from slang or misspelled words; adding edit distance to our algorithm could minimize this effect and lower our baseline false-positive rate even further. We also compared our in-house algorithm with AlchemyAPI, a sentiment analysis technology purchased by IBM. AlchemyAPI can perform a wide variety of natural language processing tasks, including targeted sentiment analysis for situations where we are interested in the sentiment of a tweet with respect to a specific entity or dimension, which may not necessarily be the same as the sentiment of the tweet as a whole, especially for tweets that express mixed sentiment.


Table 2 Evaluation of various sentiment analysers by feature

Feature         | PySpark                                    | TextBlob     | IBM Watson
Speed           | Fast                                       | Slow         | Fast
Accuracy        | Accurate                                   | Accurate     | Accurate
Privacy options | Customizable                               | Customizable | SaaS with SSL optional
Real time       | Yes, if used with Spark Streaming instance | No           | Yes, if used with Spark Streaming API
Scalability     | High                                       | Low          | High
Pricing         | Free                                       | Free         | Cloud pricing
Customizability | High                                       | Low          | Low

Additionally, we evaluated several open-source and proprietary solutions with mixed results. Rosette and Sentiment140 are Twitter sentiment analysis tools that are free for academic use. A similar offering from Stanford, the NLP Sentiment Project, utilizes deep learning and a sentiment treebank that can be edited and improved by users online. The creators of the project claim 85.4% accuracy for single-sentence positive/negative classification tasks [2], exceeding by more than 5 percentage points the assumption that humans disagree around 20% of the time; that disagreement rate is likely the practical limit for sentiment analysis accuracy, for the reasons stated previously. A comparative analysis is given in Table 2.

Conclusion

In this work we built our own solution using open-source tools. Our methods demonstrate how to design and implement a near real-time sentiment analyser based on Twitter data from the ground up. We applied a variety of big data techniques, including bootstrapping labeled data, machine learning and visualization technology, to sentiment analysis, and showed how to tune a sentiment analysis algorithm on live Twitter data. We demonstrated that near real-time sentiment analysis is feasible with custom open-source tools based on Spark, with performance comparable to state-of-the-art targeted sentiment analysis solutions on live data. One weakness in current solutions is the inability to measure key performance metrics, such as the false-positive rate, in near real time, which is important in real-world applications. Another weakness in the current research is that most methods are tested on static data such as the Stanford Twitter Sentiment Corpus. In this paper we tested our methods against live Twitter data and compared our results against both proprietary and non-proprietary solutions that are current as of 2017.


Our solution achieved 86.5% agreement with TextBlob over all classes of tweets (negative, positive and neutral), and we estimated a false-positive rate of 35% with an F1 score of 0.91. Our research adapts TextBlob to live Twitter data and provides a basis for further research in sentiment analysis of social media data.

References

1. Carchiolo, V., Longheu, A., & Malgeri, M. (2013). Using Twitter data and sentiment analysis to study diseases dynamics. In Proceedings of the 19th International Conference on World Wide Web (pp. 17–19). Springer.
2. Gundecha, P. (2016). Watson has more accurate emotion detection. Bluemix Blog.
3. Saif, H., He, H., & Alani, H. (2012). Semantic sentiment analysis of Twitter. Knowledge Media Institute.
4. Hu, X., Tang, L., Tang, J., & Liu, H. (2008). Exploiting social relations for sentiment analysis in microblogging. Tempe, AZ: Arizona State University.
5. Kumar, A., & Sebastian, A. (2016). Sentiment analysis on Twitter. Delhi Technological University.
6. Kontopoulos, E., Berberidis, E., Dergiades, E., & Bassiliades, E. (2013). Expert systems with applications. School of Science and Technology, International Hellenic University.
7. Mohammad, S. M., Kiritchenko, S., & Zhu, X. (2013). NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evaluation (pp. 321–327).
8. Apache Spark Documentation. (2016). Linear methods—RDD-based API. Apache Project, version 2.10.

Artificial Neural Network Modeling and Forecasting of Oil Reservoir Performance Ehsan Amirian, Eugene Fedutenko, Chaodong Yang, Zhangxin Chen, and Long Nghiem

Abbreviations

ACI            Artificial and computational intelligence
ANN            Artificial neural network
BO             Black oil
GlobalHmError  Global history matching error (%)
HL             Hidden layer
HM             History matching
LHD            Latin hypercube design
LM             Levenberg-Marquardt
ML             Multilayer
NN             Neural network
NPV            Net present value
OF             Objective function
OL             Output layer
RBF            Radial basis function
SL             Single layer
SAGD           Steam-assisted gravity drainage
UA             Uncertainty analysis

E. Amirian · Z. Chen
Department of Chemical and Petroleum Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
e-mail: [email protected]; [email protected]

E. Fedutenko · C. Yang · L. Nghiem
Computer Modelling Group Ltd., Calgary, AB, Canada
e-mail: [email protected]; [email protected]; [email protected]

© Springer Nature Switzerland AG 2018
M. Moshirpour et al. (eds.), Applications of Data Management and Analysis, Lecture Notes in Social Networks, https://doi.org/10.1007/978-3-319-95810-1_5


Introduction

Modeling of Big Data Based on Artificial and Computational Intelligence

Data-driven proxy modeling comprises big data analytics involving a comprehensive analysis of all available data associated with the underlying system or process. Data is the foundation and building block of this modeling technique. Data-driven proxy modeling employs machine learning methods to construct models that forecast system behavior in the presence of new data. The most well-known concepts integrated in data-driven proxy modeling include artificial and computational intelligence tools, data-mining and statistical methods, and fuzzy logic and fuzzy set theories. These theories are integrated into the theme of data-driven proxy modeling to complement or replace physically based models or processes. This modeling technique assumes that the underlying process has created a database of observed practical cases as well as domain expert experience and knowledge [1], as illustrated in Fig. 1. The mission of data-driven modeling is to exploit these various information sources in order to generate a representative model of the underlying process. If the resulting data-driven model is a good approximation of the primary process, then it can be employed as a data-driven proxy model to address uncertainty-based development scenarios regarding the underlying process. A major challenge in data-driven modeling is the continuous integration of massive amounts of available information into systems in real time. This task is normally performed by artificial and computational intelligence (ACI).

Before describing the ACI workflow, we give a general introduction to the concept of big data. For a better understanding of the big data concept, five Vs are typically employed to outline the theme of big data analytics. These five Vs, illustrated in Fig. 2, denote Volume, Variety, Velocity, Variability, and Value. Volume refers to the massive amount of data generated every second during the underlying process; for most big data applications, the volume of collected data is on the order of zettabytes or brontobytes.


Fig. 1 Illustration of data-driven modeling approach [1]


Fig. 2 Five Vs of big data

The Variety notion, also known as complexity, refers to the various types or structures of data sets that can be employed in data-driven analytics. An example of the Variety concept is the collection of messages, conversations within social media, photos, videos and other recordings gathered together for a specific targeted application. A vital step in big data analytics is that, when we need to extract knowledge, all these different data categories have to be linked together. Velocity defines the speed of data generation and data circulation within a specified segment. For big data applications, data needs to be generated fast and has to be processed fast as well; the drawback of low-speed analytics is that one may miss upcoming opportunities. Variability is different from the Variety concept and is defined as a factor that represents inconsistency within the data. This inconsistency usually refers to discrepancies in recorded data at different times. Assume that you know a restaurant that serves ten different dishes, which is Variety; Variability is the situation in which you go to that restaurant every day for a week and buy the same dish, but each day the taste and the smell are different. The last but not least V is Value, which indicates that big data needs to add value to the underlying process. This includes the value of the collected big data as well as the value of the analytics associated with it. It is crucial to make sure that the generated insights are based on accurate data and lead to scenarios that add value to the specified process. All five Vs will be exemplified in the "Workflow and Design of Proxy Modeling" section, with a focus on oil and gas industry problems, to show how our research fits into a big data application framework.

ACI is an umbrella term for computational techniques that are inspired by nature. These techniques are used for learning, control, optimization, and forecasting throughout the underlying process. ACI subsets include evolutionary algorithms, swarm intelligence, fuzzy logic, and artificial neural networks.


Fig. 3 General flow chart for ACI-based training and calibration

An artificial neural network (ANN) is a virtual intelligence method used to identify or approximate a complex nonlinear relationship between input and target variables. In computer science and related fields, artificial neural networks are biologically inspired computational models based on the human central nervous system (in particular, the brain) that are capable of machine learning and pattern recognition. Recently, researchers have made enormous contributions to neural networks and developed novel techniques such as self-organizing maps and associative memories [2]. Mehrotra et al. [2], Poulton [3], and Suh [4] provide detailed information on the history, background, and improvements of ANNs. Training of any neural network is implemented by a learning algorithm and a comprehensive training data set. In practice, learning algorithms are grouped into two categories: unsupervised learning and supervised learning. In unsupervised learning, the idea is to find intrinsic hidden structures in unlabeled data. Supervised learning is applied where the target values of objective functions are provided. A training data set is fed as an input vector into this method so that the rules related to the desired objective function are generated. This is done by adjusting the network's unknown parameters, namely its weights and biases. The network attributes, the weights and biases, are then employed to process the input parameter space of a test data set. The weights are adjusted based on the error between the network forecasts and the observed values of the desired objective function, and the unknown parameters are modified until the forecast error meets the demand. This is an iterative process that continues until the desired stopping criterion is reached. Figure 3 shows the general workflow of an ACI-based model, which is a general term for an ANN or other virtual intelligence techniques.
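As a generic illustration of the adjust-and-calibrate loop in Fig. 3 (not the CMOST implementation), the following sketch trains a simple linear model by gradient descent on synthetic data, stopping once the forecast mismatch meets a chosen tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                  # input parameter space
y = X @ np.array([1.5, -2.0, 0.7]) + 0.3        # actual target values

w, b, lr = np.zeros(3), 0.0, 0.1
for epoch in range(5000):
    forecast = X @ w + b
    mismatch = forecast - y                     # forecast mismatch
    if np.mean(mismatch ** 2) < 1e-6:           # does the mismatch meet demand?
        break
    w -= lr * X.T @ mismatch / len(X)           # adjust and calibrate the weights
    b -= lr * mismatch.mean()
```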

Application of ACI to Petroleum Engineering

Petroleum engineering based on reservoir simulations is a natural area for ACI application. First, it produces great volumes of data that must be analyzed and quantified, as in any other engineering field.


Second, reservoir simulations themselves require ACI proxies as an assisting tool. On the one hand, simulations are now a standard part of generating production forecasts [5–7]. For brown fields it is common practice to use a reservoir simulation model, match the model with new reservoir data on a regular basis, and run the model in forward prediction mode to generate forecasts of oil, gas, and water production volumes [8]. On the other hand, the computational cost associated with the simulation of complex oil reservoirs can still be extremely high, especially when the underlying simulation model must be solved using a fine spatial grid and small time steps. Moreover, in the context of production optimization, history matching, and uncertainty quantification studies (which are normal parts of any reservoir analysis), those costs increase significantly and impose real challenges for reservoir engineers. Such costs, and the simulation run times associated with them, also pose a considerable problem for operations management, for which generating reliable production forecasts is a key business process. Production forecasts are used to calculate cash flow using economic models and to assess reserves in the corporate portfolio. The number of simulations required to satisfy all these needs can be prohibitively high even for modern computational clusters.

All these problems motivate the search for alternative ways to tackle the problem computationally. Normally, in the reservoir modeling community, this is done through proxy models, which are able to achieve the predictability of full-fledged simulations through approximate models while still delivering solutions of a similar level of reliability. This is why ACI-based proxy techniques have been widely employed in different areas of petroleum engineering [9–11]. As a practical tool, they have been integrated in reservoir engineering for classification problems, prediction of reservoir dynamic and static properties, and characterization workflows [12–14]. They have been successfully applied for recovery performance forecasts during oil and gas recovery processes [15, 16]. Design and optimization of injection-production scenarios [16–19] and assisted history matching for recovery processes [20, 21] are other applications of ACI-based models. Proxy modeling for the prediction of heavy oil recoveries has been implemented using ACI-based techniques in recent years [22–27]. Screening of enhanced oil recovery (EOR) projects [28, 29], characterization of unconventional plays [30], and performance evaluation of a CO2 sequestration process [31] are other practical applications of this type of modeling technique in petroleum engineering.

In this work, a data-driven ACI proxy modeling layout is utilized to design an intelligent proxy model which can assist history matching workflows as well as forecasting schemes. The paper is organized as follows. Section "Workflow and Design of Proxy Modeling" introduces the basic design and the workflow of the proxy modeling of oil reservoirs. Section "Neural Network Interpolation Algorithms" describes the multidimensional interpolation techniques that are used to predict an output value for any given combination of operational/uncertain parameters. In section "Big Data Assembly and Base Cases," we discuss the principles of data assembly and introduce the base cases of the reservoirs being modeled. Section "Results and Discussion" presents the results, comparing the proxy's predictions with the actual simulator's outputs for the same operational/uncertain parameters. Finally, the paper is concluded with a short discussion.

Workflow and Design of Proxy Modeling

The key elements of big data, presented as the five Vs, pose interesting challenges in oil and gas recovery processes. These Vs were defined in the introduction, so here we focus on how oil and gas big data problems account for them. Generally, there are two main categories of data within an oil and gas company: static data sets and dynamic data sets. Static data come from well logs and core analyses, while dynamic data sources include operational constraints, injection and production history, and results from simulation studies. All these data sources exist throughout the lifetime of a reservoir and can include at least 15 years of data on average. As these data sources are continuously updated, the size of the generated data fits the big data concept of Volume. Combining different sources of oil and gas recovery process data, including real field data alongside data from simulation studies and all the abovementioned data sources, indicates that our research deals with the Variety concept. Regarding the Velocity concept, most of the data associated with a given oil and gas recovery process are generated at high speed and have to be analyzed fast as well. Production and injection rates and changes in reservoir temperature and pressure can change within fractions of a second; thus, they need to be recorded and evaluated quickly in order to decide on the upcoming operational constraints or production and injection scenarios for the underlying oil and gas recovery process. Variability is an inseparable concept associated with any data recorded during oil and gas operations. Reservoir characteristics and other operational parameters may change daily due to the effects of production and injection on the nature of the reservoir, resulting in inconsistencies that account for the Variability concept within the data. Data collected from different oil and gas sectors is a highly valuable source that yields better understanding of the nature of the oil and gas reservoir. This knowledge boosts the accuracy of decisions made for field development plans and accounts for the Value concept of big data. Thus, in the oil and gas industry, data can add huge value to the profits made by companies investing in these areas.

Figure 4 illustrates the typical data-driven proxy workflow employed in this paper. The designed proxy is developed to forecast any desired objective function (OF) representing the performance of the underlying oil and gas recovery process. Furthermore, this tool can be used to perform uncertainty analysis (UA), global history matching (GHM), and net present value (NPV) optimization.


Fig. 4 Data-driven proxy workflow implemented in this study

The algorithm has five key steps:

1. Experimental design. The purpose of experimental design is to construct combinations of the input parameter values such that the maximum information can be obtained from the minimum number of simulation runs. For the numerical case studies in this research, Latin hypercube design (LHD) is employed to generate a comprehensive data set for modeling the data-driven proxy of the underlying recovery processes. We prefer LHD over the classical Monte-Carlo (MC) design because the latter relies on pure randomness, so it can be inefficient: sampling may end up with some points clustered closely, while other intervals within the space get no samples. LHD, on the other hand, aims to spread the sample points more evenly across all possible values [32]. It divides each input distribution into N intervals of equal probability and chooses one sample from each interval, then shuffles the samples for each input so that there is no artificial correlation between the inputs (a minimal sampling sketch follows this list).

2. Initial simulations according to the experimental design. The basic training data results are collected, which form the basis of the proxy model.

3. Proxy modeling. The selected proxy model is built from the training data obtained in the previous step. In this paper we compare two proxy models: (1) a single-layer (SL) radial basis function (RBF) NN and (2) a multilayer (ML) Levenberg-Marquardt (LM) NN. Detailed proxy formulations and algorithms for single- and multilayer neural networks are explained in the next section.

4. Proxy-based optimization. Optimization is performed according to the specified criterion of minimizing some key parameter of the given study (e.g., a mean-square error for GHM or an inverse NPV). A pre-defined number of possible optimum solutions (i.e., suboptimal solutions of the proxy model) are generated using a brute-force method, which takes advantage of the relatively high computational efficiency of the selected proxy model in comparison with the actual simulation cost. Of those, the top n most optimal parameter sets (n is a user-specified number) are selected to be validated by the actual simulations.

5. Validation and iteration. For each possible optimal solution found through proxy optimization, a reservoir simulation needs to be conducted to obtain a true objective function value. To further improve the predictive accuracy of the proxy model at the locations of possible optima, the validated solutions are added to the initial training data set. The updated training data set is then used to build a new proxy model, and with the new proxy model a new set of possible optimal solutions can be obtained. This iteration procedure is continued for a given number of iterations or until a satisfactory optimal solution is found.
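A minimal sketch of Latin hypercube sampling as described in step 1; the function signature and parameter bounds are illustrative and not part of the CMOST implementation.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Divide each input range into n_samples equal-probability intervals, draw one
    point per interval, and shuffle per dimension to avoid artificial correlation."""
    rng = np.random.default_rng(seed)
    design = np.empty((n_samples, len(bounds)))
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    for d, (low, high) in enumerate(bounds):
        points = rng.uniform(edges[:-1], edges[1:])   # one draw per interval
        rng.shuffle(points)                           # decorrelate across dimensions
        design[:, d] = low + points * (high - low)
    return design

# Example: five samples over two operational parameters
print(latin_hypercube(5, [(0.0, 1.0), (10.0, 50.0)], seed=42))
```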

The performance of the described workflow depends crucially on the predictive quality of the proxy model, as a good proxy can significantly reduce the number of actual simulation runs. Therefore, the importance of a proper proxy interpolation model cannot be overestimated. One of the most popular and simple models is a polynomial proxy [33]. It is computationally cheap from both construction and evaluation standpoints; however, its prediction quality does not improve once the training data size exceeds some threshold value and, on average, it is not as good as NNs for complex cases. In this paper, we concentrate on the latter, particularly on the single-layer RBF and the multilayer (ML) LM NN. Nine different case studies, including steam-assisted gravity drainage (SAGD), black oil (BO), and unconventional reservoirs, are presented to illustrate the approach. The research settings are organized as follows:

1. For each case study, different numbers (from 500 to 7000) of direct numerical simulations are conducted. These simulations correspond to different combinations of operational/uncertain parameters sampled according to the LHD.
2. The results of the simulated production performance are used to build the corresponding RBF NN and ML LM NN proxies, which represent production performance as objective functions of the operational/uncertain parameters.
3. The proxy models are then used to predict the production data of the given reservoir for any combination of the operational/uncertain parameters.
4. The predicted data are compared with the actual simulation results for the same combinations of the parameters to evaluate the prediction quality.

Neural Network Interpolation Algorithms

As already mentioned in the Introduction, the main constraint of oil and gas reservoir modeling is reservoir simulation time, which can sometimes be days or weeks per single run. As a result, training data sets are usually not very big, at least from a big data point of view. Therefore, there is a common belief in the oil and gas reservoir community that multilayer NNs are not very practical for complex reservoir cases, as nobody can afford tens of thousands of simulations per case. In this work, we show that this belief is not correct, and that many complex cases can benefit from multilayer NN modeling even if the training data set is not very big. In this section we describe both algorithms in detail, as a comparison between their performances is one of our main goals.

Radial Basis Function Networks (RBF)

A single-layer RBF network consists of an input layer of source nodes, a single hidden layer of nonlinear processing units, and an output layer of linear weights. The input-output mapping performed by the RBF network can be described as:

\tilde{Y}(\mathbf{x}_p) = w_0 + \sum_{i=1}^{M} w_i \,\varphi(\mathbf{x}_p, \mathbf{x}_i),   (1)

where \varphi(\mathbf{x}_i, \mathbf{x}_p) is the radial basis function, which depends on the distance between the input parameter vector \mathbf{x}_p and the center \mathbf{x}_i and, in the general case, on the mutual orientation of those vectors in an N-dimensional parameter space. The distance and orientation scales, as well as the precise shape of the radial function, are fixed parameters of the model. Here \tilde{Y}(\mathbf{x}_p) stands for the value of an objective function (e.g., NPV, cumulative oil, or SOR) at the point \mathbf{x}_p in the N-dimensional parameter space, and M is the size of the training data. The centers \mathbf{x}_i correspond to the training parameters in the initial LHD. The basic architecture of the RBF NN is presented in Fig. 5.


Fig. 5 Schematic of a basic RBF NN architecture

RBFs are simply a class of functions that can be employed to approximate any other function and serve as building blocks in a network. The most typical example is the Gaussian kernel, which is based on the assumption that the network's response monotonically decreases with distance from a central point. In our case, however, it has been found that the network's response increases as a power function of the distance between two N-dimensional points in the parameter space. Namely, the RBF is best approximated by the following power function:

\varphi_i(\mathbf{x}_i, \mathbf{x}_p) = 0.01 \, L^{0.75}(\mathbf{x}_i, \mathbf{x}_p),   (2)

where the function L defines the square of the distance between the points \mathbf{x}_i and \mathbf{x}_p in the general case of anisotropic metrics. Normally, the anisotropy factor is completely neglected for high-dimensional cases due to the infeasibility of a full-scale multidimensional analysis, and the function L is simply selected in the form of the squared Euclidean distance between parameters \mathbf{x} + \mathbf{z} and \mathbf{x}. While this approach performs satisfactorily as well, this study tries to improve performance by gaining some insight into the inherently anisotropic nature of Eq. (2). It uses an approach that can be considered intermediate between full-scale multidimensional fitting (which is still infeasible, at least for the limited amount of training data) and completely neglecting the anisotropy. Namely, the non-Euclidean squared distance in the M-dimensional parameter space is introduced according to the following non-Euclidean diagonal metric relation:

L = \sum_{\alpha=1}^{M} g_\alpha \, \Delta x_\alpha^2,   (3)

where \Delta x_\alpha stands for the difference between the \alpha-components of the parameters at the considered points, and the g_\alpha are parameters to be found by fitting the experimental variogram to an analytical function of the non-Euclidean distance defined by Eq. (3) under the constraint g_\alpha \ge 0 for all \alpha. The latter constraint guarantees that the metric in Eq. (3) is positive definite. It has been found that, for the base case considered below, the RBF neural network algorithm based on the metric in Eq. (3) provides better interpolation than the corresponding algorithm based on the Euclidean distance (the latter corresponds to the case g_\alpha = 1 for all \alpha). This fact allows one to decrease the size of the necessary training data and, consequently, the overall time of proxy engine construction. The parameters w_i are estimated by exact fitting of the model, Eq. (1), to the training data with an additional constraint of normalization of the nodal weights:

\sum_{i=1}^{M} w_i(\mathbf{x}_p) = 1.   (4)

The values of the parameters w_i represent the "importance" of each parameter in the parameter space. In the current CMOST implementation, the proxy dashboard allows the user to choose between isotropic and anisotropic versions of the RBF proxy (i.e., between Euclidean and non-Euclidean metrics); however, when the size of the Latin hypercube exceeds 400, only the isotropic version is used, as the construction of the anisotropic RBF takes too much time.
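For concreteness, the sketch below shows an isotropic version of the RBF interpolation of Eqs. (1)–(2) in plain NumPy; it omits the anisotropic metric of Eq. (3) and the weight normalization of Eq. (4), so it is an illustration of the idea rather than the CMOST proxy.

```python
import numpy as np

def power_rbf(d2):
    """Power-law radial basis of Eq. (2); d2 is the squared distance."""
    return 0.01 * d2 ** 0.75

def fit_rbf(X, y):
    """Fit the bias w0 and weights w_i of Eq. (1) to training data (X, y)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    A = np.hstack([np.ones((len(X), 1)), power_rbf(d2)])   # columns: [1 | phi(x_p, x_i)]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_rbf(w, X_train, X_new):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    A = np.hstack([np.ones((len(X_new), 1)), power_rbf(d2)])
    return A @ w
```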

Multilayer Neural Network Algorithm: Levenberg-Marquardt Optimization

A multilayered neural network (ML NN) consists of the following elements:

1. An input layer (the D-dimensional input parameter)
2. Any number of hidden layers (HL) with any number of nodes in each HL
3. An output layer (OL) with the same number of nodes as in the last HL

The basic architecture of an ML NN is presented in Fig. 6. Assuming that the ML NN contains M HLs, any objective function (OF) is represented by the OL as:

\tilde{Y} = T\left\{ \sum_{j=1}^{L_M+1} H_j^{(M)} w_j \right\},   (5)

where the transformation function is T(z) = 1/(1 + e^{-z}) for z \ge 0.0 or T(z) = \tanh(z) for -\infty \le z \le \infty, L_M stands for the number of neurons in the Mth HL, H_j^{(M)} stands for the output of the Mth HL (its dimensionality is L_M + 1, with H_{L_M+1}^{(M)} \equiv -1.0), and w_j stands for the weight vector of the OL. For any Kth HL (1 \le K \le M), the output is represented as:

H_j^{(K)} = T\left\{ \sum_{l=1}^{L_{K-1}+1} H_l^{(K-1)} w_{lj}^{(K)} \right\},   (6)

Fig. 6 Schematic of a basic ML NN architecture

Fig. 7 Flow of the feedforward signal and backward error in multilayer training

where H_l^{(K-1)} is an output of the previous step, the (K-1)th HL (assuming that H_l^{(1)} = P_l, where P_l is an input parameter, 1 \le l \le D, and H_{D+1}^{(1)} = -1.0), and w_{lj}^{(K)} stands for the weight matrix of the Kth HL. The goal of the NN training is to optimize all the weight matrices of the HLs and the weight vector of the OL according to the provided training data set. The optimization target is the minimization of the mean-square error for the given weight distribution:

E^2(\mathbf{w}) = \frac{1}{N-1} \sum_{i=1}^{N} \left[ Y(\mathbf{x}^{(i)}) - \tilde{Y}(\mathbf{x}^{(i)}, \mathbf{w}) \right]^2,   (7)

where Y and \tilde{Y} are the simulator's and the proxy's outputs for the objective functions, respectively. After each set of predictions, the gradient error information is propagated back, as presented in Fig. 7.


Below is the step-by-step workflow of the current CMOST implementation of the ML NN, which utilizes the gradient-based Levenberg-Marquardt algorithm for the NN optimization:

1. Randomly populate the initial weights of the network.
2. For the N training parameters, compute the nonlinear output (OF) according to a set of nonlinear rules.
3. Compute an error by comparing with the actual OF for that parameter.
4. Update the model's weights and coefficients to find a weight correction \mathbf{h} that minimizes \chi^2(\mathbf{w} + \mathbf{h}), where \mathbf{w} is the current weight vector.
5. Update each layer's weights at the (i+1)th iteration step according to \mathbf{w}^{(i+1)} = \mathbf{w}^{(i)} + \mathbf{h}, with \mathbf{h} obtained from the minimization equation

(\mathbf{J}^{\mathsf{T}} \mathbf{J} + \lambda \mathbf{I}) \, \mathbf{h} = \mathbf{J}^{\mathsf{T}} (\mathbf{y} - \tilde{\mathbf{y}}).   (8)

6. \mathbf{J} is the Jacobian (the second-derivative contribution is usually neglected); if calculated properly for each layer, it can be used for any number of layers (as in deep learning).
7. \lambda is a control parameter balancing between gradient (\lambda \to \infty) and Newtonian (\lambda \to 0) BEP.

The crucial part of any gradient-based optimization algorithm is the proper evaluation of the derivatives of the objective function with respect to the NN weights. In our case this is done analytically, under the assumption that the transformation function is differentiable with T'(z) = F(T(z)). Assuming that we have M hidden layers, it can be shown that for the output layer

\frac{\partial \tilde{Y}}{\partial w_\alpha} = H_\alpha^{(M)} \, F(\tilde{Y}),   (9)

where H_\alpha^{(M)} is an output of the Mth (the last) hidden layer. Then it can be proven that for each hidden layer

\frac{\partial \tilde{Y}}{\partial w_{\alpha\beta}^{(M-K)}} = H_\alpha^{(M-K-1)} \, G_\beta^{(M-K)},   (10)

where 0 \le K \le M-1, H_\alpha^{(0)} is an input parameter, and the function G is defined as

G_\beta^{(M-K)} = F\left( H_\beta^{(M-K)} \right) \sum_{\mu=1}^{N_{M-K+1}} w_{\beta\mu}^{(M-K+1)} \, G_\mu^{(M-K+1)},   (11)


where for K = 0 (the last, Mth hidden layer) the value of G_\beta^{(M)} can be easily calculated as

G_\beta^{(M)} = F\left( H_\beta^{(M)} \right) F(\tilde{Y}) \, w_\beta,   (12)

where w_\beta denotes the output layer weights and N_i is the number of nodes in the ith hidden layer. Equations (9)–(12) provide a simple and reliable recurrent algorithm for back propagation of derivative information from the output layer through all hidden layers back to the input layer, in accordance with the general approach in Fig. 7.
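A minimal sketch of the Levenberg-Marquardt update of Eq. (8) in NumPy; the Jacobian J and the residual vector are assumed to have been computed by the back-propagation recursion of Eqs. (9)–(12) or by any other means.

```python
import numpy as np

def lm_step(J, residual, w, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lambda*I) h = J^T r and return w + h."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    h = np.linalg.solve(A, J.T @ residual)
    return w + h
```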

Big Data Assembly and Base Cases

This study integrates various data collections from comprehensive dynamic flow simulation practices for different recovery scenarios. The master data set includes records from nine different case studies that are explained in the next section. Each case study comprises an input parameter space as well as actual values of the desired objective functions.

Applied Ranges for Model Parameter Space

The input parameter spaces of the employed data set include various parameters representing characteristics associated with both reservoir heterogeneity and operational constraints. Reservoir heterogeneities are quantified using different rock and fluid properties, while different injection/production scenarios represent the operational constraints within each case study. All these parameters are used either in the pertinent input parameter space or as the desired objective function to be predicted. Table 1 presents a dimensional analysis of the big data set provided for this study: the training data set size, the verification or testing data set size, and the dimensions of the input and output parameter spaces for each case study. This big data set involves 18,972 training samples and 5167 verification scenarios, so in total we deal with 24,139 data points.

1. SAGD Six-Well Pairs. This case study represents a SAGD process with six well pairs in the simulation model. The commercial simulator STARS is employed to perform numerical simulations under different injection/production scenarios. Its parameter space includes 23 input parameters indicating operational constraints and 1 objective function, the net present value (NPV). The training and verification data sets include 1071 and 401 samples, respectively.


Table 1 Dimensional analysis of the assembled big data set

Case study          | Training samples | Verification samples | No. of input parameters | No. of output parameters
SAGD six-well pairs | 1071             | 401                  | 23                      | 1
SPE9                | 1250             | 232                  | 33                      | 1
Eagle Ford          | 550              | 173                  | 22                      | 1
SAGD one-well pair  | 1000             | 200                  | 18                      | 1
X11                 | 900              | 210                  | 11                      | 1
X22                 | 1799             | 400                  | 22                      | 1
X44                 | 3600             | 780                  | 44                      | 1
X82                 | 1802             | 603                  | 82                      | 1
X88                 | 7000             | 2168                 | 88                      | 1

2. SPE9. As a synthetic case, the SPE comparative solution project (SPE9) simulation model [34] is studied, utilizing different uncertain parameters to forecast the uncertainty during the history matching process. This is a black oil model, the commercial simulator is IMEX, and 33 uncertain variables are employed as input parameters. The output attribute for this case study is the global history matching error. There are 1250 and 232 data samples used as training and verification subsets, respectively.

3. Eagle Ford. This is an unconventional oil field case in which IMEX is used as the commercial simulator for dynamic flow simulations of this black oil model. The input parameter space includes 22 uncertain simulation variables, and the output objective function is the global history matching error from the simulation results. This case study includes 550 training and 173 testing data points.

4. SAGD One-Well Pair. A SAGD recovery practice with one well pair is designed for this case study, and different scenarios are run in the commercial simulator STARS. Twenty-two uncertain simulation attributes make up the input parameter space, and the desired output objective function is the global history matching error from the simulation results. This data set includes 1000 training and 200 verification data samples.

5. X11, X22, X44, X82, and X88 Cases. The X11, X22, X44, X82, and X88 case studies are artificial data sets generated to investigate the global history matching error. They include 11, 22, 44, 82, and 88 input parameters, respectively, and all of them have one objective function, the global history matching error from the numerical flow simulation results. X11 has 900 training and 210 verification data samples; X22 comprises 1799 training and 400 testing samples; X44, X82, and X88 involve 3600, 1802, and 7000 training and 780, 603, and 2168 verification data points, respectively. The commercial flow simulator used for these case studies is STARS.


Results and Discussion

The assembled data set includes nine different case studies of the thermal recovery method known as SAGD as well as black oil (BO) simulation practices. These cases comprise practical applications of the underlying recovery processes and highlight history matching (HM), uncertainty analysis (UA), and optimization workflows. For each case study, a data-driven SL NN proxy model using the RBF algorithm and an ML NN with the LM algorithm are developed with respect to the desired objective functions. The objective functions can be an indicator of production output or HM error forecasts. A sensitivity analysis for the selection of ML NN configurations was carried out, and based on the results two candidate architectures were selected: first, 2 hidden layers with 12 and 10 nodes, respectively; second, a combination of 6 hidden layers with 3, 4, 3, 3, 5, and 3 nodes, respectively. Latin hypercube design (LHD) is used to sample the input parameter space of the assembled data set. The LHD size N for training purposes ranges from 550 to 7000, and the number of verification scenarios L is between 173 and 2168 samples.

In this section we present the results of this data-driven proxy modeling workflow. The results obtained from the proxy are plotted against the actual values from the commercial simulator, which is either IMEX or STARS. Furthermore, the RBF results are compared with LM to evaluate the change in predictive performance when moving from a single-layer NN to a multilayer NN. Within these nine case studies, the results from RBF and LM are on par in five cases, and for those we plot only the results from the ML LM NN in the figures. There are four case studies in which significant improvements in predictions are obtained when moving from the SL RBF NN to the ML LM NN; for these four cases, results from both algorithms are plotted alongside each other.

Figure 8 presents the proxy modeling results for the SAGD six-well pairs case study. The objective function is NPV, and a two-hidden-layer LM NN is employed. A cross plot of the proxy-predicted NPV is shown versus the actual values from the dynamic flow simulations. NPV is in million dollars, training experiments are shown as blue squares, and the verification or testing samples are marked with red squares. The training performance after 5000 epochs is satisfactory, as the data points are close to the 45-degree line. The proxy predictability for this case study is also good, as the predicted values for the verification scenarios are close to the ones from the simulation runs. Predicted NPVs can help reservoir management decide which injection/production scenario is economically the most profitable. For this case study, the results from the SL RBF and the ML LM are on par.

Two black oil field reservoir cases simulated with IMEX were discussed earlier: the SPE9 simulation model and the Eagle Ford unconventional field case. In these case studies, the desired objective function to be predicted was the global history matching error in percentage, denoted GlobalHmError (%), and the uncertain simulation parameters are used as predictive input variables for the data-driven proxy modeling workflow.

Fig. 8 ML LM NN data-driven proxy model results for SAGD six-well pairs case study (cross plot of simulated versus proxy predicted NPV_Field in million $, showing training experiments, verification experiments, and the 45-degree line)

Cross plots presented in Figs. 9 and 10 show the results obtained with the data-driven proxy modeling technique applied to these case studies. In both scenarios the data-driven proxy model results from the ML LM NN and the SL RBF NN are illustrated for training and verification experiments. As can be seen, the ML LM NN results in both scenarios indicate better predictions than the SL RBF NN. Here, a two-hidden-layer LM NN with 12 and 10 processing nodes on each hidden layer was utilized for both the SPE9 and Eagle Ford case studies. Figure 11 presents the proxy forecast for the SAGD one-well pair case study, in which the objective function is GlobalHmError (%). The predicted values are plotted against the actual ones from the thermal flow simulator STARS. The plots show that the prediction quality of (a) the ML LM NN is considerably better than that of (b) the SL RBF NN. In this scenario, the ML LM NN data-driven proxy model predictability is excellent, as the verification experiments lie very close to the 45° line and display an almost perfect match. For this case study, a two-hidden-layer proxy model is utilized, the same as in the previous cases. The artificial data sets from the X11, X22, X44, X82, and X88 case studies were also investigated to evaluate the data-driven proxy modeling predictability for the GlobalHmError (%) objective function. These are thermal case studies, and the forecasting results are illustrated in Figs. 12–17. Based on the corresponding cross plots for X11, X22, X44, and X82, it can be seen that in these case studies the data-driven proxy predicted GlobalHmError (%) is close to the one from the dynamic flow simulator, which is STARS here.
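The cross plots in Figs. 8–17 all follow the same template: proxy predictions on one axis, simulator outputs on the other, training and verification samples as separate markers, and a 45-degree reference line. A minimal matplotlib sketch of that template is shown below; the input arrays are assumed to hold proxy predictions and simulator values for a given case study.

```python
# Minimal sketch of the cross-plot template used in Figs. 8-17 (assumed input arrays).
import matplotlib.pyplot as plt

def cross_plot(pred_train, sim_train, pred_test, sim_test, label="GlobalHmError (%)"):
    lo = min(min(pred_train), min(sim_train), min(pred_test), min(sim_test))
    hi = max(max(pred_train), max(sim_train), max(pred_test), max(sim_test))
    plt.scatter(pred_train, sim_train, marker="s", color="blue",
                label="Training Experiments")
    plt.scatter(pred_test, sim_test, marker="s", color="red",
                label="Verification Experiments")
    plt.plot([lo, hi], [lo, hi], color="black", label="45 Degree Line")
    plt.xlabel("Proxy Predicted " + label)
    plt.ylabel("Simulated " + label)
    plt.legend()
    plt.show()

# Example: cross_plot(yhat_train, y_train, yhat_verif, y_verif, label="NPV_Field (million $)")
```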


Fig. 9 Data-driven proxy model results for SPE9 case study. (a) ML LM NN and (b) SL RBF NN (cross plots of simulated versus proxy predicted GlobalHmError (%) for training and verification experiments with the 45-degree line)

Fig. 10 Data-driven proxy model results for Eagle Ford case study. (a) ML LM NN and (b) SL RBF NN

For most of these cases, the prediction results from ML LM NN and SL RBF NN were on par, except for the X44 case study. In the X44 scenario, better predictions were obtained using ML LM NN than the SL RBF NN, which is illustrated in Fig. 14. To train the appropriate data-driven proxy model for the abovementioned cases, here a two-hidden-layered LM NN comprising 12 and 10 hidden nodes on each

Fig. 11 Data-driven proxy model results for SAGD one-well pair case study. (a) ML LM NN and (b) SL RBF NN

Fig. 12 ML LM NN data-driven proxy model results for X11 case study

Fig. 13 ML LM NN data-driven proxy model results for X22 case study

Fig. 14 Data-driven proxy model results for X44 case study. (a) ML LM NN and (b) SL RBF NN

Fig. 15 ML LM NN data-driven proxy model results for X82 case study

(All are cross plots of simulated versus proxy predicted GlobalHmError (%) for training and verification experiments with the 45-degree line.)

layer was employed. A few verification experiments lie far away from the 45° line, which can be an indication of overfitting. It can also be due to the fact that the input parameter ranges for some of these verification scenarios fall outside the ranges of the training experiments. The situation for the X88 case study was different from the rest. According to Fig. 16, the predictions for most of the verification experiments, shown with red squares, are very poor. For this case study we first utilized a two-hidden-layer LM NN with the same configuration as in the previous cases, but its forecasts of GlobalHmError (%) were unsatisfactory. We therefore tested a new, deeper architecture: a six-hidden-layer LM NN with 3, 4, 3, 3, 5, and 3 hidden nodes on each hidden layer, respectively. The obtained results are presented in Fig. 17, where a cross plot of proxy predicted GlobalHmError (%) is plotted against the actual values from the simulation runs. For this case study, moving to the deeper LM NN with an even smaller number of nodes per layer dramatically improved the data-driven proxy model predictability. Some deficiencies remain in the forecasted values for the desired objective function, but the benefit of moving to deeper NNs for a large data set is noteworthy: in the X88 case study, compared to the other case studies, we dealt with a considerably larger set of collected data.
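One of the possible causes mentioned above, verification samples whose inputs fall outside the ranges covered by the training experiments, can be checked directly. The sketch below is a simple, assumed-data illustration of such an extrapolation check and is not taken from the chapter.

```python
# Sketch: flag verification samples whose inputs fall outside the training ranges,
# i.e., cases where the proxy is forced to extrapolate (array names are assumed).
import numpy as np

def extrapolating_rows(X_train, X_verify):
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    outside = (X_verify < lo) | (X_verify > hi)     # per-sample, per-parameter flags
    return np.where(outside.any(axis=1))[0]         # indices of extrapolating samples

# Example with random placeholder data (88 uncertain parameters, as in X88)
rng = np.random.default_rng(1)
X_train = rng.random((7000, 88))
X_verify = rng.random((2168, 88)) * 1.1             # some values exceed the training range
print(len(extrapolating_rows(X_train, X_verify)), "verification samples extrapolate")
```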


Fig. 16 ML LM NN data-driven proxy model results for X88-2HL case study

Fig. 17 ML LM NN data-driven proxy model results for X88-6HL case study

(Both are cross plots of simulated versus proxy predicted GlobalHmError (%) for training and verification experiments with the 45-degree line.)


Conclusions

The rise of uncertainty-based reservoir development scenarios is the main stimulus for reservoir development teams to look for alternative tools that can be evaluated numerous times much faster than existing solutions. In this study, a comprehensive data set from various oil and gas recovery processes has been collected and employed to design an assisting tool for commercial reservoir flow simulators. The utilized method, which integrates machine learning theory with statistical analysis and data-mining concepts, is called a data-driven proxy modeling technique. This technique was implemented to perform history matching, uncertainty analysis, optimization, and predictive modeling based on a desired objective function. The presented results revealed that a multilayer LM NN can be used for any reasonable size of oil and gas reservoir simulation. Multilayer NN performance was sensitive to its architecture, and it is important to find a way to set a proper default NN configuration. The larger the training data set, the more beneficial it is to use a deeper architecture; this was investigated through case X88, which was a thermal oil recovery scenario. It is shown that the proxy models can be used as a light version of a simulator to estimate production. One of the major results is that multilayer neural networks, which are usually considered beneficial for much larger data analysis, can be more accurate than the RBF NN for some oil reservoir cases with limited training data sets. The demonstrated outcomes imply the great potential of this modeling approach to be integrated directly into most existing reservoir management routines. This paper provides a viable tool to overcome challenges related to dynamic assessment of uncertainties during history matching of recovery processes and demonstrates the ability of data-driven proxy modeling in future performance prediction of oil and gas recovery processes.

Acknowledgments The authors would like to thank Computer Modeling Group Ltd. for permission to publish this paper. This research is supported by the NSERC/AIEES/Foundation CMG and AITF (iCORE) Chairs.

References 1. Kjaerulff, U. B., & Madsen, A. L. (2008). Bayesian networks and influence diagrams. Springer Science Business Media, 200, 114. 2. Mehrotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of artificial neural networks. Cambridge, MA: MIT Press. 3. Poulton, M. M. (Ed.). (2001). Computational neural networks for geophysical data processing (Vol. 30). Amsterdam: Elsevier. 4. Suh, S. (2012). Practical applications of data mining. Sudbury, MA: Jones & Bartlett. 5. Chen, Z. (2002). Characteristic mixed discontinuous finite element methods for advectiondominated diffusion problems. Computers & Mathematics with Applications, 191, 2509–2538.


6. Chen, Z., Ewing, R. E., Kuznetsov, Y., Lazarov, R., & Maliassov, S. (1994). Multilevel preconditioners for mixed methods for second order elliptic problems. Numerical Linear Algebra and Applications, 3, 427–453. 7. Chen, Z., Huan, G., & Ma, Y. (2006). Computational methods for multiphase flows in porous media. Computational science and engineering series (Vol. 2). Philadelphia, PA: SIAM. 8. Purewal, S. (2005). Production forecasting in the oil and gas industry—Current methods and future trends. Business Briefing: Exploration & Production: The Oil & Gas Review (2), p. 12. 9. Bravo, C. E. et al. (2012). State-of-the-art application of artificial intelligence and trends in the E&P industry: A technology survey. In SPE Intelligent Energy International. Society of Petroleum Engineers. 10. Mohaghegh, S. (2000). Virtual-intelligence applications in petroleum engineering: Part Iartificial neural networks. Journal of Petroleum Technology, 52, 64–73. 11. Saputelli, L., Malki, H., Canelon, J., & Nikolaou, M. (2002). A critical overview of artificial neural network applications in the context of continuous oil field optimization. In SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers. 12. Aminian, K., Ameri, S., Oyerokun, A., & Thomas, B. (2003). Prediction of flow units and permeability using artificial neural networks. In SPE Western Regional/AAPG Pacific Section Joint Meeting. Society of Petroleum Engineers. 13. Chaki, S., Verma, A. K., Routray, A., Mohanty, W. K., & Jenamani, M. (2014). Well tops guided prediction of reservoir properties using modular neural network concept: A case study from western onshore, India. Journal of Petroleum Science and Engineering, 123, 155–163. 14. Tang, H., Meddaugh, W. S., & Toomey, N. (2011). Using an artificial-neural-network method to predict carbonate well log facies successfully. SPE Reservoir Evaluation & Engineering, 14, 35–44. 15. Awoleke, O., & Lane, R. (2011). Analysis of data from the Barnett shale using conventional statistical and virtual intelligence techniques. SPE Reservoir Evaluation & Engineering, 14, 544–556. 16. Lechner, J. P., Zangl, G. (2005) Treating uncertainties in reservoir performance prediction with neural networks. In SPE Europec/EAGE Annual Conference. Society of Petroleum Engineers. 17. Zangl, G., Graf, T., & Al-Kinani, A. (2006). A proxy modeling in production optimization. In SPE Europec/EAGE Annual Conference and Exhibition. Society of Petroleum Engineers. 18. Artun, E., Ertekin, T., Watson, R., & Miller, B. (2012). Designing cyclic pressure pulsing in naturally fractured reservoirs using an inverse looking recurrent neural network. Computers & Geosciences, 38, 68–79. 19. Ayala, H. L. F., & Ertekin, T. (2007). Neuro-simulation analysis of pressure maintenance operations in gas condensate reservoirs. Journal of Petroleum Science & Engineering, 58, 207– 226. 20. Costa, L. A. N., Maschio, C., & Schiozer, D. J. (2014). Application of artificial neural networks in a history matching process. Journal of Petroleum Science and Engineering, 123, 30–45. 21. Maschio, C., & Schiozer, D. J. (2014). Bayesian history matching using artificial neural network and Markov chain Monte Carlo. Journal of Petroleum Science and Engineering, 123, 62–71. 22. Amirian, E., & Chen, Z. (2015). Practical application of data-driven modeling approach during waterflooding operations in heterogeneous reservoirs. Paper presented at the SPE Western Regional Meeting-Garden Grove. Society of Petroleum Engineers. 23. Amirian, E., Leung, J. 
Y., Zanon, S., & Dzurman, P. (2013) Data-driven modeling approach for recovery performance prediction in SAGD operations. In SPE Heavy Oil Conference-Canada, Calgary, AB, Canada. Society of Petroleum Engineers. 24. Amirian, E., Leung, J. Y., Zanon, S., & Dzurman, P. (2015). Integrated cluster analysis and artificial neural network modeling for steam-assisted gravity drainage performance prediction in heterogeneous reservoirs. Expert Systems with Applications, 42, 723–740. 25. Fedutenko, E., Yang, C., Card, C., & Nghiem L. X. (2012) Forecasting SAGD process under geological uncertainties using data-driven proxy model. In SPE Heavy Oil Conference Canada. . Society of Petroleum Engineers.


26. Fedutenko, E., Yang, C., Card, C., & Nghiem, L. X. (2014). Time-dependent neural network based proxy modeling of SAGD process. In SPE Heavy Oil Conference-Canada. Society of Petroleum Engineers. 27. Popa, A. S., & Cassidy, S. D. (2012). Artificial intelligence for heavy oil assets: The evolution of solutions and organization capability. In: SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers. 28. Parada, C. H., & Ertekin, T. (2012). A new screening tool for improved oil recovery methods using artificial neural networks. In SPE Western Regional Meeting. Society of Petroleum Engineers. 29. Zerafat, M. M., Ayatollahi, S., Mehranbod, N., & Barzegari, D. (2011). Bayesian network analysis as a tool for efficient EOR screening. In SPE Enhanced Oil Recovery Conference. Society of Petroleum Engineers. 30. Holdaway, K. R. (2012). Oilfield data mining workflows for robust reservoir characterization: Part 2, SPE Intelligent Energy International. Long Beach, CA: Society of Petroleum Engineers. 31. Mohammadpoor, M., Firouz, Q., Reza, A., & Torabi, F. (2012). Implementing simulation and artificial intelligence tools to optimize the performance of the CO2 sequestration in coalbed methane reservoirs. In Carbon Management Technology Conference, Orlando, FL. 32. McKay, M. D., Beckman, R. J., & Conover, W. J. (1979). Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21, 239–245. 33. Myers, R. H., & Montgomery, D. C. (2002). Response surface methodology. Process and product optimization using designed experiments. New York, NY: Wiley. 34. Killough, J. E. (1995, January). Ninth SPE comparative solution project: A reexamination of black-oil simulation. In SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.

A Sliding-Window Algorithm Implementation in MapReduce

Emad A. Mohammed, Christopher T. Naugler, and Behrouz H. Far

Introduction

The era of big data brings new challenges for computing platforms, as enormous data are generated on a daily basis and are characterized by huge volume, variety, and velocity. The MapReduce framework [1, 2] and its open-source implementation Hadoop [3] were built to facilitate certain forms of batch processing of distributed data by parallelizing storage and processing. However, their fundamentals inherently limit their ability to parallelize online processes or sequential algorithms. MapReduce is a programming framework that is capable of processing large datasets using parallel distributed algorithms on a cluster of commodity hardware. MapReduce facilitates parallelized computation, fault tolerance, and distributed data processing. Hadoop [4] is an open-source software implementation of the MapReduce framework for running applications on large clusters, which provides both distributed storage and computational capabilities. MapReduce has limited support for communication and interaction between processes running on the Hadoop computing nodes [1, 4]. Communication between

E. A. Mohammed Department of Software Engineering, Faculty of Engineering, Lakehead University, Thunder Bay, ON, Canada e-mail: [email protected]; [email protected] C. T. Naugler Departments of Pathology and Laboratory Medicine and Family Medicine, University of Calgary and Calgary Laboratory Services, Calgary, AB, Canada e-mail: [email protected] B. H. Far () Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada e-mail: [email protected] © Springer Nature Switzerland AG 2018 M. Moshirpour et al. (eds.), Applications of Data Management and Analysis, Lecture Notes in Social Networks, https://doi.org/10.1007/978-3-319-95810-1_6


computing nodes in a Hadoop cluster to keep track of distributed data is not allowed. Thus, simple or complex structures that are not stored in the cluster shared memory are not visible to the computing nodes. This makes it difficult to compute serial operations such as a moving average. This, in turn, limits the framework's capability to implement practical algorithms that need to know the state of other computing nodes or to share data between nodes for computation dependency; implementing sliding-window algorithms that require data dependency for computation is therefore not feasible on the MapReduce framework [4]. This is in addition to the fact that the MapReduce framework cannot handle interactive data processing algorithms. A common input format of the MapReduce framework is the key-value pair input format [5, 6], where the key is the item that the MapReduce program uses in the map phase to query the input for specific values. The queried value list and the corresponding keys are passed to the reduce phase to be processed. The computation in the MapReduce framework will fail if any key-value pair list cannot be completely loaded into a reducer's memory. Sequential processing algorithm [7] is a term used to describe computation that occurs in a specific order, e.g., processing array elements from left to right. Therefore, sequential processing algorithms require that all the data reside in a computing node's memory at the same time. One example of a sequential processing algorithm is the rolling/sliding-window algorithm [8], e.g., the moving average. The problem becomes more difficult if the computation is conducted on a specific sequence of inputs, e.g., searching for tags composed of many keywords. From the above illustration, it is clear that sequential algorithms with very large datasets may not be possible to run in the MapReduce framework, given the limitations of the MapReduce framework and the workflow of a sequential algorithm. In this paper, we describe and implement a sliding-window algorithm in the MapReduce framework. The algorithm implementation does not violate the fault tolerance mechanism of the MapReduce framework nor overload the shared distributed memory of the Hadoop cluster. The algorithm reshapes the input data and makes it ready for sequential computation in the MapReduce framework. This is accomplished under the MapReduce program's control by copying the required data between computing nodes, utilizing the input file metadata stored in the Hadoop cluster master node. The rest of the paper is structured as follows. In section "Background," background on the MapReduce framework and the Hadoop platform is presented, together with recent related work to improve the performance of the MapReduce framework. In section "Sliding-Window Algorithm," the details of the proposed algorithm are illustrated. This is followed in section "Results and Discussion" by a detailed example of the implementation of a moving average algorithm and a detailed comparison to previous work. The limitations and future extensions of this work are discussed with concluding remarks in section "Conclusion, Limitations, and Future Work."


Background

MapReduce borrows ideas from functional programming [1], where the developer defines map and reduce functions to process large sets of distributed data. The implementations of the MapReduce framework enable many of the most common calculations on large-scale data to be performed on computing clusters efficiently and in a way that is tolerant of hardware failures during computation. However, MapReduce is not suitable for online transactions and sequential processing, e.g., a moving average. The key strengths of the MapReduce programming framework are the high degree of parallelism combined with the simplicity of the programming framework and its applicability to a large variety of application domains [1]. This requires dividing the workload across a large number of machines. The degree of parallelism depends on the input data size. The map function processes the input pairs (key1, value1), returning intermediate pairs (key2, value2). The intermediate pairs are then grouped together according to their key. The reduce function outputs new key-value pairs of the form (key3, value3). Hadoop [4] is an open-source software implementation of the MapReduce framework for running applications on large clusters built of commodity hardware. Hadoop is a platform that provides both distributed storage, i.e., the Hadoop Distributed File System (HDFS), and computational capabilities, i.e., MapReduce. Hadoop was first conceived to fix a scalability issue in the Nutch project [4, 9], an open-source crawler and search engine that utilizes the MapReduce and Bigtable [9] methods developed by Google. The HDFS stores data on the computing nodes, providing a very high aggregate bandwidth across the cluster. The MapReduce processing framework imposes strong assumptions on the dependence relationships for computation among data, and the accuracy/exactness of the algorithm results depends on the commutativity and associativity properties of the processing algorithms [10]. MapReduce imposes a few dataset structures, e.g., input key-value pairs, and relieves developers from low-level routines, e.g., input data assignment and task distribution. A limitation of MapReduce is that it constrains a program's ability to process low volumes of data, which makes it difficult to extend the MapReduce framework to more common computations on a small scale or to sequential computation over a large volume of data [5]. Several studies have been conducted to improve the performance of the MapReduce programming framework. An integration of a merging stage after the reduce


function [11] was proposed to join two datasets produced by multiple reduce functions. A lightweight parallelization framework known as MRlite [5] was proposed to alleviate the one-way scalability problem, i.e., processing a low volume of data. The results showed that MRlite could efficiently process small data and handle data dependency. A method based on an index pooling technique was implemented to solve data dependency on the MapReduce framework. This method was used to implement a rolling window for time series analysis algorithms [12]. It was based on resampling the time series in order to reduce the size of the data and compute equally spanned indices that were used to share specific file portions for computation. However, the index file must be handled carefully, as the MapReduce failure handling method cannot back up this file. Moreover, this method is not suitable where resampling of the time series cannot be considered [13]. The accumulate computation design pattern [14] was used to implement a method to solve the problem of data dependency [15] in MapReduce. This method used two MapReduce jobs with a sequential reducer phase. Moreover, the shared data and parameters were stored on the distributed shared memory [9] of the computing cluster, which constitutes a single point of failure and a communication bottleneck. From the above illustrations, it is obvious that there is still a gap between nontrivial problems and the MapReduce framework, because the MapReduce programming framework is a low-level framework that causes difficulties in practical programming. This is due to the fact that the Hadoop computing nodes are loosely coupled and cannot share data among themselves except through the shared distributed memory of the processing cluster, which only provides parameter sharing and is small compared to the input data size. In this paper, the sliding-window algorithm is capable of sharing data for computation dependency utilizing the MapReduce job parameters, i.e., the MapReduce job metadata, without violating the fault tolerance of the MapReduce framework or loading the cluster distributed shared memory. The metadata is stored in the form of libraries on the master node of the processing cluster. The metadata includes information about the input data, such as how many splits an input file has, how many bytes are in each data split, the offset of every data split from the beginning of the input file, etc. This algorithm helps implement a specific class of algorithms that process data in a sequential or predefined order, e.g., moving average algorithms. An illustration of how to use the MapReduce job metadata to share data for computation dependency is given by implementing a moving average program.
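To make the (key1, value1) → (key2, value2) → (key3, value3) flow concrete before the algorithm is described, the following pure-Python sketch simulates the map, shuffle/group, and reduce phases on a toy word-count task. It only illustrates the programming model and does not use the Hadoop API.

```python
# Pure-Python simulation of the MapReduce programming model (illustration only).
from collections import defaultdict

def map_fn(key1, value1):
    # (key1, value1) -> list of intermediate (key2, value2) pairs
    return [(word, 1) for word in value1.split()]

def reduce_fn(key2, values):
    # grouped (key2, [value2, ...]) -> (key3, value3)
    return (key2, sum(values))

records = [(0, "big data big clusters"), (1, "data clusters")]

# Shuffle/sort: group all intermediate values by key
groups = defaultdict(list)
for k1, v1 in records:
    for k2, v2 in map_fn(k1, v1):
        groups[k2].append(v2)

print([reduce_fn(k, vs) for k, vs in sorted(groups.items())])
# [('big', 2), ('clusters', 2), ('data', 2)]
```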

Sliding-Window Algorithm

The MapReduce framework does not support data dependency for computation and assumes that the value list associated with any given key can reside in a reducer's memory for computation; however, in many cases, this assumption cannot hold,


e.g., time series analysis of a stock market history. This limitation makes it difficult to implement simple algorithms such as the moving average, which is utilized by many applications [16]. The proposed algorithm simplifies data dependency for computation in the MapReduce framework and is shown in Fig. 1. The proposed algorithm is based on utilizing the metadata of the processed file to create new keys for all the data, copy the requested data to be shared from the current input split with the created keys, and map them to the corresponding reducer for the actual processing steps. In the MapReduce framework, a map function processes a file input split and can request the length and offset of the input split by calling the methods getLength() and getStart(), respectively [17]. The developer must supply the required portion of the data to be shared “window” with the next mapper of the MapReduce job. The shared data can be anywhere in the input split; however, for consistency, the last window of every input split will be copied for the next mapper. When the developer invokes the MapReduce job, the desired file is transferred from the local file system to the HDFS. The map function starts by reading the input split, the split size, and the offset of the input split. The split size indicates how many bytes are in the input split. The offset of the input split represents the start of the input split relative to the start of the file. The copied window is complete records/lines of the input split that can be variable in length, and thus, the map function reads each line and updates the size value by the size of the lines to be copied. The map function verifies if the requested window to share is less than the available number of bytes in the current input split. If the verification fails, the MapReduce job is terminated. Otherwise, the map function tests if the current input split is the first one by examining the offset of the current input split. If the offset equals to zero, then the map function has the first input split. The next input split offset is calculated by adding the current offset to the input split length. This value is used as the first part of the new keys knext of the shared values. The other part of the keys is the shared value keys in the current input split kcurrent. The shift mathematical operator is used to shift left the knext by the number of bytes to be shared, and the “kcurrent” is added to create the new keys for the copied values. As a final step, the new keys and the associated values are emitted from the map function to the HDFS. This is in addition to emitting the original key-value pairs of this input split. If the offset is not equal to zero, then the map function has the middle or last input split. In this case, the map prepares the input key space to receive the copied window and emits the last window to the next mapper. Apart from that, the same procedures are followed as in the case where the input split offset is zero. The MapReduce job is a map-only job as there is no need for the reducer function if the objective is only to prepare the data for computation by any other MapReduce job.
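A simplified, single-process re-creation of the map-side logic described above is sketched below. It is not the Hadoop/Java implementation: each call stands in for one mapper that sees only its own split (records as (local key, value) pairs), its byte offset, and its length, and the decimal key factor mirrors the worked example in the next section.

```python
# Simplified simulation of the map-only data-preparation step (not the Hadoop code).
# Each call plays the role of one mapper: it re-keys its own records and duplicates
# the last (window - 1) records into the key space of the next split.
def prepare_split(records, split_offset, split_length, window, line_size, is_last):
    factor = 10 ** (window - 1)          # key-space width per split, as in the example
    shift = 0 if split_offset == 0 else (window - 1) * line_size
    out = []
    # original records, shifted to leave room for the window copied from the previous split
    for local_key, value in records:
        out.append((split_offset * factor + local_key + shift, value))
    # copy the last (window - 1) records into the next split's key space
    if not is_last:
        next_offset = split_offset + split_length
        for slot, (_, value) in enumerate(records[-(window - 1):]):
            out.append((next_offset * factor + slot * line_size, value))
    return out

# The three splits of the worked example that follows: (1,2,3), (4,5,6), (7)
print(prepare_split([(0, 1), (2, 2), (4, 3)], 0, 6, 3, 2, False))
print(prepare_split([(0, 4), (2, 5), (4, 6)], 6, 6, 3, 2, False))
print(prepare_split([(0, 7)], 12, 2, 3, 2, True))
# The keys reproduce Table 1: (0,2,4,600,602), (604,606,608,1200,1202), (1204)
```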

Fig. 1 The proposed sliding-window algorithm on the MapReduce framework for computation (flowchart of the map-only job: read the input split size, length, and offset from HDFS; obtain the shared-data "window" and line sizes; stop if the window exceeds the split length; otherwise calculate the new keys for the shared data within the next mapper's offset range, creating space for the data received from the previous map when the split offset is nonzero; copy the shared values; and emit the key-value pairs of the original and shared data back to HDFS)


Results and Discussion

The MapReduce programming framework has received enormous attention in many fields, as it facilitates the implementation of many applications by hiding the low-level parallelization routines from the implementation layer. However, the MapReduce programming framework cannot implement applications with computational dependency, e.g., the moving average algorithm. In this paper, a new implementation of a sliding-window algorithm is proposed to handle problems that require data dependency for computation. The proposed algorithm is based on analyzing the metadata of the file processed by the MapReduce framework to make the data required for computational dependency available among the computing nodes. The proposed algorithm consists of a map-only MapReduce job. The map function executes the actual data preparation and outputs the new form of the data in a new file on the HDFS. The developer can then use this new file as an input to another MapReduce job for the actual computation. The following illustration shows the usefulness and efficiency of the proposed algorithm and a comparison to other methods in the literature.

Sliding-Window Algorithm Implementation for Moving Average Computation in MapReduce Framework

A moving average is a series of values calculated by averaging several sequential values of the input data. The moving average is used to smooth out data and is computed by a running average (rolling window) over a specific number of data points (data window), i.e., the last N points of data. The average is expressed as the sum of the last N points divided by N, as represented by the following equation:

$$\mathrm{MA}_i = \frac{1}{N}\sum_{j=i}^{i+N-1} x_j, \qquad i = 1, \ldots, M - N + 1 \tag{1}$$

where MA_i is the computed moving average at point i, x_j is input number j in the data, and M is the total number of points in the data. One method to compute the moving average is to repeat the computation for every new data point, which requires storing the last N data points, N − 1 additions, and a single division. This computation is achievable using a single computing node if the input data can reside in a single computing node's memory. In the following, an example of data preparation for the moving average computation on the MapReduce framework is illustrated using the proposed algorithm.
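For reference, the single-node version of Eq. (1) is trivial; the difficulty addressed by this chapter only arises when the series does not fit in one machine's memory. A minimal sketch:

```python
# Single-node moving average (Eq. 1), feasible only when the whole series fits in memory.
def moving_average(x, n):
    return [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

print(moving_average([1, 2, 3, 4, 5, 6, 7], 3))   # [2.0, 3.0, 4.0, 5.0, 6.0]
```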


Table 1 The input file split metadata per map function to prepare the data for the moving average algorithm in the MapReduce framework

Map phase   Input     Offset in bytes   Split size in bytes   Length in bytes   Line size   Input keys   Intermediate keys
Map #1      (1,2,3)   0                 6                     6                 2           (0,2,4)      (0,2,4,600,602)
Map #2      (4,5,6)   6                 6                     6                 2           (0,2,4)      (604,606,608,1200,1202)
Map #3      (7)       12                6                     2                 2           (0)          (1204)

Consider the given list (1,2,3,4,5,6,7) as the input for a MapReduce job to compute the moving average for a data window of three numbers, and let the split size per map function be set at a three-integer boundary. The job settings result in three input splits as follows: split#1 = (1,2,3), split#2 = (4,5,6), and split#3 = (7). To compute the moving average, the following five averages must be computed: MA1 = average(1,2,3), MA2 = average(2,3,4), MA3 = average(3,4,5), MA4 = average(4,5,6), and MA5 = average(5,6,7). This is unachievable on the MapReduce framework as there is no way to share data for computation. A work-around solution is to attach all numbers to the same key to make sure that all the data reach a single reducer, where the moving average can be computed correctly. However, this solution assumes that the data can reside in a single reducer's memory, which puts significant limitations on the size of the input data. The proposed algorithm overcomes this limitation by utilizing the input file metadata, i.e., input split size, split length, offset, and record keys, to prepare the required data for computation. The proposed algorithm guarantees that the data required by the moving average reach a specific reducer and that the aggregated output of all reducers correctly represents the computation. Table 1 shows the metadata of the input file split per map function and the produced new keys with their associated values, illustrating the preparation of the data to be computed by the moving average algorithm on the MapReduce programming framework. The algorithm starts by reading the MapReduce job settings for the input split size, offset, and file length, together with the required size of the data to be shared. The map function examines the offset of the split to prepare the data to be copied with the corresponding output keys. The example illustrated in Table 1 shows that the processed file has a split size of 6 bytes with a line size, i.e., record size, of 2 bytes. The required shared size is 2 records, i.e., (window − 1), which is 4 bytes, as the window of the moving average is 3 records, i.e., 6 bytes. The proposed algorithm computes the new key space that can accommodate the copied records and outputs the new key-value pairs to a new file on the HDFS if the objective is to prepare the data to be used by another MapReduce job. The illustrated example assumes that a single MapReduce job is used to prepare the data in the map phase and perform the actual computation of the moving average algorithm in the reduce phase. However, other algorithms may require different processing steps for the same data, and thus the proposed algorithm can store the prepared data for other algorithm computation


Table 2 The processing of the intermediate keys to be used in the secondary sort and reduce function to compute the moving average algorithm in the MapReduce framework

Key1 = remainder(intermediate key/100)   Map output value = (key1, value)   Map output key = quotient(key/100)   Map output (key, <key1, value>) pairs
0   (0,1)   0    (0, <0,1>)
2   (2,2)   0    (0, <2,2>)
4   (4,3)   0    (0, <4,3>)
0   (0,2)   6    (6, <0,2>)
2   (2,3)   6    (6, <2,3>)
4   (4,4)   6    (6, <4,4>)
6   (6,5)   6    (6, <6,5>)
8   (8,6)   6    (6, <8,6>)
0   (0,5)   12   (12, <0,5>)
2   (2,6)   12   (12, <2,6>)
4   (4,7)   12   (12, <4,7>)

that may require the same shared data but different computation, e.g., filtering. This comes at the expense of increasing the file size by the shared data and the new keys. The algorithm can calculate different key spans to accommodate records of different sizes. Table 2 shows the new keys and values to be used by the MapReduce job to compute the moving average, where the map function of the MapReduce job divides the input key by 100 and uses the division quotient as the new key for the corresponding values. The map function input key is divided by 100 because the number of shared records per map equals 2, i.e., the divisor is 10^(window − 1). The actual moving average computation then takes place in the reduce function of the MapReduce job. The computation of the keys and the moving averages is illustrated in Table 2. The MapReduce job has only three reducers, as dividing the input keys by 100 results in three different quotient values, i.e., 0, 6, and 12, which correspond to the offset values of the input splits handled by the data preparation MapReduce job. The remainder of the division is used to generate intermediate keys that represent the output of the map function. The first element of each value pair is used in the shuffle and sort phase of MapReduce to perform a secondary sort, keeping the records in the proper order for computation. The reducers receive the following input lists:
1. Reducer#1 list = ((0, <0,1>), (0, <2,2>), (0, <4,3>)), from which the moving average MA1 can be computed using the third element of every tuple.
2. Reducer#2 list = ((6, <0,2>), (6, <2,3>), (6, <4,4>), (6, <6,5>), (6, <8,6>)), from which the moving averages MA2, MA3, and MA4 can be computed using the third element of every tuple.
3. Reducer#3 list = ((12, <0,5>), (12, <2,6>), (12, <4,7>)), from which the moving average MA5 can be computed using the third element of every tuple.
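The second-job logic of this worked example can also be simulated in plain Python: group the prepared pairs by quotient(key/100), use the remainder as a secondary sort key, and let each "reducer" compute its share of the moving averages. Again this only illustrates the logic, not the Hadoop implementation; the prepared pairs below are the intermediate keys and values from Table 1, as produced by the prepare_split sketch earlier.

```python
# Plain-Python simulation of the moving-average job on the prepared key-value pairs
# (an illustration of the grouping/secondary-sort logic, not the Hadoop implementation).
from collections import defaultdict

window = 3
# Intermediate (key, value) pairs as in Table 1
prepared = [(0, 1), (2, 2), (4, 3), (600, 2), (602, 3),
            (604, 4), (606, 5), (608, 6), (1200, 5), (1202, 6), (1204, 7)]

# Map phase of the second job: key -> (quotient, remainder), mirroring Table 2
groups = defaultdict(list)
for key, value in prepared:
    groups[key // 100].append((key % 100, value))

# Reduce phase: secondary sort by remainder, then a sliding average over the values
for reducer_key in sorted(groups):
    values = [v for _, v in sorted(groups[reducer_key])]
    averages = [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]
    print("Reducer for offset", reducer_key, "->", averages)
# offset 0  -> [2.0]            (MA1)
# offset 6  -> [3.0, 4.0, 5.0]  (MA2, MA3, MA4)
# offset 12 -> [6.0]            (MA5)
```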


Comparison to Related Works

The sliding-window algorithm uses one MapReduce job, where the data preparation for computation is performed in the map function without the need to keep records of the generated key space. This does not affect the failure mechanism of the MapReduce framework [9]. The proposed algorithm overcomes the limitation of the method proposed in [12], which necessitates a special failure handling mechanism to preserve the index pool in case of hardware failure. The proposed sliding-window algorithm does not necessitate any parameter or data sharing using the distributed memory of the processing cluster. This is in contrast to the method proposed in [15], which consumes a large amount of the cluster shared memory to share data for computation dependency, in addition to the sequential reducer that was used to communicate the shared data [15]. The proposed algorithm increases the volume of the data reaching the reducers by (line size) × (window − 1) × (number of mappers − 1) plus the size of the produced keys that are stored with the values, where the number of mappers = ceil(input file size/input split size). In the illustrated example, the line size = 2 bytes, the window is 3 records, and the number of mappers, ceil(7/3), is 3. This gives a total increase in the actual data bytes reaching the reducers of 2 × 2 × 2 = 8 bytes for the values only, i.e., excluding the size of the new keys, which is 2 bytes per shared record. This results in a maximum of 30 bytes that can reach any reducer, and the reducer memory must handle this size to be able to compute the moving average. The 30 bytes result from the maximum number of records that can reach any reducer, which equals the maximum number of records in an input split plus window − 1, i.e., 5, multiplied by the total size of a record, i.e., 6. If any reducer cannot handle this requirement, it can be alleviated by increasing the split size relative to the window size. For example, consider a Hadoop cluster where the maximum memory size available for any mapper or reducer is 100 bytes, the input file is 150 bytes of a time series, and it is required to compute the moving average with a rolling window of 3 records of the time series. Then the increase in the values that can reach any reducer is 2 × 2 × 8 = 32 bytes, in addition to another 4 bytes for the new key space of the shared records, producing 36 more bytes that can reach any given reducer memory to compute the moving average. This is in addition to 30 bytes representing the original 10 records with their new key space. From this analysis, any reducer memory must handle at least 66 (36 + 30) bytes.

Fig. 6 Distribution of customers' activity (number of calls > 5 min)

Activity: In order to assess customers' activity, the time span is divided into 8 weeks. The weight (i.e., thickness) of an edge represents whether customer i had at least one contact with customer j in week t. The sum of all weights of the edges connected to a vertex therefore gives its activity. In other words, the number of contacts that a customer has made indicates his or her loyalty and potential to influence others. Results show that more than 1900 customers have at least 15 calls with a duration of more than 5 min. The maximum activity in this network is 117 (Fig. 6).
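One plausible reading of this activity measure is sketched below with assumed (caller, callee, week) call records already filtered to durations above 5 min; it counts, for each customer, the number of weeks in which each contact was active and sums over contacts. It is an illustration, not the study's own pipeline.

```python
# Sketch of the activity measure: per customer, count the active weeks (out of 8)
# for each contact, then sum over contacts. Call records are assumed tuples.
from collections import defaultdict

calls = [("A", "B", 1), ("A", "B", 1), ("A", "B", 2), ("A", "C", 3), ("B", "C", 3)]

weekly_contact = defaultdict(set)          # (customer, contact) -> set of active weeks
for caller, callee, week in calls:
    weekly_contact[(caller, callee)].add(week)
    weekly_contact[(callee, caller)].add(week)

activity = defaultdict(int)                # customer -> total edge weight over all weeks
for (customer, _), weeks in weekly_contact.items():
    activity[customer] += len(weeks)

print(dict(activity))                      # {'A': 3, 'B': 3, 'C': 2}
```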


Fig. 7 Distribution of customers' clustering coefficient

Fig. 8 Distribution of customers' closeness

Structural Variables

Clustering coefficient: This variable captures the mutual friends of customers and is related to the network structure. The higher the clustering coefficient, the more reliable the network. The clustering coefficient reflects the density of the network, and its range is between 0 and 1. As shown in Fig. 7, this parameter is less than 0.005 for most customers.
Closeness: A customer's ability to distribute information through the network is determined by closeness. The network structure permits customers with higher closeness to spread information more easily. The results of the closeness assessment are shown in Fig. 8.


Fig. 9 Distribution of customers' betweenness centrality

Betweenness centrality: This variable has a strong relationship with shortest paths. For fast transfer of information between two nodes, the optimal path is the shortest one. Betweenness centrality thus indicates the controlling role of a customer over the other customers. Customers with the maximum score are assumed to be important for keeping the network interconnected, so the higher the betweenness score, the more important the customer and the higher the customer's network value. Using Eq. 3, betweenness centrality is evaluated for each customer (Fig. 9).
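All three structural variables are standard network measures and can be computed with an off-the-shelf library. The sketch below applies networkx to a toy call graph; it is illustrative only and does not reproduce the study's data or its exact definitions (e.g., Eq. 3).

```python
# Illustrative computation of the structural variables on a toy call graph
# using networkx (not the study's own pipeline or data).
import networkx as nx

calls = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]  # reciprocated calls
G = nx.Graph(calls)

clustering = nx.clustering(G)               # per-customer clustering coefficient (0..1)
closeness = nx.closeness_centrality(G)      # ability to reach others quickly
betweenness = nx.betweenness_centrality(G)  # share of shortest paths through a node

for node in G:
    print(node, round(clustering[node], 3), round(closeness[node], 3),
          round(betweenness[node], 3))
```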

Monetary Variable

As discussed earlier, the customer's future and financial revenue is considered as the monetary value. To assess CLV, there are four types of models, with numerous variations depending on the specific task, the availability of data, and the user [7]. The models are as follows:
1. Basic structural model of CLV
2. Customer migration model
3. Optimal resource allocation models
4. Customer relationship models
Here we have used the first model, which is defined as follows:

$$\mathrm{CLV} = \sum_{i=1}^{n} \frac{R_i - C_i}{(1 + d)^{i-0.5}} \tag{4}$$

where i is the period of the cash flow from the customer transaction, R_i is the revenue from the customer in period i, C_i is the total cost of generating the revenue R_i in period i, and n is the total number of periods of the projected life of the customer under consideration.
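A direct implementation of Eq. (4) is straightforward. In the sketch below, d is taken to be the per-period discount rate (an assumption based on standard CLV notation; its definition is not reproduced in this excerpt), and the revenue and cost figures are hypothetical.

```python
# Sketch of Eq. (4); d is assumed to be the per-period discount rate.
def customer_lifetime_value(revenues, costs, d):
    # revenues[i-1], costs[i-1] are R_i and C_i for period i = 1..n
    return sum((r - c) / (1 + d) ** (i - 0.5)
               for i, (r, c) in enumerate(zip(revenues, costs), start=1))

# Hypothetical customer: four periods of revenue/cost, 10% discount rate per period
print(round(customer_lifetime_value([120, 100, 90, 80], [40, 35, 30, 30], 0.10), 2))
```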


Fig. 10 Distribution of CLV

Table 3 Mean value of clusters

Clusters    Number of customers   CLV      Node degree   Activity   Clustering coefficient   Closeness   Betweenness centrality
Cluster 1   5767                  483.3    25            14.1       0.00945                  4.2         35015.9
Cluster 2   3725                  571.8    61.1          36.8       0.00891                  3.7         127145.8
Cluster 3   1527                  2163.3   34.9          20.3       0.00892                  4.0         56905.6
Cluster 4   1434                  745.0    4.4           0.2        0.02658                  0.1         956.3

Figure 10 shows the CLV distribution for the customers in this study.

Results Analysis

In order to cluster the customers, the k-means algorithm is employed. We examined k = 2–10, and to select the best k, we used Silhouette analysis. The results show that k = 4, with a Silhouette coefficient of 0.782, is the correct number of clusters. Thus, there are four classes of user behavior. Table 3 presents the mean value of the variables for each cluster. The clustering result of the telecommunication call data is displayed in Fig. 11. On the x-axis, each parameter is illustrated by the set of four classes along with the percentage of all users assigned to each cluster, while the y-axis measures the normalized value of feature i of cluster j, computed as

$$r_{ij} = \frac{f_{ij}}{\sum_{k=1}^{K} f_{ik}} \tag{5}$$

where f_ij is the value of feature i of cluster j. This allows different features, and the same feature across all clusters, to be compared instead of absolute values.
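The clustering step itself can be reproduced in outline with scikit-learn: standardize the six value features, run k-means for k = 2–10, and pick the k with the best Silhouette score. The sketch below uses a random placeholder feature matrix and is not the study's own code or data.

```python
# Illustrative k-means clustering with Silhouette-based selection of k
# (random placeholder data, not the study's dataset).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.random((500, 6)))  # placeholder features: CLV,
                                                          # degree, activity, clustering
                                                          # coef., closeness, betweenness
best_k, best_score = None, -1.0
for k in range(2, 11):                     # examine k = 2..10 as in the study
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print("best k:", best_k, "silhouette:", round(best_score, 3))
```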

Fig. 11 Relative feature value for telecommunication call network (clusters 1–4 over the features CLV, degree, activity, clustering coefficient, closeness, and betweenness)

As shown in Table 3, most of the customers (46%) belong to cluster 1, which has the lowest CLV but the highest closeness. The average CLV of cluster 3 (12%) is the highest, while the mean node degree and activity of cluster 2 (30%) are higher than those of the others. The most prominent characteristics of cluster 4 (12%) are its clustering coefficient, which is about three times higher than that of the other clusters, and its relatively high mean CLV. Based on the model features, the clusters are analyzed as follows:
• Cluster 1: Normal customers. Most of the customers are in this cluster. According to CLV, degree, and activity, this cluster has average monetary and influential value. More advertising to raise awareness of existing services and products is a good marketing strategy for these customers.
• Cluster 2: Influential customers. These customers have many friends (high degree) in the network and are also very active; consequently, their influential value is high. It should be noted that the betweenness centrality of this group is the highest; the network structure therefore permits them to act as influencers. This class of customers can spontaneously perform indirect marketing, so it is better that new services and products are initially introduced to this cluster.
• Cluster 3: High-value customers. Customers in this group have a high monetary value, their influential value is good, and their structural value is at a proper level. High-quality services and products can satisfy them.
• Cluster 4: Prospect customers. The average CLV of this cluster is normal, but their influential and structural values are very low. Considering their clustering coefficient, they have a tendency to remain in the network, but their low activity and few friends could make them unstable. To retain these customers, informing them of special services and products with competitive advantages is the best approach.


Fig. 12 Clustered telecommunication network

The telecommunication network divided into clusters is presented in Fig. 12. In order to assess the validity of the model, we ran the segmentation algorithm on the dataset based solely on CLV. The customers were divided into four clusters, namely, high value (13%), normal (53%), low value (26%), and new entrants (8%). A deeper look into the clusters reveals that some customers are assigned to the normal cluster while they are in fact prospect customers, as they called in/out with only a few subscribers. Furthermore, some customers in the low-value cluster would move to the normal cluster once their network value is considered, because of their activity in the network. Moreover, some customers in the normal cluster, in view of the number of their connections and the volume of their calls, should be labeled as influential customers. As the results show, taking the network aspects of customers into account leads to a more precise evaluation of their real value.

Conclusion

This study focuses on the network value of customers in segmentation and proposes a 3D value model including monetary value, influential value, and structural value. In order to assess customer value, we have defined several variables which all,


except the monetary variable, are based on complex network features. Afterward, the customers are assigned to different classes using a clustering algorithm. Here, we used the k-means algorithm, and to evaluate the quality and the correct number of clusters, the Silhouette indicator is used. We analyzed a large telecommunication network as an example, where the nodes are the subscribers of a mobile phone operator and the links correspond to reciprocated phone calls that we identify as their interactions. After employing the k-means algorithm, according to the results of the Silhouette analysis, the customers are divided into four different clusters. The customers with an average value of all the features are assigned to the normal customers cluster. Customers who have many friends and relationships belong to cluster 2. High-value customers, according to monetary value and influential value, are in cluster 3. Finally, prospective customers, the customers in cluster 4, have normal monetary value, but their influential value and structural value are very low. Comparing cluster 4 to clusters 1 and 2 shows that, in addition to monetary value, a customer's influence and importance in the structure of the network are crucial in customer valuation and segmentation. The methodology discussed in this study is not limited to telecommunication networks but could be applied to many other networks.

References 1. Ngai, E. W. T., Xiu, L., & Chau, D. C. K. (2009). Application of data mining techniques in customer relationship management: A literature review and classification. Expert Systems with Applications, 36, 2592–2602. 2. Brito, P. Q., Soares, C., Almeida, S., Monte, A., & Byvoet, M. (2015). Customer segmentation in a large database of an online customized fashion business. Robotics and ComputerIntegrated Manufacturing, 36, 93–100. 3. Kahreh, M. S., Tive, M., Babania, A., & Hesan, M. (2014). Analyzing the applications of customer lifetime value (CLV) based on benefit segmentation for the banking sector. ProcediaSocial and Behavioral Sciences, 109, 590–594. 4. Miguéis, V. L., Camanho, A. S., & Cunha, J. F. E. (2012). Customer data mining for lifestyle segmentation. Expert Systems with Applications, 39, 9359–9366. 5. Tsiptsis, K., & Chorianopoulos, A. (2009). Data mining techniques in CRM: Inside customer segmentation. West Sussex: Wiley. 6. Gupta, S. (2009). Customer-based valuation. Journal of Interactive Marketing, 23, 169–178. 7. Jain, D., & Singh, S. S. (2002). Customer lifetime value research in marketing: A review and future directions. Journal of Interactive Marketing, 16, 34–46. 8. Damm, R., & Monroy, C. R. (2011). A review of the customer lifetime value as a customer profitability measure in the context of customer relationship management. Intangible Capital, 7, 261–279. 9. Stahl, H. K., Matzler, K., & Hinterhuber, H. H. (2003). Linking customer lifetime value with shareholder value. Industrial Marketing Management, 32, 267–279. 10. Berger, P. D., & Nasr, N. I. (1998). Customer lifetime value: Marketing models and applications. Journal of Interactive Marketing, 12, 17–30. 11. Gupta, S., Lehmann, D. R., & Stuart, J. A. (2004). Valuing customers. Journal of Marketing Research, XLI, 7–18.


12. Robert Dwyer, F. (1997). Customer lifetime valuation to support marketing decision making. Journal of Interactive Marketing, 11, 6–13. 13. Tirenni, G. R. (2005). Allocation of marketing resources to optimize customer equity. Switzerland: University of St. Gallen. 14. Bauer, H. H., Hammerschmidt, M., & Braehler, M. (2003). The customer lifetime value concept and its contribution to corporate valuation. Yearbook of Marketing and Consumer Research, 1, 47–67. 15. Gupta, S., Hanssens, D., Hardie, B., Kahn, W., Kumar, V., Lin, N., & Sriram, S. (2006). Modeling customer lifetime value. Journal of Service Research, 9, 139–155. 16. Shen, C.-C., & Chuang, H.-M. (2009). A study on the applications of data mining techniques to enhance customer lifetime value. WSEAS Transactions on Information Science and Applications, 6, 319–328. 17. Klier, J., Klier, M., Probst, F., & Thiel, L. 2014. Customer lifetime network value. 18. Domingos, P., & Richardson, M. (2001). Mining the network value of customers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 57–66), ACM. 19. Buttle, F. A. (1998). Word of mouth: Understanding and managing referral marketing. Journal of Strategic Marketing, 6, 241–254. 20. Trusov, M., Bucklin, R. E., & Pauwels, K. (2009). Effects of word-of-mouth versus traditional marketing: findings from an internet social networking site. Journal of Marketing, 73, 90–102. 21. Liu, R., Ma, J., Qi, J., Wu, B., & Wang, C. (2009). A customer network value model based on complex network theory. In International Conference on Systems, Man, and Cybernetics, San Antonio, TX, USA. 22. Hogan, J. E., Lemon, K. N., & Libai, B. (2003). What is the true value of a lost customer? Journal of Service Research, 5, 196–208. 23. Liu, R., Li, Y., & Qi, J. 2009 Making customer intention tactics with network value and churn rate. In Wireless Communications, Networking and Mobile Computing, WiCom’09. 5th International Conference on (pp. 1–4). Nanjing, China: IEEE 24. Iyengar, R., Van Den Bulte, C., & Valente, T. W. (2011). Opinion leadership and social contagion in new product diffusion. Marketing Science, 30, 195–212. 25. Kumar, V., Aksoy, L., Donkers, B., Venkatesan, R., Wiesel, T., & Tillmanns, S. (2010). Undervalued or overvalued customers: capturing total customer engagement value. Journal of Service Research, 13, 297–310. 26. Libai, B., Muller, E., & Peres, R. (2013). Decomposing the value of word-of-mouth seeding programs: Acceleration versus expansion. Journal of Marketing Research, 50, 161–176. 27. Nitzan, I., & Libai, B. (2011). Social effects on customer retention. Journal of Marketing, 75, 24–38. 28. Goyal, A., Bonchi, F., & Lakshmanan, L. V. (2011). A data-based approach to social influence maximization. Proceedings of the VLDB Endowment, 5, 73–84. 29. Kempe, D., Kleinberg, J., & Tardos, É (2003). Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 137–146). ACM. 30. Agarwal, N., Liu, H., Tang, L., & Yu, P. S.(2008). Identifying the influential bloggers in a community. In Proceedings of the 2008 International Conference on Web Search and Data Mining (pp. 207–218). ACM. 31. Bonato, A., & Tian, Y. (2013). Complex networks and social networks. In E. Kranakis (Ed.), Advances in network analysis and its applications. Berlin Heidelberg: Springer. 32. Kleinberg, J. (2006). Complex networks and decentralized search algorithms. 
Proceedings of the International Congress of Mathematicians (ICM), 1019–1044. 33. Albert, R., & Barabasi, A.-L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47. 34. Costa, L. D. F., Rodrigues, F. A., Travieso, G., & Villas BOAS, P. (2007). Characterization of complex networks: A survey of measurements. Advances in Physics, 56, 167–242.

A New 3D Value Model for Customer Segmentation: Complex Network Approach

145

35. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., & Hwang, D. U. (2006). Complex networks: Structure and dynamics. Physics Reports, 424, 175–308. 36. Newman, M. (2003). The structure and function of complex networks. SIAM Review, 45, 167– 205. 37. De Meo, P., Ferrara, E., Fiumara, G., & Ricciardello, A. (2012). A novel measure of edge centrality in social networks. Knowledge-Based Systems, 30, 136–150. 38. Pastor-Satorras, R., & Vespignani, A. (2001). Epidemic spreading in scale-free networks. Physical Review Letters, 86, 3200.

Finding Influential Factors for Different Types of Cancer: A Data Mining Approach

Munima Jahan, Elham Akhond Zadeh Noughabi, Behrouz H. Far, and Reda Alhajj

M. Jahan · E. A. Z. Noughabi · B. H. Far: Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB, Canada
R. Alhajj: Department of Computer Science, University of Calgary, Calgary, AB, Canada

Introduction

Cancer is one of the leading causes of morbidity and mortality worldwide. According to the WHO, cancer is the second leading cause of death globally and was responsible for 8.8 million deaths in 2015 [1]. Cancer is not a single disease—it has more than 200 types [2] with many possible causes. All are distinct diseases that require different approaches to diagnosis and treatment. While doctors have an idea of what may increase the risk of cancer, the majority of cancers occur in people who do not have any known risk factors. Environmental, social, and behavioral influences are known to be significant [3] in some cases. According to the US National Cancer Institute (NCI), the most-studied known or suspected risk factors of cancer are age, alcohol, cancer-causing substances, chronic inflammation, diet, hormones, immunosuppression, infectious agents, obesity, radiation, sunlight, and tobacco [4].

The motivation behind this research is to use data mining techniques to identify the influential risk factors for different cancers—related to the behavioral, demographic, and medical status of a patient—and to provide experts with useful information to better understand the dynamics behind cancer development. In this paper, we adapt and extend association rule mining [5] and contrast set mining [6] to extract useful knowledge regarding cancer.

For the analysis of dominant patterns, we used two publicly available datasets, from SEER and NHIS. The Surveillance, Epidemiology, and End Results (SEER) Program is a premier source of domestic cancer statistics. The data collected by SEER represents 28% of the US population across several geographic regions and is available from SEER with permission [7]. The National Health Interview Survey (NHIS) is the principal source of information on the health of the civilian noninstitutionalized population of the United States and is one of the major data collection programs of the National Center for Health Statistics (NCHS), which is part of the Centers for Disease Control and Prevention (CDC) [8].

There are numerous active works in the domain of cancer research, most of which focus on a single type of cancer. Our paper is one of the few that considers multiple cancer types. The main objective of this research is twofold. First, we identify the patterns that are significant for cancer in general; then we find features that are strongly evident in some types of cancer but rare in others, which we regard as distinguishing factors for those specific cancer types. This is done through frequent pattern mining [5] and contrast set mining [6] using multiple cancer datasets.

Frequent pattern mining and association rule mining [5] are prevalent in medical research for finding prognostic indicators of certain cancers and for discovering correlations among diseases. Most of the existing works focus on a single type of cancer, whereas our contribution addresses several types. Even though the methodology is currently applied to and verified on cancer datasets, it is general enough to be used to identify possible influential factors for other diseases such as diabetes and heart disease. It can also be applied to one specific cancer type to explore deeper insights.

The rest of the paper is organized as follows: the “Literature Review” section summarizes related work. The “Proposed Methodology” section provides the detailed methodology of the proposed research. The “Experimental Results” section presents the empirical result analysis along with a description of the datasets. Concluding remarks are given in the “Conclusion and Discussion” section along with some directions for future work.

Literature Review

In this section, we review the literature related to this research, with a focus on works using data mining approaches in health care. We also report some works based on the SEER dataset.

Data Mining Techniques

Frequent pattern mining and association rule mining are data mining techniques. Frequent patterns are itemsets, subsequences, or substructures that appear in a dataset with frequency no less than a user-specified threshold and from which a set of rules can be generated. The concept of frequent itemsets was first introduced for mining transaction databases by Agrawal in 1993.


Following the original definition by Agrawal [5], the problem of frequent pattern mining and association rule mining can be defined as follows. Let I = {i1, i2, . . . , in} be a set of n binary attributes called items. Let D = {t1, t2, . . . , tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A k-itemset α, which consists of k items from I, is frequent if α occurs in D no fewer than θ|D| times, where θ is a user-specified minimum support and |D| is the total number of transactions in D. A rule is defined as an implication of the form X → Y where X, Y ⊆ I and X ∩ Y = ∅. The sets of items (itemsets for short) X and Y are called the antecedent (left-hand side or LHS) and consequent (right-hand side or RHS) of the rule, respectively. Association rule generation is usually split into two separate steps: the frequent itemsets are found first, and the rules satisfying the confidence threshold are then generated from them. Finding frequent patterns plays an essential role in mining associations, correlations, and many other interesting relationships among data. Moreover, it helps in data indexing, classification, clustering, and other data mining tasks as well. Among the several existing methodologies for frequent itemset mining, Apriori [9], FP-growth [10], and Eclat [11] are the most fundamental ones. The basic principle behind the Apriori algorithm [9] and its alternatives [12] is: “A k-itemset is frequent only if all of its sub-itemsets are frequent. This implies that frequent itemsets can be mined by first scanning the database to find the frequent 1-itemsets, then using the frequent 1-itemsets to generate candidate frequent 2-itemsets, and checking against the database to obtain the frequent 2-itemsets. This process iterates until no more frequent k-itemsets can be generated for some k.” Frequent pattern mining is an actively researched technique in data mining. Several variations of frequent pattern mining are also available, such as sequential pattern mining [13], structured pattern mining [14], correlation mining [15], and frequent pattern-based clustering [16].
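As a minimal illustration of these definitions, the sketch below (assuming the arules R package, which provides an Apriori implementation; the items and transactions are invented) mines all itemsets whose support is at least 0.5 from a three-transaction toy database:

```r
library(arules)

# Three toy transactions over invented items A, B, C
trans <- as(list(
  c("A", "B", "C"),
  c("A", "B"),
  c("B", "C")
), "transactions")

# All itemsets with support >= 0.5, i.e., appearing in at least 2 of the 3 transactions
freq <- apriori(trans, parameter = list(supp = 0.5, target = "frequent itemsets"))
inspect(freq)  # e.g., {B} has support 1.0 and {A, B} has support 2/3
```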

Cancer Research

Scientists are looking for causes of cancer and ways to prevent it, better ways to find it early, and ways to improve treatments. Many studies aim to identify the causes of cancer in the hope that this knowledge may help prevent it. Other studies examine whether certain types of diets, dietary supplements, or medicines can lower a patient's risk of a specific cancer. For example, many studies have shown that aspirin and similar pain relievers might help lower the risk of colorectal cancer, but the medication can sometimes have serious side effects [17]. Researchers are trying to determine whether there are groups of people for whom the benefits would outweigh the risks. Some areas of interest in cancer research are identifying risk factors, causes, and prevention [4]; early diagnosis or prediction of a cancer [18, 19]; treatment of specific cancer types; cancer statistics depending on demographic and spatial disparities [17]; survivability prediction and analysis; and sociodemographic factors related to cancer.


There are several reports on using association rules in the discovery of cancer prevention factors and also in cancer detection and surveillance. For example, an automatic diagnosis system for detecting breast cancer based on association rules and neural networks is presented in [18]. The authors in [19] developed an association algorithm to predict the risk of breast cancer based on the probabilities of genetic mutations. The work presented in [20] investigates association rules between the set of transcription factors and the set of genes in breast cancer. In [21], the researchers attempt to integrate the most widely used prognostic factors associated with breast cancer (primary tumor size, lymph node status, tumor histological grade, and tumor receptor status) with whole-genome microarray data to study the existing associations between these factors and gene expression profiles. A work to determine association rules between family history of colorectal cancer in first-degree relatives, parental consanguinity, lifestyle and dietary factors, and the risk of developing colorectal cancer can be found in [22]. The study was performed with controls matched by age, gender, and race. The results indicated that parental consanguinity, family history of colorectal cancer, smoking, obesity, and dietary consumption of bakery items were more often associated with a colorectal cancer diagnosis. Nahar et al. [23] compared three association rule mining algorithms (Apriori, predictive Apriori, and Tertius) for uncovering the most significant prevention factors for bladder, breast, cervical, lung, prostate, and skin cancers. Based on their experimental results, they concluded that Apriori is the most useful association rule mining algorithm for the discovery of prevention factors. Several other studies have applied the Apriori association algorithm to the analysis of cancer data, focusing on treatment outcome as well as detection. Hu [24] studied the association rules between degree of malignancy, number of invasion nodes, tumor size, tumor recurrence, and radiation treatment of breast cancer, and proposed an improved Apriori association algorithm to reduce the size of candidate sets and better predict tumor recurrence in breast cancer patients. Association rule mining is applied to survival time in the lung cancer data from the Surveillance, Epidemiology, and End Results (SEER) Program in [25]. In [26], the demographic, clinical, and pathological characteristics of triple-negative breast cancer among Turkish patients are investigated; the main focus of that paper is the prognosis of triple-negative breast cancers based on the statistical analysis of different pathological components. While plenty of studies have applied association rule mining to different types of cancer, only a few have considered multiple cancer types, and none of the above works attempt to contrast different kinds of cancer to explore their distinguishing factors as our approach does. Some studies [26] use traditional statistical analysis, whereas our approach relies on data mining techniques. A significant advantage of the data mining approach is that it is not necessary to start with a hypothesis test; rather, the algorithm can generate hypotheses from the data automatically. Annually, diverse studies with NHIS and other similar public health data are carried out in order to gain insight into current health behavior and trends.


For example, the article in [27] presents the prevalence, patterns, and predictors of yoga use based on the survey data from NHIS. The authors in [28] observe the effects of recall on reporting injury and poisoning episodes in the National Health Interview Survey. However, no literature was found on cancer studies using the NHIS datasets. Several researchers have used SEER data in their studies [29–34], a few of them using the SEER*Stat software provided by SEER through NCI for cancer updates. Significant effort can be found in survival prediction [33, 34]. A reliable prediction methodology for breast cancer is proposed in [29] using C4.5 classification to distinguish between the different stages known as the “carcinoma in situ” (beginning or pre-cancer stage) and “malignant” (potential) groups. In [30], the authors use support vector machines (SVMs) and decision trees to identify cancer patients for whom chemotherapy could prolong survival time. Different algorithms such as SVMs [30], C4.5 [29], K-means [31], and decision trees [32] are used to find prognostic indicators for different types of cancer. However, the methodology in the literature is very specific to a single cancer and cannot be generalized to other types. This paper initiates a new effort to identify facts that are applicable to cancer in general and contrasts multiple cancer types to find the distinguishing features of different cancers. The framework is also extendible to finding risk factors for other diseases.

Proposed Methodology

Figure 1 presents the basic framework of this research, with each major phase shown as a separate block indicating the tasks associated with it. Data preprocessing and data analysis are the two major phases that incorporate the different jobs needed to complete the process; the last phase is validation of the results.

Here, we use two different datasets related to cancer: the survey data from NHIS and the SEER cancer database. The data collected from NHIS and SEER is first processed and prepared for the experiment. Data preprocessing is one of the most important phases in any data mining application, as the quality of the result depends largely on this step [35]. This phase includes the tasks of handling missing values, outliers, and duplicate data, feature selection, and normalization, as shown in the first block of Fig. 1. It also includes some data transformation and integration. Data analysis is the main stage of our framework and consists of three major functions:
1. Frequent pattern mining
2. Association rule mining
3. Contrast set mining


Fig. 1 Framework of the proposed work. The data preprocessing phase covers missing values, noise, duplicate data, data consistency, data formatting, continuous data, and integration of the data from NHIS/SEER; the data analysis phase covers frequent pattern mining, association rule mining, and contrast set mining; the validation phase covers internal validation and interpreting the obtained patterns and results.

The last phase is validation of the results. Internal validation is addressed in this study: the patterns and rules are validated with support, confidence, and lift measures, and the obtained patterns and results are also examined in light of existing research. External validation requires further effort and investigation on the health side by medical researchers.

Finding Frequent Itemsets

Our first objective is to find the patterns that are frequent in cancer irrespective of its type. These patterns are the influential factors for all cancer types.


To find the frequent patterns, we use a frequent pattern mining approach. Among the several available algorithms for frequent pattern mining, we choose Apriori [9]. Apriori is an algorithm for frequent itemset mining and association rule learning over transactional databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger itemsets as long as those itemsets appear sufficiently often in the database. We use a minimum support threshold of 0.001 for the frequent items, which means that an item is considered frequent if it appears in at least 1000 transactions of the database.
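A sketch of this step is given below, assuming the arules package and a hypothetical export of the preprocessed records as one basket of attribute=value items per line (the file name and format are assumptions, not the authors' actual pipeline):

```r
library(arules)

# Hypothetical input: one line per patient, items written as "ATTRIBUTE=value"
trans <- read.transactions("nhis_cancer_items.csv", format = "basket", sep = ",")

# Frequent itemsets at the minimum support threshold used above (0.001)
freq <- apriori(trans, parameter = list(supp = 0.001, target = "frequent itemsets"))

# Show the itemsets with the highest support first
inspect(head(sort(freq, by = "support"), 20))
```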

Finding Association Rules

In this phase, we try to identify associations between the different features and the cancer types using association rule learning. It helps us discover strong rules in datasets using some measures of interestingness [36]. In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. Let X be an itemset, X → Y an association rule, and T the set of transactions of the given database. Support and confidence are defined as below.

Support

Support is an indication of how frequently the itemset appears in the dataset. The support of X with respect to T is defined as the proportion of transactions t in the dataset which contain the itemset X:

supp(X) = |{t ∈ T; X ⊆ t}| / |T|

Confidence

Confidence is an indication of how often the rule has been found to be true. The confidence of a rule X → Y, with respect to a set of transactions T, is the proportion of the transactions that contain X which also contain Y. Confidence is defined as:

conf(X → Y) = supp(X ∪ Y) / supp(X)


First, we find the rules with a minimum confidence value of 65% and then filter on the field “Cancer type” appearing on either the left-hand side or the right-hand side of the rules. The rules with this field in the lhs (left-hand side) signify the dominant patterns of every type of cancer. The rules with the field “Cancer type” in the rhs (right-hand side) represent the conditions or circumstances under which that type of cancer may occur with the given confidence.
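Assuming the rules are held in an arules object named `rules` (the name is illustrative), the two views described above can be obtained by filtering on which side of a rule contains a “CancerType” item:

```r
library(arules)

# Dominant patterns of each cancer type: cancer type on the left-hand side
lhs_cancer <- subset(rules, subset = lhs %pin% "CancerType=")

# Circumstances under which a cancer type occurs: cancer type on the right-hand side
rhs_cancer <- subset(rules, subset = rhs %pin% "CancerType=")

inspect(head(rhs_cancer, 5))
```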

Lift

In association rule learning, lift is a measure of the performance of a targeting model (association rule) at predicting or classifying cases as having an enhanced response (with respect to the population as a whole), measured against a random choice target model. A target model is doing a good job if the response within the target is much better than the average for the population as a whole. Lift is simply the ratio of these values: target response divided by average response. Lift is defined as:

Lift(X → Y) = supp(X ∪ Y) / (supp(X) × supp(Y))
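The three measures can be checked by hand. The counts below are invented purely to illustrate the arithmetic:

```r
# Toy counts for a database of 1000 transactions (all numbers are invented)
n    <- 1000   # |T|
n_x  <- 100    # transactions containing X
n_y  <- 80     # transactions containing Y
n_xy <- 60     # transactions containing both X and Y

supp_xy <- n_xy / n                           # support of the rule = 0.06
conf_xy <- n_xy / n_x                         # confidence          = 0.60
lift_xy <- supp_xy / ((n_x / n) * (n_y / n))  # lift                = 7.5
```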

Contrast Set Mining

Contrast set learning is a form of association rule learning that seeks to identify meaningful differences between separate groups by reverse engineering the key predictors that identify each particular group [6]. For example, given a set of attributes for a pool of students (labeled by degree type), a contrast set learner would identify the contrasting features between students seeking bachelor's degrees and those working toward PhD degrees. The definition is as follows. Let A1, A2, . . . , Ak be a set of k variables called attributes. Each Ai can take on values from the set {Vi1, Vi2, . . . , Vim}. A contrast set is then a conjunction of attribute-value pairs defined on groups G1, G2, . . . , Gn. Formally, the objective is to find all the contrast sets (cs) that meet the following criteria:

∃ i, j : P(cs | Gi) ≠ P(cs | Gj)

max_{i,j} |sup(cs, Gi) − sup(cs, Gj)| ≥ δ

where δ is a user-defined threshold named the minimum support difference. Contrast set mining is applied among the frequent itemsets obtained for each individual cancer type to identify distinguishing features.


Some items that appear with high support for a certain cancer type may have much lower support for another cancer type. Such items are significant for some cancers but not for others.
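A minimal way to contrast two cancer-type groups on a single candidate itemset is sketched below; it assumes the prepared records sit in a data frame with logical indicator columns and a CancerType column (all object and column names here are hypothetical):

```r
# Support of an itemset (a set of logical indicator columns) within one group
support_in_group <- function(df, group, items) {
  g <- df[df$CancerType == group, items, drop = FALSE]
  mean(apply(g, 1, all))   # fraction of the group's records containing all the items
}

items <- c("HYPEV_Yes", "CHLEV_Yes")          # hypothetical indicator columns
s_breast <- support_in_group(patients, "Breast", items)
s_lung   <- support_in_group(patients, "Lung",   items)

delta <- 0.05                                  # user-defined minimum support difference
is_contrast <- abs(s_breast - s_lung) >= delta
```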

Experimental Results

In this section, we describe the implementation of our methodology and the analysis of the results. Our experiment includes frequent pattern mining, association rule mining, and contrast set mining.

The Datasets

The two different datasets from NHIS and SEER associated with cancer patients are described in the following sections.

NHIS Dataset

The National Health Interview Survey (NHIS) is the principal source of information on the health of the civilian noninstitutionalized population of the United States and is one of the major data collection programs of the National Center for Health Statistics (NCHS), which is part of the Centers for Disease Control and Prevention (CDC) [8]. The NHIS questionnaire consists of two parts: (1) a set of basic health and demographic items (known as the Core questionnaire) and (2) one or more sets of questions on current health topics. The main objective of the NHIS is to monitor the health of the US population through the collection and analysis of data on a broad range of health topics. A major strength of this survey lies in the ability to display these health characteristics by many demographic and socioeconomic characteristics. The Core contains four major components: household, family, sample adult, and sample child. In this research, we have used the sample adult data for 2015 and 2016. The sample adult questionnaire collects basic information on health status, healthcare services, and health behaviors. The dataset has about 800 different attributes and about 66,000 records. The annual response rate of NHIS is approximately 80% of the eligible households in the sample.

SEER Dataset


The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute (NCI) is a source of epidemiologic information on the incidence and survival rates of cancer in the United States. The SEER research data include SEER incidence and population data associated by age, sex, race, year of diagnosis, and geographic area (including SEER registry and county). The data represents 28% of the US population across several geographic regions and is available from the SEER website upon submitting a SEER limited-use data agreement form [7]. The SEER database has nine different files containing information about different types of cancer patients, stored as ASCII text files. Each record in a file has about 136 attributes, and each file contains more than 300,000 records. The attributes are both nominal and numerical in type. The length of each record is about 358 characters, and for some records the values of some attributes may be missing. We have used seven incidence files covering seven cancer categories (breast, colon and rectum, other digestive, female genital, lymphoma of all sites and leukemia, male genital, and respiratory) for the years 1973–2013.

Data Preprocessing

The different preprocessing steps used for cleaning the datasets are described in the following sections.

Feature Selection

The SEER dataset is defined with 19 categories of attributes including demographic details, site-specific details, disease extension, multiple primaries, and cause of death. We use this data to observe the demographic correlation with different cancers. Features like age, marital status, and ethnicity are selected for the analysis. We add the registry and year of survey to explore whether any state or year has a significant occurrence of any specific cancer type.

The NHIS data contains the demographic, behavioral, and medical status of the population as answers to survey questions, for example, whether a person has ever been told that he/she has cancer and what type of cancer. Questions that have a potential impact on cancer incidence are the main area of our interest, and some of the demographic, behavioral, and medical attributes are selected for the study. Tables 1 and 2 describe the attributes selected from the two datasets along with their meaning. NHIS accumulates information about 31 different types of cancer; we have grouped similar types into one category for ease of analysis. The cancer types included in the experiment are given in Table 3. For the SEER dataset, the values used for “CancerType” are shown in Table 4.


Table 1 Description of the attributes of NHIS dataset

No. | Variable name | Description
1 | AASMEV | Ever been told you had asthma
2 | AGE_P | Age
3 | AHEP | Ever had hepatitis
4 | ALCSTAT | Alcohol drinking status: Recode
5 | ARTH1 | Ever been told you had arthritis
6 | ASISLEEP | Hours of sleep
7 | ASPMEDEV | Ever been told to take low-dose aspirin
8 | AWEIGHTP | Weight without shoes (pounds)
9 | CANEV | Ever told by a doctor you had cancer
10 | CHDEV | Ever been told you had coronary heart disease
11 | CHLMDEV2 | Ever prescribed medicine to lower cholesterol
12 | CHLYR | Had high cholesterol, past 12 months
13 | CNKIND1 | What kind of cancer . . . Bladder
14 | CNKIND30 | What kind of cancer . . . Other
15 | COPDEV | Ever been told you had COPD
16 | DIBAGE | Age first diagnosed w/diabetes
17 | DIBEV | Ever been told that you have diabetes
18 | EPHEV | Ever been told you had emphysema
19 | HISPAN_I | Hispanic subgroup detail
20 | HYPEV | Ever been told you have hypertension
21 | HYPMDEV2 | Ever prescribed medicine for high blood pressure
22 | MRACRPI2 | Race coded to single/multiple race group
23 | MIEV | Ever been told you had a heart attack
24 | R_MARITL | Marital Status
25 | SEX | Sex
26 | SMKSTAT2 | Smoking Status: Recode
27 | SRVY_YR | Year of National Health Interview Survey
28 | STREV | Ever been told you had a stroke
29 | ULCCOLEV | Ever been told you had Crohn's disease/ulcerative colitis
30 | ULCEV | Ever been told you have an ulcer

Table 2 Description of the attributes of SEER dataset

Sl. | Variable name | Description
1 | MAR_STAT | Marital status at diagnosis
2 | RACE1V | Race/ethnicity
3 | NHIADE | Hispanic origin
4 | SEX | Sex
5 | REG | SEER registry
6 | YEAR_DX | Year of diagnosis
7 | AGE_1REC | Age group
8 | BEHTREND | Behavior recode of tumor for analysis
9 | CancerType | Type of the cancer


Table 3 Values for different cancer types (NHIS)

Sl. | Cancer type | Example
1 | Blood | Blood, leukemia, lymphoma
2 | Breast | Breast
3 | Colorectal | Colon, rectum
4 | Digestive | Liver, pancreas, stomach, esophagus, gallbladder
5 | FemaleGenital | Cervix, ovary, uterus
6 | Lung | Lung
7 | MaleGenital | Prostate, testis
8 | Skin | Melanoma, skin (non-melanoma), skin (DK what kind)
9 | Thyroid | Thyroid
10 | Urinary | Bladder, kidney
11 | Other | Bone, brain, mouth/tongue/lip, and the rest

Table 4 Values for different cancer types (SEER)

Sl. | Attribute values | Corresponding cancer type
1 | BREAST | Breast
2 | COLRECT | Colon and rectum
3 | DIGOTHR | Other digestive
4 | FEMGEN | Female genital
5 | LYMYLEUK | Lymphoma of all sites and leukemia
6 | MALEGEN | Male genital
7 | RESPIR | Respiratory

Data Transformation and Formatting

The NHIS dataset has 31 variables, CANKIND1 to CANKIND31, that identify whether a person has ever been told by a doctor that he/she has any of the 31 cancer kinds. We create a new attribute called “CancerType” to store a single cancer type for each individual. Some patients may have more than one type of cancer; for them, we generate a duplicate record that differs only in the cancer type. As mentioned before, we group the 31 different kinds into the categories shown in Table 3.

The SEER dataset comprises nine text files for nine different types of cancer. We parse each record into tokens according to the data dictionary, save the result in an Access database, and check for duplicate data. Attributes not relevant to our research, like “Patient ID” and “Date of Birth,” are removed, and a new attribute “CancerType” is added with the value of the file name to describe the category. Then the individual files for the different cancers are converted to comma-separated values (CSV) format for further processing in R [37].
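A sketch of the NHIS reshaping step is given below; the data frame, ID column, cancer-kind coding, and the lookup vector that maps the 31 kinds to the grouped categories are all assumptions made for illustration:

```r
library(reshape2)

# One row per respondent in `nhis`; CANKIND1..CANKIND31 flag the reported cancer kinds
kind_cols <- paste0("CANKIND", 1:31)
long <- melt(nhis, id.vars = "ID", measure.vars = kind_cols,
             variable.name = "kind", value.name = "mentioned")

# Keep one record per (respondent, cancer kind) mentioned, so respondents with
# several cancers appear once per kind, as described above
long <- long[long$mentioned == 1, ]

# Map the 31 kinds to the grouped categories of Table 3 via a named lookup vector
long$CancerType <- group_map[as.character(long$kind)]
```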

Data Integration

A common CSV file has been generated by merging the seven files of different cancer data from SEER, containing 3,566,854 records in total.


For NHIS, we combine the datasets for the years 2015 and 2016 into a single file and analyze a total of about 66,000 records.

Handling Missing Values and Data Types

The dataset contains both numeric and nominal data types, some of which are continuous in nature. Continuous variables such as age are converted into categorical age groups, and the numerical attributes are converted to nominal data types using the corresponding descriptive information given in the data dictionary of each dataset. We discard the records containing missing values; the values of any field marked as “unknown” or “not ascertained” are treated as missing.
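The recoding described above could look like the sketch below; the cut points and the textual codes for missing values are illustrative rather than taken from the NHIS data dictionary:

```r
# Convert continuous age into categorical age groups such as [18,41) and [61,86)
nhis$AGE_GROUP <- cut(nhis$AGE_P, breaks = c(18, 41, 61, 86),
                      right = FALSE, include.lowest = TRUE)

# Treat "Unknown"/"Not ascertained" codes as missing and drop incomplete records
nhis[nhis == "Unknown" | nhis == "Not ascertained"] <- NA
nhis <- na.omit(nhis)
```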

Finding Frequent Patterns

We use the implementation of Apriori in R [36] with a minimum support value of 0.001 (0.1%) and observe the most frequent items over the combined cancer occurrences. The top 30 most frequent items found in the dataset are depicted in Fig. 2, which gives an overall picture of the dataset. We observe from Fig. 2 that the topmost frequent items related to demographic information are race White (MRACRPI2=White), Hispanic origin not Hispanic or Spanish (HISPAN_I), sex female, age group 61–85, and weight between 56 and 191 lbs. The most common medical factors revealed are high blood pressure, high cholesterol, diabetes, and asthma. The responses to arthritis are split almost 50–50 between “yes” and “no.” Recent research states that “If you have rheumatoid arthritis (RA), you may be at increased risk for certain cancers because of RA medications—or RA-related inflammation itself” [38].

Fig. 2 Relative frequency of top 30 most frequent items for all cancer patients
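A plot like Fig. 2 can be produced directly from the transaction representation used earlier (a sketch, assuming the arules object `trans` from the preprocessing step):

```r
library(arules)

# Relative frequency of the 30 most frequent items, as in Fig. 2
itemFrequencyPlot(trans, topN = 30, type = "relative", cex.names = 0.6)
```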


Table 5 List of a few top frequent items

Sl. | Itemset | Support
1 | {HISPAN_I=Not Hispanic/Spanish origin} | 0.9472
2 | {MRACRPI2=White} | 0.9002
3 | {HYPEV=Yes, CHLEV=Yes} | 0.351
4 | {DIBEV=Yes, ARTH1=Yes} | 0.116
5 | {HYPEV=Yes, CHLEV=Yes, DIBEV=Yes} | 0.115
6 | {HYPEV=Yes, CHLEV=Yes, ARTH1=Yes, ASPMEDEV=Yes, CancerType=Breast} | 0.020
7 | {HYPEV=Yes, CHLEV=Yes, ARTH1=Yes, ASPMEDEV=Yes, CancerType=MaleGenital} | 0.019
8 | {HYPEV=Yes, CHLEV=Yes, ARTH1=Yes, CancerType=Colorectal} | 0.015
9 | {HYPEV=Yes, CHLEV=Yes, ASPMEDEV=Yes, CancerType=Urinary} | 0.015
10 | {HYPEV=Yes, ARTH1=Yes, CancerType=Blood} | 0.014

The above figures describe simple statistical facts about the most common features among cancer patients. To find deeper insight into the co-occurrence of different features, we collect frequent itemsets with minimum support values between 0.01 and 0.5. Some of the interesting dominant factors found are shown in Table 5. The third itemset in Table 5 signifies that both high blood pressure and high cholesterol are evident in 35% of cancer patients. Itemset (4) shows that around 10% of participants suffer from both arthritis and diabetes, and such combinations are also apparent together with some specific cancers. One of the most interesting associations can be observed in itemset (10), which illustrates a correlation between high blood pressure, arthritis, and blood cancer. We already know that high blood pressure and high cholesterol are common issues for many diseases, but their role in cancer is less clear; there is still a lot to be discovered, and our findings motivate further research. Some interesting findings about high cholesterol and cancer can be found in [39].

Finding Association Rules

To find the associations between different behavioral and health factors and cancer occurrence, we mine association rules with high confidence. Among the many existing algorithms, we use Apriori in R with a minimum confidence of 65% (0.65) and a maximum rule length of four. A few of the many interesting rules found in the analysis are shown in Table 6; their interpretation is summarized below. We interpret rules containing the field “CancerType” from two different perspectives depending on whether it appears in the lhs or the rhs of the rule. A cancer type in the lhs indicates that the fact formulating the rhs has a strong correlation with that specific cancer type. The most significant factor found in many cases is age.
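A sketch of this rule-mining step with the thresholds reported above, again assuming the arules transactions object `trans` (object names are illustrative):

```r
library(arules)

# Association rules with minimum support 0.001, minimum confidence 0.65, max length 4
rules <- apriori(trans, parameter = list(supp = 0.001, conf = 0.65, maxlen = 4))

# Keep rules that mention a cancer type on either side and rank them by lift
cancer_rules <- subset(rules,
                       subset = lhs %pin% "CancerType=" | rhs %pin% "CancerType=")
inspect(head(sort(cancer_rules, by = "lift"), 10))
```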


Table 6 Some significant rules

Sl. | Rule | Support | Confidence | Lift
1 | {R_MARITL=Living with partner, ALCSTAT=Current infrequent, CancerType=FemaleGenital} => {AGE_P=[18,41)} | 0.001 | 0.750 | 14.261
2 | {AWEIGHTP=[156,191), COPDEV=Yes, CancerType=Lung} => {EPHEV=Yes} | 0.002 | 0.680 | 13.801
3 | {AHEP=Yes, COPDEV=Yes, CancerType=Skin} => {EPHEV=Yes} | 0.001 | 0.667 | 13.530
4 | {MIEV=Yes, CancerType=Skin} => {CHDEV=Yes} | 0.020 | 0.710 | 5.436
5 | {SEX=Female, MRACRPI2=Filipino, DIBEV=Yes} => {CancerType=Breast} | 0.001 | 0.750 | 4.455
6 | {SEX=Female, AGE_P=[18,41), SMKSTAT2=Current some day smoker} => {CancerType=FemaleGenital} | 0.001 | 0.769 | 7.431
7 | {AWEIGHTP=[191,299], ARTH1=No, ULCCOLEV=Yes} => {CancerType=Skin} | 0.001 | 0.692 | 2.202
8 | {SEX=Male, MRACRPI2=Black/African American, ALCSTAT=Current infrequent} => {CancerType=MaleGenital} | 0.002 | 0.682 | 6.104
9 | {CHDEV=Yes, AASMEV=Yes, CancerType=Lung} => {COPDEV=Yes} | 0.001 | 1.000 | 10.390
10 | {AASMEV=Yes, EPHEV=Yes, CancerType=Urinary} => {COPDEV=Yes} | 0.001 | 1.000 | 10.390

Rule (1) demonstrates an association of alcohol status and marital status with female genital cancer with high confidence; the high lift value indicates a strong correlation within a small group comprising 0.1% of the population (support 0.001). The second rule in Table 6 emphasizes the correspondence between COPD and lung cancer, which aligns with many recent research findings about lung cancer [40]. Rules (3) and (4) in Table 6 indicate a certain connection of hepatitis (AHEP), COPD, and heart disease (MIEV) with skin cancer, which is thought-provoking and needs further investigation. The variable “CancerType” in the rhs implies that the given type of cancer may occur when the conditions in the lhs hold. From the above table, we perceive that Filipino females with diabetes have a strong possibility of having breast cancer (rule (5)). From rule (7), we observe a relation between ulcerative colitis and skin cancer; recent studies show that having a long-term (chronic) inflammatory disease of the digestive tract can increase the risk of getting skin cancer [41]. Another motivating fact is revealed by rule (10), which suggests with high confidence a co-occurrence of asthma and emphysema with urinary cancer and demands further study. Although we clearly cannot derive a definitive causal relationship from this observed connection, the result suggests that the further elucidation alluded to above may indeed be warranted. The high confidence of a rule indicates a high possibility that the correlation exists even if it is a rare case, given the low support value.


Fig. 3 Scatter plot of the rules representing the comparative value of support, confidence, and lift

The scatter plot of the generated rules with “CancerType” on either side is depicted in Fig. 3. Analyzing cancer data is always challenging. The dataset from NHIS is a general survey of people all around the United States in which only about 10% of the respondents have ever been told that they have cancer, and we would never expect a larger cancer population. There are 31 different kinds of cancer, each with a very low frequency compared to the total sample size, so it is hard to obtain many rules with high confidence for cancer information. Figure 3 shows that, among the many generated rules, only a few are relevant with enough confidence and lift, and these have very small support. Some rules exhibit a very strong correlation within a small group of people with high confidence, as represented by the small number of red dots in the figure, but most of the rules do not show a high correlation, as indicated by the faded dots in Fig. 3. As mentioned above, cancer patients comprise a very small portion of the population, and when considering distinct cancer types, the support goes even lower. The incidence of each cancer type in the database is shown in Fig. 4. We note here the synergy, referred to in the introduction, between data mining and literature-based discovery. Some of the rules found showing correlations between smoking, alcohol, and weight have been reported in the literature. The rules that exhibit a reasonable support and relatively high confidence with a positive lift over a large amount of observational data confirm existing claims and encourage further investigation.


Fig. 4 Relative frequency of distinct cancer type

Contrast Set Mining

To identify the distinguishing features of individual cancer types, we generate the subsets of frequent itemsets of each cancer type with high and low support values. We then contrast these sets with one another to explore which items are frequent for some cancer patients and rare for others. As mentioned earlier, the frequency of each cancer incidence, as well as of the features associated with each cancer type, is low compared to the total population, so we focus on descriptors whose variation in support may look negligible at first but carries substantial weight for cancer disparity. Some of the obtained results are presented in Table 7. The outcome reflects that diabetes may have some association with colorectal cancer, whereas high cholesterol (CHLEV) is apparent in the skin and male genital types. Former smokers appear frequently among lung cancer patients. Hypertension (HYPEV) is significant in blood and breast cancer as well as in skin cancer. The link between high blood pressure and breast cancer reinforces recent findings about breast cancer [39].

Results from SEER Data

The SEER dataset contains information about nine different types of cancer patients from different geographical locations all around the United States. There are 19 different categories of attributes including demographic, pathological, and treatment-related information. This is a real-world cancer dataset with a large number of records, which makes it a very good resource for data mining. However, when we tried to find common prognostic factors related to the different types of cancer, we found that only the demographic attributes could be used.


Table 7 Dominant patterns in different cancer types

Sl. | Itemset | CancerType | Support
1 | DIBEV=Yes | CancerType=Colorectal | 0.0152
2 | DIBEV=Yes | CancerType=Lung | 0.0066
3 | DIBEV=Yes | CancerType=Urinary | 0.0118
4 | CHLEV=Yes | CancerType=Lung | 0.0156
5 | CHLEV=Yes | CancerType=MaleGenital | 0.0632
6 | CHLEV=Yes | CancerType=Skin | 0.1644
7 | HYPEV=Yes | CancerType=Blood | 0.0248
8 | HYPEV=Yes | CancerType=Breast | 0.0905
9 | HYPEV=Yes | CancerType=Skin | 0.1737
10 | SMKSTAT2=Former smoker | CancerType=Lung | 0.0188
11 | SMKSTAT2=Never smoker | CancerType=Blood | 0.0234
12 | SMKSTAT2=Never smoker | CancerType=Breast | 0.0979
13 | STREV=Yes | CancerType=FemaleGenital | 0.0099
14 | STREV=Yes | CancerType=MaleGenital | 0.0093
15 | STREV=Yes | CancerType=Blood | 0.0026
16 | STREV=Yes | CancerType=Digestive | 0.0024

Table 8 Rules from SEER data

Sl. | Rule | Support | Confidence | Lift
1 | {SEX=Female, AGE_1REC=Ages 25-29} => {CancerType=FEMGEN} | 0.0104 | 0.8107 | 6.6483
2 | {MAR_STAT=Married, CancerType=RESPIR} => {SEX=Male} | 0.0718 | 0.7100 | 1.6589
3 | {REG=New Mexico, CancerType=MALEGEN} => {SEX=Male} | 0.0108 | 1.0000 | 2.3364
4 | {AGE_1REC=Ages 80-84, CancerType=MALEGEN} => {SEX=Male} | 0.0114 | 1.0000 | 2.3364

Cancer is a very complex disease, and each type of cancer is unique in its cause, prognostic factors, and treatment; defining all cancer patients under identical attributes is almost impossible. The fields available in the SEER database that carry information for an individual patient regardless of the cancer type are the demographic attributes, which is why we only incorporated the demographic features of the patients in our study. Some of the results we found from the SEER cancer repository were:
• The most dominant feature is race, with “White” patients being the most numerous among cancer patients.
• 90% of the cancer patients are of “non-Spanish-Hispanic-Latino” Hispanic origin.
• Females are affected more than males.
Some of the rules that we found interesting are given in Table 8.


Table 9 Some contrasting features from SEER

Sl. | Cancer type | Significant features
1 | Breast | MAR_STAT=Widowed, AGE_1REC=Ages 40-49
2 | Colorectal | MAR_STAT=Widowed, RACE1V=White
3 | Digestive | MAR_STAT=Married, SEX=Male
4 | Female genital | MAR_STAT=Single, RACE1V=Black
5 | Male genital | REG=New Mexico, MAR_STAT=Single
6 | Leukemia | RACE1V=White, REG=Connecticut
7 | Respiratory | MAR_STAT=Married, SEX=Male

We observe that females between the ages of 25 and 29 are more prone to “Female genital” cancer and that males have a higher probability of having “Respiratory” cancer. The male inhabitants of New Mexico suffer from “Male genital” cancer, with the most vulnerable ages being between 80 and 84. When contrasting the different types of cancer on their dominant patterns, we found the features listed in Table 9. It is interesting to find that “Digestive” and “Respiratory” cancers are more common in males, and that “Genital” cancer for both males and females is frequent among singles, with the black female community especially prominent. White people in the area of Connecticut suffer more from leukemia. These observations are significant: they give researchers room to rethink the dynamics that shape cancer among different demographic populations. The risk factors of cancers are abundant, and the role of environmental and behavioral factors is quite significant. However, for the design of etiological and epidemiologic investigations at the community level, information on demographic variations in cancer distribution among different population groups is important.

Conclusion and Discussion

Cancer has been a major health issue worldwide for the last few decades, and only substantial worldwide research can enhance the understanding needed to fight it. Numerous research efforts address cancer detection and prevention, but most concentrate on a single type of cancer. This research is one of the few that aims to find correlations across multiple cancer types. Frequent pattern mining is currently used on various medical datasets to find the prognostic indicators of a certain disease and the correlations between multiple diseases. However, to the best of our knowledge, this is the only work that uses a single methodology both to find prime factors related to cancer in general and to find the distinctive features of specific cancer types. Here, we tried to identify the vital patterns constituted by demographic, medical, and behavioral attributes. From both datasets, it is apparent that white people of Hispanic origin “not Hispanic/Spanish” are predominant among the patients.


The age group between 61 and 85 is the most vulnerable to cancer in general. High blood pressure, high cholesterol, and diabetes are common among many cancer patients. Our results mostly confirm factors already known about cancer. Some of the findings, such as the correlation of high blood pressure with breast cancer and the association of COPD with lung and skin cancer, are among the most recent discoveries, and the association between ulcerative colitis and skin cancer suggests further investigation. None of the rules and patterns found here contradict common knowledge or are counterintuitive, which supports the soundness of the methodology. The results can be improved with more relevant datasets containing enough medical diagnostic information. As every cancer type is unique in its nature and symptoms, it is hard to relate multiple cancer types under common attributes. Still, general clinical data from regular check-ups, such as blood reports and blood pressure, could serve as an effective dataset for identifying significant factors; however, this type of dataset is not readily available to the public. In the future, we hope to collect and gain access to more generic datasets for further analysis. The methodology offered in this manuscript can be extended to other medical domains to explore factors related to certain diseases, and it is also applicable to any single cancer dataset for deeper insight into that cancer type. The findings of this research will help the medical community design more effective strategies to improve the standards of cancer care worldwide.

References

1. http://www.who.int/mediacentre/factsheets/fs297/en/
2. https://www.worldwidecancerresearch.org/projects/philosophy/
3. http://www.mayoclinic.org/diseases-conditions/cancer/basics/risk-factors/con-20032378
4. https://www.cancer.gov/about-cancer/causes-prevention/risk
5. Agrawal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. In Proceedings of the 1993 ACM-SIGMOD International Conference on Management of Data.
6. Stephen, B., & Michael, P. (2001). Detecting group differences: Mining contrast sets. Data Mining and Knowledge Discovery, 5(3), 213–246.
7. SEER Publication, Cancer Facts, Surveillance Research Program, Cancer Statistics Branch, limited use data (1973–2007). Available at: http://seer.cancer.gov/data/
8. https://www.cdc.gov/nchs/nhis/index.htm
9. Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules. In Proceedings of the 1994 International Conference on Very Large Data Bases (VLDB'94) (pp. 487–499), Santiago, Chile.
10. Zaki, M. J. (2000). Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12, 372–390.
11. Agarwal, R., Aggarwal, C. C., & Prasad, V. V. V. (2001). A tree projection algorithm for generation of frequent itemsets. Journal of Parallel and Distributed Computing, 61, 350–371.


12. Mannila, H., Toivonen, H., & Verkamo, A. I. (1994). Efficient algorithms for discovering association rules. In Proceedings of the AAAI'94 Workshop on Knowledge Discovery in Databases (KDD'94) (pp. 181–192), Seattle, WA.
13. Agrawal, R., & Srikant, R. (1995). Mining sequential patterns. In Proceedings of the 1995 International Conference on Data Engineering (ICDE'95) (pp. 3–14), Taipei, Taiwan.
14. Han, J. W., Pei, J., & Yan, X. F. (2004). From sequential pattern to structured pattern mining: A pattern-growth approach. Journal of Computer Science and Technology, 19(3), 257–279.
15. Yoon, S., Taha, B., & Bakken, S. (2014). Using a data mining approach to discover behavior correlates of chronic disease: A case study of depression. Studies in Health Technology and Informatics, 201, 71–78.
16. Wang, H., Wang, W., Yang, J., & Yu, P. S. (2002). Clustering by pattern similarity in large data sets. In Proceedings of the 2002 ACM-SIGMOD International Conference on Management of Data (SIGMOD'02) (pp. 418–427), Madison, WI.
17. https://www.cancer.org/cancer/colon-rectal-cancer/about/new-research.html
18. Karabatak, M., & Ince, M. C. (2009). An expert system for detection of breast cancer based on association rules and neural network. Expert Systems with Applications, 36, 3465–3469.
19. Mavaddat, N., Rebbeck, T. R., Lakhani, S. R., Easton, D. F., & Antoniou, A. C. (2010). Incorporating tumour pathology information into breast cancer risk prediction algorithms. Breast Cancer Research, 12, R28.
20. Malpani, R., Lu, M., Zhang, D., & Sung, W. K. (2011). Mining transcriptional association rules from breast cancer profile data. In IEEE IRI 2011, August 3–5, 2011, Las Vegas, Nevada, USA.
21. Lopez, F., Cuadros, M., Blanco, A., & Concha, A. (2009). Unveiling fuzzy associations between breast cancer prognostic factors and gene expression data. In 20th International Workshop on Database and Expert Systems Application (pp. 338–342).
22. Bener, A., Moore, A. M., Ali, R., & El Ayoubi, H. R. (2010). Impacts of family history and lifestyle habits on colorectal cancer risk: A case-control study in Qatar. Asian Pacific Journal of Cancer Prevention, 11, 963–968.
23. Nahar, J., Tickel, K. S., Shawkat Ali, A. B. M., & Chen, Y. P. P. (2011). Significant cancer prevention factor extraction: An association rule discovery approach. Journal of Medical Systems, 35, 353–367.
24. Hu, R. (2010). Medical data mining based on association rules. Computer and Information Science, 3(4), 104.
25. Agrawal, A., & Choudhary, A. (2011). Identifying HotSpots in lung cancer data using association rule mining. In 11th IEEE International Conference on Data Mining Workshops (pp. 995–1002).
26. Aksoy, S., Dizdar, O., Harputluoglu, H., & Altundag, K. (2014). Demographic, clinical, and pathological characteristics of Turkish triple-negative breast cancer patients: Single center experience. Annals of Oncology, 18, 1904–1906. Oxford University Press.
27. Cramer, H., Ward, L., Steel, A., Lauche, R., Dobos, G., & Zhang, Y. (2016). Prevalence, patterns, and predictors of yoga use: Results of a U.S. nationally representative survey. American Journal of Preventive Medicine, 50, 230–235. https://doi.org/10.1016/j.amepre.2015.07.037
28. Warner, M., Schenker, N., Heinen, M. A., & Fingerhut, L. A. (2005). The effects of recall on reporting injury and poisoning episodes in the National Health Interview Survey. Injury Prevention, 11, 282–287. https://doi.org/10.1136/ip.2004.006965
29. Rajesh, K., & Sheila, A. (2012). Analysis of SEER dataset for breast cancer diagnosis using C4.5 classification algorithm. International Journal of Advanced Research in Computer and Communication Engineering, 1(2), 2278.
30. Yadav, R., Khan, Z., & Saxena, H. (2013). Chemotherapy prediction of cancer patient by using data mining techniques. International Journal of Computer Applications, 76(10), 28–31.
31. Agrawal, A., Misra, S., Narayanan, R., Polepeddi, L., & Alok, C. (2011, August). A lung cancer outcome calculator using ensemble data mining on SEER data. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.


32. Majali, J., Niranjan, R., Phatak, V., & Tadakhe, O. (2014). Data mining techniques for diagnosis and prognosis of breast cancer. International Journal of Computer Science and Information Technologies, 5(5), 6487–6490.
33. Al-Bahrani, R., Agrawal, A., & Alok, C. (2013). Colon cancer survival prediction using ensemble data mining on SEER data. In Proceedings of the IEEE Big Data Workshop on Bioinformatics and Health Informatics (BHI).
34. Umesh, D. R., & Ramachandra, B. (2016). Big data analytics to predict breast cancer recurrence on SEER dataset using MapReduce approach. International Journal of Computer Applications, 150(7), 7–11.
35. Tan, P. N., Steinbach, M., & Kumar, V. (2006). Introduction to data mining. Boston, MA: Pearson Education Inc.
36. Piatetsky-Shapiro, G. (1991). Discovery, analysis, and presentation of strong rules. In G. Piatetsky-Shapiro & W. J. Frawley (Eds.), Knowledge discovery in databases. Cambridge, MA: AAAI/MIT Press.
37. R-3.3.2 for Windows (32/64 bit), available at https://cran.r-project.org/bin/windows/base/
38. https://thetruthaboutcancer.com/cholesterol-levels-cancer/
39. https://www.everydayhealth.com/heart-health/high-blood-pressure-medication-linked-to-breast-cancer-1154.aspx
40. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718929/
41. https://www.hindawi.com/journals/jsc/2016/4632037/

Enhanced Load Balancer with Multilayer Processing Architecture for Heavy Load Over Cloud Network

Navdeep Singh Randhawa, Mandeep Dhami, and Parminder Singh

N. S. Randhawa: Department of Electronics and Communication Engineering, Swami Vivekanand Institute of Engineering and Technology, Banur, India
M. Dhami · P. Singh: Department of Information Technology Engineering, Chandigarh College of Engineering, Mohali, India

Introduction

Cloud computing provides an easy path to retain data and records and builds on distributed computing, virtualization, and web services. It has various features such as distributed servers and clients, and its aim is to offer all services with less cost and time. Nowadays, more than a billion computing devices are connected to the Internet, sending their requests and receiving responses without overload or delay. Cloud computing offers three types of services: Platform as a Service, Infrastructure as a Service, and Software as a Service [1]. Figure 1 shows how dissimilar hardware such as laptops, tablets, and PCs connect to and access information from a cloud at any given time [2]. The main goals of cloud computing are to improve response time, decrease cost, and give better performance.

This paper presents various load balancing techniques and addresses their issues. The load can be of different types, such as network load, CPU load, and memory capacity problems. The purpose of load balancing is to share the load of machines across all nodes, to make better use of resources, and to give high satisfaction to clients and high service utilization. Load balancing means handling the load coming to the web servers; it provides network assets and services for the enhancement of performance and response time. Various techniques are used to handle cloud information among nodes [3], and the load of every user is balanced for easy provisioning of facilities.


Fig. 1 Cloud computing scenario

In a dissimilar (heterogeneous) structure, there are several types of problems that need to be considered: CPUs may have dissimilar capacities because of different processors, and overloads grow on them accordingly. The main concentration of this research work is the design of a load balancing technique that handles such dynamic behavior. Load balancing is normally concerned with large amounts of information, congestion, and dividing work among servers. The hybridization of least slack time scheduling and the multilevel queue scheduling algorithm is used to minimize the complexity of the load balancing procedure and to take care of diverseness [4]. The paper is organized as follows: the "Types of Load Balancing" section introduces the types of load balancing. The "Literature Review" section studies the literature on approaches and techniques for reducing delay and overload, together with the system module used for the proposed technique. The "Problem Formulation" section describes the problem formulation, and the "Proposed Technique" section introduces the proposed technique. The evaluation of performance parameters and the simulation results are discussed in the "Simulation Results" section, followed by the conclusion and future scope in the "Conclusion" section.

Types of Load Balancing

Load balancing techniques can be generally categorized into two types, which are discussed below.

Static Load Balancing

In static scheduling, the assignment of tasks to processors is completed before execution begins, and information such as task resource requirements is considered to be known at compile time. Static scheduling techniques are non-preemptive, and their main idea is to reduce the total execution time [5].


Dynamic Load Balancing

Dynamic scheduling is based on the redistribution of processes during execution time. Redistribution is performed by moving tasks from heavily loaded processors to less loaded processors, with the objective of improving the performance of the application. The main problems of the dynamic approach are the run-time overhead due to the exchange of load information among CPUs, the selection of processes for decision-making, the transfer of jobs between processors, and the communication delay connected with the task movement itself. Dynamic load balancing techniques offer the possibility of improving load distribution at the cost of additional messages and overheads [6].

Literature Review

The literature shows that a great deal of work has been done on the types of load balancing techniques and on the advantages and disadvantages of the various load balancing algorithms. Load balancing is the procedure by which the load is distributed among the various users of a system, and it is one of the ways through which cloud computing resources are accessed [7]. This work has been carried out to balance the load or overload in order to enhance performance and avoid over-utilization of resources. Several load balancing techniques and algorithms have been described, including Min-Min, Max-Min, and Round Robin [9]. Table 1 presents the details, advantages, and disadvantages of various load balancing algorithms. Dynamic techniques normally rely on the dissimilar features of the nodes, such as system bandwidth and capabilities. In a dynamic load balancer, the load is divided among the nodes at completion time; if the load balancer finds a highly used processor, the demand is sent to a neighboring node. To handle the load, the initial state of the network is used.

Table 1 Advantages and disadvantages of load balancing algorithms [8]

Sr no. | Algorithm name        | Advantages                                             | Disadvantages
1      | Round Robin           | Fixed time period, easily understood, priority checked | Takes longer to complete the job when the quantum time is small
2      | Min-Min               | Shorter completion time                                | Job variations and mechanism cannot be predicted
3      | Max-Min               | Needs are related and known, so the work is better     | Takes a long period to complete the task
4      | Static load balancer  | Lower complexity                                       | Does not have the ability to handle load updates
5      | Dynamic load balancer | Divides work at run time                               | Requires constant checking of the nodes


Table 2 Performance parameters of load balancing algorithms

Parameter   | Round Robin | Max-Min | Min-Min | Static | Dynamic
Time period | High        | High    | High    | High   | Less
Throughput  | High        | High    | High    | High   | High
Overhead    | High        | High    | High    | N/A    | High
Complexity  | Less        | Less    | Less    | Less   | High

Table 2 compares the performance of the described load balancing techniques using different parameters such as throughput and time period, expressing the outcome of each technique in terms of high and low values [10]. As discussed earlier, the different techniques show different results. The Max-Min approach is already well known; it works better and offers higher accuracy.

Problem Formulation

As described earlier, load balancers are used to provide faster and more flexible execution in a network. Cloud servers are worldwide, and millions of people from several countries connect to them to process their data. A perfect load balancer provides fast process execution in the network with minimum time consumption. Existing load balancers face various problems due to the dynamic behavior of real-time execution. A heavy load of user requests on the server may cause delays in response time and maximum energy consumption in the network. The main points of the problem in the area of load balancing are as follows:
• Without an efficient load balancer, the user faces delays in response time.
• A network may be connected to several types of processors. The current scenario is not capable of fully utilizing the network under heavy duty. Improper utilization causes a high amount of energy consumption and processing delay.

Proposed Technique

The proposed approach is designed with the help of two different algorithms, the multilevel feedback queue and least slack time scheduling. The proposed algorithm uses properties of both of these algorithms to manage the heavy load on cloud servers. As cloud servers are worldwide and millions of users are connected to them, all the requests arriving at a data center form queues on the server. The cloud server determines their execution behavior and origin of execution and allots them for processing. The proposed technique is capable of handling these types of execution with various


process execution priorities. It divides the execution process into two phases, called the outer phase and the inner phase. The outer phase is handled with the help of multilevel feedback queue scheduling properties, as it is capable of handling multiple queues at the same time and executing them properly. This method uses two parameters, the mean and the median of the burst times of the procedures, to find a small quantum period. They are evaluated using the following formulas:

$$\text{Median} = \begin{cases} Y_{(m+1)/2}, & \text{if } m \text{ is odd} \\ \tfrac{1}{2}\left(Y_{m/2} + Y_{(m/2)+1}\right), & \text{if } m \text{ is even} \end{cases} \tag{1}$$

$$\text{Mean} = \frac{\sum_{i=1}^{m} \text{Burst time of procedure } i}{\text{Number of procedures}} \tag{2}$$

These parameters are used to evaluate the time period and reduce the probability of delay in the overall execution. On the other side, the processes inside the queues are also valuable, so for proper execution there is a need to handle them and arrange them in an energy- and time-efficient manner. The least slack time properties are used to provide the execution priorities for the inner phase of execution. The simulation results show that both the outer and inner modules are handled efficiently, so the proposed approach saves more energy with less time consumption.

Pseudo code of MLQS:

1. Qn: number of queues, where n = 1, 2, 3, ...
   M: number of procedures
   Ym: location of the mth procedure
   Bpt[j]: burst time of the jth procedure
   Rbt[j]: remaining burst time of the jth procedure
   Qt: quantum time
   Start: n = 1, j = 1, avg_tot = 0
2. Enter the procedures p[j], where j = 1 to M, into Q1.
3. Evaluate Qt:
   For (n = 1 to 5)
     While (ready queue != Null) {
       Qt1 = |Mean - Median| / 3; Qt2 = 2 * Qt1; Qt3 = 2 * Qt2;
       For Q1 to Q5 {
         Find the execution time for the inner processes.
         Qi.processes.extract(properties);
         Qi.priorities(properties.time(bound));
         Qi.sort(priorities);
       }
4. Allocate Qt to each procedure in Qn, for each procedure p[j], j = 1, 2, ..., M: p[j] ← Qt.
5. If (Bpt[j] >= Qt) {
     Qn+1 ← Bpt[j] - Qt   // remainder allocated to the neighboring queue
   } Else {
     Qn ← Bpt[j]          // the procedure finishes in the same queue
   }
6. Evaluate the waiting time for Qn+1 at each Qn.
7. When a new queue arrives, update the values of n and m and go to step 3.
8. If (n > 5) (the number of queues considered here is 5) {
     Schedule the Rbt[j] of the procedures in ascending order.
     Rearrange the remaining processes in their particular queues according to the given value.
   }
9. Evaluate the total and average waiting time and the accuracy at the finish.
10. Exit and stop.
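The following is a minimal, self-contained sketch, an interpretation of the pseudocode above rather than the authors' implementation, of how the quantum can be derived from the burst-time statistics and how processes inside a queue can be ordered by least slack time; the process attributes, deadline values, and the five-queue layout are illustrative assumptions.

```python
import statistics
from dataclasses import dataclass
from typing import List

@dataclass
class Proc:
    pid: int
    burst: float      # remaining burst time
    deadline: float   # absolute deadline; slack = deadline - now - burst

def base_quantum(procs: List[Proc]) -> float:
    """Qt1 = |mean - median| / 3 over the burst times (step 3 of the pseudocode)."""
    bursts = [p.burst for p in procs]
    qt1 = abs(statistics.mean(bursts) - statistics.median(bursts)) / 3
    return qt1 if qt1 > 0 else min(bursts)  # guard against a zero quantum

def lst_order(procs: List[Proc], now: float) -> List[Proc]:
    """Inner-phase ordering: least slack time first."""
    return sorted(procs, key=lambda p: p.deadline - now - p.burst)

def run_multilevel(queues: List[List[Proc]]) -> float:
    """Outer phase: serve queue i with quantum Qt1 * 2**i (Qt1, Qt2 = 2*Qt1, ...)."""
    now = 0.0
    all_procs = [p for q in queues for p in q]
    qt1 = base_quantum(all_procs)
    for level, queue in enumerate(queues):
        quantum = qt1 * (2 ** level)
        for p in lst_order(queue, now):
            slice_ = min(p.burst, quantum)
            now += slice_
            p.burst -= slice_
            if p.burst > 0 and level + 1 < len(queues):
                queues[level + 1].append(p)  # demote the unfinished remainder
    return now

if __name__ == "__main__":
    q1 = [Proc(1, 4.0, 20.0), Proc(2, 9.0, 15.0), Proc(3, 2.0, 6.0)]
    queues = [q1, [], [], [], []]
    print("makespan:", run_multilevel(queues))
```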

Simulation Results

In the literature, different types of parameters affect the performance optimization of a load balancing architecture. The main parameters that define the cost of the network and of processing are energy, response time, network utilization, and system crashes. All these parameters affect the quality of work in a cost-effective manner with less time duration. The comparison of response times in the different cases is shown in Table 3. The response time is calculated through Equation (3), where n is the number of requests in the network and Time_r denotes the execution time accumulated before the current request is executed:

$$\text{Response time} = \frac{\sum_{r=0}^{n}\left(\text{Response}_r + \text{Time}_r\right)}{n} \tag{3}$$

Table 3 Response time (average per request)

Number of requests | Existing technique (ms) | Proposed technique (ms)
10                 | 214                     | 109
100                | 554                     | 387
1000               | 849                     | 778

The energy parameter is used to check the execution cost of the current procedure. It is a sum of various contributions, such as the waiting energy in the network that occurs because of the load on the server, the actual energy of the task, and the time taken for the execution. Equation (4) defines the summed form of energy in the cloud back-end processing (Table 4):

$$\text{Energy} = \sum_{r=0}^{n}\left(\text{Energy}_{\text{con}} + \text{Energy}_r + \text{Waiting}_{\text{energy}}\right) \ast \text{Request}_r \tag{4}$$

Table 4 Energy in joules

Number of requests | Existing technique | Proposed technique
10                 | 275                | 230
100                | 724                | 668
1000               | 1729               | 1533

Table 5 Network load in percentage

Number of requests | Existing technique | Proposed technique
10                 | 35                 | 26
100                | 62                 | 51
1000               | 94                 | 73

The network load is another way of measuring the performance of a load balancer. This parameter shows the waiting of processes in the network. The proposed approach manages the processes with various priorities and reduces the network load efficiently, as shown in Table 5.
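As a worked illustration of Equations (3) and (4), the sketch below computes the average response time and total energy from a small list of hypothetical per-request measurements; the field names and numbers are made up for the example and are not taken from the paper's simulation.

```python
from dataclasses import dataclass

@dataclass
class RequestLog:
    response: float        # time spent answering this request (ms)
    queued_before: float   # execution time accumulated before this request ran (ms)
    energy_con: float      # fixed energy of the task (J)
    energy_run: float      # energy actually consumed while running (J)
    energy_wait: float     # waiting energy caused by server load (J)
    weight: float = 1.0    # Request_r weighting factor

def average_response_time(logs):
    # Eq. (3): sum of (Response_r + Time_r) over all requests, divided by n
    n = len(logs)
    return sum(r.response + r.queued_before for r in logs) / n

def total_energy(logs):
    # Eq. (4): sum of (Energy_con + Energy_r + Waiting_energy) * Request_r
    return sum((r.energy_con + r.energy_run + r.energy_wait) * r.weight for r in logs)

if __name__ == "__main__":
    logs = [RequestLog(30, 12, 1.5, 2.0, 0.4),
            RequestLog(45, 20, 1.5, 2.6, 0.9),
            RequestLog(25, 5, 1.5, 1.8, 0.1)]
    print("avg response time (ms):", average_response_time(logs))
    print("total energy (J):", total_energy(logs))
```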

Conclusion

This paper presents the evaluation of different load balancing techniques for cloud computing, in particular the multilevel queue and least slack time scheduling algorithms. Load balancing has two complementary aspects: on the one hand, it distributes a huge number of simultaneous accesses, or a heavy load, across multiple processors in order to minimize energy, network load, and response time; on the other hand, it redistributes each heavy load across the multiple processors to improve the energy, response time, and network load of each processor. The proposed algorithm provides efficient resource utilization and also increases the performance of the systems while reducing the time. The proposed technique achieves an overall improvement of approximately 28% across all of the cases.

References

1. Li, K., Xu, G., Zhao, G., Dong, Y., & Wang, D. (2011, August). Cloud task scheduling based on load balancing ant colony optimization. In Sixth Annual Chinagrid Conference (pp. 3–9).
2. Carretero, J., Xhafa, F., & Abraham, A. (2007). Genetic algorithm based schedulers for grid computing systems. International Journal of Innovative Computing, Information and Control, 3(6), 1–19.


3. Kaur, R., & Luthra, P. (2014). Load balancing in cloud computing, In Int. Conf. on Recent Trends in Information, Telecommunication and Computing, ITC (pp. 1–8). 4. Chaczko, Z., Mahadevan, V., Aslanzadeh, S., & Mcdermid, C. (2011). Availability and load balancing in cloud computing. International Proceedings of Computer Science and Information Technology, 14, 134–140. 5. Aslam, S., & Munam Ali Shah (2015). Load balancing algorithms in cloud computing: A survey of modern techniques. In 2015 National Software Engineering Conference (NSEC) (pp. 30–35). IEEE. 6. Chatterjee, M., & Setua, S. K. (2015). A new clustered load balancing approach for distributed systems. In Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on (pp. 1–7). Hooghly, India: IEEE. 7. Shaw, S. B., & Singh, A. K. (2014). A survey on scheduling and load balancing techniques in cloud computing environment. In Computer and Communication Technology (ICCCT), 2014 International Conference on (pp. 87–95). IEEE: Allahabad, India. 8. Qilin, M., & WeiKang, S. (2015). A Load Balancing Method Based on SDN. In 2015 Seventh International Conference on Measuring Technology and Mechatronics Automation (pp. 18–21). IEEE. 9. Al Nuaimi, K., Mohamed, N., Al Nuaimi, M., & Al-Jaroodi, J. (2012, December). A survey of load balancing in cloud computing: challenges and algorithms. In Second Symp. Netw. Cloud Comput. Appl. (pp. 137–142). 10. Ray, S., & De Sarkar, A. (2012). Execution analysis of load balancing algorithms in cloud computing. International Journal on Cloud Computing: Services and Architecture (IJCCSA), 2(5), 1–13.

Market Basket Analysis Using Community Detection Approach: A Real Case

Sepideh Faridizadeh, Neda Abdolvand, and Saeedeh Rajaee Harandi

Introduction

A common problem for many retail stores, particularly during the past few decades, has been finding the sets of products most often co-purchased by customers. The only source of this necessary information is the history of customer behavior found in sales transactional data [1]. In fact, customer shopping behavior helps retailers understand individuals' interests and habits. Although the use of this information might change over time, it is generally useful for making decisions about product placement, producing personalized marketing campaigns, and determining the best times for product promotions [2]. With the rapid development of online business, both companies and consumers have encountered new situations, and staying viable in fields such as e-retailing has become more difficult for companies. Customers, in turn, are faced with a multitude of goods that makes choices difficult and leads to fewer purchases in a given time. As a result, the need for new marketing strategies, such as person-to-person marketing, web personalization, and customer relationship management (CRM), has increased for both marketing and market research executives [3]. Recently, and especially during the last two decades, companies have started to use transactional data to discover new information about customers' shopping behaviors, sales records, and other relevant issues [1, 4]. In recent years, most companies have considered it essential to use new marketing strategies, such as one-to-one marketing, customer relationship management, and web personalization, to overcome marketing problems [3].

S. Faridizadeh · N. Abdolvand · S. Rajaee Harandi
Department of Social Science and Economics, Alzahra University, Tehran, Iran
e-mail: [email protected]


For instance, the study of retail transactional data, also known as market basket analysis, is an interesting research field within which one can find meaningful relationships among customers' purchases. By analyzing these meaningful relationships, companies can offer more products to their customers and ultimately increase their sales profits [2, 5, 6]. Researchers have tried to introduce different techniques to facilitate the analysis of this information, but most of these methods have failed to fully explore all the ramifications of big data [1]. Moreover, before the 1990s, retail stores and other companies sold their products in different markets without access to any kind of knowledge source such as a transactional database. Problems arise when there are large transaction datasets that yield hundreds or thousands of rules at different levels of support and confidence, many of them redundant or obvious [7]. Although there are different tools and techniques to deal with this problem, they have limitations as well [3]. The first limitation is the lack of an obvious method for determining an appropriate threshold for the support and confidence parameters [2]. When the minimum thresholds for support and confidence are low, many of the rules created lack sufficient interestingness, and redundant rules are produced that make it difficult to identify the appropriate ones [7]. In addition, by selecting thresholds that are too high, some of the rules with higher interestingness might be omitted, as is clearly shown in the study by Raeder and Chawla [2]. Customers face a kind of complexity in deciding which items to choose among the wide variety of products available in the market. For instance, customers need to choose from thousands of books, hundreds of films, and many other products; evaluating all of these products and choosing one of them without any kind of assistance may be hard or even impossible. Being able to find relevant products for the customer automatically has become a crucial issue for retailers over the last few decades [8, 9]. In the early 1990s, when Internet usage started to increase, recommender systems (RS) emerged to help customers by predicting their interests and preferences through analyzing the behavior of each user [9–11]. Some common techniques are collaborative filtering (CF), content-based (CB) techniques, and knowledge-based techniques. Each technique has advantages and limitations; for example, although CF is a popular personalized recommender algorithm, it has limitations in terms of scalability, sparseness, and cold-start [10, 11]. Therefore, in spite of the many attempts to improve recommender systems, these systems still confront some problems, and there is a strong demand for increasing their prediction accuracy, which eventually can bring satisfaction for customers and more profit for the e-retailers [12]. Association rule mining is the most commonly used method for analyzing customers' shopping baskets. In this approach, correlations and associations of different items are identified by extracting frequent shopping patterns. Despite the advantages that mining these rules can have in market basket analysis, there are still many restrictions. The first limitation of association rule mining is the lack of a clear procedure for finding the appropriate threshold for the minimum support and confidence: a high threshold results in the elimination of some interesting association rules, whereas a low threshold leads to the mass production of rules that are mostly redundant, repetitive, or even obvious. Different techniques have been developed to solve this problem and remove uninteresting rules.


One of these techniques involves finding closed and maximal item sets [5]. Those experiments, however, were conducted on generic machine learning datasets rather than market basket datasets. Given the constraints mentioned above, other methods are needed for better analysis and for gaining more knowledge of the customer's shopping basket. Hence, this study uses a new approach, different from the traditional rule mining approach to market basket analysis. This constructive approach detects communities of related and dependent products in a market basket by treating transactional data as a product network. Community detection aims to find clusters, groups, or subgraphs of a network; in fact, a community is a cluster whose nodes are connected by many edges, while the number of edges between different clusters is minimized [13]. Given the importance of the issue discussed above, more studies should still be done on analyzing customers' purchasing behavior in order to recommend items based on their interests. Therefore, the main idea of this work is to extract extra knowledge from customer shopping behavior and to find hidden patterns in the information of a transactional database. This research can develop a new way to find sets of specific products to recommend to customers. By adopting the community detection technique, which is a common approach in complex network analysis, this study tries to achieve this goal. Using graph theory, the transactional data are modeled as a graph that provides an expressive picture of the relations among all transactions. Community detection algorithms are then applied to discover communities with special properties. The transactional data used in the study and their properties are described next, and the research methodology and findings are presented in detail in the following sections. Discussion and recommendations for future studies are given in the last section.

Literature Review

Association Rules (AR)

Association rule mining and interpretation is a popular and well-known method for analyzing the market basket. An association rule is represented as A→B, in which A is the antecedent and B is the consequent; the two item sets are disjoint. The rules extracted using this method provide general information about the hidden regularities in the data. There are many criteria for evaluating the quality and interestingness of these rules, but their quality is often evaluated by the support, confidence, and lift criteria [14]. Support is defined as the proportion of the database transactions in which the items are put together in a market basket. Confidence expresses how strongly the consequent depends on the antecedent, that is, the fraction of transactions containing the antecedent that also contain the consequent. The lift criterion indicates the strength of the correlation: it is the ratio of the conditional probability of B given A to the unconditional probability of B (see Eq. 1).


$$\text{lift}(A \rightarrow B) = \frac{P(B \mid A)}{P(B)} = \frac{P(AB)}{P(A)\,P(B)} \tag{1}$$
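For illustration, the following is a small, self-contained sketch (not from the chapter) that computes support, confidence, and lift for a candidate rule A→B from a toy list of transactions; the item names are invented for the example.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Return (support, confidence, lift) for the rule antecedent -> consequent."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    a, b = set(antecedent), set(consequent)
    count_a = sum(1 for t in sets if a <= t)
    count_b = sum(1 for t in sets if b <= t)
    count_ab = sum(1 for t in sets if (a | b) <= t)
    support = count_ab / n                                   # P(AB)
    confidence = count_ab / count_a if count_a else 0.0      # P(B|A)
    lift = confidence * n / count_b if count_b else 0.0      # P(B|A) / P(B)
    return support, confidence, lift

if __name__ == "__main__":
    baskets = [
        ["cell phone", "SD card", "case"],
        ["cell phone", "SD card"],
        ["laptop", "flash memory"],
        ["cell phone", "case"],
        ["laptop", "SD card", "flash memory"],
    ]
    print(rule_metrics(baskets, ["cell phone"], ["SD card"]))
```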

Algorithms for mining such rules are popular tools for discovering effective association rules [4, 13, 15, 16]. Because of their widespread use, researchers have recognized the importance of understanding the discovered rules, since discovered association rules can be used, for example, for store shelf space management [16, 17].

Association Rule Mining (ARM) in Market Basket Analysis

One of the most significant and common methods for analyzing data and uncovering interesting patterns in large data sets is association rule mining (ARM). This method is a premier approach for analyzing the market basket: it first investigates repeat-buying patterns and then, at the next level, extracts interesting rules on the basis of the confidence and support measures. The interestingness of patterns is assessed by considering factors of association and correlation [5, 6]. Detecting these rules has useful advantages in market basket analysis for discovering practical knowledge, and several studies have been done to develop techniques and algorithms for ARM. In fact, there are several such techniques in the literature, including Apriori [17] and FP-growth [18], which are both common and popular for detecting frequent item sets. The Apriori algorithm follows a candidate generation-and-test approach based on the Apriori property, which states that if an item set is not frequent, then no superset of it can be frequent. This property reduces the search space, but the database must be scanned repeatedly to count recurring item sets; thereby the run time increases and the memory overhead grows. Han et al. [6] proposed the FP-growth algorithm to overcome the multiple scans of the database. In this approach, the database is first compressed into a tree structure called the FP-tree, which is a compact representation of the original database. The main database is then divided into several smaller conditional databases, from which the frequent items and their extensions are derived separately [6]. However, the method still has certain limitations. Up until now, several studies have been conducted to develop or improve ARM techniques and to identify the limitations of this approach [2]. For instance, some of the weaknesses and strengths of ARM have been illustrated in the studies of Raeder and Chawla [2]. One of these limitations is the redundancy of the rules, which makes finding the precisely interesting rules challenging. In order to deal with this problem and to increase the chances of correctly choosing interesting rules, several techniques have been developed that can sort rules on the basis of objective measures. The most critical of these techniques is to decrease the obvious and redundant rules by finding the maximal and closed frequent item sets among the frequent patterns [6, 19]. However, according to Raeder and Chawla [2], the use of maximal and closed item set mining in practical market basket analysis is limited.


The first approach for detecting the truly interesting rules is to use several interestingness measures and to sort rules by more than 20 objective measures. These include lift and the correlation coefficient [20], the J-measure, the Gini index, and other measures that have been compared in specific research studies [21]. Furthermore, these measures are used to sort the rules by their importance and also assist in pruning uninteresting rules. Nevertheless, a few related studies [20–22] have shown that each of these measures ranks the rules differently, which makes it ambiguous which rules are actually the appropriate ones. Additionally, many of these existing measures equate interestingness with "deviation from independence" [2, 21]. Considering these limitations, it is crucial to use other approaches and methods that can provide more accurate information about the relational logic between co-purchased products and customers. Complex network analysis is one of the contemporary methods that offers a new insight into transactional data analysis: it models the data as a complex network of relational connections. Accordingly, this work attempts to clarify the structural patterns in the connections between products and customers in a retail store by using network analysis methods. Eventually, the application of these conclusions leads to the development of recommender systems.

Graph Mining Method in Marketing and Customer Behavior Analysis

Graphs are useful tools for representing mathematical models of networks and the structures of complex systems. A graph G, defined as G = (V, E), is a way of displaying relationships among several items or subjects, where V is the set of vertices and E the set of edges that connect the vertices [23, 24]. Over the past decades, many researchers have used network analysis, or graph theory, to investigate the structures of complex systems in different scientific fields, including biological research [25], business [26], engineering domains [27], and particularly social network analysis (SNA) [23]. Market basket analysis is one of the important subjects in the business domain, particularly for e-retailers who use sales transaction information to follow customer shopping behavior and to promote their products appropriately [3]. To analyze the market basket or customer behavior, customer profile information or transactional data must be modeled as a graph or network using SNA tools. In 2007, consumer shopping behavior in e-commerce was studied by applying a random graph modeling methodology in which sales transactions were represented as a bipartite consumer-product graph; the findings showed that the topological characteristics of real-world customers and products deviated from theoretical predictions based on a random bipartite graph, reflecting the non-randomness of the customers' choices [28]. Another study, in 2010, addressed the application of network techniques to the market basket analysis problem by modeling transaction data as a product network.


In that study, the product network was formed by connecting the products that each customer purchased during a specific period. Then, using a community detection approach on the network, some expressive relationships among products were discovered, although discovering such patterns would be difficult, or perhaps impossible, with association rule mining approaches [2]. Another piece of research, done in 2012, suggested co-purchased networks and market basket networks to expand the concept of market basket analysis. In that research, a connection between a pair of products in the market basket network was formed when they were purchased at the same time, whereas in the co-purchased network products were linked when the same customer purchased them. The work showed that the two networks differed in visualization, network characteristics, and network structure, so they could provide different useful information from the same transactional data [3]. Eventually, Videla et al. [1] presented a novel approach using graph mining techniques to analyze the market basket. In this approach, overlapping community detection algorithms are used instead of association rule mining. Comparing this state-of-the-art method with k-means, frequent item set algorithms, and SOM showed that the novel methodology discovered meaningful and useful frequent item sets, whereas the traditional techniques fell far short of these advantages. Several studies over the last decades have investigated this important aspect of business, especially to develop recommender systems in the e-commerce area. Doubtless, the most important application of recommender systems is at e-retailers: Amazon and other online retailers use recommender systems to suggest products to customers based on item-based similarity, user-based similarity, and other techniques that can provide predictions for the next purchasing decisions. Generally, recommendation systems have several applications, including product recommendations, movie recommendations, and news articles [29, 30]. Over the past two decades, more than 170 research studies, web pages, and related resources have been published in this area [31]. In addition, different methods are applied for recommender systems, such as data mining and graph mining methods. In this regard, Amatriain et al. [32] gave an overview of the main data mining techniques used for designing recommender systems. They reviewed the main classification methods, such as nearest neighbors, decision trees, rule-based classifiers, Bayesian networks, artificial neural networks, and support vector machines, and showed that each of these methods is useful for different kinds of recommender systems and that, in many cases, the choice is data-dependent. Furthermore, they described the use of association rules in recommender systems and illustrated that association rules are more applicable than classifiers in some cases. Other research has used techniques such as graph theory as well as data mining. In 2007, Huang et al. represented sales transactions as a bipartite customer-product network, and the findings were investigated for applications in recommender systems [28]. Furthermore, there are different factors for evaluating the quality of a recommender system, including its accuracy, user satisfaction, and, finally, providing the recommender system at low cost; a recommender system that has all of these qualities can be counted as a good one [31]. Thus, improving recommender systems remains an interesting and important problem.


The current research attempts to follow the main idea of the studies mentioned in the previous section by building on these related works, whereas other studies in this area did not consider all three networks together. Indeed, the three separate networks investigated here are the product network, the customer network, and the product-customer network. Furthermore, they can be targeted for further analysis by a combination method that gives a new insight into the information of the different networks. According to the results from these three networks, we identify new applications and results that can be useful for enhancing recommender system algorithms. A new method for developing recommender systems is proposed by combining the bipartite network with each of the two other networks, namely, the product network and the customer network.

Social Network Analysis (SNA) and Community Detection Technique

Social network analysis refers to the analysis of social interactions using graph theory, where the actors participating in the network are represented by nodes and their interactions are illustrated by edges that link pairs of nodes. Since its inception, SNA has become a powerful tool for analyzing social structures and is particularly useful when combined with statistical methods [33]. Community detection is one aspect of network analysis that is useful for clustering network actors. In some respects, community detection is similar to the clustering technique in data mining: each community in a graph can be considered a cluster whose members are connected to each other by links. For this reason, community detection is typically applied in social network analysis. Its popularity is also attributed to the fact that, in the clustering models used in data mining, the data are not in a relational form, while in community detection there are several specific relationships among the data. Consequently, the discovered communities are groups in which entities share common properties with each other [34]. In addition to assigning entities to groups, the community detection approach also clarifies the structure and function of the entire network [35]. Thus, community analysis allows a deeper understanding of, for example, key individuals (entities), community structure, the behavior of entities, and so on [36].

Community Detection Techniques in Networks

In general, community detection algorithms fall into divisive algorithms, which detect and remove the connections between communities [33, 37]; agglomerative algorithms, which recursively merge similar groups or communities [38]; and optimization methods, which are based on maximizing an objective function [35, 39].


The quality of the obtained communities is often measured by the modularity criterion; in fact, modularity maximization is an important function in network analysis [40]. Modularity maximization is an NP-hard problem, so heuristic algorithms are used to solve it. The modularity of a partition into communities is a scalar value between −1 and 1 that measures the density of connections within communities [33, 41]. A positive modularity indicates the possible presence of community structure in the network; therefore, we should look for divisions of the network in which this value is higher than expected [35]. Modularity is also defined for networks with weights on their edges, such as the number of communications between two cell phone users (see Eq. 2) [42]:

$$Q = \frac{1}{2m}\sum_{i,j}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta\!\left(c_i, c_j\right) \tag{2}$$

in which $A_{ij}$ indicates the weight of the edge between i and j, $c_i$ is the community to which vertex i is assigned, $c_j$ is the community to which vertex j is assigned, $k_i = \sum_j A_{ij}$ is the sum of the weights of the edges attached to vertex i, and $m = \tfrac{1}{2}\sum_{ij} A_{ij}$ is the total edge weight in the graph.
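As a concrete illustration of Eq. (2) and of the kind of community detection used later in the chapter, the following sketch builds a tiny weighted graph and computes both its Louvain partition and the modularity of that partition. It assumes networkx version 2.8 or later, which ships louvain_communities; the example graph is invented and is not the chapter's data.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# A small weighted co-occurrence graph (edge weight = how often the pair co-occurs).
G = nx.Graph()
G.add_weighted_edges_from([
    ("cell phone", "SD card", 20),
    ("cell phone", "case", 15),
    ("SD card", "case", 12),
    ("laptop", "flash memory", 18),
    ("laptop", "mouse", 9),
    ("flash memory", "mouse", 7),
    ("case", "laptop", 1),          # weak link between the two groups
])

# Louvain partition (greedy modularity maximization).
communities = louvain_communities(G, weight="weight", seed=42)
print("communities:", [sorted(c) for c in communities])

# Modularity of the partition, i.e., Eq. (2) with edge weights.
print("Q =", modularity(G, communities, weight="weight"))
```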

Research Methodology

The methodology adopted for this research consists of several steps. The process begins with data preparation, which allows the data to be modeled as a graph. This enables the application of a popular community detection algorithm, the Louvain method [43], so that several communities are discovered from each of the presented graphs (the customer network, the product network, and the customer-product network). The Louvain method, which focuses on maximizing modularity, is one of the most popular algorithms for community detection [33]. In this study it is used for market basket analysis to divide and detect communities in a product network, which makes it possible to find strong relationships between products and, in turn, to discover significant relationships in customer buying behavior. The method and its uses are explained in detail below. The Louvain method was chosen because it is a hierarchical greedy algorithm that performs well on very large graphs and can efficiently maximize the network modularity [44, 45]. Network modularity was proposed by Girvan and Newman as a quality metric for evaluating any assignment of vertices to modules [34, 36]. The study uses transactional data collected during the first 6 months of 2015 from "Digikala," a popular online retail store in Iran. These transactional data comprise customer information. Typically, such data include personal information, such as gender, date of birth, and the country and city of residence.


They also include product information, such as the group and brand of the product, as well as consumer-product information, such as the time and the amount of the purchase. The database contains around 321,000 real transactional data entries, as well as information on 128,000 customers and 243 specific products, such as digital devices (cell phones, tablets, laptops, etc.), home appliances (stoves, refrigerators, washing machines, etc.), and other kinds of products. Products are identified by category, such as cell phone or coffee maker; brand and detailed product specifications are not included. The customers who purchased the products over the 6-month period in 2015 appear in the database along with their identification numbers.

Data Analysis and Results

As was mentioned previously, three networks are extracted from the transaction data set employed in the present study, namely, the product network, the customer network, and the customer-product network.

Product Network

In general, a network consists of entities represented by nodes that are connected pairwise, producing a distinct graph structure; every node or edge in the network can also be assigned a weight [1, 46]. The product network is one of the networks extracted from the transactional data used in this research: the products play the role of nodes in the graph, and each connection between two products indicates that the pair was purchased together by one or several customers during the studied period. So, if the value of a connection is 20, it means the two products were purchased together by 20 different customers. The extracted product network is almost dense, with a large number of connections per node. Nevertheless, some of these connections are not meaningful enough; they are called noisy connections. In fact, noisy links represent associations generated by chance, and their weight is as low as one or two [2]. In this network, we found that over 10,000 of the edges had a weight of 1 or 2. So, as Raeder and Chawla [2] mention in their study, a specified threshold should be applied to the edge weights. Therefore, after some investigation of the given transactional data set, by choosing δ = 10 the number of edges in the product network decreased from 15,340 to 6015. The new network covers just 38% of the transactions as a whole, with around 127,000 customers and 217 products. Next, the product network diameter, the density, and the average degree of the networks before and after pruning were measured using the Pajek software. The product network diameter is the length of the longest shortest path between any two vertices in the network.

Table 1 General information of product network before and after pruning

Network        | Diameter | Density | Average degree
Entire network | 3        | 0.51    | 126.25
Pruned network | 4        | 0.21    | 55.43

The greater diameter of the pruned network compared with the entire network (four versus three) reveals the sparseness of the pruned network and the lower intensity of its relationships (Table 1). As Table 1 indicates, the density of relationships is roughly halved after applying the threshold. In essence, the density of a network reflects the relationships between the network entities: the more relationships there are between entities, the higher the density, and the density of a network whose entities are all mutually connected is maximal and equal to 1. On the other hand, density is inversely related to the size of the network; as the network grows, its density decreases, because the number of connections in the network increases with the enlargement of the network while the number of connections per product decreases [47]. The degree centrality is another network concept: it is the number of edges incident on a vertex. A product is central when a large number of products are associated with it; in fact, centrality in the product network means that the product is bought along with many other products. Network centralization indicates that the network is organized around its most central points. Therefore, it can be said that density describes the overall coherence of a network, while centralization specifies the degree to which this density is organized around particular focal points. In general, density is disproportionate to centralization [3]. Since the network density depends on the size of the network, it is not a useful criterion; it is better to consider the degree centrality, that is, the number of relationships between the products. Therefore, we can use the average degree of the product network to measure the network structure. This measure is a better criterion than density, because it does not depend on network size and allows the average degrees of networks of different sizes to be compared [47]. Then, using one of the most popular and effective community detection methods, the Louvain method [45], eight communities were detected from the network. Table 2 presents the communities detected by the Louvain method, including the size of each community and other information such as the average degree, the density, the percentage of transactions belonging to each community, and the number of product connections in each community. A comparison between the ranking of the most popular products based on sales and the ranking based on the degree centrality of each product in the graph, covering the top 20 products in each group, is shown in Table 3. According to the table, it is clear that a best-selling product is not necessarily sold together with many other products; for instance, the product with ID 13 (tablet) is the second best-selling item in the store, whereas the product with ID 64 (flash memory) is the second item in terms of its relations with other products and is one of those items that customers buy together with other items. Therefore, it is one of the key products in this store.
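To make the construction of the product network concrete, the following is a minimal sketch (with invented transactions, not the Digikala data) that builds a weighted co-purchase graph from customer baskets, prunes edges below a threshold δ, and detects communities with the Louvain method from networkx (version 2.8 or later is assumed).

```python
from collections import Counter
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import louvain_communities

def product_network(baskets, delta=2):
    """Nodes are products; edge weight = number of customers who bought the pair.
    Edges with weight below delta are treated as noise and dropped."""
    pair_counts = Counter()
    for items in baskets:
        for a, b in combinations(sorted(set(items)), 2):
            pair_counts[(a, b)] += 1
    G = nx.Graph()
    for (a, b), w in pair_counts.items():
        if w >= delta:
            G.add_edge(a, b, weight=w)
    return G

if __name__ == "__main__":
    # each list is one customer's basket over the period (toy data)
    baskets = [
        ["cell phone", "SD card", "case"],
        ["cell phone", "SD card"],
        ["cell phone", "case", "SD card"],
        ["laptop", "flash memory", "mouse"],
        ["laptop", "flash memory"],
        ["laptop", "mouse", "flash memory"],
    ]
    G = product_network(baskets, delta=2)
    print("average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
    print("density:", nx.density(G))
    for i, c in enumerate(louvain_communities(G, weight="weight", seed=1), 1):
        print(f"community {i}:", sorted(c))
```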


Table 2 Detected communities from product network

Community number | Community size | Average degree | Density | Transactions (%) | Connections
1                | 89             | 34.17          | 0.37    | 24%              | 1521
2                | 55             | 11.81          | 0.20    | 44%              | 325
3                | 16             | 8.37           | 0.47    | 18.5%            | 67
4                | 14             | 7.28           | 0.48    | 3.8%             | 51
5                | 23             | 10.26          | 0.41    | 5.5%             | 118
6                | 3              | 3.33           | 0.88    | 0.33%            | 5
7                | 8              | 4.75           | 0.51    | 1.5%             | 19
8                | 9              | 4.66           | 0.47    | 0.9%             | 21

Table 3 Comparison between rank of best sales and rank of product with most degree centrality in the top 20

Product ranking | Product ID (degree centrality) | Product ID (best sales)
1  | 11   | 11
2  | 64   | 13
3  | 13   | 64
4  | 68   | 146
5  | 1322 | 68
6  | 146  | 1322
7  | 25   | 5721
8  | 18   | 18
9  | 39   | 39
10 | 26   | 211
11 | 1291 | 48
12 | 80   | 27
13 | 5721 | 74
14 | 211  | 77
15 | 77   | 34
16 | 74   | 6085
17 | 48   | 52
18 | 40   | 1272
19 | 219  | 87
20 | 75   | 57

According to Bhaskaran and Gilbert [48], "Complementary products are those for which a consumer's utility from using both of them together is greater than the sum of the utilities that he would have received from using each product separately." Equation (3) is used to measure the complementary effect, in which k is introduced as a parameter of complementarity and v_d is the consumer's independent valuation of the durable good, while p_d and p_c are the per-unit prices of the durable good and the complementary product, respectively.


Fig. 1 Community 2 (product network)

$$U(y,\delta) = \delta\,(v_d - p_d) + \int_{0}^{y} \frac{a_c + k\delta + \varphi v_d - x}{\gamma}\, dx - y\,p_c \tag{3}$$

where y is the amount of the complement that the consumer consumes, δ = 1 if he has the use of the durable good, and δ = 0 otherwise. Finally, Figs. 1 and 2 show two of the communities discovered by running the Louvain algorithm. The graph in Fig. 1 is clearly denser than the graph in Fig. 2, which means that there are many co-purchased products in this community, while some of them have stronger relations than others, for instance, the products with IDs 11, 39, 64, and 1323, which are a mobile phone, a laptop, a flash memory, and an SD card. A strong relationship between two products indicates that they are frequently bought together in this community, and this feature explicitly suggests that the two products are complementary.
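The sketch below evaluates Eq. (3) in closed form (the integral of a linear integrand) for hypothetical parameter values; none of the numbers come from the chapter, and the parameter names mirror the equation only for illustration.

```python
def complement_utility(y, delta, v_d, p_d, p_c, a_c, k, phi, gamma):
    """U(y, delta) = delta*(v_d - p_d)
       + integral_0^y (a_c + k*delta + phi*v_d - x)/gamma dx - y*p_c.
    The integral of the linear integrand has the closed form
    ((a_c + k*delta + phi*v_d)*y - y**2 / 2) / gamma."""
    base = a_c + k * delta + phi * v_d
    integral = (base * y - 0.5 * y ** 2) / gamma
    return delta * (v_d - p_d) + integral - y * p_c

if __name__ == "__main__":
    # utility with and without the durable good, for illustrative parameters
    params = dict(v_d=10.0, p_d=6.0, p_c=1.0, a_c=2.0, k=3.0, phi=0.5, gamma=4.0)
    print("with durable   :", complement_utility(y=3.0, delta=1, **params))
    print("without durable:", complement_utility(y=3.0, delta=0, **params))
```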

Customer Network

The other network extracted from the given transactional dataset is the customer network. Although customers are shown as nodes in this network, the relations between them are not based on any friendship among the customers; rather, each edge shows the similarity between customers' purchases. It represents how many common items were bought by each pair of customers, which is also the weight of the edge, and it may range from 1 to 1000 during the specified period.


Fig. 2 Community 5 (product network)

After modeling the data as a customer network, we found that around 50% of the customers had purchased fewer than six items over the 6-month period. Since only customers who had purchased on average at least one item per month, in other words the more loyal customers, were to be investigated, those who had bought fewer than six products were ignored when forming the final customer graph. Ultimately, 110,222 transactions, involving 10,947 customers and 238 products, were extracted. Then, using the Louvain method on the intended network, eight communities were detected; they are shown in Table 4, which contains the size of each community and the percentage dispersion of customers across the discovered groups. Furthermore, by sorting the top 20 customers of the store based on the amounts of their purchases, the key customers (those who have the most relations with other customers in a community based on their common shopping interests) can be identified across the eight communities; their ranking is given in Table 5. This finding can help e-retailers to know their key customers and then, by following their shopping behavior, promote more relevant products. Figures 3 and 4 represent two of the customer communities detected by the applied method. In Fig. 4, the connections among customers reflect greater similarity in their shopping interests and behavior, whereas in Fig. 3 there are two different patterns among customers, with both strong and weak connections: bold, thick links mean that the paired customers have bought several items in common, while thin links indicate less similarity, that is, only one or two common purchased products.


Table 4 Detected communities from customer network

Community number | Community size (number of customers) | Dispersion of customers in communities
1 | 3738 | 35%
2 | 3126 | 29%
3 | 4004 | 37%
4 | 21   | 0.19%
5 | 18   | 0.16%
6 | 7    | 0.06%
7 | 23   | 0.21%
8 | 10   | 0.1%

Fig. 3 Community 5 (customer network)

Fig. 4 Community 8 (customer network)

Table 5 Ranking customers based on their purchases

Rank | ID of best customer based on amount of their purchases
1  | 481794
2  | 716805
3  | 641245
4  | 537310
5  | 470250
6  | 617476
7  | 473414
8  | 867963
9  | 546859
10 | 610837
11 | 567166
12 | 492615
13 | 697663
14 | 757745
15 | 588250
16 | 708511
17 | 732027
18 | 616887
19 | 716350
20 | 837167
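For completeness, here is a sketch (again with toy data, not the store's transactions) of how the customer similarity network described above can be built: customers become nodes, and the edge weight between two customers is the number of distinct items they have both purchased, with infrequent buyers (fewer than six items) filtered out first.

```python
from itertools import combinations

import networkx as nx

def customer_network(purchases, min_items=6):
    """purchases: dict customer_id -> set of product ids bought in the period.
    Returns a weighted graph whose edge weights count common purchased items."""
    loyal = {c: items for c, items in purchases.items() if len(items) >= min_items}
    G = nx.Graph()
    G.add_nodes_from(loyal)
    for (c1, i1), (c2, i2) in combinations(loyal.items(), 2):
        common = len(i1 & i2)
        if common > 0:
            G.add_edge(c1, c2, weight=common)
    return G

if __name__ == "__main__":
    purchases = {
        "cust_a": {1, 2, 3, 4, 5, 6, 7},
        "cust_b": {2, 3, 4, 5, 6, 7, 9},
        "cust_c": {10, 11, 12, 13, 14, 15},
        "cust_d": {1, 2},                    # filtered out: fewer than six items
    }
    G = customer_network(purchases)
    print(G.edges(data=True))
```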

Product-Customer Network (Bipartite Network)

The product-customer network is a kind of bipartite network that is very commonly investigated in market basket analysis, especially for recommendation systems, and it has been examined in several studies [25, 26, 43]. Here, the sales transactions are represented as a graph in which the vertices are customers and products, while the transactions (customer-product pairs) are treated as edges. After running the Louvain algorithm, eight communities were detected; Table 6 summarizes the discovered communities. The graph of the seventh community is depicted in Fig. 5, where the separated nodes on the left side represent products with their ID numbers and the other nodes in the network are the customers interested in these products.


Table 6 Detected communities from customer-product network

Fig. 5 Community 7 (bipartite network)


Using the findings acquired from this bipartite graph together with the customer and product graphs, we propose two new algorithms that use the patterns discovered by the community detection technique and capture hidden relationships between entities.

Recommender System Algorithms

Although each of the presented networks can be used as a personalized recommender system on its own, combining them obviously provides more complete information and, consequently, increases accuracy. In order to extend recommender system algorithms, the pseudo-codes of two algorithms are presented below: recommender system A, derived from the combination of the product network and the bipartite (customer-product) network, and recommender system B, derived from the customer network and the bipartite (customer-product) network.

Recommender System A:

PRODUCT community numbers: N = {1 ... n}
CUSTOMER-PRODUCT community numbers: M = {1 ... m}
CUSTOMER numbers = {1 ... i}
PRODUCT numbers = {1 ... j}

1. List recommender (User I, List J)
2. Recommender item list ← empty list
3. For (user i : I) {
4.   For (community m : M) {
5.     PC ← Extract all products purchased by customer i
6.     PM ← Extract all products in community m
7.   }
8.   Compare PC with PM
9.   Case 1:
10.    List J (first priority) ← PC - PM
11.  Case 2:
12.    For (Product Network) {
13.      PN ← Search communities containing all of PC
14.      If (PC = PN)
15.        Go to 9
16.      Else
17.        PN ← Extract all products from (community n : N) containing all of PC
18.        List J (second priority) ← PC - PN
19.    }
20. }

Recommender System B:

CUSTOMER community numbers: L = {1 ... l}
CUSTOMER-PRODUCT community numbers: M = {1 ... m}
CUSTOMER numbers = {1 ... i}
PRODUCT numbers = {1 ... j}

1. List recommender (User I, List J)
2. Recommender item list ← empty list
3. W ← 0
4. For (user i : I) {
5.   PI ← Extract all products purchased by user i
6.   For (community l : L) {
7.     PL ← Extract all products purchased by each customer of community l
8.     For (Customer-Product Network) {
9.       CL ← Extract all communities containing users of community l
10.      PM ← Extract all products from CL
11.      PL ← PM + PL
12.      Compare PI with PL
13.      List J ← PL - PI
14.    }
15.  }
16.  For (product j : List J) {
17.    W ← Find the connection value (weight) of each pair of customers interested in product j in community l
18.    List J ← Sort the products in List J from highest to lowest W
19.  }
20. }
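As an illustration only, here is one possible Python reading of Recommender System A (a sketch of our interpretation of the pseudocode, not the authors' implementation): given the communities found in the customer-product network and in the product network, it recommends to a user the products of the community that covers the user's purchase history, minus the products already bought. The toy community structures and purchase history below are invented.

```python
def recommend_a(purchased, bipartite_communities, product_communities):
    """Recommender System A (sketch).

    purchased: set of product ids bought by the target customer.
    bipartite_communities: list of sets of product ids, one per community of
        the customer-product network.
    product_communities: list of sets of product ids, one per community of
        the product network.
    Returns (first_priority, second_priority) recommendation lists.
    """
    first, second = [], []
    # First priority: products of the bipartite community that contains
    # everything the customer already bought, minus the purchased ones.
    for pm in bipartite_communities:
        if purchased <= pm:
            first.extend(sorted(pm - purchased))
            break
    # Second priority: fall back to the product-network community that
    # contains all of the customer's purchases.
    if not first:
        for pn in product_communities:
            if purchased <= pn:
                second.extend(sorted(pn - purchased))
                break
    return first, second

if __name__ == "__main__":
    purchased = {"cell phone", "SD card"}
    bipartite = [{"laptop", "flash memory", "mouse"},
                 {"cell phone", "SD card", "case", "power bank"}]
    product = [{"cell phone", "SD card", "case"},
               {"laptop", "flash memory"}]
    print(recommend_a(purchased, bipartite, product))
```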

Conclusions and Recommendations

In this paper, by applying the well-known Louvain method, a popular community detection technique, hidden structures and patterns of relations among customers and products, modeled as three separate graphs, were discovered. After analyzing all three graphs and their communities, two algorithms for developing a customized recommender system were presented: one derived from the combination of the product network and the bipartite (customer-product) network, and the other derived from the customer network and the bipartite (customer-product) network. As argued in the related work section, previous studies investigated each of the three kinds of network presented in this research to study subjects related to market basket analysis and to promote products with extended recommender systems [2–4, 49, 50]. In this work, however, we applied all of these networks purposely to develop a new perspective on the issue of recommending products to customers. Indeed, by applying all these networks as an associated whole, it is possible to address matters like upselling, cross-selling, and other similar objectives in a wider scope. This method helps e-retailers to find shopping patterns and, subsequently, to provide product recommendations to customers with more accuracy because of the expanded range of information. Furthermore, we made a brief comparison between our method and association rule mining (especially Apriori) and, interestingly, we found that for the given transactional database the association rules do not hold up when the support and confidence measures are changed even slightly (an increase of 0.01): many interesting rules are omitted.


In practice, this technique gave weak results on the given database. For example, before using the Apriori algorithm, we identified the best-selling products from the frequency table. Then, we set certain support and confidence thresholds (s = 0.1 and c = 0.02) to extract rules, and two of the best-selling products, with IDs 11 (cell phone) and 1322 (SD card), were discovered, ranking 1st and 6th, respectively. However, when the confidence value was increased to 0.4, none of these rules were extracted by this method, which is a significant weakness. Indeed, the proposed method can find many complex relationships in the network structure of sales transactions that are difficult or almost impossible to find by using the association rule mining approach. In particular, methods like community detection can provide valuable information from the network within the framework of commercial intelligence [25]. Real transactional data have been used as the source of this research. Unlike other works [1–3, 28, 29, 35, 51], the current study has focused on finding more appropriate products to recommend to customers by putting together information from each of the three extracted networks: (a) the product network, (b) the customer network, and (c) the product-customer network. At first, information is gained using the community detection technique, which reveals strong associations between different kinds of products and customers based on the logic of market basket analysis. Then, by finding close associations between the structures of these communities, this study uses them as a primary basis for extending a personalized recommendation system in the future. Through the application of this algorithm, a list of prospective products is suggested for a customer based upon an analysis of the customer's previous purchases. This is achieved in two ways: first, by analyzing the relations in the user-user or item-item graphs and, second, by combining the information extracted from the user-item graph to provide a broader perspective on which to base purchase recommendations. However, in this study, information such as demographic profiles has not been considered. Therefore, it is suggested, for the purposes of future study, to investigate methods that consider aspects of the customers' profile beyond merely transactional data. Further, there are some factors, such as price and product traits, which could help researchers in the analysis of the discovered knowledge across a range of topics. For instance, if the price point of a set of products were considered, it would be possible for the e-retailer to determine the most profitable baskets of products based upon the identified product communities, as previously discussed in the "Data Analysis and Results" section. In addition, the Louvain method, as applied here, does not consider community overlapping, which we know occurs within real-world data, since in the vast majority of cases the interests of communities overlap. This would therefore provide a prudent topic for future research, in addition to the active consideration of customer demographic information and product details such as price point. Finally, implementing the two proposed algorithms is necessary in order to evaluate their efficiency in comparison with other similar algorithms in this area.


References 1. Videla-Cavieres, I. F., & Ríos, S. A. (2014). Extending market basket analysis with graph mining techniques: A real case. Expert Systems with Applications, 41(4 Part 2), 1928–1936. 2. Raeder, T., & Chawla, N. V. (2011). Market basket analysis with networks. Social Network Analysis and Mining, 1, 1–17. 3. Kim, H. K., Kim, J. K., & Chen, Q. Y. (2012). A product network analysis for extending the market basket analysis. Expert Systems with Applications, 39(8), 7403–7410. 4. Phan, D. D., & Vogel, D. R. (2010). A model of customer relationship management and business intelligence systems for catalogue and online retailers. Information Management, 47(2), 69–77. 5. Blattberg, R., Byung-Do, K., & Scott, N. (2008). Database marketing: Analyzing and managing customers (p. 875). New York: Springer. 6. Han, J., & Kamber, M. (2006). Data mining concepts and techniques., Second ed. Waltham, MA: Elsevier. 7. Klemettinen, M., & Mannila, H. (1994). Finding interesting rules from large sets of discovered association rules. In Proceedings of the Third International Conference on Information and Knowledge Management (pp. 401–407). 8. Lee, J. S., Jun, C. H., Lee, J., & Kim, S. (2005). Classification-based collaborative filtering using market basket data. Expert Systems with Applications, 29(3), 700–704. 9. Zhu, X., Ye, H., & Gong, S. (2009). A personalized recommendation system combining casebased reasoning and user-based collaborative filtering, In 2009 Chinese Control and Decision Conference CCDC 2009 (pp. 4026–4028). 10. Adomavicius, G., & Tuzhilin, A. (2005). Towards the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734–749. 11. Zhao, Z.-D., & Shang, M.-S. (2010). User-based collaborative-filtering recommendation algorithms on Hadoop. In 2010 Third International Conference on Knowledge Discovery and Data Mining (pp. 478–481). 12. Lu, J., Wu, D., Mao, M., Wang, W., & Zhang, G. (2015). Recommender system application developments: A survey. Decision Support Systems, 74, 12–32. 13. Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules. In Proceeding VLDB ’94 Proc. 20th Int. Conf. Very Large Data Bases (Vol. 1215, pp. 487–499). 14. Ahlemeyer-Stubbe, A., & Coleman, S. (2014). A practical guide to data mining for business and industry. West Sussex, UK: Wiley. 15. Zaki, M. J., Parthasarathy, S., Ogihara, M., Li, W. (1997). New algorithms for fast discovery of association rules. In KDD (Vol. 7, pp. 283–286). 16. Petrison, L. A., Blattberg, R. C., & Wang, P. (1997). Database marketing. Journal of Interactive Marketing, 11(4), 109–125. 17. Agrawal, R., Imieli´nski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. ACM SIGMOD Record, 22(2), 207–216. 18. Han, J., Pei, J., & Yin, Y. (2000). Mining frequent patterns without candidate generation. ACM SIGMOD Record, 29(2), 1–12. 19. Zaki, M. J. (2000). Generating non-redundant association rules, In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 34– 43). 20. Brijs, T., Vanhoof, K., & Wets, G. (2003). Defining interestingness for association rules. International Journal “Information Theories & Applications”, 10(4), 370–376. 21. Tan, P.-N., Kumar, V., & Srivastava, J. (2004). Selecting the right objective measure for association analysis. Information Systems, 29(4), 293–313. 22. Du Mouchel, W., & Pregibon, D. (2001). 
Empirical Bayes screening for multi-item associations, In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD’01 (pp. 67–76).

Market Basket Analysis Using Community Detection Approach: A Real Case

197

23. Easley, D., & Kleinberg, J. (2010). Networks, crowds, and markets: reasoning about a highly connected world (Vol. 81). Cambridge University Press. 24. Cuvelier, E., & Aufaure, M.-A. A. (2012). Graph mining and communities detection. In M.-A. Aufaure & E. Zimányi (Eds.), Business intelligence, vol. 96 (pp. 117–138). Berlin Heidelberg: LNBIP Springer. 25. Stam, C. C. J., & Reijneveld, J. C. J. (2007). Graph theoretical analysis of complex networks in the brain. Nonlinear Biomed. Phys., 1, 3. 26. Wang, Y., & Lin, K. J. (2008). Reputation-oriented trustworthy computing in e-commerce environments. IEEE Internet Computing, 12(4), 55–59 25. 27. Shai, O., & Preiss, K. (1999). Graph theory representations of engineering systems and their embedded knowledge. Artificial Intelligence in Engineering, 13(3), 273–285. 28. Huang, Z., Zeng, D. D., & Chen, H. (2007). Analyzing consumer-product graphs: Empirical findings and applications in recommender systems. Management Science, 53(7), 1146–1164. 29. McSherry, F., & Mironov, I. (2009). Differentially private recommender systems: building privacy into the netflix prize, In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) (pp. 627–636). 30. Leskovec, J., Rajaraman, A., & Ullman, J. D. (2014). Mining of massive data., Second ed. New York, NY: Cambridge University Press. 31. Beel, J., Langer, S., Genzmehr, M., Gipp, B., Breitinger, C., & Nürnberger, A. (2013). Research paper recommender system evaluation: A quantitative literature survey. In RepSys (pp. 15–22). 32. Amatriain, X., Jaimes, A., Oliver, N., & Pujol, J. (2011). Data mining methods for recommender systems. Recommender Systems Handbook, pp. 39–71. 33. Girvan, M., Girvan, M., Newman, M. E. J., & Newman, M. E. J. (2002). Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12), 7821–7826. 34. Schafer, J., Frankowski, D., Herlocker, J., & Sen, S. (2007). Collaborative filtering recommender systems. In Adaptive Web (pp. 291–324). 35. Newman, M. (2006). Modularity and community structure in networks. Proceedings of the National Academy of Sciences, 103(23), 8577–8582. 36. Sankar, C. P., Asokan, K., & Kumar, K. S. (2015). Exploratory social network analysis of affiliation networks of Indian listed companies. Social Networks, 43, 113–120. 37. Newman, M., & Girvan, M. (2004). Finding and evaluating community structure in networks. Physical Review E, 69(2), 1–16. 38. Pons, P., & Latapy, M. (2005). Computing communities in large networks using random walks. Journal of Graph Algorithms and Applications, 10(2), 191–218. 39. Newman, M. E. J. (2004). Detecting community structure in networks. The European Physical Journal B, 38(2), 321–330. 40. Bhowmick, S., & Srinivasan, S. (2013). A template for parallelizing the Louvain method for modularity maximization. Dynamics On and Of Complex Networks, 2, 111–124. 41. Newman, M. E. J. (2006). Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3), 036104. 42. Newman, M. E. J.. Analysis of weighted networks. Physical Review E, Vol.70, No.5 2004: 056131. 43. Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008, 6. 44. Newman, M. E. J. (2006). Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3). 45. Clauset, A., Newman, M. E. J., & Moore, C. 
(2004). Finding community structure in very large networks. Physical Review E, 70(6 Pt 2), 066111. 46. Kawonga, M., Blaauw, D., & Fonn, S. (2015). Exploring the use of social network analysis to measure communication between disease programme and district managers at sub-national level in South Africa. Social Science & Medicine, 135, 1–14.

198

S. Faridizadeh et al.

47. de Nooy, W., Mrvar, A., & Batagelj, V. (2005). Exploratory social network analysis with Pajek. New York, NY: Cambridge University Press. 48. Bhaskaran, S. R., & Gilbert, S. M. (2005). Selling and leasing strategies for durable goods with complementary products. Management Science, 51(8), 1278–1290. 49. Celdrán, A. H., Pérez, M. G., García Clemente, F. J., & Pérez, G. M. (2016). Design of a recommender system based on users’ behavior and collaborative location and tracking. Journal of Computer Science, 12, 83–94. 50. Kim, H. K., Kim, J. K., & Ryu, Y. U. (2009). Personalized recommendation over a customer network for ubiquitous shopping. IEEE Transactions on Services Computing, 2(2), 140–151. 51. Gregory, S. (2009). Finding overlapping communities using disjoint community detection algorithms. Studies in Computational Intelligence, 207, 47–61.

Predicting Future with Social Media Based on Sentiment and Quantitative Analysis
Sahil Sharma, Jon Rokne, and Reda Alhajj

Introduction In this era of social media platforms, millions of users can express their opinions and perspectives on topics of interest to them. As these platforms become increasingly popular, content authors can express themselves more freely, reaching and influencing large audiences. The stakeholders of these topics also use the platforms to review events from their own perspective, to learn where the public stands on their goods and services, and to gather other insights into such events. The data generated through social media is vast and varied, ranging from plain text to high-definition video and other media formats. In the past, social media was regarded merely as a venue for virtual interaction; with much recent research focused on data crawling and opinion mining, it has evolved into a tool that helps corporations understand user preferences. This in turn feeds applications such as personalization of the user experience on the Internet, predictions, and polls. The 2016 election in the United States was extremely competitive, with the outcome unclear until the result was announced. With much of social media discussing and debating the possible outcomes, a large amount of data reflected where users stood among the four leading candidates: Donald Trump, Ted Cruz, Hillary Clinton, and Bernie Sanders. Analyzing changes in both the sentiment and the volume of public opinion would enable businesses and government agencies to strategize more efficiently. They could respond to emerging clusters of negative sentiment with strategies such as dispelling unfavorable impressions, rebalancing their messaging, and targeting specific audiences to swing public opinion back in their favor, or simply adjust their own position on the matter.

Compared with traditional methods of carrying out such large-scale analysis and extracting trends from it, this approach is not only more efficient but also more economical. With user-generated data amounting to multiple terabytes every day, it is quite feasible to analyze the content produced by the affected users and predict the outcome of an election from the patterns observed. In its most basic form, the process has two aspects. On one hand, we consider the plain volume of content, that is, the share of mentions a keyword receives on a specific topic; this reflects the raw popularity of a candidate in the context of the election. On the other hand, we apply natural language processing to study the language itself, which in simple terms is called sentiment analysis or opinion mining: a well-researched method for determining the polarity of a user on a certain topic, with the aim of capturing the author's attitude (feeling, emotion, and subjective stance) toward it. Using the two together naturally yields a more accurate prediction, since both play a vital role in propagating a candidate's influence in an election. The overall process of data crawling, mining, and subsequent analysis is demanding on its own, but the results it can yield are impressive when it is applied to a sensible case with a sufficient amount of data in hand. This also involves targeting only those audiences who directly relate to the event under consideration, which can be achieved with the right selection of parameters.

Related Work There has been plenty of research on predicting the future using social media alone, with Twitter being the most common source of data. Twitter provides a simple and efficient API that can be used to retrieve data in plain textual form, which suits the kind of analysis needed for such a large quantity of data with limited resources. Many of the works concerned with prediction base their results solely on the popularity factor (the frequency with which keywords are repeated on social media). At the other end of the spectrum, a few papers rely only on sentiment analysis. In a few cases, both factors are considered in detail and given subjective weights depending on the case, yet even these works have certain shortcomings. Gaurav et al. [2], in their work "Leveraging candidate popularity on Twitter to predict election outcome," carried out extensive mining and analysis to predict a general election outcome. Their keyword strategies range from very specific rules to computing counts based on a candidate's full name, or the simultaneous presence of an alias and one of the keywords in a tweet. One of the biggest drawbacks of their work is that only volume-based analysis is done: popularity alone cannot serve as a criterion, since a mention can carry negative or positive polarity that may or may not work in the candidate's favor in the actual outcome of the event.

Cameron et al. [3], in their work "Can Social Media Predict Election Results? Evidence from New Zealand," performed a similar type of research, counting and measuring each candidate's followers and friends on social media sites to compare their influence. Although the study is detailed, a connection on social media does not always represent a genuinely favorable relationship, and it cannot be deduced that such connections translate into votes cast in a candidate's favor. Similarly, Khatua et al. [4], in "Can #Twitter_Trends Predict Election Results? Evidence from 2014 Indian General Election," carried out a comparable analysis of the 2014 Indian general elections. However, they analyzed only roughly 0.4 million tweets collected between March 15, 2014, and May 12, 2014, a quantity of data that is very small relative to the population of India and the corresponding number of users. They also discarded tweets that jointly mention multiple parties, which is not an efficient strategy, since automated methods, such as the one used in our research, can split the polarity content of the specified keywords; in our case almost 42% of tweets contained joint mentions of different candidates, of which 27% could be split by the algorithm. Wang et al. [5], in "A system for real-time twitter sentiment analysis of 2012 us presidential election cycle," did excellent work analyzing the US presidential election cycle from real data. The work is promising and offers much insight into occurrences and mentions across social media; the only point on which it is unclear is the handling of sarcasm, as the manual annotation of sarcasm does not reveal how the lexicon system dealt with it. Ceron et al. [6], in "Every tweet counts? How sentiment analysis of social media can improve our knowledge of citizens' political preferences with an application to Italy and France," and Bermingham et al. [7], in "On Using Twitter to Monitor Political Sentiment and Predict Election Results," carried out case studies and sentiment analyses on the French and Irish general elections, respectively. Both works performed a two-way study, sentiment- and volume-based, and both handled error with the mean absolute error method. However, in both cases the time span and quantity of the data analyzed are not quite adequate, and the data were not filtered by the location of the people producing them, so it cannot be deduced whether the authors of the tweets were actual voters in the election. Various other studies approach the concept of analyzing social media. Almuhimedi et al. [8], in "Tweets are forever: a large-scale quantitative analysis of deleted tweets," carried out an interesting study of the dynamics of deleted tweets, with many interesting parameters and infographics; the research is in-depth and provides useful methodological ideas for further researchers in the field. Lu et al. [9] present a novel method for predicting topic trends on Twitter based on MACD (moving average convergence divergence), one of the simplest and most effective momentum indicators in the technical analysis of stocks.

The MACD turns two trend-following indicators, moving averages, into a momentum oscillator: the keywords of topics on Twitter are monitored, and the longer and shorter moving averages are used to track their longer- and shorter-term trends, respectively (a rough sketch of this idea is given below). Qi et al. [10] tried to build a principle-driven model to predict the evolution of user-generated content (UGC) topics while accounting for enterprises' interventions, so that an enterprise can anticipate UGC trends earlier and know more precisely what actions to take than with previous methods. Similarly, Liu et al. [11] proposed a generic probabilistic framework for hot-topic prediction and explored machine learning methods to predict hot-topic patterns; two effective models, PreWHether and PreWHen, are introduced to predict whether and when a topic will become prevalent, respectively. Asur et al. [12] likewise based their predictions on tweet rates, that is, on quantity alone, regardless of the underlying polarity of the tweets; they showed that a simple model built from the rate at which tweets are created about certain topics can outperform market-based predictors. Nguyen et al. [13], in their time-series research, presented a strategy for building statistical models from social media dynamics to predict collective sentiment dynamics, modeling the collective sentiment change without delving into micro-analysis of individual tweets or users and their corresponding low-level network structure. Mulay et al. [14] based their predictions only on the sentiments of social media users, showing how social networking content can be used to anticipate real-world results for box office releases. A large body of research exists in this field owing to the ease of data access and analysis, and future works are expected to be more precise and to improve on the studies mentioned above.
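To make the MACD-based trend tracking from [9] concrete, here is a minimal Python sketch applied to a keyword's daily mention counts. The 12/26/9 window lengths are the conventional MACD defaults and the sample counts are invented; both are assumptions for illustration rather than values taken from the cited work.

```python
# Sketch of MACD-style trend tracking on a keyword's daily mention counts.
# Window lengths (12/26/9) are the usual MACD defaults, assumed here for
# illustration; they are not parameters reported by the cited paper.

def ema(values, span):
    """Exponentially weighted moving average of a list of numbers."""
    alpha = 2.0 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd(daily_counts, short_span=12, long_span=26, signal_span=9):
    """Return (macd_line, signal_line) for a keyword's mention volume."""
    macd_line = [s - l for s, l in zip(ema(daily_counts, short_span),
                                       ema(daily_counts, long_span))]
    signal_line = ema(macd_line, signal_span)
    return macd_line, signal_line

# Hypothetical daily mention counts for one keyword.
counts = [120, 135, 160, 150, 170, 210, 260, 240, 300, 320, 310, 400]
macd_line, signal_line = macd(counts)
# A MACD line above its signal line suggests the topic is gaining momentum.
print("topic gaining momentum:", macd_line[-1] > signal_line[-1])
```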

Quantity Time-Series Study Broadly, we consider two approaches. The first is exploratory analysis, which refers to examining different parameters of unknown data and looking for a pattern; exploratory factor analysis, for instance, condenses the information contained in a number of original variables into a smaller set of underlying dimensions (or factors) with minimal loss of information. The approach is simple yet rigorous: we account for the amount of popularity each subject or keyword has on social media as a whole. In certain cases, such as products, movies, and elections, sheer tweet volume can by itself support direct predictions, and as discussed before, many research works have taken this alone as the pivotal basis of their whole prediction. The test case we considered is given below:

The US Election Total Popularity Dynamics A 2011 study from Dublin City University, already cited in the previous section, stated that tweet volume was "the single biggest predictive variable" in election results. Based on their analysis of political sentiment and prediction modeling, the research indicated that mention volume was a more accurate indicator than sentiment, because volume better represents relative popularity among the population, whereas sentiment can be reactive and influenced by responses to specific news stories or events. The graph data are taken and translated directly from Twitter's interactive analytics, since the large amount of survey material and data available on their support sites leads to some interesting results. The graph representing the volume-based results is shown below:

As the graph clearly shows, Donald Trump has an almost total advantage in terms of popularity share on social media, with Hillary Clinton coming close to him around February and May. He still leads on an overall basis, with his mentions running into the millions and being extremely high in some months, while the other candidates maintain tough competition among themselves, Cruz being the third most probable contestant according to these dynamics. From this we can see that Trump was far ahead of the competition in pure mentions of keywords related to his name on social media sites, with Hillary not far behind. In the actual turn of events this is exactly what happened, with Trump winning the election and Hillary coming a close second. Thus, based on sheer popularity alone, one could have predicted as early as March, or even earlier, that Donald Trump was likely to win the election, since he held an absolute lead in total mentions. However, this is a special scenario, and one parameter alone would not be sufficient for this particular case. A minimal sketch of this kind of interval-based mention counting is given below.
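As a rough illustration of the volume-based tallying, the following sketch counts tweets mentioning each candidate per fixed-length interval. The keyword lists, the interval length, and the sample tweets are illustrative assumptions rather than the actual matching rules or data used in this study.

```python
from collections import defaultdict
from datetime import date

# Illustrative candidate keyword sets; not the exact rules used in the chapter.
CANDIDATE_KEYWORDS = {
    "Trump":   {"trump", "donaldtrump", "realdonaldtrump"},
    "Clinton": {"clinton", "hillary", "hillaryclinton"},
    "Cruz":    {"cruz", "tedcruz"},
    "Sanders": {"sanders", "bernie", "berniesanders"},
}

def interval_volumes(tweets, interval_days=15):
    """tweets: iterable of (date, text). Returns {candidate: {interval_index: count}}."""
    volumes = defaultdict(lambda: defaultdict(int))
    start = date(2016, 1, 1)  # assumed start of the election-year window
    for day, text in tweets:
        bucket = (day - start).days // interval_days
        tokens = set(text.lower().replace("#", "").replace("@", "").split())
        for candidate, keywords in CANDIDATE_KEYWORDS.items():
            if tokens & keywords:
                volumes[candidate][bucket] += 1
    return volumes

sample = [(date(2016, 2, 3), "Big rally for #Trump today"),
          (date(2016, 2, 4), "Hillary Clinton town hall recap")]
print({c: dict(b) for c, b in interval_volumes(sample).items()})
```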

The Anomaly Despite the overall lead being in Donald Trump's favor, the prediction is not straightforward. We cannot base it on sheer volume, because Trump's popularity is not strongly tied to the context of the subject (i.e., the US presidential election). His popularity stems from many different factors, and he was widely discussed on social media long before he even declared his candidacy; Trump has been a constant face in the media and has always been part of various discussions. The volume-based prediction may therefore be misleading, since the polarity behind those mentions may or may not work in the candidate's favor; in other words, a mention does not necessarily reflect his voter base. This being the case, we need additional qualifiers to get a better handle on Trump's standing with voters, so we turn to the sentiment, or quality, of the tweets to make the prediction more accurate. Qualitative content analysis is a descriptive research method involving the development of a coding frame and qualitative coding of data. Here, a machine learning-based framework is used to carry out an intensive polarity check, which determines whether the author is in favor of or against the idea of a particular candidate being elected to the presidency.

Quality Time-Series Study This part of the analysis involves sentiment analysis. Using natural language processing methods, an automated mechanism is designed that retrieves the author's polarity with respect to the topic under consideration; the polarity reflects whether the user is for, against, or indifferent to the topic. Quality or sentiment analysis is a lengthy process and can yield many inaccurate results if not defined and executed properly: the choice of suitable lexicons, the extraction of the correct polarity, the selection of the right set of identifiers, the handling of sarcasm, and so on can all skew the results if not treated carefully, especially for a controversial and sensitive subject like the one considered here, which involves many intricacies of human interaction such as sarcasm. The maturity level of social media content is a further hurdle to overcome when trying to understand the underlying sentiment clearly. Features from an existing sentiment lexicon were somewhat useful in conjunction with microblogging features, but the microblogging features (i.e., the presence of intensifiers and positive/negative/neutral emoticons and abbreviations) were clearly the most useful. Emoticon combinations, together with the polarity, helped in detecting sarcasm and in gauging the intensity of a tweet; combined with the other elements of the tweet, a final score was derived. Abbreviations were removed because of the high level of inconsistency in their meanings and usage.
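Lexicon-based scoring of this kind can be sketched as follows. The polarity extractor is later characterized as a parsimonious rule-based model for sentiment analysis of social media text, which matches the description of the VADER lexicon; using the vaderSentiment package here is therefore an assumption made purely for illustration, not a statement of the exact toolchain used in this study.

```python
# Minimal sketch of rule-based polarity scoring for tweets. VADER stands in for
# the lexicon-plus-microblogging-features approach described above (intensifiers,
# emoticons, etc.); it is not necessarily the exact tool used in this study.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def tweet_polarity(text):
    """Return (alpha, beta, mu): positive, negative, and neutral shares of a tweet."""
    scores = analyzer.polarity_scores(text)
    # VADER also returns a signed 'compound' score in [-1, 1]; the three shares
    # below sum to 1 and correspond to the alpha/beta/mu notation used later.
    return scores["pos"], scores["neg"], scores["neu"]

print(tweet_polarity("Great debate performance tonight!! :)"))
print(tweet_polarity("This campaign is a total disaster..."))
```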

The Polarity Provided by the Lexicon and the Average Consideration The tweets were extracted with an Apache Flume- and Hive-based ingestion system, the driver program being a simple Python script assembling all the modules together. Sentiments were extracted only for tweets whose location was the USA; this is vital, since these are the sentiments of the actual voters who contribute to the final election result. Once sentiment extraction is done, we obtain three different degrees of polarity for a given post or tweet. A re-tweet was counted in favor of the original poster only when its polarity matched that of the original tweet; since the re-tweeter is free to express their own view, sentiment analysis was performed on re-tweets separately, and each re-tweet contributed to the final polarity of the tweet as positive, negative, or neutral in its own right. The final polarity score is thus an amalgamation of the total polarity based on the combined opinion of all the users. These polarities account for the positivity (α), the negativity (β), and the neutrality (μ) of any sentence. The tweets are fed one by one into the polarity extractor, which is a parsimonious rule-based model for sentiment analysis of social media text. Once the polarities of a tweet have been extracted, they are added to produce its total polarity Σ, an indicator of the overall sentiment for that duration of time:

Σ = α + β + μ, for time T

Once all tweets in a time interval T have their polarities calculated and summed, we compute the total polarity of that interval to plot a marker for the corresponding period, by summing the per-tweet totals Σ over the period and normalizing by its length (in our case the duration was initially set to 5 days):

Σ(Period) = ( sum of the per-tweet totals Σ from time t1 to t2 ) / (t2 − t1), with t2 > t1

Recent studies have shown that re-tweets are a complex source of information and should be taken into account along with the original tweets in this analysis. Therefore, before computing the overall result, we account for the polarity of the re-tweets separately: the re-tweets are added as a separate influence on the final polarity of a particular tweet, obtained by analyzing all the re-tweets and their own sentiment. In this way, each tweet's intrinsic sentiment score, combined with the average of its re-tweets' individual scores, truly reflects the sentiments of the individuals sharing the original tweet.

The authorship, attribution, and communicative fidelity of re-tweets are negotiated in diverse ways, so their contribution cannot be captured by simply multiplying the original polarity as if the re-tweeters felt exactly the same as the original tweeter. Let x be the cumulative polarity factor of a tweet, obtained by averaging the polarity of its re-tweets; this final polarity mark of the re-tweets is a fraction in the range 0 to 1. It is multiplied by the original tweet's sentiment polarity to reflect the polarity of the re-tweeters, so that the average sentiment of everyone who shared the tweet is taken into account. The revised formula is:

Σ(Period) = ( sum from t1 to t2 of [ Σ(tweets that were not re-tweeted) + x · Σ(re-tweeted tweets) ] ) / (t2 − t1)

A small code sketch of this aggregation is given below. After obtaining the polarities of the different tweeters, we plot them on a graph and examine how they interacted on social media, to gain insight into each candidate's performance by checking the total sum of the polarities. We then compare the polarity of the four subjects, Donald Trump, Hillary Clinton, Ted Cruz, and Bernie Sanders, and see whether there is a clear enough lead in total polarity, over different durations, to predict the winner. Where the lead is unclear and the candidates fluctuate a great deal, we need to study the pattern with events and the final concluding date in mind to arrive at an accurate result. Better understanding the predictive power and limitations of social media is therefore of the utmost importance, to avoid false expectations, misinformation, or unintended consequences.
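The following minimal Python sketch shows one possible reading of the period-level aggregation above. The tweet record layout (a per-tweet (α, β, μ) triple plus the polarity marks of its re-tweets) is an assumed illustration; only the weighting by x and the normalization by the interval length follow the formula in the text.

```python
# Sketch of the period-level polarity aggregation. Each tweet carries its
# (alpha, beta, mu) triple and, if it was re-tweeted, the polarity marks of its
# re-tweets; x is the average re-tweet polarity (a fraction in [0, 1]) used to
# weight the original tweet. The record layout is an illustrative assumption.

def tweet_sigma(alpha, beta, mu):
    # Per-tweet summation: Sigma = alpha + beta + mu, as defined in the text.
    return alpha + beta + mu

def period_polarity(tweets, t1, t2):
    """tweets: list of dicts with keys 'polarity' -> (alpha, beta, mu) and
    'retweet_marks' -> list of the re-tweets' polarity marks (possibly empty).
    Returns Sigma(Period) for the interval [t1, t2]."""
    total = 0.0
    for tw in tweets:
        sigma = tweet_sigma(*tw["polarity"])
        marks = tw.get("retweet_marks", [])
        if marks:
            x = sum(marks) / len(marks)   # cumulative polarity factor of the re-tweets
            total += x * sigma            # re-tweeted tweets enter weighted by x
        else:
            total += sigma                # tweets that were not re-tweeted enter directly
    return total / (t2 - t1)              # normalize by the interval length

example = [
    {"polarity": (0.6, 0.1, 0.3), "retweet_marks": [0.8, 0.9]},
    {"polarity": (0.2, 0.5, 0.3), "retweet_marks": []},
]
print(period_polarity(example, t1=0, t2=5))
```

In practice, the per-tweet (α, β, μ) values would come from the lexicon-based extractor described earlier.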

The US Elections Quality Dynamics As discussed before, the quantity dynamics alone cannot be credited with predicting Donald Trump's victory in the 2016 election; the other parameter, the sentiment of the tweets, had to be taken into consideration. The most time-consuming part of this process is the preprocessing of tweets to normalize the language and generalize the vocabulary used to express sentiment, which must reflect the intricacies of human conversation. The data were gathered with an upgraded Apache Flume-based ingestion system on a multi-cluster setup of about five computers. For the quality estimates we took nearly 12,000 tweets per day from the start of the election year (January 1, 2016) to the month in which the election was held (November 2016). Polarities were calculated for specific intervals (about 15 days on average in our case) using the formulas given in the previous section and were plotted for the top four contestants of the presidential election, namely Donald Trump, Ted Cruz, Hillary Clinton, and Bernie Sanders. We observe a confirming yet interesting trend in the tweet polarities.

We did a full sentiment analysis based on the previous section's methodology over a duration of more than 10 months. It can clearly be seen that Trump, despite being negative in certain months, has an overall polarity far better than the rest of his competitors. The aggregated polarity score totals for each candidate over the duration are:

1. Donald Trump = 7.2058
2. Hillary Clinton = 4.54892
3. Ted Cruz = 3.4324
4. Bernie Sanders = 3.94552

From this clear lead in average polarity, a prediction may be made that Donald Trump is far more likely to win the election than his rivals. Monitoring social media during an electoral campaign can therefore become a useful complement to traditional off-line polls, and many of the successful studies described earlier go even further, claiming that analyzing social media can produce a highly reliable forecast of the result. As the year started, the overall tweet polarity for the candidates was very close; through many ups and downs, Donald Trump can easily be seen to be the candidate preferred by the public, with the other three staying consistently in a lower positive range. Over the whole timeline, Donald Trump's polarity soars higher in many intervals. Even though Trump and Hillary share the highest number of negative polarity clusters, they still achieve the highest overall sentiment scores over the whole duration. Hence, this information can be used not only to predict the future but also to identify the polarity clusters whose polarity needs to be altered, which any organization could use to great advantage. Finding the fundamental characteristics of consumers' opinions expressed textually on social media is continually proving beneficial; specifically, we are interested in detecting semantic similarity or relatedness across a set of documents retrieved from social media, focusing on the uniformity of and differences in the occurring patterns (a small similarity sketch is given below). In this way, organizations can directly target and influence these clusters and try to shift their polarity in their own favor.
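The chapter does not fix a method for measuring semantic similarity between social media documents; TF-IDF vectors compared with cosine similarity are one simple, common choice, and the sketch below assumes that approach purely for illustration.

```python
# Assumed illustration: TF-IDF cosine similarity between short social media
# documents, one simple way to detect the semantic relatedness mentioned above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Trump rally draws huge crowd in Ohio",
    "Massive crowd at the Trump rally tonight",
    "Clinton outlines new healthcare plan",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sim = cosine_similarity(tfidf)
print(sim.round(2))  # documents 0 and 1 should be far more similar than either is to 2
```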

Conclusion and Future Work The overall study clearly indicates that Donald Trump leads in terms of both the quality and the quantity of his reach among his followers, and the result of the election shows that this translated into reality as well. There has also been considerable criticism of this kind of research. One of the most prominent critiques is by Gayo-Avello [15], "I Wanted to Predict Elections with Twitter and all I got was this Lousy Paper," which makes a well-articulated case against the whole concept of prediction from social media. Most of its points are fair, but they do not rule out the possibility that it can be done. Many of these criticisms stem from the fact that not everyone is connected through these platforms, or that the data are not large enough to be valid. The idea behind this line of research is to make small strides that contribute toward bigger and more accurate systems. Hopefully, with easier access to these platforms, the growing validity of such research will show that people can, and do, reflect their true opinions online; in the future this may lead to polls being held online with identity verification, so that opinions expressed virtually translate into physical reality faster. In our research, the parameters discussed in the previous sections are essentially indicators that Trump was able to reach more people and therefore had greater overall influence, more positive than that of his immediate rival. The whole study thus indicates that it is possible to predict the outcome of such an event by considering both the amount of reach and the polarity in a decent amount of detail. Much work has been done in this field before, and this research aims to be part of the process and to contribute to this amalgamation of social media platforms and data analysis. The final prediction is the culminated result of multiple aspects of human conversation and of how they should be perceived from the user's perspective; this approach is called "mixed methods." The term refers to an emergent research methodology that advances the systematic integration, or "mixing," of quantitative and qualitative data within a single investigation or sustained program of inquiry, on the premise that such integration permits a more complete and synergistic use of data than either quantitative or qualitative collection and analysis alone. Future work includes working on more data sets to confirm and refine the prediction, and increasing the number of tweets processed per day. An ongoing project aims at finding the clusters that reveal the exact locations and people affected under different polarity perspectives, so that these clusters can be targeted accurately, leading to location-based predictions and more insight into stakeholders. This can also be used to analyze people who change polarity and what led to those changes, and another interesting direction is to analyze people with a higher number of positive-polarity re-tweets and the reasons behind their popularity. Hence, the whole approach of using user-generated data to understand and improve human-generated events has a lot to offer the modern world.

References

1. O'Reilly, T. (2005). What is Web 2.0.
2. Gaurav, M., Srivastava, A., Kumar, A., & Miller, S. (2013). Leveraging candidate popularity on Twitter to predict election outcome. In Proceedings of the 7th Workshop on Social Network Mining and Analysis. ACM.
3. Cameron, M. P., Barrett, P., & Stewardson, B. (2016). Can social media predict election results? Evidence from New Zealand. Journal of Political Marketing, 15(4), 416–432.
4. Khatua, A., Khatua, A., Ghosh, K., & Chaki, N. (2015). Can #Twitter_Trends predict election results? Evidence from 2014 Indian General Election. In 2015 48th Hawaii International Conference on System Sciences (HICSS). Kauai, HI: IEEE.
5. Wang, H., Can, D., Kazemzadeh, A., Bar, F., & Narayanan, S. (2012). A system for real-time Twitter sentiment analysis of 2012 US presidential election cycle. In Proceedings of the ACL 2012 System Demonstrations. Association for Computational Linguistics.
6. Ceron, A., Curini, L., Iacus, S. M., & Porro, G. (2014). Every tweet counts? How sentiment analysis of social media can improve our knowledge of citizens' political preferences with an application to Italy and France. New Media & Society, 16(2), 340–358.
7. Bermingham, A., & Smeaton, A. (2011). On using Twitter to monitor political sentiment and predict election results. In Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology (SAAIP 2011).
8. Almuhimedi, H., Wilson, S., Liu, B., Sadeh, N., & Acquisti, A. (2013). Tweets are forever: A large-scale quantitative analysis of deleted tweets. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM.
9. Lu, R., Xu, Z., Zhang, Y., & Yang, Q. (2012). Trends predicting of topics on Twitter based on MACD. IACSIT, 12, 44–49.
10. Qi, J., Qu, Q., & Tan, Y. (2012). Topic evolution prediction of user generated contents considering enterprise generated contents. In Proceedings of the First ACM International Workshop on Hot Topics on Interdisciplinary Social Networks Research. ACM.
11. Liu, W., Deng, Z.-H., Gong, X., Jiang, F., & Tsang, I. W. (2015). Effectively predicting whether and when a topic will become prevalent in a social network. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
12. Asur, S., & Huberman, B. A. (2010). Predicting the future with social media. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) (Vol. 1). Toronto, ON, Canada: IEEE.
13. Nguyen, L. T., Wu, P., Chan, W., Peng, W., & Zhang, Y. (2012). Predicting collective sentiment dynamics from time-series social media. In Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining. ACM.
14. Mulay, S. A., Joshi, S. J., Shaha, M. R., Vibhute, H. V., & Panaskar, M. P. (2016). Sentiment analysis and opinion mining with social networking for predicting box office collection of movie. International Journal of Emerging Research in Management & Technology, 5, 74–79.
15. Gayo-Avello, D. (2012). "I wanted to predict elections with Twitter and all I got was this lousy paper": A balanced survey on election prediction using Twitter data. arXiv preprint arXiv:1204.6441.

Index

A Adaptive neuro-fuzzy inference system (ANFIS), 88 Analytic hierarchy process (AHP), 86–87 Apache Flume-and HIVE-based ingestion system, 205 ARIMA, 91–92, 96–97 Artificial and computational intelligence (ACI) ANN, 45–46 big data analytics (see Big data analytics) neural network Levenberg-Marquardt optimization, 53–56 single-layer RBF network, 51–53 petroleum engineering, 46–48 Artificial neural network (ANN) churn rate prediction, 91–92, 96–97 input and target variables, 45–46 pattern recognition, 46 self-organizing maps and associative memories, 46 supervised learning, 46 unsupervised learning, 46 workflow of, 46 Association rule mining (ARM), 180–181 cancer detection and surveillance, 149–151 confidence, 153–154, 160–161 constraints, 153 data analysis, 151–152 data preprocessing, 151–152 lift, 154 relative frequency, 162–163 scatter plot, 162 support, 153 Association rules (AR), 179–180

Automobile insurance fraud detection anomaly ranking, 12 BFS, 13 community detection algorithm, 15–16 complex network analysis theory, 12 constructed network, 13–14 data classification and separation, 13 detected group, example of, 14–15 detection of cycles, 13–14 DWACN, 12 number of identified cycles, 13–14 opportunistic fraud, 11 PostgreSQL 9.3.5 server, 13 professional fraud, 11 suspicious components, 12 undirected network of collisions, 12–13 Autoregressive models, 90

B Bag-of-words model, 33 Banking industry, 90–91 Belief score, 3 Big data analytics Eagle Ford, 57 illustration of, 44 oil and gas company dynamic data set, 48 experimental design, 49–50 implementation, 48–49 operational parameters, 48 optimization, 50 polynomial proxy, 50 production and injection rates, 48 research settings, 50–51


212 Big data analytics (cont.) reservoir characteristics, 48 static data set, 48 validation and iteration, 50 value of, 45, 48 variability, 45, 48 variety, 45, 48 velocity, 45, 48 volume, 44–45, 48 reservoir heterogeneities, 56 SAGD process, 56–61 SPE9, 57 uncertainty-based development, 44 X11, 57, 59, 61 X22, 57, 59, 62 X44, 57, 59, 62 X82, 57, 59, 63 X88, 57, 59, 64 Bipartite network, 191–193 Black oil (BO), 50, 58 Bootstrapping, 33 Breadth-first search (BFS), 13

C Cancer association rule mining cancer detection and surveillance, 149–151 confidence, 153–154, 160–161 constraints, 153 data analysis, 151–152 data preprocessing, 151–152 lift, 154 relative frequency, 162–163 scatter plot, 162 support, 153 contrast set learning definition, 154 dominant patterns, 163–164 minimum support difference, 154–155 frequent pattern mining Apriori algorithm, 149 data analysis, 151–152 data preprocessing, 151–152 itemsets, 152–153, 160 overall statistic, 159 user-specified threshold, 148–149 variations, 149 missing values, 159 NHIS attributes, 156–157 data integration, 159

Index data transformation and formatting, 158 sample adult data, 155 US population, 155 values, 156, 158 numeric and nominal data type, 159 SEER Program ASCII text files, 156 attributes, 156–157 data integration, 158 data transformation and formatting, 158 incidence and survival rates, 155–156 results, 163–165 values, 156, 158 Churn CRM (see Customer relationship management) definition, 85 dynamic models, 85–86 static models, 85 Circular layout algorithm commercial and academic uses, 18 Darwin’s natural selection principles, 19 edge crossing detection, 22–24 number of, 18 representation, 17–18 genetic algorithm chromosome representation, 19–20 crossover techniques, 19, 21–22 evaluation, 19–20 flowchart, 19–20 with 100 nodes and 150 edges, 24–25 with 100 nodes and 200 edge, 24–25 with 100 nodes and 500 edges, 24, 26 with 100 nodes and 600 edges, 24, 26 initial population, 19–20 mutation operator, 22 number of edge crossings, 25, 27 operator selection, 19–21 graph drawing, 17 initial layout, 18–19 Cloud computing laptop, tablet, and PCs, 169–170 load balancing advantages and disadvantages, 171 dynamic scheduling techniques, 171 energy-and time-efficient execution, 173 execution behavior and origin of execution, 172–173 least slack time properties, 173 maximum energy consumption, 172 minimum time consumption, 172

Index performance parameters, 172 simulation results, 173–175 static scheduling techniques, 170 CLV, see Customer lifetime value Comma-separated values (CSV), 158 Community detection agglomerative algorithms, 183–184 collector assembly algorithms, 183 customer network, 189–191, 193–194 customer-product network, 191–194 modularity maximization, 184 product network centrality degree, 186 complementary products, 187 detected communities, 186–187 discovered communities, 187–188 less intensity, 186 Louvain method, 186–187 noisy connections, 185 Pajek software, 185 pruned network, 186 recommender system, 193–194 sales rate and degree centrality, 186–187 transactional data, 178, 184–185 Complex network analysis theory, 12 Confusion matrix, 123–125 Contrast set learning definition, 154 dominant patterns, 163–164 minimum support difference, 154–155 CRM, see Customer relationship management CSV, see Comma-separated values Curse of dimensionality, 34 Customer lifetime value (CLV) complex network, 131–133 customer importance, 133 customer’s ability, 133 customer’s reputation, 130 data collection, 135–136 influential value, 133–137 k-means algorithm, 140–142 monetary value, 133–135, 139–140 network effects, 130–131 sample network, 133 structural value, 133–135, 138–139 time series, 89 Customer network, 189–191, 193–194 Customer-product network, 191–194 Customer relationship management (CRM) degree of churn, 91, 96 fuzzy inference system (see Fuzzy inference system) LRFM in banking industry, 90–91

213 k-mean clustering, 91, 93–96 retention phase, 86 RFM model customer lifetime value, 86–87 customer purchase records, 87 customers’ relationship, 87 predicting customer behavior, 87 time series, 88–90 weighted-RFM model, 91 Customer segmentation, 129

D Dickey-Fuller unit root test, 96–97 Directed networks, 131 Directed weighted accident causation network (DWACN), 12 Dunn index, 94–95 Dynamic network bioinformatics, 104 evolution of features adjacency matrix, 108–109 algorithm, 110 characteristics, 107 combined time dependency Graph, 113–114 Diagonal Matrix D, 109–110 evaluation techniques, 106 experimental set-up, 112 feature frequencies, 107–108 feature ID, 113 feature selection, 105–106 Flickr dataset, 111 Group Matrix K, 108–110 Identity Matrix I, 109–110 initial feature matrix, 108–109 Laplacian matrix L, 109–110 network nodes, 105 percentage of changes, dependency, 114–115 sample social network, 105–106 semi-supervised methods, 106 Social Dimension Matrix Z, 109–110 supervised methods, 105–106 from t = 1 to t = 10, 112 unsupervised methods, 105–109 workflow diagram, 107–108 resources, 104

E Edge crossing number, 17 Eigenvector, 14

214

Index

F False negative (FN), 124–125 False positive (FP), 124–125 FP Tree, 180 Frequent pattern mining Apriori algorithm, 149 data analysis, 151–152 data preprocessing, 151–152 itemsets, 152–153, 160 overall statistic, 159 user-specified threshold, 148–149 variations, 149 Fuzzy c-means (FCM), 88 Fuzzy inference system ANN, 92–93, 97–99 ARIMA, 91–92, 96–97 churn classification, 88 defuzzification block, 87–88 degree of membership, 87 MAE, 93 Mamdani fuzzy model, 88 MAPE, 93 membership functions, 87 nonlinear mapping, 88 reasoning mechanism, 87 RMSE, 93 selection, 87 Sugeno method, 88

Holdout method, 122–123 Hortonworks Data Platform 2.4, 37

G Gaussian kernel, 51–52 Generalized linear models (GLM), 6 Genetic algorithm (GA) chromosome representation, 19–20 crossover techniques, 19, 21–22 evaluation, 19–20 flowchart, 19–20 with 100 nodes and 150 edges, 24–25 with 100 nodes and 200 edge, 24–25 with 100 nodes and 500 edges, 24, 26 with 100 nodes and 600 edges, 24, 26 initial population, 19–20 mutation operator, 22 number of edge crossings, 25, 27 operator selection, 19–21 Global history matching (GHM), 48–49 GNU General Public License, 33 Graph mining method, 181–183

M Machine learning algorithm, 30, 33, 36 Mamdani fuzzy models, 88 MapReduce framework commutativity and associativity properties, 71 degree of parallelism, 71 Hadoop, 69, 71 implementations, 71 MRlite, 72 sequential processing algorithm, 70 sequential reducer phase, 72 sliding-window algorithm data dependency, 73–74 getLength() and getStart(), 73 input split offset, 73, 78 maximum memory size, 78 moving average, 75–78 reducer memory, 72–73, 78 shift mathematical operator, 73 special failure handling mechanism, 78 split size, 73 time series analysis algorithms, 72

H Hadoop Distributed File System (HDFS), 71 Health Care Reform data set, 31

I Infodemiology, 29 Iterative Assessment Algorithm (IAA), 12

J JSON, 120

K k-mean clustering, 91, 93–96 k-nearest-neighbor-based time-series classification (kNN-TSC) technique, 89

L Latin hypercube design (LHD), 49, 58 Length of recency frequency monetary (LRFM) model in banking industry, 90–91 k-mean clustering, 91, 93–96 Linkpredictor, 7 LUFS, 107

Index Market basket analysis AR, 179–180 ARM, 180–181, 195 customer shopping behavior, 181–183 graph mining method, 181–183 SNA, 183 See also Community detection Mean absolute error (MAE), 93 Mean absolute percentage error (MAPE), 93 Mixed methods, 208 Monte-Carlo (MC) design, 49 Moving average convergence divergence (MACD), 201–202 Multilayer (ML) Levenberg-Marquardt (ML) NN, 50, 53–56, 58 Multilayer perceptron network, 93 Multi-Perspective Question Answering (MPQA) Subjectivity Lexicon, 33, 38 Multiple period training data (MPTD), 89

N National Center for Health Statistics (NCHS), 155 National Health Interview Survey (NHIS), 150–151 attributes, 156–157 data integration, 159 data transformation and formatting, 158 sample adult data, 155 US population, 155 values, 156, 158 Natural language toolkit (NLTK) package, 37 Negative link prediction (NeLP) Facebook, Twitter, and LinkedIn, 1 Slashdot and Epinions balanced dataset, 3 belief propagation, 3 data matrix, 7–8 dataset description, 4 degree feature set, 2–3 edge (u,v), 2–3 Epinions, 2 fisher scoring iterations, 7 formulation in R, 5–7 logistic regression classifier, 3 people’s reviews, 2 positive links and content centric interactions, 3 predictive accuracy, 3 site users, 1 Slashdot, 1–2 social-psychological theories, 8–9

215 standard error and z-statistic values, 7 triad feature set, 3 trustworthiness, 3 NetFS, 107 Net present value (NPV), 48–49 Neural network (NN) Levenberg-Marquardt optimization, 53–56, 58 single-layer RBF network, 51–53, 58 NLP Sentiment Project, 40

O Obama-McCain debate data set, 31 Objective function (OF), 48–49 Open Calais, 31 Opinion mining, see Sentiment analysis Oracle Data Miner, 30

P Parsimonious Rule-Based Model, 205 Positive link prediction, see Negative link prediction (NeLP) PRIDIT analysis, 11 Principal component analysis, 11 Product avoidance explicit feedback, 118 implicit feedback, 118 recommender systems accuracy, 123–125 advantages and disadvantages, 119 collaborative filtering, 119 content-based filtering approach, 119 keyword extraction, 119 product profile, 122 Python, Java, and R, 120–121 sentiment analysis, 118–119 testing set, 122–123 training set, 122–123 user profile, 122 Product network centrality degree, 186 complementary products, 187 detected communities, 186–187 discovered communities, 187–188 less intensity, 186 Louvain method, 186–187 noisy connections, 185 Pajek software, 185 pruned network, 186 recommender system, 193–194 sales rate and degree centrality, 186–187

216 PySpark, 38–39 Python-igraph package, 19 R Recency frequency monetary (RFM) model customer lifetime value, 86–87 customer purchase records, 87 customers’ relationship, 87 predicting customer behavior, 87 Recursive neural tensor network (RNTN), 119 Rheumatoid arthritis (RA), 159 RIDIT scores, 11 Root-mean-square error (RMSE), 93 S Self-organizing map (SOM), 89 Sentiment analysis, 118–119, 121 See also Twitter sentiment analysis Sentiment Analysis of Social Media Text, 205 Sequence-alignment method (SAM), 89 Silhouette analysis, 140–142 Single-layer (SL) radial basis function (RBF) NN, 50–53, 58 Single period training data (SPTD), 89 Social interaction analysis, 183 Social network analysis (SNA), 183 Spark ML, 34, 37 Spark Streaming instance, 38 SPE comparative solution project (SPE9) simulation, 57 Stanford CoreNLP tool, 121 Stanford Network Analysis Project, 5 Stanford Twitter Sentiment Corpus (STS), 31 Static network, 103 Status and balance theories, 8–9 Steam-assisted gravity drainage (SAGD), 50 Sugeno fuzzy models, 88 Supplementary Multilingual Plane, 32 Support vector machines (SVMs), 36–37, 151 Surveillance, Epidemiology, and End Results (SEER) Program, 150–151 ASCII text files, 156 attributes, 156–157 data integration, 158 data transformation and formatting, 158 incidence and survival rates, 155–156 results, 163–165 values, 156, 158

T TeFS, 107 TextBlob, 38–39

Index Theil U Statistic, 93 3D value-based model, see Customer lifetime value (CLV) Tone Analyzer service, 30 True negative (TN), 124–125 True positive (TP), 124–125 Twitter sentiment analysis AlchemyAPI, 31–32 APIs, 30 context-free language, 36 context-sensitive, 35 corpus building, 33 customer privacy, 30–31 data ingestion, 32 data preparation, 32 feature engineering, 33–34 feature extraction, 36–37 general election outcome, prediction, 200–201 hashtags, 34 IBM, 30 infodemiology, 29 limited resources, 200 MACD, 201–202 Maluuba, 30 on-premises solution, 31 ontology-based methods, 31 Oracle Data Miner, 30 plug-and-play levels of service, 30 PreWHether and PreWHen, 202 quality time-series study emoticon combinations, 204 microblogging features, 204 natural language processing, 204 polarities, 205–206 US elections quality dynamics, 206–207 quantity time-series study, 202–204 results, 37–40 SAP, 30 security tools, 34–35 testing and tuning, 34–35 UGC, 202 US presidential election cycle, 201 Type 1 error, 33, 124–125 Type 2 error, 33, 124–125

U Uncertainty analysis (UA), 48–49 Undirected networks, 131 Unweighted networks, 131

W Ward index, 94–95 Weighted networks, 131 Weighted-RFML model, 87

Wikipedia, 3 Word-of-mouth (WOM) models, see Customer lifetime value (CLV) Word2Vec, 34
