ISBN-13: 9781119821250 / English / Hardcover / 2022 / 480 pp.
Preface xix

1 Supervised Machine Learning: Algorithms and Applications 1
Shruthi H. Shetty, Sumiksha Shetty, Chandra Singh and Ashwath Rao
1.1 History 2
1.2 Introduction 2
1.3 Supervised Learning 4
1.4 Linear Regression (LR) 5
1.4.1 Learning Model 6
1.4.2 Predictions With Linear Regression 7
1.5 Logistic Regression 8
1.6 Support Vector Machine (SVM) 9
1.7 Decision Tree 11
1.8 Machine Learning Applications in Daily Life 12
1.8.1 Traffic Alerts (Maps) 12
1.8.2 Social Media (Facebook) 13
1.8.3 Transportation and Commuting (Uber) 13
1.8.4 Products Recommendations 13
1.8.5 Virtual Personal Assistants 13
1.8.6 Self-Driving Cars 14
1.8.7 Google Translate 14
1.8.8 Online Video Streaming (Netflix) 14
1.8.9 Fraud Detection 14
1.9 Conclusion 15
References 15

2 Zonotic Diseases Detection Using Ensemble Machine Learning Algorithms 17
Bhargavi K.
2.1 Introduction 18
2.2 Bayes Optimal Classifier 19
2.3 Bootstrap Aggregating (Bagging) 21
2.4 Bayesian Model Averaging (BMA) 22
2.5 Bayesian Classifier Combination (BCC) 24
2.6 Bucket of Models 26
2.7 Stacking 27
2.8 Efficiency Analysis 29
2.9 Conclusion 30
References 30

3 Model Evaluation 33
Ravi Shekhar Tiwari
3.1 Introduction 34
3.2 Model Evaluation 34
3.2.1 Assumptions 36
3.2.2 Residual 36
3.2.3 Error Sum of Squares (SSE) 37
3.2.4 Regression Sum of Squares (SSR) 37
3.2.5 Total Sum of Squares (SSTO) 37
3.3 Metrics Used in Regression Models 38
3.3.1 Mean Absolute Error (MAE) 38
3.3.2 Mean Square Error (MSE) 39
3.3.3 Root Mean Square Error (RMSE) 41
3.3.4 Root Mean Square Logarithm Error (RMSLE) 42
3.3.5 R-Square (R²) 45
3.3.5.1 Problem With R-Square (R²) 46
3.3.6 Adjusted R-Square (R²) 46
3.3.7 Variance 47
3.3.8 AIC 48
3.3.9 BIC 49
3.3.10 ACP, Press, and R²-Predicted 49
3.3.11 Solved Examples 51
3.4 Confusion Metrics 52
3.4.1 How to Interpret the Confusion Metric? 53
3.4.2 Accuracy 55
3.4.2.1 Why Do We Need the Other Metric Along With Accuracy? 56
3.4.3 True Positive Rate (TPR) 56
3.4.4 False Negative Rate (FNR) 57
3.4.5 True Negative Rate (TNR) 57
3.4.6 False Positive Rate (FPR) 58
3.4.7 Precision 58
3.4.8 Recall 59
3.4.9 Recall-Precision Trade-Off 60
3.4.10 F1-Score 61
3.4.11 F-Beta Score 61
3.4.12 Thresholding 63
3.4.13 AUC-ROC 64
3.4.14 AUC-PRC 65
3.4.15 Derived Metric From Recall, Precision, and F1-Score 67
3.4.16 Solved Examples 68
3.5 Correlation 70
3.5.1 Pearson Correlation 70
3.5.2 Spearman Correlation 71
3.5.3 Kendall's Rank Correlation 73
3.5.4 Distance Correlation 74
3.5.5 Biweight Mid-Correlation 75
3.5.6 Gamma Correlation 76
3.5.7 Point Biserial Correlation 77
3.5.8 Biserial Correlation 78
3.5.9 Partial Correlation 78
3.6 Natural Language Processing (NLP) 78
3.6.1 N-Gram 79
3.6.2 BLEU Score 79
3.6.2.1 BLEU Score With N-Gram 80
3.6.3 Cosine Similarity 81
3.6.4 Jaccard Index 83
3.6.5 ROUGE 84
3.6.6 NIST 85
3.6.7 SQUAD 85
3.6.8 MACRO 86
3.7 Additional Metrics 86
3.7.1 Mean Reciprocal Rank (MRR) 86
3.7.2 Cohen Kappa 87
3.7.3 Gini Coefficient 87
3.7.4 Scale-Dependent Errors 87
3.7.5 Percentage Errors 88
3.7.6 Scale-Free Errors 88
3.8 Summary of Metrics Derived From the Confusion Metric 89
3.9 Metric Usage 90
3.10 Pros and Cons of Metrics 94
3.11 Conclusion 95
References 96

4 Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE 101
Archith S., Yukta C., Archana H.R. and Surendra H.H.
4.1 Introduction 101
4.2 Survey of Models 103
4.2.1 SEIR Model 103
4.2.2 Modified SEIR Model 103
4.2.3 Long Short-Term Memory (LSTM) 104
4.3 Methodology 106
4.3.1 Modified SEIR 106
4.3.2 LSTM Model 108
4.3.2.1 Data Pre-Processing 108
4.3.2.2 Data Shaping 109
4.3.2.3 Model Design 109
4.4 Experimental Results 111
4.4.1 Modified SEIR Model 111
4.4.2 LSTM Model 113
4.5 Conclusion 116
4.6 Future Work 116
References 118

5 The Significance of Feature Selection Techniques in Machine Learning 121
N. Bharathi, B.S. Rishiikeshwer, T. Aswin Shriram, B. Santhi and G.R. Brindha
5.1 Introduction 122
5.2 Significance of Pre-Processing 122
5.3 Machine Learning System 123
5.3.1 Missing Values 123
5.3.2 Outliers 123
5.3.3 Model Selection 124
5.4 Feature Extraction Methods 124
5.4.1 Dimension Reduction 125
5.4.1.1 Attribute Subset Selection 126
5.4.2 Wavelet Transforms 127
5.4.3 Principal Components Analysis 127
5.4.4 Clustering 128
5.5 Feature Selection 128
5.5.1 Filter Methods 129
5.5.2 Wrapper Methods 129
5.5.3 Embedded Methods 130
5.6 Merits and Demerits of Feature Selection 131
5.7 Conclusion 131
References 132

6 Use of Machine Learning and Deep Learning in Healthcare--A Review on Disease Prediction System 135
Radha R. and Gopalakrishnan R.
6.1 Introduction to Healthcare System 136
6.2 Causes for the Failure of the Healthcare System 137
6.3 Artificial Intelligence and Healthcare System for Predicting Diseases 138
6.3.1 Monitoring and Collection of Data 140
6.3.2 Storing, Retrieval, and Processing of Data 141
6.4 Facts Responsible for Delay in Predicting the Defects 142
6.5 Pre-Treatment Analysis and Monitoring 143
6.6 Post-Treatment Analysis and Monitoring 145
6.7 Application of ML and DL 145
6.7.1 ML and DL for Active Aid 145
6.7.1.1 Bladder Volume Prediction 147
6.7.1.2 Epileptic Seizure Prediction 148
6.8 Challenges and Future of Healthcare Systems Based on ML and DL 148
6.9 Conclusion 149
References 150

7 Detection of Diabetic Retinopathy Using Ensemble Learning Techniques 153
Anirban Dutta, Parul Agarwal, Anushka Mittal, Shishir Khandelwal and Shikha Mehta
7.1 Introduction 153
7.2 Related Work 155
7.3 Methodology 155
7.3.1 Data Pre-Processing 155
7.3.2 Feature Extraction 161
7.3.2.1 Exudates 161
7.3.2.2 Blood Vessels 161
7.3.2.3 Microaneurysms 162
7.3.2.4 Hemorrhages 162
7.3.3 Learning 163
7.3.3.1 Support Vector Machines 163
7.3.3.2 K-Nearest Neighbors 163
7.3.3.3 Random Forest 164
7.3.3.4 AdaBoost 164
7.3.3.5 Voting Technique 164
7.4 Proposed Models 165
7.4.1 AdaNaive 165
7.4.2 AdaSVM 166
7.4.3 AdaForest 166
7.5 Experimental Results and Analysis 167
7.5.1 Dataset 167
7.5.2 Software and Hardware 167
7.5.3 Results 168
7.6 Conclusion 173
References 174

8 Machine Learning and Deep Learning for Medical Analysis--A Case Study on Heart Disease Data 177
Swetha A.M., Santhi B. and Brindha G.R.
8.1 Introduction 178
8.2 Related Works 179
8.3 Data Pre-Processing 181
8.3.1 Data Imbalance 181
8.4 Feature Selection 182
8.4.1 Extra Tree Classifier 182
8.4.2 Pearson Correlation 183
8.4.3 Forward Stepwise Selection 183
8.4.4 Chi-Square Test 184
8.5 ML Classifiers Techniques 184
8.5.1 Supervised Machine Learning Models 185
8.5.1.1 Logistic Regression 185
8.5.1.2 SVM 186
8.5.1.3 Naive Bayes 186
8.5.1.4 Decision Tree 186
8.5.1.5 K-Nearest Neighbors (KNN) 187
8.5.2 Ensemble Machine Learning Model 187
8.5.2.1 Random Forest 187
8.5.2.2 AdaBoost 188
8.5.2.3 Bagging 188
8.5.3 Neural Network Models 189
8.5.3.1 Artificial Neural Network (ANN) 189
8.5.3.2 Convolutional Neural Network (CNN) 189
8.6 Hyperparameter Tuning 190
8.6.1 Cross-Validation 190
8.7 Dataset Description 190
8.7.1 Data Pre-Processing 193
8.7.2 Feature Selection 195
8.7.3 Model Selection 196
8.7.4 Model Evaluation 197
8.8 Experiments and Results 197
8.8.1 Study 1: Survival Prediction Using All Clinical Features 198
8.8.2 Study 2: Survival Prediction Using Age, Ejection Fraction and Serum Creatinine 198
8.8.3 Study 3: Survival Prediction Using Time, Ejection Fraction, and Serum Creatinine 199
8.8.4 Comparison Between Study 1, Study 2, and Study 3 203
8.8.5 Comparative Study on Different Sizes of Data 204
8.9 Analysis 206
8.10 Conclusion 206
References 207

9 A Novel Convolutional Neural Network Model to Predict Software Defects 211
Kumar Rajnish, Vandana Bhattacharjee and Mansi Gupta
9.1 Introduction 212
9.2 Related Works 213
9.2.1 Software Defect Prediction Based on Deep Learning 213
9.2.2 Software Defect Prediction Based on Deep Features 214
9.2.3 Deep Learning in Software Engineering 214
9.3 Theoretical Background 215
9.3.1 Software Defect Prediction 215
9.3.2 Convolutional Neural Network 216
9.4 Experimental Setup 218
9.4.1 Data Set Description 218
9.4.2 Building Novel Convolutional Neural Network (NCNN) Model 219
9.4.3 Evaluation Parameters 222
9.4.4 Results and Analysis 224
9.5 Conclusion and Future Scope 230
References 233

10 Predictive Analysis on Online Television Videos Using Machine Learning Algorithms 237
Rebecca Jeyavadhanam B., Ramalingam V.V., Sugumaran V. and Rajkumar D.
10.1 Introduction 238
10.1.1 Overview of Video Analytics 241
10.1.2 Machine Learning Algorithms 242
10.1.2.1 Decision Tree C4.5 243
10.1.2.2 J48 Graft 243
10.1.2.3 Logistic Model Tree 244
10.1.2.4 Best First Tree 244
10.1.2.5 Reduced Error Pruning Tree 244
10.1.2.6 Random Forest 244
10.2 Proposed Framework 245
10.2.1 Data Collection 246
10.2.2 Feature Extraction 246
10.2.2.1 Block Intensity Comparison Code 247
10.2.2.2 Key Frame Rate 248
10.3 Feature Selection 249
10.4 Classification 250
10.5 Online Incremental Learning 251
10.6 Results and Discussion 253
10.7 Conclusion 255
References 256

11 A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification 259
Nandini Kumari, Shamama Anwar and Vandana Bhattacharjee
11.1 Introduction 260
11.2 Literature Review 262
11.3 Methodology 264
11.3.1 Dataset Acquisition 264
11.3.2 Pre-Processing and Spectrogram Generation 265
11.3.3 Classification of EEG Spectrogram Images With Proposed CNN Model 266
11.3.4 Classification of EEG Spectrogram Images With Proposed Combinational CNN+LSTM Model 268
11.4 Result and Discussion 270
11.5 Conclusion 272
References 273

12 Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis 277
Shiksha
12.1 Introduction 278
12.2 Methods and Techniques 280
12.2.1 Research Approach 280
12.2.2 Dataset Description 282
12.2.3 Data Preparation 283
12.2.4 Correlation Between Features 284
12.2.5 Splitting the Dataset 285
12.2.6 Balancing Data 285
12.2.6.1 Oversampling of Minority Class 286
12.2.6.2 Under-Sampling of Majority Class 286
12.2.6.3 Synthetic Minority Over Sampling Technique 286
12.2.6.4 Class Weight 287
12.2.7 Machine Learning Algorithms (Models) 288
12.2.7.1 Logistic Regression 288
12.2.7.2 Support Vector Machine 288
12.2.7.3 Decision Tree 290
12.2.7.4 Random Forest 292
12.2.8 Tuning of Hyperparameters 294
12.2.9 Performance Evaluation of the Models 294
12.3 Results and Discussion 298
12.3.1 Results Using Balancing Techniques 299
12.3.2 Result Summary 299
12.4 Conclusions 305
12.4.1 Future Recommendations 305
References 306

13 Crack Detection in Civil Structures Using Deep Learning 311
Bijimalla Shiva Vamshi Krishna, Rishiikeshwer B.S., J. Sanjay Raju, N. Bharathi, C. Venkatasubramanian and G.R. Brindha
13.1 Introduction 312
13.2 Related Work 312
13.3 Infrared Thermal Imaging Detection Method 314
13.4 Crack Detection Using CNN 314
13.4.1 Model Creation 316
13.4.2 Activation Functions (AF) 317
13.4.3 Optimizers 322
13.4.4 Transfer Learning 322
13.5 Results and Discussion 322
13.6 Conclusion 323
References 323

14 Measuring Urban Sprawl Using Machine Learning 327
Keerti Kulkarni and P. A. Vijaya
14.1 Introduction 327
14.2 Literature Survey 328
14.3 Remotely Sensed Images 329
14.4 Feature Selection 331
14.4.1 Distance-Based Metric 331
14.5 Classification Using Machine Learning Algorithms 332
14.5.1 Parametric vs. Non-Parametric Algorithms 332
14.5.2 Maximum Likelihood Classifier 332
14.5.3 k-Nearest Neighbor Classifiers 334
14.5.4 Evaluation of the Classifiers 334
14.5.4.1 Precision 334
14.5.4.2 Recall 335
14.5.4.3 Accuracy 335
14.5.4.4 F1-Score 335
14.6 Results 335
14.7 Discussion and Conclusion 338
Acknowledgements 338
References 338

15 Application of Deep Learning Algorithms in Medical Image Processing: A Survey 341
Santhi B., Swetha A.M. and Ashutosh A.M.
15.1 Introduction 342
15.2 Overview of Deep Learning Algorithms 343
15.2.1 Supervised Deep Neural Networks 343
15.2.1.1 Convolutional Neural Network 343
15.2.1.2 Transfer Learning 344
15.2.1.3 Recurrent Neural Network 344
15.2.2 Unsupervised Learning 345
15.2.2.1 Autoencoders 345
15.2.2.2 GANs 345
15.3 Overview of Medical Images 346
15.3.1 MRI Scans 346
15.3.2 CT Scans 347
15.3.3 X-Ray Scans 347
15.3.4 PET Scans 347
15.4 Scheme of Medical Image Processing 348
15.4.1 Formation of Image 348
15.4.2 Image Enhancement 349
15.4.3 Image Analysis 349
15.4.4 Image Visualization 349
15.5 Anatomy-Wise Medical Image Processing With Deep Learning 349
15.5.1 Brain Tumor 352
15.5.2 Lung Nodule Cancer Detection 357
15.5.3 Breast Cancer Segmentation and Detection 362
15.5.4 Heart Disease Prediction 364
15.5.5 COVID-19 Prediction 370
15.6 Conclusion 372
References 372

16 Simulation of Self-Driving Cars Using Deep Learning 379
Rahul M. K., Praveen L. Uppunda, Vinayaka Raju S., Sumukh B. and C. Gururaj
16.1 Introduction 380
16.2 Methodology 380
16.2.1 Behavioral Cloning 380
16.2.2 End-to-End Learning 380
16.3 Hardware Platform 381
16.4 Related Work 382
16.5 Pre-Processing 382
16.5.1 Lane Feature Extraction 382
16.5.1.1 Canny Edge Detector 383
16.5.1.2 Hough Transform 383
16.5.1.3 Raw Image Without Pre-Processing 384
16.6 Model 384
16.6.1 CNN Architecture 385
16.6.2 Multilayer Perceptron Model 385
16.6.3 Regression vs. Classification 385
16.6.3.1 Regression 386
16.6.3.2 Classification 386
16.7 Experiments 387
16.8 Results 387
16.9 Conclusion 394
References 394

17 Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions 397
Shahira K. C., Sruthi C. J. and Lijiya A.
17.1 Introduction 397
17.2 Visual Impairment 398
17.2.1 Conventional Assistive Technology for the VIP 399
17.2.1.1 Way Finding 399
17.2.1.2 Reading Assistance 402
17.2.2 The Significance of Computer Vision and Deep Learning in AT of VIP 403
17.2.2.1 Navigational Aids 403
17.2.2.2 Scene Understanding 405
17.2.2.3 Reading Assistance 406
17.2.2.4 Wearables 408
17.3 Verbal and Hearing Impairment 410
17.3.1 Assistive Listening Devices 410
17.3.2 Alerting Devices 411
17.3.3 Augmentative and Alternative Communication Devices 411
17.3.3.1 Sign Language Recognition 412
17.3.4 Significance of Machine Learning and Deep Learning in Assistive Communication Technology 417
17.4 Conclusion and Future Scope 418
References 418

18 Case Studies: Deep Learning in Remote Sensing 425
Emily Jenifer A. and Sudha N.
18.1 Introduction 426
18.2 Need for Deep Learning in Remote Sensing 427
18.3 Deep Neural Networks for Interpreting Earth Observation Data 427
18.3.1 Convolutional Neural Network 427
18.3.2 Autoencoder 428
18.3.3 Restricted Boltzmann Machine and Deep Belief Network 429
18.3.4 Generative Adversarial Network 430
18.3.5 Recurrent Neural Network 431
18.4 Hybrid Architectures for Multi-Sensor Data Processing 432
18.5 Conclusion 434
References 434

Index 439
Pradeep Singh, PhD, is an assistant professor in the Department of Computer Science and Engineering, National Institute of Technology, Raipur, India. His current research interests include machine learning, deep learning, evolutionary computing, empirical studies of software quality, and software fault prediction models. He has more than 15 years of teaching experience and numerous publications in reputed international journals, conferences, and book chapters.