Modern AI techniques, especially deep learning, provide in many cases very good recommendations: where a self-driving car should go, whether to give a company a loan, etc. The problem is that not all these recommendations are good, and since deep learning provides no explanations, we cannot tell which recommendations are good. It is therefore desirable to provide natural-language explanations of the numerical AI recommendations. The need to connect natural-language rules and numerical decisions has been known since the 1960s, when the need emerged to incorporate expert knowledge, described by imprecise words like "small", into control and decision making. For this incorporation, a special "fuzzy" technique was invented, which has led to many successful applications. This book describes how this technique can help make AI more explainable. The book can be recommended for students, researchers, and practitioners interested in explainable AI.
Why Explainable AI? Why Fuzzy Explainable AI? What Is Fuzzy?.- Defuzzification.- Which Fuzzy Techniques?.- So How Can We Design Explainable Fuzzy AI: Ideas.- How to Make Machine Learning Itself More Explainable.- Final Self-Test.
Vladik Kreinovich is Professor of Computer Science at the University of Texas at El Paso. His main interests are interval computations and intelligent control. He has published 13 books, 39 edited books, and more than 1,800 papers.
He is Vice President of the International Fuzzy Systems Association (IFSA) and of the European Society for Fuzzy Logic and Technology (EUSFLAT). He is a Fellow of IFSA, of the Mexican Society for Artificial Intelligence (SMIA), and of the Russian Association for Fuzzy Systems and Soft Computing, and Treasurer of the IEEE Systems, Man, and Cybernetics Society.