About Gonzalo Nápoles

Dr. Gonzalo Nápoles received his PhD in Computer Science from Hasselt University, Belgium. He has published his research in several peer-reviewed journals, including Information Sciences, Neurocomputing, Neural Processing Letters, Applied Soft Computing, and Knowledge-Based Systems, among others. He was awarded the Cuban Academy of Sciences Award twice (2013 and 2014). He is one of the creators and the senior developer of the FCM Expert software tool (www.fcmexpert.net) for Fuzzy Cognitive Maps. His research interests include neural networks, fuzzy cognitive maps, rough cognitive networks, learning systems, and chaos theory.

Why should we trust your algorithm?

There is no doubt about the current role of Machine Learning in the fascinating world of Business Intelligence. Predicting whether a customer will remain loyal to the company, understanding customers’ behavior, or anticipating market fluctuations are typical examples in which Machine Learning may be pivotal. Unfortunately, most successful Machine Learning algorithms, such as Random Forests, Neural Networks, or Support Vector Machines, do not provide any mechanism to explain how they arrived at a particular conclusion; they behave like “black boxes”. This means that they are neither transparent nor interpretable. Transparency can be understood as the algorithm’s ability to explain its reasoning mechanism, while interpretability refers to the algorithm’s ability to explain the semantics behind the problem domain.
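The contrast can be illustrated with a minimal sketch. Everything below is hypothetical (the feature names, weights, and thresholds are invented for illustration, not taken from any real model): the first predictor only returns a label, hiding its internal score, while the second can report its decision rule in plain language.

```python
def black_box_predict(income, debt):
    """Opaque predictor: the weights were fitted elsewhere and the caller
    only ever sees the final label, never the reasoning behind it."""
    score = 0.8 * income - 1.3 * debt + 0.05 * income * debt
    return "loyal" if score > 0 else "churn"


class DecisionStump:
    """Interpretable predictor: a single threshold rule that can be
    read back as a human-readable explanation."""

    def __init__(self, feature, threshold, label_above, label_below):
        self.feature = feature
        self.threshold = threshold
        self.label_above = label_above
        self.label_below = label_below

    def predict(self, sample):
        # Apply the single rule to a dict of feature values.
        if sample[self.feature] > self.threshold:
            return self.label_above
        return self.label_below

    def explain(self):
        # The model's entire reasoning, stated in one sentence.
        return (f"if {self.feature} > {self.threshold} "
                f"then '{self.label_above}' else '{self.label_below}'")


# Both models classify the same customer, but only the stump can say why.
label = black_box_predict(income=50, debt=10)        # "loyal" -- but why?
stump = DecisionStump("debt", 20, "churn", "loyal")
print(stump.predict({"debt": 30}))                   # churn
print(stump.explain())                               # the rule, in words
```

The stump is of course far too simple for real Business Intelligence tasks; the point is only that its prediction and its explanation come from the same visible rule, whereas the black box offers the prediction alone.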
