Why should we trust your algorithm?

There is no doubt about the current role of Machine Learning in the fascinating world of Business Intelligence. Predicting whether a customer will remain loyal to the company, understanding customers’ behavior or anticipating market fluctuations are typical examples in which Machine Learning can be pivotal. Unfortunately, most successful Machine Learning algorithms, such as Random Forests, Neural Networks or Support Vector Machines, do not provide any mechanism to explain how they arrived at a particular conclusion; they behave like a “black box”. This means that they are neither transparent nor interpretable. Transparency can be understood as the algorithm’s ability to explain its reasoning mechanism, while interpretability refers to its ability to explain the semantics behind the problem domain.

Business Intelligence researchers often struggle with this issue. After convincing the company’s Board of Directors that “we have the best prediction algorithm ever”, a question will probably arise: why should we trust your algorithm? From that point on, any progress hinges on the Board members being able to understand why the model performs as brilliantly as it does. After all, we cannot blame the Board members for their skepticism; they cannot gamble with the company’s prestige. To cope with this issue, researchers have developed post-hoc procedures to elucidate the reasoning mechanism behind black boxes, but whether the Board will be satisfied with explanations computed by yet another algorithm is questionable.
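
To make the idea of a post-hoc procedure concrete, here is a minimal sketch of one common strategy, a global surrogate, fitted on synthetic data; it is not tied to any specific method the post has in mind. An interpretable decision tree is trained to mimic the predictions of a black-box Random Forest, so the tree’s rules approximate what the black box does.

```python
# A minimal sketch of one post-hoc strategy (a global surrogate), on synthetic data:
# fit an interpretable tree to the black box's predictions and inspect the tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)      # opaque model
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)   # transparent model
surrogate.fit(X, black_box.predict(X))   # mimic the black box, not the true labels

print(export_text(surrogate))  # human-readable rules approximating the black box
```

Whether such second-hand explanations are faithful enough to convince a Board is, of course, exactly the question raised above.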

In recent years, we have been investigating Fuzzy Cognitive Maps (FCMs) as a vehicle to design accurate and interpretable Machine Learning algorithms. These knowledge-based models are capable of expressing the system semantics through concepts and causal relations. In essence, FCMs can be understood as recurrent neural networks with interpretability features: they comprise a collection of processing entities called concepts (or simply neurons) connected by signed, weighted arcs. Concepts represent variables, objects, entities or states describing the system under investigation, while the edges denote causal relations among these neurons. Hence, a domain expert can easily follow the reasoning process carried out by these neural structures by performing WHAT-IF simulations.
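
As a rough illustration of how such WHAT-IF simulations work, the sketch below implements a generic FCM update rule (a modified Kosko rule with a sigmoid transfer function); the three concepts and the weight matrix are purely hypothetical and only meant to show the mechanics, not any model from our research.

```python
# A rough sketch of FCM reasoning with hypothetical concepts and weights.
# Each step, every concept aggregates its own activation plus the signed,
# weighted influence of the other concepts, squashed by a sigmoid.
import numpy as np

def fcm_simulate(W, a0, steps=20, slope=5.0):
    """WHAT-IF inference: W[i, j] is the causal weight from concept i to concept j."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-slope * (a + a @ W)))  # modified Kosko update rule
    return a

# Hypothetical 3-concept map: marketing effort -> customer satisfaction -> loyalty.
W = np.array([[0.0, 0.7, 0.0],
              [0.0, 0.0, 0.8],
              [0.0, 0.0, 0.0]])

# WHAT-IF: fully activate "marketing effort" and read off the predicted loyalty level.
print(fcm_simulate(W, a0=[1.0, 0.0, 0.0]))
```

Because the weights are explicit and signed, an expert can trace exactly which causal paths drive the predicted change in loyalty.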

Designing an FCM-based system often requires human intervention, which may become a tedious task or result in rather subjective cognitive models. Existing algorithms that automatically infer the network structure from data fail to produce authentic causal relations. Therefore, we cannot confidently interpret the system behavior from such models, even though the FCM inference process remains transparent. Some authors attempt to overcome this drawback by using correlation measures, which are unable to capture the nature of causal relations. Causality typically implies correlation, but the converse does not necessarily hold. Some authors even argue that causality is a philosophical concept that cannot be measured numerically without performing controlled experiments.
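
A toy example (unrelated to any particular dataset) makes the point: two variables driven by a hidden common cause show a strong correlation even though neither causes the other, so a correlation-derived FCM weight between them would not represent a genuine causal relation.

```python
# Toy illustration: correlation without causation via a hidden common cause.
import numpy as np

rng = np.random.default_rng(0)
season = rng.normal(size=10_000)                          # hidden common cause
ice_cream_sales = 2.0 * season + rng.normal(size=10_000)  # driven by the season
sunburn_cases = 1.5 * season + rng.normal(size=10_000)    # also driven by the season

# The two observed variables correlate strongly, yet neither one causes the other.
print(np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1])
```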

On the other hand, the prediction rates of FCM-based classifiers are rather poor compared with black-box models, mainly due to the lack of mathematically sound learning algorithms with good generalization capability. Yet some FCM-based models perform very well in specific domains, even with very simple architectures, and surprisingly there is still no satisfactory explanation for this behavior!

Despite the above drawbacks, the transparency inherent to FCMs keeps motivating researchers to build FCM-based algorithms. Moreover, their neural underpinnings provide a promising research avenue towards improving their accuracy in prediction scenarios. This suggests that FCM-based models could become as accurate as black boxes while retaining the ability to elucidate the system behavior through causal relations. And perhaps, after further theoretical investigation, we will be able to stand in front of the Board members and explain why they should trust our algorithm.


About Gonzalo Nápoles

Dr. Gonzalo Nápoles received his PhD in Computer Science from Hasselt University, Belgium. He has published his research in several peer-reviewed journals, including Information Sciences, Neurocomputing, Neural Processing Letters, Applied Soft Computing and Knowledge-Based Systems, among others. He was awarded the Cuban Academy of Science Award twice (2013 and 2014). He is one of the creators and the senior developer of the FCM Expert software tool (www.fcmexpert.net) for Fuzzy Cognitive Maps. His research interests include neural networks, fuzzy cognitive maps, rough cognitive networks, learning systems and chaos theory.
