There is no doubt about the current role of Machine Learning in the fascinating world of Business Intelligence. Predicting whether a customer will remain loyal to the company, understanding customer behavior, or anticipating market fluctuations are typical problems in which Machine Learning can be pivotal. Unfortunately, the most successful Machine Learning algorithms, such as Random Forests, Neural Networks, or Support Vector Machines, provide no mechanism to explain how they arrived at a particular conclusion and thus behave like a “black box”. This means that they are neither transparent nor interpretable. Transparency can be understood as the algorithm’s ability to explain its reasoning mechanism, while interpretability refers to the algorithm’s ability to explain the semantics behind the problem domain.
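To make the “black box” behavior concrete, the sketch below trains a Random Forest on a synthetic churn problem; the feature names, data, and labeling rule are all hypothetical, invented purely for illustration. The model readily answers *what* it predicts, but exposes no human-readable account of *why*.

```python
# A minimal sketch of the "black box" problem, assuming a synthetic
# customer-churn dataset (all feature names and values are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical customer features: [monthly_spend, tenure_months, support_calls]
X = rng.normal(loc=[50.0, 24.0, 2.0], scale=[20.0, 12.0, 2.0], size=(500, 3))
# Hypothetical churn labels, loosely tied to support calls for illustration.
y = (X[:, 2] + rng.normal(scale=1.0, size=500) > 3.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

new_customer = [[60.0, 6.0, 5.0]]
print(model.predict(new_customer))        # predicted class, e.g. [1] -> churn
print(model.predict_proba(new_customer))  # class probabilities

# The prediction is an aggregate vote over 100 decision trees: the model
# itself offers no explanation of its reasoning, nor of what the decision
# means in terms of the problem domain.
```

In the terms introduced above, nothing in this output addresses transparency (how the ensemble reached its vote) or interpretability (what the decision means for the customer being scored).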