On Monday, 4 December 2024, CEMAPRE (Centre for Applied Mathematics for Economic Forecasting and Decision Making) will hold a seminar on 'Mathematical Optimisation in Explainable Machine Learning', presented by Emilio Carrizosa, professor at the University of Seville. The event will take place in Lecture Theatre 3 (ISEG, Quelhas Building), from 14:30 to 15:30.
The seminar will address the key role of mathematical optimisation in the development of explainable machine learning models. Although traditional machine learning models are widely used in technology, science, and decision-making, they are often seen as 'black boxes'. This opacity can limit their acceptance and social support, motivating the emergence of new models under the umbrella of Explainable Machine Learning.
According to Emilio Carrizosa, mathematical modelling and optimisation tools are not only essential for traditional machine learning methods, but are even more relevant in the explainable context. This is because they make it possible to integrate desirable properties, such as sparsity and fairness, through constraints or penalty terms in optimisation problems.
The presentation will highlight two specific challenges that the speaker's research team has been working on:
- Counterfactual problems: a special case of projection problems, whose objective is to explore alternative scenarios and their implications.
- Collective LIME: a sparse linearisation method that helps interpret complex models through local simplifications.
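To make the first challenge concrete: a counterfactual explanation asks for the closest point to a given instance that receives a different prediction, which for a linear classifier reduces to an orthogonal projection onto the decision boundary. The sketch below is a minimal illustration of that projection view; the classifier, weights, and function names are hypothetical, not taken from the talk.

```python
import numpy as np

# Toy linear classifier f(x) = 1 if w @ x + b >= 0 else 0.
# The weights are illustrative only.
w = np.array([2.0, -1.0])
b = -1.0

def predict(x):
    """Binary prediction of the toy linear model."""
    return 1 if w @ x + b >= 0 else 0

def counterfactual(x, eps=1e-3):
    """Closest point (in Euclidean distance) with the opposite prediction:
    the orthogonal projection of x onto the hyperplane w @ x + b = 0,
    pushed slightly (by factor eps) across the boundary."""
    return x - (1 + eps) * ((w @ x + b) / (w @ w)) * w

# Example: an instance classified 0 and its nearest counterfactual.
x = np.array([0.0, 0.0])
cf = counterfactual(x)
```

For nonlinear models the same idea becomes a genuine projection problem, solved numerically rather than in closed form.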
The event will be a unique opportunity for academics, professionals, and students interested in explainable artificial intelligence, and will discuss how applied mathematics can make machine learning models more transparent and reliable.
Free admission.
Abstract
Machine Learning (ML) is becoming a common tool in Technology, Science and Decision Making. Since ML methods are usually seen as a black box, limiting their use and social support, new models and methods that are assumed to be trustworthy have been developed under the umbrella of Explainable Machine Learning.
While Mathematical Modelling and Mathematical Optimisation play a crucial role in traditional ML, they are even more useful in Explainable Machine Learning, as they enable practitioners to include desirable properties in the model, such as sparsity or fairness, via constraints or penalty terms in an optimisation problem. In this talk, we will illustrate two challenges on which the research team has been working: Counterfactual problems, which are a very special case of projection problem, and Collective LIME, which is a sparse linearisation method for a prediction functional.
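The sparse-linearisation idea behind LIME-style explanations can be sketched as follows: sample perturbations around an instance, weight them by proximity, fit a local linear surrogate to the black-box predictions, and keep only the largest coefficients. This is a minimal stand-in for LIME's lasso-style fit, not the Collective LIME method of the talk; the black-box function and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Hypothetical trained model we wish to explain locally."""
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_linear_explanation(x0, n=500, sigma=0.5, keep=1):
    """Weighted linear surrogate around x0, sparsified by keeping
    only the `keep` largest-magnitude slopes (a crude proxy for a
    lasso penalty)."""
    X = x0 + sigma * rng.standard_normal((n, len(x0)))
    y = black_box(X)
    # Proximity weights: samples closer to x0 count more.
    wts = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    A = np.hstack([X - x0, np.ones((n, 1))])  # centred features + intercept
    W = np.sqrt(wts)[:, None]
    coef, *_ = np.linalg.lstsq(W * A, W[:, 0] * y, rcond=None)
    slopes = coef[:-1]
    # Sparsify: zero all but the largest slopes.
    mask = np.zeros_like(slopes, dtype=bool)
    mask[np.argsort(np.abs(slopes))[-keep:]] = True
    return np.where(mask, slopes, 0.0)

# Example: around x0 = (0, 1) the dominant local effect is the
# quadratic term in the second feature.
slopes = local_linear_explanation(np.array([0.0, 1.0]))
```

Casting the surrogate fit as an optimisation problem with a sparsity penalty is exactly where the mathematical-optimisation viewpoint of the talk enters.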