
[Logos of the XPM project and its funders: Agence Nationale de la Recherche (France), Fundação para a Ciência e a Tecnologia (Portugal), Narodowe Centrum Nauki (Poland), and Vetenskapsrådet (Sweden).]


About the project

The sections below describe the scope and intended results of the project.

The XPM project aims to integrate explanations into Artificial Intelligence (AI) solutions within the area of Predictive Maintenance (PM). Real-world applications of PM are increasingly complex, with intricate interactions of many components. AI techniques are very popular in this domain, and black-box models based on deep learning in particular show very promising results in terms of predictive accuracy and the ability to model complex systems.

However, the decisions made by these black-box models are often difficult for human experts to understand – and therefore to act upon. The complete repair plan and maintenance actions that must be performed based on the detected symptoms of damage and wear often require complex reasoning and planning processes, involving many actors and balancing different priorities. It is not realistic to expect this complete solution to be created automatically – there is too much context that needs to be taken into account. Therefore, operators, technicians and managers require insights to understand what is happening, why it is happening, and how to react. Today’s mostly black-box AI does not provide these insights, nor does it support experts in making maintenance decisions based on the deviations it detects. The effectiveness of the PM system depends much less on the accuracy of the alarms the AI raises than on the relevancy of the actions operators perform based on these alarms.

In the XPM project, we will develop several different types of explanations (ranging from visual analytics through prototypical examples to deductive argumentative systems) and demonstrate their usefulness in four selected case studies: electric vehicles, metro trains, a steel plant and wind farms. In each of them, we will demonstrate how the right explanations of decisions made by AI systems lead to better results across several dimensions, including identifying the component or part of the process where the problem has occurred; understanding the severity and future consequences of detected deviations; choosing the optimal repair and maintenance plan from several alternatives created based on different priorities; and understanding the reasons why the problem occurred in the first place as a way to improve system design for the future.
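
To give one concrete flavour of the explanation types mentioned above, a prototype-based explanation can be as simple as retrieving the most similar historical case for a reading the model has flagged, together with the maintenance action taken back then. The sketch below is a minimal, hypothetical illustration; the sensor features, data values and the use of scikit-learn's nearest-neighbour search are our own assumptions and not part of the XPM case studies.

    # Minimal sketch of a prototype-based explanation for a flagged PM reading.
    # All data, feature names and model choices are illustrative assumptions.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical historical readings: bearing temperature, vibration RMS, RPM.
    history = np.array([
        [62.0, 0.31, 1450],
        [85.5, 0.92, 1380],
        [64.1, 0.28, 1460],
    ])
    outcomes = ["healthy", "bearing wear -> bearing replaced", "healthy"]

    # Index the historical cases for similarity search.
    index = NearestNeighbors(n_neighbors=1).fit(history)

    # A new reading that a (black-box) PM model has flagged as anomalous.
    flagged_reading = np.array([[83.9, 0.88, 1375]])

    # The explanation is the most similar past case and what was done about it.
    _, neighbour_idx = index.kneighbors(flagged_reading)
    prototype = neighbour_idx[0][0]
    print(f"Most similar past case: {history[prototype]} -> {outcomes[prototype]}")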

The project has two primary objectives. The first is to develop novel methods for creating explanations for AI decisions within the PM domain (O1). There is a heated discussion in the literature today about the right approach to explainable AI (XAI): should explanations be provided by an extra layer added on top of existing black-box models, or achieved through the development of inherently interpretable (glass) models? A lot of work is being done in both areas. We believe that both approaches have merit, but there are too few honest comparisons between them. Therefore, within this first general objective, the XPM project will pursue the following two specific sub-objectives (SO); a minimal sketch contrasting the two follows the list:

  • SO1a: Develop a novel post hoc explainability layer for black-box PM models
  • SO1b: Develop novel algorithms for creating inherently explainable PM models
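
The sketch below illustrates the difference between the two sub-objectives on a toy task: for SO1a, a black-box classifier paired with a post hoc attribution layer (permutation importance is used here merely as a stand-in for the novel methods to be developed), and for SO1b, an inherently interpretable (glass) model, a shallow decision tree, whose decision rules can be read directly. The data, features and model choices are illustrative assumptions only.

    # Sketch contrasting a post hoc explanation layer (SO1a flavour)
    # with an inherently interpretable glass model (SO1b flavour).
    # Data, features and model choices are illustrative assumptions, not XPM results.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                    # e.g. temperature, vibration, load
    y = (X[:, 1] + 0.3 * X[:, 0] > 0.5).astype(int)  # synthetic "failure" label
    features = ["temperature", "vibration", "load"]

    # SO1a flavour: black-box model plus a post hoc attribution layer.
    black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    attrib = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
    for name, score in zip(features, attrib.importances_mean):
        print(f"post hoc importance of {name}: {score:.3f}")

    # SO1b flavour: a glass model whose rules are directly readable.
    glass = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(glass, feature_names=features))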

The second project objective is to develop a framework for evaluation of explanations within the XPM setting (O2). We believe that such a framework can subsequently be generalised to other domains, beyond the scope of XPM. Within this context, we have identified two specific sub-objectives:

  • SO2a: Propose multi-faceted evaluation metrics for PM explanations
  • SO2b: Design an interactive decision support system based on explainable PM

There is a need for more specific evaluation metrics, especially Functionality- and Human-Grounded ones, within the domain of PM. In particular, there are currently no solutions capable of capturing the needs of different actors, based on their individual competence levels and specific goals. Still, within any given industry, multiple stakeholders need to interact with a PM system, often for different reasons. Understanding their needs and making sure each receives the right support is critical. Moreover, for Application-Grounded evaluation, it is crucial to understand how AI decisions and their explanations affect the planning and optimisation tasks that human experts use to create repair plans. To this end, we will build a decision support system that, based on the patterns detected by AI and augmented with explanations, provides different stakeholders with tools to create and update maintenance and repair plans. This will allow us to accurately measure the effect these explanations have on the final result.
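
As a simple, functionality-grounded illustration of what a multi-faceted metric could look like, the sketch below scores a feature-attribution explanation on two facets: fidelity (how closely a surrogate built from the explanation reproduces the black-box predictions) and sparsity (how many features the explanation effectively ignores). The facets, weights and function names are our own illustrative assumptions; the project's actual metrics will be developed within SO2a.

    # Toy multi-faceted score for a feature-attribution explanation.
    # Facets, thresholds and weights are illustrative assumptions only.
    import numpy as np

    def fidelity(black_box_pred, surrogate_pred):
        # Agreement between black-box predictions and those of a surrogate
        # model built from the explanation.
        return float(np.mean(np.asarray(black_box_pred) == np.asarray(surrogate_pred)))

    def sparsity(attributions, eps=1e-3):
        # Fraction of features the explanation effectively ignores (higher = simpler).
        return float(np.mean(np.abs(np.asarray(attributions)) < eps))

    def explanation_score(black_box_pred, surrogate_pred, attributions,
                          w_fid=0.7, w_spa=0.3):
        # Weighted combination of the facets; in practice the weights would
        # depend on the stakeholder and the evaluation goal.
        return (w_fid * fidelity(black_box_pred, surrogate_pred)
                + w_spa * sparsity(attributions))

    # Hypothetical usage on a small evaluation batch.
    bb = np.array([1, 0, 1, 1, 0])
    sg = np.array([1, 0, 1, 0, 0])
    attr = [0.42, 0.0, 0.0005, 0.31, 0.0, 0.0]
    print(f"explanation score: {explanation_score(bb, sg, attr):.2f}")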

In order to achieve these objectives and evaluate their impact within the duration of the project, the methods we develop will be tested and demonstrated in four different PM use cases across disparate industries. Each member of the XPM consortium has prior results in the analysis of industrial data in the PM context, including [sm,sa,fp]. Moreover, all have well-established collaborations with industrial partners interested in the PM area and in the XPM project itself. Letters of interest from them are attached and available online.

The most important scientific impacts we foresee are as follows:

  1. development of novel post hoc explainability layer for black-box PM models, supporting their proper selection and parameterisation (related to SO1a),
  2. development of new glass models in PM, i.e., inherently explainable models, incorporating domain expert knowledge (related to SO1b),
  3. formulation of new metrics for explainable PM models, combining quantitative performance measures with qualitative human expert evaluation (related to SO2a),
  4. linking the explanations to the decision support system through the design of an interactive human-machine tool (related to SO2b).

At the industrial level, our methods, suitable for AI systems currently deployed, will lead to:

  1. an improved decision-making process in the industries currently using black-box PM, through explanations supporting the creation of maintenance plans,
  2. increased awareness of the pros and cons of glass models as an alternative to black-box ones,
  3. more efficient maintenance methods through a better understanding of the product life cycle, as well as of how different factors affect its lifetime.

At the societal level, we expect impact related to improved trustworthiness of AI systems in the industrial context resulting from:

  1. increasing their understandability on the technical level,
  2. legal analysis of liability norms related to the development and operation of the AI systems,
  3. elaboration of guidelines, standards and criteria for the evaluation and certification of AI systems,
  4. the integration of human expert-based decision making with the AI operation.