On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)

Authors: Charles-Maxime Gauriat, Yannick Pencolé, Pauline Ribot, Gregory Brouillet




File

OASIcs.DX.2024.27.pdf
  • Filesize: 1.63 MB
  • 14 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.DX.2024.27

Author Details

Charles-Maxime Gauriat
  • LAAS-CNRS, Université de Toulouse, INSA, Toulouse, France
  • Robert BOSCH (SAS), Paris, France
Yannick Pencolé
  • LAAS-CNRS, Université de Toulouse, CNRS, Toulouse, France
Pauline Ribot
  • LAAS-CNRS, Université de Toulouse, UPS, Toulouse, France
Gregory Brouillet
  • Robert BOSCH (SAS), Paris, France

Cite As

Charles-Maxime Gauriat, Yannick Pencolé, Pauline Ribot, and Gregory Brouillet. On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper). In 35th International Conference on Principles of Diagnosis and Resilient Systems (DX 2024). Open Access Series in Informatics (OASIcs), Volume 125, pp. 27:1-27:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024) https://doi.org/10.4230/OASIcs.DX.2024.27

Abstract

In an industrial maintenance context, degradation diagnosis is the problem of determining the current level of degradation of operating machines from measurements. With the emergence of Machine Learning techniques, such a problem can now be solved by training a degradation model offline and using it online. While such models are increasingly accurate and performant, they are often black boxes, and their decisions are therefore not interpretable for human maintenance operators. In contrast, interpretable ML models can provide explanations for their decisions and consequently improve the operator’s confidence in the maintenance decisions based on them. This paper proposes a new method to quantitatively measure the interpretability of such models; the method is agnostic (it makes no assumption about the class of models) and is applied here to degradation models. It requires the decision maker to set a few high-level parameters in order to measure the interpretability of the models and then decide whether the obtained models are satisfactory. The method is formally defined and fully illustrated on a decision tree degradation model and on a model trained with a recent neural network architecture called Multiclass Neural Additive Model.
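To make the setting concrete, the sketch below casts degradation diagnosis as multiclass supervised learning: a small decision tree is trained offline on synthetic sensor features to predict a discrete degradation level, and a naive rule-length statistic is computed as one possible complexity proxy that a decision maker might bound. The data, the feature semantics, and the proxy are hypothetical illustrations under assumed names; they are not the paper's experimental setup nor its interpretability measure.

```python
# Illustrative sketch only: degradation diagnosis cast as multiclass
# supervised learning, with a naive rule-length proxy for interpretability.
# The synthetic data, feature meanings, and the proxy are hypothetical;
# they are not the paper's benchmark or its interpretability measure.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=0)

# Synthetic "sensor" features (e.g. vibration, temperature, ...) and
# four discrete degradation levels 0..3 for 2000 machine snapshots.
n = 2000
X = rng.normal(size=(n, 4))
raw = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)
level = np.clip(np.round(raw).astype(int) + 1, 0, 3)

# Offline training of an interpretable multiclass degradation model.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, level)

# Naive complexity proxy: length of the decision rule (number of feature
# tests) behind each prediction. A decision maker could, for instance,
# require rules to stay below some bound before trusting the model online.
path = tree.decision_path(X)                      # sparse node-indicator matrix
rule_len = np.asarray(path.sum(axis=1)).ravel() - 1
print(f"train accuracy: {tree.score(X, level):.3f}")
print(f"rule length   : mean {rule_len.mean():.2f}, max {rule_len.max()}")
```

By comparison, a Multiclass Neural Additive Model keeps an additive per-feature structure, with each class score obtained as a sum of per-feature subnetwork outputs, which is what keeps individual feature contributions readable; the interpretability measure itself is defined in the paper and is not reproduced here.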

Subject Classification

ACM Subject Classification
  • Computing methodologies → Machine learning
Keywords
  • XAI
  • Interpretability
  • multiclass supervised learning
  • degradation diagnosis

