SPECIAL SESSION #33
Perspectives of Explainable Artificial Intelligence and Data Mining in Medical Applications
ORGANIZED BY
Lerina Aversano
Engineering Department, University of Sannio, Italy
Martina Iammarino
Department of Computer Science, University of Bari Aldo Moro, Italy
Debora Montano
Centro Regionale Information Communication Technology (CeRICT), Italy
Chiara Verdone
Engineering Department, University of Sannio, Italy
ABSTRACT
Artificial intelligence is revolutionizing many industries, and healthcare is no exception. Thanks to its ability to analyze large amounts of data and learn from them through machine learning, artificial intelligence is becoming an increasingly important tool for improving the healthcare system.
The use of artificial intelligence in the medical field can bring numerous benefits, including the optimization of healthcare processes, the reduction of medical errors and costs, and the improvement of patient management, with faster and more precise diagnoses and personalized care.
Although data-driven models can produce very accurate systems, their adoption by hospital staff is hampered by practical difficulties and low confidence in their outputs.
The interaction between medicine and artificial intelligence therefore plays a fundamental role in supporting clinical decisions, from the diagnosis of disease to therapeutic choices and the application of therapies. These are delicate decisions with repercussions on the life and well-being of people, as well as on the processes and costs of healthcare facilities, so it is clear that every choice must be appropriately justified. Yet, despite the proven effectiveness of deep learning (DL) approaches, the models used to learn and solve complex problems are often incomprehensible to humans. They are considered black boxes because the mechanisms that determine their predictions remain hidden.
Therefore, this session mainly focuses on exploring explainable artificial intelligence (XAI) methods in the context of healthcare. The ability to interpret and explain models is critical to increasing confidence in the predictions and recommendations they provide, and XAI techniques were created with this specific goal: to produce more understandable and analyzable results.
The aim is for artificial intelligence models to become useful and reliable tools that support clinical decisions and the treatment of pathologies.
Finally, the purpose of this special session is to bring together global professionals, industry leaders, and specialists involved in evaluating, creating, and using explainability methodologies in healthcare.
TOPICS
Topics of interest include, but are not limited to, the following:
- Explainable AI for healthcare applied to IoT data;
- Explainable AI for healthcare applied to clinical assessment;
- Explainable AI for healthcare applied to imaging and clinical exams;
- Applications of Explainable Deep Learning in Diagnostics;
- Transparency and Accountability in Deep Learning models;
- Trustworthy AI;
- Explainable machine learning methods applied to healthcare and biomedical datasets;
- Machine learning software and tools in the healthcare sector;
- ML/DL-based Natural Language Processing (NLP) with interpretable and explainable features for healthcare applications;
- Fuzzy logic-based approaches to explainability;
- Data Mining and Knowledge Discovery in Healthcare.
ABOUT THE ORGANIZERS
Lerina Aversano is an associate professor at the Department of Engineering of the University of Sannio, Italy. She received her PhD in Computer Engineering in July 2003 from the same university and has served as an assistant professor since 2005. She has also been a research leader at RCOST (Research Centre On Software Technology) of the University of Sannio, Italy, since 2005. Her research interests include software maintenance, program comprehension, reverse engineering, reengineering, migration, business process modeling, business process evolution, software system evolution, and software quality.
Martina Iammarino is a researcher at the Department of Computer Science at the University of Bari Aldo Moro, Italy. She obtained her PhD in Information Technology for Engineering in February 2023 and her master's degree in 2019 at the University of Sannio. Her current research activities focus on software engineering, software and data quality, and process and data engineering. More recently, her research on artificial intelligence techniques has been validated in the medical domain, and she has published several articles based on machine learning and deep learning techniques applied to different domains. She has also been a reviewer for numerous international conferences and journals and a member of the organizing and program committees of international conferences.
Debora Montano is a researcher at CeRICT scrl. She holds a master's degree in Statistics and Actuarial Sciences from the University of Sannio, Benevento, Italy. She spent three years as a researcher at the Computer Science Department of the University of Sannio, where she worked as a data analyst on machine learning and big data processing activities. Her current research fields are medical IoT security, medical software quality, and empirical studies.
Chiara Verdone is a PhD student at the University of Sannio, Italy. She received her master's degree in Computer Engineering in 2021. Her current research interests mainly concern data analysis and artificial intelligence in the field of health, specifically the diagnosis, progression, and treatment of different types of diseases. She has published several articles based on machine learning and deep learning techniques applied to this domain in international conferences and journals. She has also been a reviewer for numerous conferences.