Automated Interpretable and Lightweight Deep Learning Models for Molecular Images
1Thapar Institute of Engineering and Technology, Patiala, India
2Thapar Institute of Engineering and Technology, Patiala, India
3Bournemouth University, Bournemouth, UK
Description
Recently, deep learning models have been extensively applied to the automated analysis of medical data. These models can perform specific tasks, such as automated disease diagnosis, accurately and in some cases more effectively than medical experts. In molecular imaging, deep learning models serve various objectives, including image-based quantification, improved image acquisition, and differential diagnosis. Molecular imaging refers to imaging methods that use remote detectors to characterize and measure biological processes at the molecular and cellular level. It allows diseases to be detected without invasive procedures: detection can instead rely on disease-associated molecular signatures, on the interactions of molecular mechanisms in vivo, and on the monitoring of gene expression. Nuclear medicine, magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound (US) are all used as molecular imaging tools in clinical practice.
Deep learning models tend to suffer from overfitting, vanishing gradients, hyper-parameter tuning difficulties, and limited interpretability. Furthermore, trained models are generally very large and thus difficult to deploy on lightweight devices such as medical internet of things (MIoT) and wearable devices. Transfer learning has been developed to mitigate overfitting and vanishing gradients, and hyper-parameter tuning can be addressed with automated learning models and metaheuristic techniques. However, the design and implementation of lightweight, interpretable models for the automated analysis of molecular imaging remains an important open area of research. By optimizing network size, deep learning models can be compressed and deployed on lightweight devices. Nonetheless, trained models remain difficult to interpret, and this "black box" problem makes deep learning models opaque. In the molecular imaging domain, from legislation and regulation to healthcare delivery, it is necessary to guarantee that the decisions of deep learning models are appropriate to the context in which they are used. A deep learning model that medical experts can interpret allows them to decide whether to accept and follow its recommendations and predictions.
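As an illustration of the compression idea mentioned above, the sketch below applies magnitude-based weight pruning, one common route from a trained network to a lightweight model. This is a minimal, generic example: the function name, the toy weight matrix, and the sparsity level are illustrative assumptions, not taken from any specific model or paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the given fraction of smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))          # a toy weight matrix
pruned = magnitude_prune(w, 0.75)    # keep only the largest 25% of weights
print(np.count_nonzero(pruned))     # prints 16: 16 of 64 weights survive
```

In practice, pruning like this is usually followed by fine-tuning to recover accuracy, and the surviving sparse weights can then be stored and executed far more cheaply on MIoT-class hardware.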
The main objective of this Special Issue is to publish original research and review papers related to lightweight and interpretable deep learning models for molecular images.
Potential topics include but are not limited to the following:
- Lightweight deep learning models for molecular images
- Explainable deep learning models for molecular images
- Interpretable and lightweight deep learning models for molecular images
- Automated disease diagnosis using interpretable deep learning models
- Metaheuristics-based interpretable and lightweight deep learning models
- Interpretable and lightweight deep transfer learning models for molecular images
- Explainable deep federated learning models for molecular images
- Interpretable and lightweight deep reinforcement learning for molecular images
- Interpretable and lightweight deep adversarial networks for molecular images
- Interpretable and lightweight deep generative models for molecular images
- Hardware for interpretable and lightweight deep learning models for molecular images
- Information and communication technology (ICT) research for e-health applications