dc.description.abstract | Purpose — Multi-task learning (MTL) is a deep learning approach that jointly learns two or more tasks in order to leverage knowledge shared among them. This study reviewed existing MTL models in medical image processing to assess the current state of research, evaluate major breakthroughs, and analyze open gaps and future research directions.
Methodology — The study conducted a systematic literature review of peer-reviewed journal articles and conference proceedings sourced from the IEEE, ScienceDirect, PubMed, and Google Scholar databases. A total of 52 primary papers published between 2016 and 2024 were considered.
Results — The findings reveal that breakthroughs have been made in broadening the scope of task combinations, for both homogeneous and heterogeneous tasks. Innovative architectural designs and learning methods have also emerged. Although MTL has shown considerable promise in medical image processing, several open research areas remain, including task relatedness, the scope of task combinations, generative MTL, and longitudinal MTL.
Conclusion — The study provides a comprehensive analysis of multi-task models in medical image processing. The findings reveal breakthroughs in architectures, task combinations, and learning methods, and identify open gaps in the field. Variability in evaluation metrics and the use of proprietary datasets were the major limitations of this study.
Recommendations — Future researchers should focus on addressing the gaps identified in this study, particularly by expanding the scope of MTL and designing more robust, highly generalizable neural networks for longitudinal MTL. Research Implications — The review evaluates the current state of medical image processing using MTL, offering insights into both theoretical and practical aspects. These insights provide direction for future researchers to advance the field and for policymakers to support ethical data collection and sharing. | en_US |