Model of image fusion based on deep learning
Researchers at the College of Data Science and Software Engineering at China’s Qingdao University have developed a new “multi-modal” image fusion method based on supervised deep learning that enhances image clarity, reduces redundant image features and supports batch processing. Their findings have just been published in KeAi’s International Journal of Cognitive Computing in Engineering.
“Most medical images have unilateral or limited information content; for instance, focus positions vary, which can make some objects appear blurred. Having important information scattered across a number of images can hamper a doctor’s judgment. Image fusion is an effective solution—it automatically detects the information contained in those separate images and integrates them to produce one composite image.”
Researchers are increasingly turning to deep learning to improve image fusion. Deep learning, a subset of machine learning, draws on artificial neural networks that are designed to imitate how humans think and learn. That means it is capable of learning from data that is unstructured or unlabelled.
However, much of the current research focuses on applying deep learning to single-image fusion; studies that use it for multi-image batch processing are much rarer. “During our study, we used successful image fusion results to build an image-training database. We were then able to use that database to fuse medical images in batches.”
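The paper itself describes a supervised deep-learning pipeline, but the batch-fusion idea can be illustrated without a trained network. The sketch below is a minimal stand-in: it fuses pairs of aligned grayscale images across a whole batch at once, using a crude per-pixel contrast measure in place of the learned weighting (the function name, the weighting rule, and the NumPy-only setup are all assumptions for illustration, not the authors’ method).

```python
import numpy as np

def fuse_batch(batch_a, batch_b):
    """Fuse two batches of aligned grayscale images pixel-wise.

    A hypothetical stand-in for a learned fusion model: each pixel is a
    weighted average of the two sources, where the weight favours the
    source with higher local contrast (deviation from its image mean).
    Inputs have shape (N, H, W); the result has the same shape.
    """
    a = np.asarray(batch_a, dtype=np.float64)
    b = np.asarray(batch_b, dtype=np.float64)

    # Crude focus measure: absolute deviation from each image's mean.
    # A real system would learn this weighting from training data.
    wa = np.abs(a - a.mean(axis=(1, 2), keepdims=True))
    wb = np.abs(b - b.mean(axis=(1, 2), keepdims=True))
    total = wa + wb
    total[total == 0] = 1.0  # avoid division by zero on flat regions
    w = wa / total

    # Convex combination keeps every fused pixel within the source range.
    return w * a + (1.0 - w) * b

# Batch-process two 4x4 image pairs in one call.
rng = np.random.default_rng(0)
a = rng.random((2, 4, 4))
b = rng.random((2, 4, 4))
fused = fuse_batch(a, b)
print(fused.shape)
```

Because the weights are convex, each fused pixel stays between the corresponding source pixels; the learned version in the paper replaces the hand-crafted weight with one derived from its training database.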
Yi Li et al, Medical image fusion method by deep learning, International Journal of Cognitive Computing in Engineering (2021). DOI: 10.1016/j.ijcce.2020.12.004