
Academic Achievements

Paper | Qi Gao, Zilong Li, Junping Zhang, Yi Zhang, and Hongming Shan*: CoreDiff: contextual error-modulated generalized diffusion model for low-dose CT denoising and generalization

Date: 2024-09-13

This paper (CoreDiff: contextual error-modulated generalized diffusion model for low-dose CT denoising and generalization), originally published in IEEE Transactions on Medical Imaging, was authored by Professor Yi Zhang of Sichuan University and collaborators, and is part of the series of academic achievements of Sichuan University's Smart Rule of Law advance-deployment discipline. Further achievements in this series will be shared on an ongoing basis; readers are welcome to follow them.



Low-dose computed tomography (CT) images suffer from noise and artifacts due to photon starvation and electronic noise. Recently, some works have attempted to use diffusion models to address the over-smoothness and training instability encountered by previous deep-learning-based denoising models. However, diffusion models suffer from long inference time due to the large number of sampling steps involved. Very recently, the cold diffusion model was proposed to generalize classical diffusion models with greater flexibility. Inspired by cold diffusion, this paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff. First, CoreDiff utilizes LDCT images to replace the random Gaussian noise and employs a novel mean-preserving degradation operator to mimic the physical process of CT degradation, significantly reducing sampling steps thanks to the informative LDCT images serving as the starting point of the sampling process. Second, to alleviate the error accumulation problem caused by the imperfect restoration operator in the sampling process, we propose a novel ContextuaL Error-modulAted Restoration Network (CLEAR-Net), which can leverage contextual information to constrain the sampling process against structural distortion and modulate time step embedding features for better alignment with the input at the next time step. Third, to rapidly generalize the trained model to a new, unseen dose level with as few resources as possible, we devise a one-shot learning framework that makes CoreDiff generalize faster and better using only a single LDCT image (un)paired with normal-dose CT (NDCT). Extensive experimental results on four datasets demonstrate that our CoreDiff outperforms competing methods in denoising and generalization performance, with clinically acceptable inference time. Source code is made available at https://github.com/qgao21/CoreDiff.
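To make the cold-diffusion idea described in the abstract more concrete, the following is a minimal, illustrative sketch (not the authors' released implementation; see the GitHub repository for that) of a generic cold-diffusion-style pipeline: a linear-interpolation degradation between an NDCT image and an LDCT image, and a restoration-driven sampling loop that starts from the LDCT image rather than from Gaussian noise. The function names `degrade`, `restore`, and `sample`, the step count, and the exact form of the degradation are assumptions made for illustration only, not the paper's exact mean-preserving operator.

```python
import numpy as np

# Illustrative sketch only (not the authors' exact formulation): a cold-diffusion-style
# degradation that linearly interpolates between an NDCT image x0 and an LDCT image x_ld,
# plus a generic restoration-based sampling loop that starts from the LDCT image.

def degrade(x0, x_ld, t, T):
    """Assumed linear degradation: x_t = (1 - t/T) * x0 + (t/T) * x_ld."""
    alpha = t / T
    return (1.0 - alpha) * x0 + alpha * x_ld

def sample(x_ld, restore, T):
    """Cold-diffusion-style sampling: start from the LDCT image (t = T) and step toward t = 0.

    `restore(x_t, t)` is a placeholder for a trained restoration network that predicts
    the clean NDCT image from the degraded input at step t.
    """
    x_t = x_ld
    for t in range(T, 0, -1):
        x0_hat = restore(x_t, t)               # predicted clean image at this step
        x_t = degrade(x0_hat, x_ld, t - 1, T)  # re-degrade to the next, less degraded step
    return x_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.random((64, 64)).astype(np.float32)                         # stand-in for an NDCT image
    x_ld = x0 + 0.1 * rng.standard_normal((64, 64)).astype(np.float32)   # noisy stand-in for an LDCT image
    identity_restore = lambda x, t: x                                    # dummy restoration operator
    out = sample(x_ld, identity_restore, T=10)
    print(out.shape)
```

Because the sampling loop begins from the informative LDCT image instead of pure noise, far fewer steps are needed than in a classical diffusion model; in CoreDiff, the restoration operator is additionally conditioned on contextual slices and error-modulated features (CLEAR-Net), which this sketch does not attempt to reproduce.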



Qi Gao, Zilong Li, Junping Zhang, Yi Zhang, and Hongming Shan*. CoreDiff: contextual error-modulated generalized diffusion model for low-dose CT denoising and generalization. IEEE Transactions on Medical Imaging, vol. 43, pp. 745-759, 2024. (Paper download)