
Academic Achievements

Paper | Zerui Shao, Yifei Pu, Jiliu Zhou, Bihan Wen*, and Yi Zhang*: Hyper RPCA: Joint maximum correntropy criterion and Laplacian scale mixture modeling on-the-fly for moving object detection

Date: 2024-05-31

This paper (Hyper RPCA: Joint maximum correntropy criterion and Laplacian scale mixture modeling on-the-fly for moving object detection), originally published in IEEE Transactions on Multimedia, was written by Professor Yi Zhang of Sichuan University and collaborators, and is one of the series of academic achievements of Sichuan University's advance-deployment discipline of Smart Rule of Law. Further achievements in this series will be shared in subsequent posts; readers are welcome to follow along.



Moving object detection is critical for automated video analysis in many vision-related tasks, such as surveillance tracking and video compression coding. Robust Principal Component Analysis (RPCA), one of the most popular moving object modeling methods, aims to separate the temporally varying (i.e., moving) foreground objects from the static background in video, assuming that the background frames are low-rank while the foreground is spatially sparse. Classic RPCA imposes sparsity on the foreground component using the ℓ1-norm and minimizes the modeling error via the ℓ2-norm. We show that such assumptions can be too restrictive in practice, which limits the effectiveness of classic RPCA, especially when processing videos with dynamic background, camera jitter, camouflaged moving objects, etc. In this paper, we propose a novel RPCA-based model, called Hyper RPCA, to detect moving objects on the fly. Different from classic RPCA, the proposed Hyper RPCA jointly applies the maximum correntropy criterion (MCC) to the modeling error and the Laplacian scale mixture (LSM) model to the foreground objects. Extensive experiments have been conducted, and the results demonstrate that the proposed Hyper RPCA achieves foreground-detection performance competitive with state-of-the-art algorithms on several well-known benchmark datasets.
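
For context, a minimal sketch of the classic RPCA formulation referenced in the abstract (a standard relaxed form, not reproduced from the paper itself): given a data matrix D whose columns are vectorized video frames, one solves

\min_{L,\,S}\ \|L\|_{*} + \lambda \|S\|_{1} + \frac{1}{2\mu}\,\|D - L - S\|_{F}^{2}

where the nuclear norm \|L\|_{*} encourages a low-rank background L, the ℓ1-norm encourages a spatially sparse foreground S, and the Frobenius (ℓ2) term penalizes the modeling error; λ and μ are trade-off parameters. As the abstract explains, Hyper RPCA replaces the ℓ2 error term with a maximum correntropy criterion loss and the ℓ1 foreground prior with a Laplacian scale mixture prior; the precise objective and the on-the-fly optimization scheme are given in the paper.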



Zerui Shao, Yifei Pu, Jiliu Zhou, Bihan Wen*, and Yi Zhang*. Hyper RPCA: Joint maximum correntropy criterion and Laplacian scale mixture modeling on-the-fly for moving object detection. IEEE Transactions on Multimedia, vol. 25, no. 1, 2023, pp. 112-125. (Download paper)