Date: 2022-03-09
This paper, "Efficient Sparse Representation for Learning with High-Dimensional Data," was originally published in IEEE Trans. Neural Netw. Learn. Syst. It was written by Professor Wang Zhu of the Sichuan University Law School, Associate Professor Chen Jie of the Sichuan University College of Computer Science, and other researchers, as part of the academic output of Sichuan University's "Smart Rule of Law" advance-deployment discipline program. Further results from this series will be shared in the future; we welcome your reading.
Due to the capability of effectively learning intrinsic structures from high-dimensional data, techniques based on sparse representation have begun to display an impressive impact on several fields, such as image processing, computer vision, and pattern recognition. Learning sparse representations is often computationally expensive due to the iterative computations needed to solve convex optimization problems, in which the number of iterations is unknown before convergence. Moreover, most sparse representation algorithms focus only on determining the final sparse representation results and ignore the changes in the sparsity ratio (SR) during iterative computations. In this article, two algorithms are proposed to learn sparse representations based on locality-constrained linear representation learning with probabilistic simplex constraints. Specifically, the first algorithm, called approximated local linear representation (ALLR), obtains a closed-form solution from individual locality-constrained sparse representations. The second algorithm, ALLR with symmetric constraints, further obtains a symmetric sparse representation result with a limited number of computations; notably, the sparsity and convergence of the sparse representations can be guaranteed based on theoretical analysis. The steady decline in the SR during iterative computations is a critical factor in practical applications. Experimental results based on public datasets demonstrate that the proposed algorithms perform better than several state-of-the-art algorithms for learning with high-dimensional data.
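To make the abstract's core idea concrete, the following is a minimal sketch, not the paper's ALLR algorithm, of a locality-constrained linear representation with probabilistic simplex constraints: each sample is represented as a convex combination (nonnegative coefficients summing to 1) of its k nearest neighbors, which directly yields a sparse coefficient matrix whose sparsity ratio (SR) can be measured. All function names and the exponential weighting are illustrative assumptions; the paper's closed-form solution and symmetric variant are not reproduced here.

```python
import numpy as np

def local_simplex_representation(X, k=5):
    """Illustrative sketch only (not the paper's ALLR): represent each row of X
    as a convex combination of its k nearest neighbors, with coefficients on
    the probability simplex (nonnegative, summing to 1)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    # Pairwise squared Euclidean distances between samples.
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(D, np.inf)  # exclude each sample from its own neighborhood
    for i in range(n):
        nbrs = np.argsort(D[i])[:k]      # locality constraint: k nearest neighbors
        w = np.exp(-D[i, nbrs])          # closer neighbors receive larger weights
        W[i, nbrs] = w / w.sum()         # simplex constraint: nonnegative, sums to 1
    return W

def sparsity_ratio(W, tol=1e-12):
    """Fraction of entries whose magnitude exceeds tol; lower means sparser."""
    return np.count_nonzero(np.abs(W) > tol) / W.size

# Example: 20 samples in 3 dimensions, 5-nearest-neighbor representation.
rng = np.random.default_rng(0)
X = rng.random((20, 3))
W = local_simplex_representation(X, k=5)
```

Because each row has exactly k nonzero coefficients, the SR of this sketch is fixed at k/n; the paper's contribution is, in part, analyzing how the SR evolves and declines across the iterative computations of the proposed algorithms.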
Jie Chen, Shengxiang Yang*, Zhu Wang, and Hua Mao, "Efficient Sparse Representation for Learning with High-Dimensional Data," IEEE Trans. Neural Netw. Learn. Syst., pp. 1-15, Oct. 2021, DOI: 10.1109/TNNLS.2021.3119278. (Paper download)