Date: 2026-03-30
This paper, "REaMA: Building Biomedical Relation Extraction Specialized Large Language Models Through Instruction Tuning," was originally published in IEEE Transactions on Neural Networks and Learning Systems. It was authored by Professor Li Guobo (李国菠) of Sichuan University and collaborators, and is part of the academic output series of Sichuan University's Smart Rule of Law advance-deployment discipline. Further results from this series will be shared here; we welcome readers to follow along.
Aiming to identify entity pairs with biomedical semantic relations and assign specific relation types, biomedical relation extraction (BioRE) plays a critical role in biomedical text mining and information extraction (IE). Recent studies indicate that general large language models (LLMs) have made some breakthroughs in general relation extraction (RE) tasks. However, even advanced open-source LLMs struggle with BioRE tasks. For example, WizardLM-70B and LLaMA-2-70B achieve F-scores of 14.05 and 12.21 on the BioRED dataset, respectively, far behind the state-of-the-art (SOTA) method, which scores 65.17. To address this gap, a multitask instruction-tuning framework is proposed that transforms general LLMs into BioRE-specialized models using our meticulously curated instruction dataset, REInstruct, comprising 150,000 diverse, high-quality instruction-response pairs. Consequently, we introduce REaMA, a series of open-source LLMs with sizes of 7B and 13B specifically tailored for BioRE tasks. Experimental results on seven representative BioRE datasets show that both REaMA-2-7B and REaMA-2-13B achieve promising performance on all datasets. Remarkably, the larger REaMA-2-13B outperforms the current SOTA method on five out of seven datasets. These results demonstrate the effectiveness of instruction tuning on REInstruct in eliciting strong RE capabilities in LLMs. Furthermore, we show that incorporating chain of thought (CoT) into REInstruct can further enhance the generalization ability of REaMA. The project is available at https://github.
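To make the idea of an instruction-response pair concrete, the sketch below formats one hypothetical BioRE training example in a common supervised fine-tuning layout. The field names, prompt template, and the sample passage/relation are illustrative assumptions; the abstract does not specify REInstruct's actual schema or relation taxonomy.

```python
def format_example(instruction: str, passage: str, response: str) -> str:
    """Concatenate an instruction, an input passage, and the gold response
    into a single training string, a common supervised fine-tuning layout
    (the exact template used for REInstruct is not given in the abstract)."""
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{passage}\n\n"
        f"### Response:\n{response}"
    )

# Hypothetical BioRE example: extract entity pairs and label the relation type.
example = format_example(
    instruction=(
        "Identify all entity pairs with a biomedical relation in the text "
        "and label each pair with its relation type."
    ),
    passage="Aspirin reduces the risk of myocardial infarction.",
    response="(aspirin, myocardial infarction) -> chemical-disease",
)

print(example)
```

During instruction tuning, the model is trained to generate the `### Response:` portion given everything before it, which is how a general LLM is specialized toward a target task family such as BioRE.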
Zhang, Y.; Yu, J.; Li, G.; He, Z.; Yen, G. G. REaMA: Building Biomedical Relation Extraction Specialized Large Language Models Through Instruction Tuning. IEEE Transactions on Neural Networks and Learning Systems 2025, 1-15. (Download the paper)