"pre-trained language models" Papers
17 papers found
Certifying Language Model Robustness with Fuzzed Randomized Smoothing: An Efficient Defense Against Backdoor Attacks
Bowei He, Lihao Yin, Huiling Zhen et al.
DCA: Dividing and Conquering Amnesia in Incremental Object Detection
Aoting Zhang, Dongbao Yang, Chang Liu et al.
DELTA: Pre-Train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment
Haitao Li, Qingyao Ai, Xinyan Han et al.
Exploring the limits of strong membership inference attacks on large language models
Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo et al.
Knowledge Graph Completion with Relation-Aware Anchor Enhancement
Duanyang Yuan, Sihang Zhou, Xiaoshu Chen et al.
Multi-Attribute Multi-Grained Adaptation of Pre-Trained Language Models for Text Understanding from Bayesian Perspective
You Zhang, Jin Wang, Liang-Chih Yu et al.
Multimodal Quantitative Language for Generative Recommendation
Jianyang Zhai, Zi-Feng Mai, Chang-Dong Wang et al.
PLMTrajRec: A Scalable and Generalizable Trajectory Recovery Method with Pre-trained Language Models
Tonglong Wei, Yan Lin, Youfang Lin et al.
SMI-Editor: Edit-based SMILES Language Model with Fragment-level Supervision
Kangjie Zheng, Siyue Liang, Junwei Yang et al.
Defense against Backdoor Attack on Pre-trained Language Models via Head Pruning and Attention Normalization
Xingyi Zhao, Depeng Xu, Shuhan Yuan
LangCell: Language-Cell Pre-training for Cell Identity Understanding
Suyuan Zhao, Jiahuan Zhang, Yushuai Wu et al.
Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor
Han Liu, Siyang Zhao, Xiaotong Zhang et al.
Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion
Cunhang Fan, Yujie Chen, Jun Xue et al.
Question Calibration and Multi-Hop Modeling for Temporal Question Answering
Chao Xue, Di Liang, Pengfei Wang et al.
Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction
Da Luo, Yanglei Gan, Rui Hou et al.
Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation
Xinyi Wang, Alfonso Amayuelas, Kexun Zhang et al.
Wikiformer: Pre-training with Structured Information of Wikipedia for Ad-Hoc Retrieval
Weihang Su, Qingyao Ai, Xiangsheng Li et al.