Poster "large language models" Papers

740 papers found • Page 10 of 15

SteerConf: Steering LLMs for Confidence Elicitation

Ziang Zhou, Tianyuan Jin, Jieming Shi et al.

NeurIPS 2025 • arXiv:2503.02863 • 8 citations

Steering LLMs' Behavior with Concept Activation Vectors

Ruixuan Huang, Shuai Wang

ICLR 2025

Steering When Necessary: Flexible Steering Large Language Models with Backtracking

Zifeng Cheng, Jinwei Gan, Zhiwei Jiang et al.

NeurIPS 2025 • arXiv:2508.17621 • 1 citation

STEER-ME: Assessing the Microeconomic Reasoning of Large Language Models

Narun Raman, Taylor Lundy, Thiago Amin et al.

NeurIPS 2025 • arXiv:2502.13119 • 3 citations

Step-by-Step Reasoning for Math Problems via Twisted Sequential Monte Carlo

Shengyu Feng, Xiang Kong, Shuang Ma et al.

ICLR 2025 • arXiv:2410.01920 • 10 citations

Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs

Shane Bergsma, Nolan Dey, Gurpreet Gosal et al.

ICLR 2025 • arXiv:2502.15938 • 24 citations

Streamlining Redundant Layers to Compress Large Language Models

Xiaodong Chen, Yuxuan Hu, Jing Zhang et al.

ICLR 2025 • arXiv:2403.19135 • 19 citations

StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization

Zhuoqun Li, Xuanang Chen, Haiyang Yu et al.

ICLR 2025 • arXiv:2410.08815 • 51 citations

SUMO: Subspace-Aware Moment-Orthogonalization for Accelerating Memory-Efficient LLM Training

Yehonathan Refael, Guy Smorodinsky, Tom Tirer et al.

NeurIPS 2025 • arXiv:2505.24749 • 7 citations

SWE-bench Goes Live!

Linghao Zhang, Shilin He, Chaoyun Zhang et al.

NeurIPS 2025 • arXiv:2505.23419 • 25 citations

SWE-SQL: Illuminating LLM Pathways to Solve User SQL Issues in Real-World Applications

Jinyang Li, Xiaolong Li, Ge Qu et al.

NeurIPS 2025 • arXiv:2506.18951 • 8 citations

SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond

Junteng Liu, Yuanxiang Fan, Jiang Zhuo et al.

NeurIPS 2025 • arXiv:2505.19641 • 23 citations

System Prompt Optimization with Meta-Learning

Yumin Choi, Jinheon Baek, Sung Ju Hwang

NeurIPS 2025 • arXiv:2505.09666 • 5 citations

Table as a Modality for Large Language Models

Liyao Li, Chao Ye, Wentao Ye et al.

NeurIPS 2025 • arXiv:2512.00947 • 2 citations

TANDEM: Bi-Level Data Mixture Optimization with Twin Networks

Jiaxing Wang, Deping Xiang, Jin Xu et al.

NeurIPS 2025

TANGO: Training-free Embodied AI Agents for Open-world Tasks

Filippo Ziliotto, Tommaso Campari, Luciano Serafini et al.

CVPR 2025 • arXiv:2412.10402 • 13 citations

Targeted control of fast prototyping through domain-specific interface

Yu-Zhe Shi, Mingchen Liu, Hanlu Ma et al.

ICML 2025 • arXiv:2506.11070 • 1 citation

TCM-Ladder: A Benchmark for Multimodal Question Answering on Traditional Chinese Medicine

Jiacheng Xie, Yang Yu, Ziyang Zhang et al.

NeurIPS 2025 • arXiv:2505.24063 • 3 citations

The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text

Nikhil Kandpal, Brian Lester, Colin Raffel et al.

NeurIPS 2025 • arXiv:2506.05209 • 11 citations

The Complexity of Learning Sparse Superposed Features with Feedback

Akash Kumar

ICML 2025 • arXiv:2502.05407

The Hyperfitting Phenomenon: Sharpening and Stabilizing LLMs for Open-Ended Text Generation

Fredrik Carlsson, Fangyu Liu, Daniel Ward et al.

ICLR 2025 • arXiv:2412.04318 • 4 citations

The Journey Matters: Average Parameter Count over Pre-training Unifies Sparse and Dense Scaling Laws

Tian Jin, Ahmed Imtiaz Humayun, Utku Evci et al.

ICLR 2025 • arXiv:2501.12486 • 1 citation

The Right to Red-Team: Adversarial AI Literacy as a Civic Imperative in K-12 Education

Devan Walton, Haesol Bae

NeurIPS 2025

The Rise of Parameter Specialization for Knowledge Storage in Large Language Models

Yihuai Hong, Yiran Zhao, Wei Tang et al.

NeurIPS 2025 • arXiv:2505.17260 • 1 citation

ThinkBench: Dynamic Out-of-Distribution Evaluation for Robust LLM Reasoning

Shulin Huang, Linyi Yang, Yan Song et al.

NeurIPS 2025 • arXiv:2502.16268 • 15 citations

ThinkBot: Embodied Instruction Following with Thought Chain Reasoning

Guanxing Lu, Ziwei Wang, Changliu Liu et al.

ICLR 2025 • arXiv:2312.07062 • 17 citations

Thinker: Learning to Think Fast and Slow

Stephen Chung, Wenyu Du, Jie Fu

NeurIPS 2025 • arXiv:2505.21097 • 8 citations

Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models

Chengyu Du, Jinyi Han, Yizhou Ying et al.

ICLR 2025 • arXiv:2410.13413 • 5 citations

Timely Clinical Diagnosis through Active Test Selection

Silas Ruhrberg Estévez, Nicolás Astorga, Mihaela van der Schaar

NeurIPS 2025 • arXiv:2510.18988

To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning

Zayne Sprague, Fangcong Yin, Juan Rodriguez et al.

ICLR 2025 • arXiv:2409.12183 • 250 citations

Token-Level Self-Play with Importance-Aware Guidance for Large Language Models

Tue Le, Hoang Tran, Quyen Tran et al.

NeurIPS 2025

ToolACE: Winning the Points of LLM Function Calling

Weiwen Liu, Xu Huang, Xingshan Zeng et al.

ICLR 2025 • arXiv:2409.00920 • 124 citations

TorchTitan: One-stop PyTorch native solution for production ready LLM pretraining

Wanchao Liang, Tianyu Liu, Less Wright et al.

ICLR 2025 • 53 citations

Towards a Theoretical Understanding of Synthetic Data in LLM Post-Training: A Reverse-Bottleneck Perspective

Zeyu Gan, Yong Liu

ICLR 2025 • arXiv:2410.01720 • 15 citations

Towards Effective Evaluations and Comparisons for LLM Unlearning Methods

Qizhou Wang, Bo Han, Puning Yang et al.

ICLR 2025 • arXiv:2406.09179 • 23 citations

Towards Federated RLHF with Aggregated Client Preference for LLMs

Feijie Wu, Xiaoze Liu, Haoyu Wang et al.

ICLR 2025 • arXiv:2407.03038 • 10 citations

Towards Higher Effective Rank in Parameter-Efficient Fine-tuning using Khatri-Rao Product

Paul Albert, Frederic Zhang, Hemanth Saratchandran et al.

ICCV 2025 • arXiv:2508.00230 • 4 citations

Towards Optimal Multi-draft Speculative Decoding

Zhengmian Hu, Tong Zheng, Vignesh Viswanathan et al.

ICLR 2025 • arXiv:2502.18779 • 11 citations

Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

Sungmin Cha, Sungjun Cho, Dasol Hwang et al.

ICLR 2025 • arXiv:2408.06621 • 22 citations

Towards Understanding Safety Alignment: A Mechanistic Perspective from Safety Neurons

Jianhui Chen, Xiaozhi Wang, Zijun Yao et al.

NeurIPS 2025 • arXiv:2406.14144 • 24 citations

Toward Understanding In-context vs. In-weight Learning

Bryan Chan, Xinyi Chen, Andras Gyorgy et al.

ICLR 2025 • arXiv:2410.23042 • 15 citations

Training-Free Activation Sparsity in Large Language Models

James Liu, Pragaash Ponnusamy, Tianle Cai et al.

ICLR 2025 • arXiv:2408.14690 • 39 citations

Training-Free Bayesianization for Low-Rank Adapters of Large Language Models

Haizhou Shi, Yibin Wang, Ligong Han et al.

NeurIPS 2025 • arXiv:2412.05723 • 3 citations

Training Large Language Models for Retrieval-Augmented Question Answering through Backtracking Correction

Huawen Feng, Zekun Yao, Junhao Zheng et al.

ICLR 2025 • 1 citation

TrajAgent: An LLM-Agent Framework for Trajectory Modeling via Large-and-Small Model Collaboration

Yuwei Du, Jie Feng, Jie Zhao et al.

NeurIPS 2025 • arXiv:2410.20445 • 4 citations

Trajectory-LLM: A Language-based Data Generator for Trajectory Prediction in Autonomous Driving

Kairui Yang, Zihao Guo, Gengjie Lin et al.

ICLR 2025

Transforming Generic Coder LLMs to Effective Binary Code Embedding Models for Similarity Detection

Litao Li, Leo Song, Steven Ding et al.

NeurIPS 2025

Traversal Verification for Speculative Tree Decoding

Yepeng Weng, Qiao Hu, Xujie Chen et al.

NeurIPS 2025 • arXiv:2505.12398 • 3 citations

Tree of Preferences for Diversified Recommendation

Hanyang Yuan, Ning Tang, Tongya Zheng et al.

NeurIPS 2025 • arXiv:2601.02386

Triples as the Key: Structuring Makes Decomposition and Verification Easier in LLM-based TableQA

Zhen Yang, Ziwei Du, Minghan Zhang et al.

ICLR 2025 • 7 citations