"foundation models" Papers

118 papers found • Page 3 of 3

One-Prompt to Segment All Medical Images

Wu, Min Xu

CVPR 2024 • arXiv:2305.10300 • 47 citations

OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning

Lingyi Hong, Shilin Yan, Renrui Zhang et al.

CVPR 2024 (highlight) • arXiv:2403.09634 • 123 citations

Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI

Theodore Papamarkou, Maria Skoularidou, Konstantina Palla et al.

ICML 2024 • arXiv:2402.00809 • 60 citations

Position: Cracking the Code of Cascading Disparity Towards Marginalized Communities

Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh

ICML 2024 • arXiv:2406.01757 • 3 citations

Position: On the Societal Impact of Open Foundation Models

Sayash Kapoor, Rishi Bommasani, Kevin Klyman et al.

ICML 2024

Position: Open-Endedness is Essential for Artificial Superhuman Intelligence

Edward Hughes, Michael Dennis, Jack Parker-Holder et al.

ICML 2024

Position: Towards Unified Alignment Between Agents, Humans, and Environment

Zonghan Yang, An Liu, Zijun Liu et al.

ICML 2024

Prompting is a Double-Edged Sword: Improving Worst-Group Robustness of Foundation Models

Amrith Setlur, Saurabh Garg, Virginia Smith et al.

ICML 2024

Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective

Yang Chen, Cong Fang, Zhouchen Lin et al.

ICML 2024 • arXiv:2406.11249 • 2 citations

Relational Programming with Foundational Models

Ziyang Li, Jiani Huang, Jason Liu et al.

AAAI 2024 • arXiv:2412.14515 • 11 citations

RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

Yufei Wang, Zhou Xian, Feng Chen et al.

ICML 2024 • arXiv:2311.01455 • 188 citations

Robustness Tokens: Towards Adversarial Robustness of Transformers

Brian Pulfer, Yury Belousov, Slava Voloshynovskiy

ECCV 2024 • arXiv:2503.10191

Towards Causal Foundation Model: on Duality between Optimal Balancing and Attention

Jiaqi Zhang, Joel Jennings, Agrin Hilmkil et al.

ICML 2024

Training Like a Medical Resident: Context-Prior Learning Toward Universal Medical Image Segmentation

Yunhe Gao

CVPR 2024 • arXiv:2306.02416 • 31 citations

Transferring Knowledge From Large Foundation Models to Small Downstream Models

Shikai Qiu, Boran Han, Danielle Robinson et al.

ICML 2024 • arXiv:2406.07337 • 8 citations

UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning

Shikun Feng, Yuyan Ni, Li et al.

ICML 2024 • arXiv:2405.10343 • 18 citations

V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models

Heng Wang, Jianbo Ma, Santiago Pascual et al.

AAAI 2024 • arXiv:2308.09300 • 75 citations

ViP: A Differentially Private Foundation Model for Computer Vision

Yaodong Yu, Maziar Sanjabi, Yi Ma et al.

ICML 2024 • arXiv:2306.08842 • 18 citations