"zero-shot performance" Papers

18 papers found

$\boldsymbol{\lambda}$-Orthogonality Regularization for Compatible Representation Learning

Simone Ricci, Niccolò Biondi, Federico Pernici et al.

NEURIPS 2025
3 citations

AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents

Ke Yang, Yao Liu, Sapana Chaudhary et al.

ICLR 2025 · arXiv:2410.13825
69 citations

Beyond Token Probes: Hallucination Detection via Activation Tensors with ACT-ViT

Guy Bar-Shalom, Fabrizio Frasca, Yaniv Galron et al.

NEURIPS 2025 · arXiv:2510.00296
1 citation

DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech without Domain-Specific Factors

Keon Lee, Dong Won Kim, Jaehyeon Kim et al.

ICLR 2025 · arXiv:2406.11427
28 citations

Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models

Ben Finkelshtein, Ismail Ilkan Ceylan, Michael Bronstein et al.

NEURIPS 2025 · arXiv:2506.14291
12 citations

Improving Regret Approximation for Unsupervised Dynamic Environment Generation

Harry Mead, Bruno Lacerda, Jakob Foerster et al.

NEURIPS 2025 · arXiv:2601.14957

Is Large-scale Pretraining the Secret to Good Domain Generalization?

Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis et al.

ICLR 2025 · arXiv:2412.02856
6 citations

MuGS: Multi-Baseline Generalizable Gaussian Splatting Reconstruction

Yaopeng Lou, Liao Shen, Tianqi Liu et al.

ICCV 2025 · arXiv:2508.04297

PolarAnything: Diffusion-based Polarimetric Image Synthesis

Kailong Zhang, Youwei Lyu, Heng Guo et al.

ICCV 2025 (highlight) · arXiv:2507.17268
1 citation

Post-pre-training for Modality Alignment in Vision-Language Foundation Models

Shin'ya Yamaguchi, Dewei Feng, Sekitoshi Kanai et al.

CVPR 2025 · arXiv:2504.12717
12 citations

Seurat: From Moving Points to Depth

Seokju Cho, Gabriel Huang, Seungryong Kim et al.

CVPR 2025 (highlight) · arXiv:2504.14687
9 citations

TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data

Jeremy Irvin, Emily Liu, Joyce Chen et al.

ICLR 2025 (oral) · arXiv:2410.06234
45 citations

Zebra-Llama: Towards Extremely Efficient Hybrid Models

Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li et al.

NEURIPS 2025 · arXiv:2505.17272
7 citations

Agent Instructs Large Language Models to be General Zero-Shot Reasoners

Nicholas Crispino, Kyle Montgomery, Fankun Zeng et al.

ICML 2024 · arXiv:2310.03710
40 citations

Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data

Jiahan Zhang, Qi Wei, Feng Liu et al.

ICML 2024 · arXiv:2406.10502
22 citations

Evolution-Inspired Loss Functions for Protein Representation Learning

Chengyue Gong, Adam Klivans, James Loy et al.

ICML 2024

L-MAGIC: Language Model Assisted Generation of Images with Coherence

Zhipeng Cai, Matthias Mueller, Reiner Birkl et al.

CVPR 2024 · arXiv:2406.01843
7 citations

Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection

Tim Salzmann, Markus Ryll, Alex Bewley et al.

ECCV 2024 · arXiv:2403.14270
8 citations