"zero-shot learning" Papers

164 papers found • Page 2 of 4

Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning

Baoqi Pei, Yifei Huang, Jilan Xu et al.

ICLR 2025 • arXiv:2503.00986
12 citations

MotionDiff: Training-free Zero-shot Interactive Motion Editing via Flow-assisted Multi-view Diffusion

Yikun Ma, Yiqing Li, Jiawei Wu et al.

ICCV 2025 • arXiv:2503.17695
1 citation

MultiADS: Defect-aware Supervision for Multi-type Anomaly Detection and Segmentation in Zero-Shot Learning

Ylli Sadikaj, Hongkuan Zhou, Lavdim Halilaj et al.

ICCV 2025 • arXiv:2504.06740
9 citations

Multimodal Unsupervised Domain Generalization by Retrieving Across the Modality Gap

Christopher Liao, Christian So, Theodoros Tsiligkaridis et al.

ICLR 2025 • arXiv:2402.04416
1 citation

Multi-party Collaborative Attention Control for Image Customization

Han Yang, Chuanguang Yang, Qiuli Wang et al.

CVPR 2025 • arXiv:2505.01428
5 citations

Multitask Learning with Stochastic Interpolants

Hugo Negrel, Florentin Coeurdoux, Michael Albergo et al.

NEURIPS 2025 (spotlight) • arXiv:2508.04605
1 citation

Neural Motion Simulator: Pushing the Limit of World Models in Reinforcement Learning

Chenjie Hao, Weyl Lu, Yifan Xu et al.

CVPR 2025 • arXiv:2504.07095
5 citations

Noisy Test-Time Adaptation in Vision-Language Models

Chentao Cao, Zhun Zhong, (Andrew) Zhanke Zhou et al.

ICLR 2025 • arXiv:2502.14604
4 citations

Novel View Synthesis from A Few Glimpses via Test-Time Natural Video Completion

Yan Xu, Yixing Wang, Stella X. Yu

NEURIPS 2025 • arXiv:2511.17932

ORIGEN: Zero-Shot 3D Orientation Grounding in Text-to-Image Generation

Yunhong Min, Daehyeon Choi, Kyeongmin Yeo et al.

NEURIPS 2025 • arXiv:2503.22194
3 citations

PostCast: Generalizable Postprocessing for Precipitation Nowcasting via Unsupervised Blurriness Modeling

Junchao Gong, Siwei Tu, Weidong Yang et al.

ICLR 2025 (oral) • arXiv:2410.05805
7 citations

RadZero: Similarity-Based Cross-Attention for Explainable Vision-Language Alignment in Chest X-ray with Zero-Shot Multi-Task Capability

Jonggwon Park, Byungmu Yoon, Soobum Kim et al.

NEURIPS 2025 • arXiv:2504.07416
1 citation

RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion

Bardienus Duisterhof, Jan Oberst, Bowen Wen et al.

NEURIPS 2025 • arXiv:2506.05285
4 citations

Reason-before-Retrieve: One-Stage Reflective Chain-of-Thoughts for Training-Free Zero-Shot Composed Image Retrieval

Yuanmin Tang, Jue Zhang, Xiaoting Qin et al.

CVPR 2025 (highlight) • arXiv:2412.11077
18 citations

Reconstruct, Inpaint, Test-Time Finetune: Dynamic Novel-view Synthesis from Monocular Videos

Kaihua Chen, Tarasha Khurana, Deva Ramanan

NEURIPS 2025 • arXiv:2507.12646
2 citations

RESAnything: Attribute Prompting for Arbitrary Referring Segmentation

Ruiqi Wang, Hao Zhang

NEURIPS 2025 • arXiv:2505.02867
2 citations

scGeneScope: A Treatment-Matched Single Cell Imaging and Transcriptomics Dataset and Benchmark for Treatment Response Modeling

Joel Dapello, Marcel Nassar, Ridvan Eksi et al.

NEURIPS 2025

SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding

Rong Li, Shijie Li, Lingdong Kong et al.

CVPR 2025 • arXiv:2412.04383
43 citations

Semantic Surgery: Zero-Shot Concept Erasure in Diffusion Models

Lexiang Xiong, Liu Chengyu, Jingwen Ye et al.

NEURIPS 2025 • arXiv:2510.22851
1 citation

Should VLMs be Pre-trained with Image Data?

Sedrick Keh, Jean Mercat, Samir Yitzhak Gadre et al.

ICLR 2025 • arXiv:2503.07603

SPAZER: Spatial-Semantic Progressive Reasoning Agent for Zero-shot 3D Visual Grounding

Zhao Jin, Rong-Cheng Tu, Jingyi Liao et al.

NEURIPS 2025 • arXiv:2506.21924
3 citations

SplashNet: Split-and-Share Encoders for Accurate and Efficient Typing with Surface Electromyography

Nima Hadidi, Jason Chan, Ebrahim Feghhi et al.

NEURIPS 2025 • arXiv:2506.12356

Standing on the Shoulders of Giants: Reprogramming Visual-Language Model for General Deepfake Detection

Kaiqing Lin, Yuzhen Lin, Weixiang Li et al.

AAAI 2025 (paper) • arXiv:2409.02664
19 citations

Support Vector Generation: Kernelizing Large Language Models for Efficient Zero-Shot NLP

Shohei Ohsawa

NEURIPS 2025

SVIP: Semantically Contextualized Visual Patches for Zero-Shot Learning

Zhi Chen, Zecheng Zhao, Jingcai Guo et al.

ICCV 2025 • arXiv:2503.10252
6 citations

TAViS: Text-bridged Audio-Visual Segmentation with Foundation Models

Ziyang Luo, Nian Liu, Xuguang Yang et al.

ICCV 2025 • arXiv:2506.11436
3 citations

Teaching Human Behavior Improves Content Understanding Abilities Of VLMs

Somesh Singh, Harini S I, Yaman Singla et al.

ICLR 2025
2 citations

The Labyrinth of Links: Navigating the Associative Maze of Multi-modal LLMs

Hong Li, Nanxi Li, Yuanjie Chen et al.

ICLR 2025 • arXiv:2410.01417
3 citations

TikZero: Zero-Shot Text-Guided Graphics Program Synthesis

Jonas Belouadi, Eddy Ilg, Margret Keuper et al.

ICCV 2025 (highlight) • arXiv:2503.11509
5 citations

Towards Efficient Foundation Model for Zero-shot Amodal Segmentation

Zhaochen Liu, Limeng Qiao, Xiangxiang Chu et al.

CVPR 2025
3 citations

Translation of Text Embedding via Delta Vector to Suppress Strongly Entangled Content in Text-to-Image Diffusion Models

Eunseo Koh, SeungHoo Hong, Tae-Young Kim et al.

ICCV 2025 • arXiv:2508.10407

TS-RAG: Retrieval-Augmented Generation based Time Series Foundation Models are Stronger Zero-Shot Forecaster

Kanghui Ning, Zijie Pan, Yu Liu et al.

NEURIPS 2025 • arXiv:2503.07649
15 citations

Universal Features Guided Zero-Shot Category-Level Object Pose Estimation

Wentian Qu, Chenyu Meng, Heng Li et al.

AAAI 2025 (paper) • arXiv:2501.02831

Unleashing the Potential of Multimodal LLMs for Zero-Shot Spatio-Temporal Video Grounding

Zaiquan Yang, Yuhao Liu, Gerhard Hancke et al.

NEURIPS 2025 (oral) • arXiv:2509.15178
2 citations

Video Motion Transfer with Diffusion Transformers

Alexander Pondaven, Aliaksandr Siarohin, Sergey Tulyakov et al.

CVPR 2025 • arXiv:2412.07776
20 citations

Vision Transformers with Self-Distilled Registers

Zipeng Yan, Yinjie Chen, Chong Zhou et al.

NEURIPS 2025 (spotlight) • arXiv:2505.21501
4 citations

Visual and Semantic Prompt Collaboration for Generalized Zero-Shot Learning

Huajie Jiang, Zhengxian Li, Xiaohan Yu et al.

CVPR 2025 • arXiv:2503.23030
1 citation

X-Dyna: Expressive Dynamic Human Image Animation

Di Chang, Hongyi Xu, You Xie et al.

CVPR 2025 (highlight) • arXiv:2501.10021
15 citations

X-NeMo: Expressive Neural Motion Reenactment via Disentangled Latent Attention

XiaoChen Zhao, Hongyi Xu, Guoxian Song et al.

ICLR 2025 • arXiv:2507.23143
20 citations

Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion

Zhenglin Zhou, Fan Ma, Hehe Fan et al.

CVPR 2025 • arXiv:2503.15851
4 citations

Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations

Jeong Hun Yeo, Minsu Kim, Chae Won Kim et al.

ICCV 2025 • arXiv:2503.06273
5 citations

ZeroMamba: Exploring Visual State Space Model for Zero-Shot Learning

Wenjin Hou, Dingjie Fu, Kun Li et al.

AAAI 2025 (paper) • arXiv:2408.14868
2 citations

ZeroSep: Separate Anything in Audio with Zero Training

Chao Huang, Yuesheng Ma, Junxuan Huang et al.

NEURIPS 2025 • arXiv:2505.23625
4 citations

Zero-shot forecasting of chaotic systems

Yuanzhao Zhang, William Gilpin

ICLR 2025 • arXiv:2409.15771
19 citations

Zero-shot Model-based Reinforcement Learning using Large Language Models

Abdelhakim Benechehab, Youssef Attia El Hili, Ambroise Odonnat et al.

ICLR 2025 • arXiv:2410.11711
5 citations

Zero-shot protein stability prediction by inverse folding models: a free energy interpretation

Jes Frellsen, Maher Kassem, Tone Bengtsen et al.

NEURIPS 2025 • arXiv:2506.05596
4 citations

Zero-Shot Styled Text Image Generation, but Make It Autoregressive

Vittorio Pippi, Fabio Quattrini, Silvia Cascianelli et al.

CVPR 2025 • arXiv:2503.17074
9 citations

Z-Magic: Zero-shot Multiple Attributes Guided Image Creator

Yingying Deng, Xiangyu He, Fan Tang et al.

CVPR 2025 • arXiv:2503.12124
3 citations

E(3)-Equivariant Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning

Dingyang Chen, Qi Zhang

ICML 2024 • arXiv:2308.11842
9 citations

A decoder-only foundation model for time-series forecasting

Abhimanyu Das, Weihao Kong, Rajat Sen et al.

ICML 2024 (oral) • arXiv:2310.10688
495 citations