All Papers

34,598 papers found • Page 689 of 692

Weighted distance nearest neighbor condensing

Lee-Ad Gottlieb, Timor Sharabi, Roi Weiss

ICML 2024 • arXiv:2310.15951

Weighted Ensemble Models Are Strong Continual Learners

Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione et al.

ECCV 2024 • arXiv:2312.08977 • 38 citations

Weighted Envy-Freeness for Submodular Valuations

Luisa Montanari, Ulrike Schmidt-Kraepelin, Warut Suksompong et al.

AAAI 2024 • arXiv:2209.06437 • 19 citations

Weighting Pseudo-Labels via High-Activation Feature Index Similarity and Object Detection for Semi-Supervised Segmentation

Prantik Howlader, Hieu Le, Dimitris Samaras

ECCV 2024 • arXiv:2407.12630 • 3 citations

Weisfeiler and Lehman Go Paths: Learning Topological Features via Path Complexes

Quang Truong, Peter Chin

AAAI 2024 • arXiv:2308.06838 • 12 citations

Weisfeiler-Leman at the margin: When more expressivity matters

Billy Franks, Christopher Morris, Ameya Velingker et al.

ICML 2024 • arXiv:2402.07568 • 15 citations

Weisfeiler Leman for Euclidean Equivariant Machine Learning

Snir Hordan, Tal Amir, Nadav Dym

ICML 2024 • arXiv:2402.02484 • 10 citations

Well, Now We Know! Unveiling Sarcasm: Initiating and Exploring Multimodal Conversations with Reasoning

Gopendra Singh, Mauajama Firdaus, Dushyant Singh Chauhan et al.

AAAI 2024

WHAC: World-grounded Humans and Cameras

Wanqi Yin, Zhongang Cai, Chen Wei et al.

ECCV 2024 • arXiv:2403.12959 • 29 citations

WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion

Soyong Shin, Juyong Kim, Eni Halilaj et al.

CVPR 2024 • arXiv:2312.07531 • 169 citations

What Algorithms can Transformers Learn? A Study in Length Generalization

Hattie Zhou, Arwen Bradley, Etai Littwin et al.

ICLR 2024 • arXiv:2310.16028 • 170 citations

What Are the Rules? Discovering Constraints from Data

Boris Wiegand, Dietrich Klakow, Jilles Vreeken

AAAI 2024 • 1 citation

What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks

Xingwu Chen, Difan Zou

ICML 2024 • arXiv:2404.01601 • 20 citations

"What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selection

Anshuman Chhabra, Peizhao Li, Prasant Mohapatra et al.

ICLR 2024

What Does a Query Answer Tell You? Informativeness of Query Answers for Knowledge Bases

Luca Andolfi, Gianluca Cima, Marco Console et al.

AAAI 2024 • 2 citations

What does automatic differentiation compute for neural networks?

Sejun Park, Sanghyuk Chun, Wonyeol Lee

ICLR 2024 (spotlight)

What does the Knowledge Neuron Thesis Have to do with Knowledge?

Jingcheng Niu, Andrew Liu, Zining Zhu et al.

ICLR 2024 (spotlight) • arXiv:2405.02421 • 49 citations

What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation

Yihua Cheng, Yaning Zhu, Zongji Wang et al.

CVPR 2024 • arXiv:2403.15664 • 18 citations

What Effects the Generalization in Visual Reinforcement Learning: Policy Consistency with Truncated Return Prediction

Shuo Wang, Zhihao Wu, X. Hu et al.

AAAI 2024 • 17 citations

What, How, and When Should Object Detectors Update in Continually Changing Test Domains?

Jayeon Yoo, Dongkwan Lee, Inseop Chung et al.

CVPR 2024 • arXiv:2312.08875 • 16 citations

What If the TV Was Off? Examining Counterfactual Reasoning Abilities of Multi-modal Language Models

Letian Zhang, Xiaotong Zhai, Zhongkai Zhao et al.

CVPR 2024 • arXiv:2310.06627

What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding

Hongkang Li, Meng Wang, Tengfei Ma et al.

ICML 2024 • arXiv:2406.01977 • 19 citations

What is Dataset Distillation Learning?

William Yang, Ye Zhu, Zhiwei Deng et al.

ICML 2024 • arXiv:2406.04284 • 13 citations

What is the Long-Run Distribution of Stochastic Gradient Descent? A Large Deviations Analysis

Waïss Azizian, Franck Iutzeler, Jérôme Malick et al.

ICML 2024 • arXiv:2406.09241 • 14 citations

What Makes a Good Prune? Maximal Unstructured Pruning for Maximal Cosine Similarity

Gabryel Mason-Williams, Fredrik Dahlqvist

ICLR 2024 • 17 citations

What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception

Wanfang Su, Lixing Chen, Yang Bai et al.

AAAI 2024 • arXiv:2403.10068 • 15 citations

What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning

Wei Liu, Weihao Zeng, Keqing He et al.

ICLR 2024 • arXiv:2312.15685 • 337 citations

What Makes Quantization for Large Language Model Hard? An Empirical Study from the Lens of Perturbation

Huankang Guan, Rynson W.H. Lau

AAAI 2024

What Matters to You? Towards Visual Representation Alignment for Robot Learning

Thomas Tian, Chenfeng Xu, Masayoshi Tomizuka et al.

ICLR 2024 (oral) • arXiv:2310.07932 • 16 citations

What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation

Aaditya Singh, Ted Moskovitz, Felix Hill et al.

ICML 2024 (spotlight) • arXiv:2404.07129 • 64 citations

What's in a Prior? Learned Proximal Networks for Inverse Problems

Zhenghan Fang, Sam Buchanan, Jeremias Sulam

ICLR 2024 • arXiv:2310.14344 • 25 citations

What's In My Big Data?

Yanai Elazar, Akshita Bhagia, Ian Magnusson et al.

ICLR 2024 (spotlight) • arXiv:2310.20707 • 126 citations

What Sketch Explainability Really Means for Downstream Tasks?

Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia et al.

CVPR 2024 • arXiv:2403.09480 • 8 citations

What’s the score? Automated Denoising Score Matching for Nonlinear Diffusions

Raghav Singhal, Mark Goldstein, Rajesh Ranganath

ICML 2024 • arXiv:2407.07998 • 8 citations

What to Remember: Self-Adaptive Continual Learning for Audio Deepfake Detection

Xiaohui Zhang, Jiangyan Yi, Chenglong Wang et al.

AAAI 2024 • arXiv:2312.09651 • 43 citations

What, When, and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions

Brian Chen, Nina Shvetsova, Andrew Rouditchenko et al.

CVPR 2024 • arXiv:2303.16990 • 9 citations

What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement

Xisen Jin, Xiang Ren

ICML 2024 (spotlight) • arXiv:2402.01865 • 8 citations

What Would Gauss Say About Representations? Probing Pretrained Image Models using Synthetic Gaussian Benchmarks

Ching-Yun (Irene) Ko, Pin-Yu Chen, Payel Das et al.

ICML 2024

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

Alex Trevithick, Matthew Chan, Towaki Takikawa et al.

CVPR 2024 • arXiv:2401.02411 • 15 citations

When and How Does In-Distribution Label Help Out-of-Distribution Detection?

Xuefeng Du, Yiyou Sun, Sharon Li

ICML 2024 • arXiv:2405.18635 • 11 citations

When and How do negative prompts take effect?

Yuanhao Ban, Ruochen Wang, Tianyi Zhou et al.

ECCV 2024

When Are Two Lists Better than One?: Benefits and Harms in Joint Decision-Making

Kate Donahue, Sreenivas Gollapudi, Kostas Kollias

AAAI 2024 • arXiv:2308.11721 • 8 citations

When can transformers reason with abstract symbols?

Enric Boix-Adserà, Omid Saremi, Emmanuel Abbe et al.

ICLR 2024

When CEGAR Meets Regression: A Love Story in Optimal Classical Planning

Martín Pozo, Alvaro Torralba, Carlos Linares Lopez

AAAI 2024 • 3 citations

When Do Program-of-Thought Works for Reasoning?

Zhen Bi, Ningyu Zhang, Yinuo Jiang et al.

AAAI 2024

When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations

Aleksandar Petrov, Philip Torr, Adel Bibi

ICLR 2024 • arXiv:2310.19698 • 39 citations

When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions

Zhening Li, Gabriel Poesia, Armando Solar-Lezama

ICML 2024 (oral) • arXiv:2406.07897 • 1 citation

When Do We Not Need Larger Vision Models?

Baifeng Shi, Ziyang Wu, Maolin Mao et al.

ECCV 2024 • arXiv:2403.13043 • 71 citations

When Fast Fourier Transform Meets Transformer for Image Restoration

Xingyu Jiang, Xiuhui Zhang, Ning Gao et al.

ECCV 2024 • 48 citations

When is Transfer Learning Possible?

My Phan, Kianté Brantley, Stephanie Milani et al.

ICML 2024