"computational efficiency" Papers

197 papers found • Page 3 of 4

The Journey Matters: Average Parameter Count over Pre-training Unifies Sparse and Dense Scaling Laws

Tian Jin, Ahmed Imtiaz Humayun, Utku Evci et al.

ICLR 2025 • arXiv:2501.12486 • 1 citation

The Omni-Expert: A Computationally Efficient Approach to Achieve a Mixture of Experts in a Single Expert Model

Sohini Saha, Mezisashe Ojuba, Leslie Collins et al.

NEURIPS 2025

TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation

Mohan Xu, Kai Li, Guo Chen et al.

ICLR 2025 (oral) • arXiv:2410.01469 • 11 citations

Till the Layers Collapse: Compressing a Deep Neural Network Through the Lenses of Batch Normalization Layers

Zhu Liao, Nour Hezbri, Victor Quétu et al.

AAAI 2025 (paper) • arXiv:2412.15077

TinySAM: Pushing the Envelope for Efficient Segment Anything Model

Han Shu, Wenshuo Li, Yehui Tang et al.

AAAI 2025 (paper) • arXiv:2312.13789 • 41 citations

Token Statistics Transformer: Linear-Time Attention via Variational Rate Reduction

Ziyang Wu, Tianjiao Ding, Yifu Lu et al.

ICLR 2025 • arXiv:2412.17810 • 28 citations

Training-free and Adaptive Sparse Attention for Efficient Long Video Generation

Yifei Xia, Suhan Ling, Fangcheng Fu et al.

ICCV 2025 • arXiv:2502.21079 • 35 citations

UGM2N: An Unsupervised and Generalizable Mesh Movement Network via M-Uniform Loss

Zhichao Wang, Xinhai Chen, Qinglin Wang et al.

NEURIPS 2025 • arXiv:2508.08615 • 1 citation

URWKV: Unified RWKV Model with Multi-state Perspective for Low-light Image Restoration

Rui Xu, Yuzhen Niu, Yuezhou Li et al.

CVPR 2025 • arXiv:2505.23068 • 5 citations

VA-MoE: Variables-Adaptive Mixture of Experts for Incremental Weather Forecasting

Hao Chen, Tao Han, Song Guo et al.

ICCV 2025 • arXiv:2412.02503 • 3 citations

Variational Bayesian Pseudo-Coreset

Hyungi Lee, Seungyoo Lee, Juho Lee

ICLR 2025 • arXiv:2502.21143

VCM: Vision Concept Modeling with Adaptive Vision Token Compression via Instruction Fine-Tuning

Run Luo, Renke Shan, Longze Chen et al.

NEURIPS 2025

VFlowOpt: A Token Pruning Framework for LMMs with Visual Information Flow-Guided Optimization

Sihan Yang, Runsen Xu, Chenhang Cui et al.

ICCV 2025 • arXiv:2508.05211 • 5 citations

Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models

Jinhui Yi, Syed Talal Wasim, Yanan Luo et al.

CVPR 2025 • arXiv:2412.18609 • 2 citations

Vision-centric Token Compression in Large Language Model

Ling Xing, Alex Jinpeng Wang, Rui Yan et al.

NEURIPS 2025 (spotlight) • arXiv:2502.00791 • 11 citations

VORTA: Efficient Video Diffusion via Routing Sparse Attention

Wenhao Sun, Rong-Cheng Tu, Yifu Ding et al.

NEURIPS 2025 • arXiv:2505.18809 • 12 citations

When Large Vision-Language Model Meets Large Remote Sensing Imagery: Coarse-to-Fine Text-Guided Token Pruning

Junwei Luo, Yingying Zhang, Xue Yang et al.

ICCV 2025 • arXiv:2503.07588 • 14 citations

Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering

Yangfu Li, Hongjian Zhan, Tianyi Chen et al.

NEURIPS 2025 • arXiv:2505.10118 • 1 citation

WPMixer: Efficient Multi-Resolution Mixing for Long-Term Time Series Forecasting

Md Mahmuddun Nabi Murad, Mehmet Aktukmak, Yasin Yilmaz

AAAI 2025 (paper) • arXiv:2412.17176 • 24 citations

3D Small Object Detection with Dynamic Spatial Pruning

Xiuwei Xu, Zhihao Sun, Ziwei Wang et al.

ECCV 2024 • arXiv:2305.03716 • 9 citations

Accelerating the Global Aggregation of Local Explanations

Alon Mor, Yonatan Belinkov, Benny Kimelfeld

AAAI 2024 (paper) • arXiv:2312.07991 • 7 citations

Agent Attention: On the Integration of Softmax and Linear Attention

Dongchen Han, Tianzhu Ye, Yizeng Han et al.

ECCV 2024 • arXiv:2312.08874 • 212 citations

Agglomerative Token Clustering

Joakim Bruslund Haurum, Sergio Escalera, Graham W. Taylor et al.

ECCV 2024 • arXiv:2409.11923 • 7 citations

An Efficient and Effective Transformer Decoder-Based Framework for Multi-Task Visual Grounding

Wei Chen, Long Chen, Yu Wu

ECCV 2024 • arXiv:2408.01120 • 17 citations

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

Liang Chen, Haozhe Zhao, Tianyu Liu et al.

ECCV 2024 • arXiv:2403.06764 • 368 citations

A Simple Baseline for Efficient Hand Mesh Reconstruction

Zhishan Zhou, Shihao Zhou, Zhi Lv et al.

CVPR 2024 • arXiv:2403.01813 • 29 citations

Beyond Implicit Bias: The Insignificance of SGD Noise in Online Learning

Nikhil Vyas, Depen Morwani, Rosie Zhao et al.

ICML 2024 (spotlight) • arXiv:2306.08590 • 7 citations

Binarized Low-light Raw Video Enhancement

Gengchen Zhang, Yulun Zhang, Xin Yuan et al.

CVPR 2024 • arXiv:2403.19944 • 16 citations

Bi-ViT: Pushing the Limit of Vision Transformer Quantization

Yanjing Li, Sheng Xu, Mingbao Lin et al.

AAAI 2024 (paper) • arXiv:2305.12354 • 22 citations

Code as Reward: Empowering Reinforcement Learning with VLMs

David Venuto, Mohammad Sami Nur Islam, Martin Klissarov et al.

ICML 2024 (spotlight) • arXiv:2402.04764 • 27 citations

Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation

Ri Cheng, Ruian He, Xuhao Jiang et al.

AAAI 2024 (paper) • arXiv:2312.07180 • 1 citation

Context-Guided Spatial Feature Reconstruction for Efficient Semantic Segmentation

Zhenliang Ni, Xinghao Chen, Yingjie Zhai et al.

ECCV 2024 • arXiv:2405.06228 • 72 citations

Craftax: A Lightning-Fast Benchmark for Open-Ended Reinforcement Learning

Michael Matthews, Michael Beukman, Benjamin Ellis et al.

ICML 2024 (spotlight) • arXiv:2402.16801 • 62 citations

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers

Dachuan Shi, Chaofan Tao, Anyi Rao et al.

ICML 2024 • arXiv:2305.17455 • 43 citations

Deep Fusion: Efficient Network Training via Pre-trained Initializations

Hanna Mazzawi, Xavi Gonzalvo, Michael Wunder et al.

ICML 2024 • arXiv:2306.11903 • 3 citations

Differentially Private Bias-Term Fine-tuning of Foundation Models

Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.

ICML 2024 • arXiv:2210.00036 • 55 citations

Distilled Datamodel with Reverse Gradient Matching

Jingwen Ye, Ruonan Yu, Songhua Liu et al.

CVPR 2024 • arXiv:2404.14006 • 3 citations

Distilling Semantic Priors from SAM to Efficient Image Restoration Models

Quan Zhang, Xiaoyu Liu, Wei Li et al.

CVPR 2024 • arXiv:2403.16368 • 36 citations

DistiLLM: Towards Streamlined Distillation for Large Language Models

Jongwoo Ko, Sungnyun Kim, Tianyi Chen et al.

ICML 2024 • arXiv:2402.03898 • 73 citations

Distributed Semantic Segmentation with Efficient Joint Source and Task Decoding

Danish Nazir, Timo Bartels, Jan Piewek et al.

ECCV 2024 • arXiv:2407.11224 • 2 citations

Do Efficient Transformers Really Save Computation?

Kai Yang, Jan Ackermann, Zhenyu He et al.

ICML 2024 • arXiv:2402.13934 • 29 citations

Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement

Aditay Tripathi, Pradeep Shenoy, Anirban Chakraborty

ECCV 2024 • 3 citations

Efficient Cascaded Multiscale Adaptive Network for Image Restoration

Yichen Zhou, Pan Zhou, Teck Khim Ng

ECCV 2024 • 2 citations

Efficient Precision and Recall Metrics for Assessing Generative Models using Hubness-aware Sampling

Yuanbang Liang, Jing Wu, Yu-Kun Lai et al.

ICML 2024 (spotlight)

Enabling Uncertainty Estimation in Iterative Neural Networks

Nikita Durasov, Doruk Oner, Jonathan Donier et al.

ICML 2024 • arXiv:2403.16732 • 11 citations

Enhancing Storage and Computational Efficiency in Federated Multimodal Learning for Large-Scale Models

Zixin Zhang, Fan Qi, Changsheng Xu

ICML 2024

Enhancing Vision Transformer: Amplifying Non-Linearity in Feedforward Network Module

Yixing Xu, Chao Li, Dong Li et al.

ICML 2024

Evaluation of Test-Time Adaptation Under Computational Time Constraints

Motasem Alfarra, Hani Itani, Alejandro Pardo et al.

ICML 2024 • arXiv:2304.04795 • 7 citations

Fast Decision Boundary based Out-of-Distribution Detector

Litian Liu, Yao Qin

ICML 2024 • arXiv:2312.11536 • 42 citations

FMBoost: Boosting Latent Diffusion with Flow Matching

Johannes Schusterbauer-Fischer, Ming Gui, Pingchuan Ma et al.

ECCV 2024