Poster "computational efficiency" Papers

158 papers found • Page 1 of 4

2DMamba: Efficient State Space Model for Image Representation with Applications on Giga-Pixel Whole Slide Image Classification

Jingwei Zhang, Anh Tien Nguyen, Xi Han et al.

CVPR 2025 • arXiv:2412.00678
23 citations

Acc3D: Accelerating Single Image to 3D Diffusion Models via Edge Consistency Guided Score Distillation

Kendong Liu, Zhiyu Zhu, Hui Liu et al.

CVPR 2025 • arXiv:2503.15975
1 citation

Accelerate 3D Object Detection Models via Zero-Shot Attention Key Pruning

Lizhen Xu, Xiuxiu Bai, Xiaojun Jia et al.

ICCV 2025 • arXiv:2503.08101

Accelerating Multimodal Large Language Models by Searching Optimal Vision Token Reduction

Shiyu Zhao, Zhenting Wang, Felix Juefei-Xu et al.

CVPR 2025 • arXiv:2412.00556
16 citations

Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings

Qiong Wu, Wenhao Lin, Yiyi Zhou et al.

NEURIPS 2025 • arXiv:2411.19628
5 citations

Accelerating Spectral Clustering under Fairness Constraints

Francesco Tonin, Alex Lambert, Johan Suykens et al.

ICML 2025 • arXiv:2506.08143
2 citations

Accurate and Efficient Low-Rank Model Merging in Core Space

Aniello Panariello, Daniel Marczak, Simone Magistri et al.

NEURIPS 2025 • arXiv:2509.17786
3 citations

Ada-K Routing: Boosting the Efficiency of MoE-based LLMs

Zijia Zhao, Longteng Guo, Jie Cheng et al.

ICLR 2025 • arXiv:2410.10456
8 citations

Adaptive Inference-Time Scaling via Cyclic Diffusion Search

Gyubin Lee, Bao Truong, Jaesik Yoon et al.

NEURIPS 2025 • arXiv:2505.14036
2 citations

A Difference-of-Convex Functions Approach to Energy-Based Iterative Reasoning

Daniel Tschernutter, David Diego Castro, Maciej Kasiński

NEURIPS 2025

AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning

Yiwu Zhong, Zhuoming Liu, Yin Li et al.

ICCV 2025 • arXiv:2412.03248
24 citations

Apollo: An Exploration of Video Understanding in Large Multimodal Models

Orr Zohar, Xiaohan Wang, Yann Dubois et al.

CVPR 2025 • arXiv:2412.10360
55 citations

Approximately Aligned Decoding

Daniel Melcer, Sujan Kumar Gonugondla, Pramuditha Perera et al.

NEURIPS 2025 • arXiv:2410.01103
2 citations

A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs

Wangbo Zhao, Yizeng Han, Jiasheng Tang et al.

CVPR 2025 • arXiv:2412.03324
27 citations

Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models

Michael Noukhovitch, Shengyi Huang, Sophie Xhonneux et al.

ICLR 2025 • arXiv:2410.18252
43 citations

ATP-LLaVA: Adaptive Token Pruning for Large Vision Language Models

Xubing Ye, Yukang Gan, Yixiao Ge et al.

CVPR 2025 • arXiv:2412.00447
38 citations

Auto-GDA: Automatic Domain Adaptation for Efficient Grounding Verification in Retrieval-Augmented Generation

Tobias Leemann, Periklis Petridis, Giuseppe Vietri et al.

ICLR 2025 • arXiv:2410.03461
4 citations

Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization

Kaiyuan Li, Xiaoyue Chen, Chen Gao et al.

NEURIPS 2025 • arXiv:2505.22038
4 citations

Beyond Greedy Exits: Improved Early Exit Decisions for Risk Control and Reliability

Divya Jyoti Bajpai, Manjesh Kumar Hanawal

NEURIPS 2025 • arXiv:2509.23666
1 citation

Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs

Qizhe Zhang, Aosong Cheng, Ming Lu et al.

ICCV 2025 • arXiv:2412.01818
45 citations

Bio-Inspired Image Restoration

Yuning Cui, Wenqi Ren, Alois Knoll

NEURIPS 2025

Bringing RNNs Back to Efficient Open-Ended Video Understanding

Weili Xu, Enxin Song, Wenhao Chai et al.

ICCV 2025 • arXiv:2507.02591
8 citations

Computational Budget Should Be Considered in Data Selection

Weilin Wan, Weizhong Zhang, Cheng Jin

NEURIPS 2025 • arXiv:2510.16806

Context-aware Dynamic Pruning for Speech Foundation Models

Masao Someki, Yifan Peng, Siddhant Arora et al.

ICLR 2025
7 citations

CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment

Qinfeng Li, Tianyue Luo, Xuhong Zhang et al.

NEURIPS 2025 • arXiv:2410.13903
7 citations

Detecting High-Stakes Interactions with Activation Probes

Alex McKenzie, Urja Pawar, Phil Blandfort et al.

NEURIPS 2025 • arXiv:2506.10805
13 citations

DH-Set: Improving Vision-Language Alignment with Diverse and Hybrid Set-Embeddings Learning

Kun Zhang, Jingyu Li, Zhe Li et al.

CVPR 2025
2 citations

DMesh++: An Efficient Differentiable Mesh for Complex Shapes

Sanghyun Son, Matheus Gadelha, Yang Zhou et al.

ICCV 2025 • arXiv:2412.16776
3 citations

DUALFormer: Dual Graph Transformer

Zhuo Jiaming, Yuwei Liu, Yintong Lu et al.

ICLR 2025
3 citations

DyMU: Dynamic Merging and Virtual Unmerging for Efficient Variable-Length VLMs

Zhenhailong Wang, Senthil Purushwalkam, Caiming Xiong et al.

NEURIPS 2025
6 citations

Dynamic Diffusion Transformer

Wangbo Zhao, Yizeng Han, Jiasheng Tang et al.

ICLR 2025 • arXiv:2410.03456
38 citations

Each Complexity Deserves a Pruning Policy

Hanshi Wang, Yuhao Xu, Zekun Xu et al.

NEURIPS 2025 • arXiv:2509.23931

Early-Bird Diffusion: Investigating and Leveraging Timestep-Aware Early-Bird Tickets in Diffusion Models for Efficient Training

Lexington Whalen, Zhenbang Du, Haoran You et al.

CVPR 2025 • arXiv:2504.09606
2 citations

Efficient Concertormer for Image Deblurring and Beyond

Pin-Hung Kuo, Jinshan Pan, Shao-Yi Chien et al.

ICCV 2025 • arXiv:2404.06135

Efficient Multi-modal Large Language Models via Progressive Consistency Distillation

Zichen Wen, Shaobo Wang, Yufa Zhou et al.

NEURIPS 2025 • arXiv:2510.00515
9 citations

Efficient Multi-Person Motion Prediction by Lightweight Spatial and Temporal Interactions

Yuanhong Zheng, Ruixuan Yu, Jian Sun

ICCV 2025 • arXiv:2507.09446
2 citations

Efficient RAW Image Deblurring with Adaptive Frequency Modulation

Wenlong Jiao, Binglong Li, Wei Shang et al.

NEURIPS 2025 • arXiv:2505.24407
1 citation

Efficient Time Series Processing for Transformers and State-Space Models through Token Merging

Leon Götz, Marcel Kollovieh, Stephan Günnemann et al.

ICML 2025 • arXiv:2405.17951
5 citations

EfficientViM: Efficient Vision Mamba with Hidden State Mixer based State Space Duality

Sanghyeok Lee, Joonmyung Choi, Hyunwoo J. Kim

CVPR 2025 • arXiv:2411.15241
27 citations

Enhanced Expert Merging for Mixture-of-Experts in Graph Foundation Models

Lei Liu, Xingyu Xia, Qianqian Xie et al.

NEURIPS 2025

Faithful Group Shapley Value

Kiljae Lee, Ziqi Liu, Weijing Tang et al.

NEURIPS 2025 • arXiv:2505.19013
2 citations

Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms

Yinuo Ren, Haoxuan Chen, Yuchen Zhu et al.

NEURIPS 2025 • arXiv:2502.00234
32 citations

Flexible Frame Selection for Efficient Video Reasoning

Shyamal Buch, Arsha Nagrani, Anurag Arnab et al.

CVPR 2025
10 citations

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Xunhao Lai, Jianqiao Lu, Yao Luo et al.

ICLR 2025 • arXiv:2502.20766
62 citations

FlowPrune: Accelerating Attention Flow Calculation by Pruning Flow Network

Shuo Xu, Yu Chen, Shuxia Lin et al.

NEURIPS 2025

Foveated Instance Segmentation

Hongyi Zeng, Wenxuan Liu, Tianhua Xia et al.

CVPR 2025 • arXiv:2503.21854
1 citation

Gatekeeper: Improving Model Cascades Through Confidence Tuning

Stephan Rabanser, Nathalie Rauschmayr, Achin Kulshrestha et al.

NEURIPS 2025 • arXiv:2502.19335
4 citations

Gradient descent with generalized Newton’s method

Zhiqi Bu, Shiyun Xu

ICLR 2025 • arXiv:2407.02772
8 citations

Graph Sparsification via Mixture of Graphs

Guibin Zhang, Xiangguo Sun, Yanwei Yue et al.

ICLR 2025 • arXiv:2405.14260
17 citations

Harnessing Input-Adaptive Inference for Efficient VLN

Dongwoo Kang, Akhil Perincherry, Zachary Coalson et al.

ICCV 2025 • arXiv:2508.09262