"computational efficiency" Papers

197 papers found • Page 4 of 4

FRDiff : Feature Reuse for Universal Training-free Acceleration of Diffusion Models

Junhyuk So, Jungwon Lee, Eunhyeok Park

ECCV 2024 • arXiv:2312.03517
15 citations

Frugal 3D Point Cloud Model Training via Progressive Near Point Filtering and Fused Aggregation

Donghyun Lee, Yejin Lee, Jae W. Lee et al.

ECCV 2024
2 citations

Grid-Attention: Enhancing Computational Efficiency of Large Vision Models without Fine-Tuning

Pengyu Li, Biao Wang, Tianchu Guo et al.

ECCV 2024

Grid Diffusion Models for Text-to-Video Generation

Taegyeong Lee, Soyeong Kwon, Taehwan Kim

CVPR 2024 • arXiv:2404.00234
21 citations

Headless Language Models: Learning without Predicting with Contrastive Weight Tying

Nathan Godey, Éric Clergerie, Benoît Sagot

ICLR 2024 • arXiv:2309.08351
5 citations

Hierarchical Separable Video Transformer for Snapshot Compressive Imaging

Ping Wang, Yulun Zhang, Lishun Wang et al.

ECCV 2024 • arXiv:2407.11946
4 citations

HPE-Li: WiFi-enabled Lightweight Dual Selective Kernel Convolution for Human Pose Estimation

Gian Toan D., Tien Dac Lai, Thien Van Luong et al.

ECCV 2024
7 citations

In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering

Sheng Liu, Haotian Ye, Lei Xing et al.

ICML 2024 • arXiv:2311.06668
224 citations

Inducing Point Operator Transformer: A Flexible and Scalable Architecture for Solving PDEs

Seungjun Lee, Taeil Oh

AAAI 2024 • arXiv:2312.10975
18 citations

Learning Causal Dynamics Models in Object-Oriented Environments

Zhongwei Yu, Jingqing Ruan, Dengpeng Xing

ICML 2024 • arXiv:2405.12615
4 citations

Learning Temporal Resolution in Spectrogram for Audio Classification

Haohe Liu, Xubo Liu, Qiuqiang Kong et al.

AAAI 2024 • arXiv:2210.01719
13 citations

LION: Implicit Vision Prompt Tuning

Haixin Wang, Jianlong Chang, Yihang Zhai et al.

AAAI 2024 • arXiv:2303.09992
36 citations

LiteSAM is Actually What You Need for Segment Everything

Jianhai Fu, Yuanjie Yu, Ningchuan Li et al.

ECCV 2024

Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models

Gianni Franchi, Olivier Laurent, Maxence Leguéry et al.

CVPR 2024 • arXiv:2312.15297
16 citations

Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection

Alireza Ganjdanesh, Yan Kang, Yuchen Liu et al.

ECCV 2024 • arXiv:2409.15557
12 citations

Object-Centric Diffusion for Efficient Video Editing

Kumara Kahatapitiya, Adil Karjauv, Davide Abati et al.

ECCV 2024 • arXiv:2401.05735
23 citations

ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models

Dongha Kim, Jaesung Hwang, Jongjin Lee et al.

ICML 2024 • arXiv:2301.04257
4 citations

One-stage Prompt-based Continual Learning

Youngeun Kim, Yuhang Li, Priyadarshini Panda

ECCV 2024 • arXiv:2402.16189
17 citations

Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation

Yixiao Wang, Chen Tang, Lingfeng Sun et al.

ECCV 2024 • arXiv:2408.00766
17 citations

Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty

Kaizhao Liu, Jose Blanchet, Lexing Ying et al.

ICML 2024 • arXiv:2404.19145
2 citations

Partially Stochastic Infinitely Deep Bayesian Neural Networks

Sergio Calvo Ordoñez, Matthieu Meunier, Francesco Piatti et al.

ICML 2024

PhAST: Physics-Aware, Scalable, and Task-Specific GNNs for Accelerated Catalyst Design

Alexandre Duval, Victor Schmidt, Santiago Miret et al.

ICML 2024 • arXiv:2211.12020
9 citations

Quantization-Friendly Winograd Transformations for Convolutional Neural Networks

Vladimir Protsenko, Vladimir Kryzhanovskiy, Alexander Filippov

ECCV 2024
2 citations

Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency

Sudeep Salgia, Sattar Vakili, Qing Zhao

ICML 2024 • arXiv:2310.15351
12 citations

RegionDrag: Fast Region-Based Image Editing with Diffusion Models

Jingyi Lu, Xinghui Li, Kai Han

ECCV 2024 • arXiv:2407.18247
32 citations

Removing Rows and Columns of Tokens in Vision Transformer enables Faster Dense Prediction without Retraining

Diwei Su, Cheng Fei, Jianxu Luo

ECCV 2024
2 citations

Rethinking Diffusion Model for Multi-Contrast MRI Super-Resolution

Guangyuan Li, Chen Rao, Juncheng Mo et al.

CVPR 2024 • arXiv:2404.04785
59 citations

Rethinking Video Deblurring with Wavelet-Aware Dynamic Transformer and Diffusion Model

Chen Rao, Guangyuan Li, Zehua Lan et al.

ECCV 2024 • arXiv:2408.13459
9 citations

SAFNet: Selective Alignment Fusion Network for Efficient HDR Imaging

Lingtong Kong, Bo Li, Yike Xiong et al.

ECCV 2024 • arXiv:2407.16308
15 citations

Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement

Xiuquan Hou, Meiqin Liu, Senlin Zhang et al.

CVPR 2024 • arXiv:2403.16131
82 citations

Saliency strikes back: How filtering out high frequencies improves white-box explanations

Sabine Muzellec, Thomas FEL, Victor Boutin et al.

ICML 2024 • arXiv:2307.09591
3 citations

Scaling Laws for Fine-Grained Mixture of Experts

Jan Ludziejewski, Jakub Krajewski, Kamil Adamczewski et al.

ICML 2024 • arXiv:2402.07871
120 citations

ScanFormer: Referring Expression Comprehension by Iteratively Scanning

Wei Su, Peihan Miao, Huanzhang Dou et al.

CVPR 2024 • arXiv:2406.18048
16 citations

See More Details: Efficient Image Super-Resolution by Experts Mining

Eduard Zamfir, Zongwei Wu, Nancy Mehta et al.

ICML 2024 • arXiv:2402.03412
30 citations

Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities

Kaiwen Cai, Zhekai Duan, Gaowen Liu et al.

ECCV 2024 • arXiv:2403.04908
10 citations

SeTformer Is What You Need for Vision and Language

Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger et al.

AAAI 2024 • arXiv:2401.03540
7 citations

SMFANet: A Lightweight Self-Modulation Feature Aggregation Network for Efficient Image Super-Resolution

Mingjun Zheng, Long Sun, Jiangxin Dong et al.

ECCV 2024
72 citations

SNP: Structured Neuron-level Pruning to Preserve Attention Scores

Kyunghwan Shim, Jaewoong Yun, Shinkook Choi

ECCV 2024 • arXiv:2404.11630
3 citations

Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting

Anthony Chen, Huanrui Yang, Yulu Gan et al.

ICML 2024 • arXiv:2312.09148
5 citations

Stripe Observation Guided Inference Cost-free Attention Mechanism

Zhongzhan Huang, Shanshan Zhong, Wushao Wen et al.

ECCV 2024
1 citation

Thermometer: Towards Universal Calibration for Large Language Models

Maohao Shen, Subhro Das, Kristjan Greenewald et al.

ICML 2024 • arXiv:2403.08819
26 citations

Transformer-Based Selective Super-resolution for Efficient Image Refinement

Tianyi Zhang, Kishore Kasichainula, Yaoxin Zhuo et al.

AAAI 2024 • arXiv:2312.05803
19 citations

Translating Subgraphs to Nodes Makes Simple GNNs Strong and Efficient for Subgraph Representation Learning

Dongkwan Kim, Alice Oh

ICML 2024 • arXiv:2204.04510
6 citations

Turbo: Informativity-Driven Acceleration Plug-In for Vision-Language Large Models

Chen Ju, Haicheng Wang, Haozhe Cheng et al.

ECCV 2024 • arXiv:2407.11717
13 citations

Understanding and Improving Optimization in Predictive Coding Networks

Nicholas Alonso, Jeffrey Krichmar, Emre Neftci

AAAI 2024 • arXiv:2305.13562
12 citations

Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention

Zhen Qin, Weigao Sun, Dong Li et al.

ICML 2024 • arXiv:2405.17381
24 citations

Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention

Xingyu Zhou, Leheng Zhang, Xiaorui Zhao et al.

CVPR 2024 • arXiv:2401.06312
34 citations