Poster "mixture of experts" Papers

52 papers found • Page 1 of 2

CryptoMoE: Privacy-Preserving and Scalable Mixture of Experts Inference via Balanced Expert Routing

Yifan Zhou, Tianshi Xu, Jue Hong et al.

NEURIPS 2025 • arXiv:2511.01197
1 citation

Dense2MoE: Restructuring Diffusion Transformer to MoE for Efficient Text-to-Image Generation

Youwei Zheng, Yuxi Ren, Xin Xia et al.

ICCV 2025 • arXiv:2510.09094
5 citations

Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization

Taishi Nakamura, Takuya Akiba, Kazuki Fujii et al.

ICLR 2025 • arXiv:2502.19261
9 citations

Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts

Guorui Zheng, Xidong Wang, Juhao Liang et al.

ICLR 2025 • arXiv:2410.10626
11 citations

Equipping Vision Foundation Model with Mixture of Experts for Out-of-Distribution Detection

Shizhen Zhao, Jiahui Liu, Xin Wen et al.

ICCV 2025 • arXiv:2510.10584
1 citation

Graph Sparsification via Mixture of Graphs

Guibin Zhang, Xiangguo Sun, Yanwei Yue et al.

ICLR 2025 • arXiv:2405.14260
17 citations

GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning

Haonan Yuan, Qingyun Sun, Junhua Shi et al.

NEURIPS 2025 • arXiv:2511.05592
4 citations

HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts

Mengqi Liao, Wei Chen, Junfeng Shen et al.

ICLR 2025
8 citations

HMVLM: Human Motion-Vision-Language Model via MoE LoRA

Lei Hu, Yongjing Ye, Shihong Xia

NEURIPS 2025

Instruction-Grounded Visual Projectors for Continual Learning of Generative Vision-Language Models

Hyundong Jin, Hyung Jin Chang, Eunwoo Kim

ICCV 2025 • arXiv:2508.00260

Intrinsic User-Centric Interpretability through Global Mixture of Experts

Vinitra Swamy, Syrielle Montariol, Julian Blackwell et al.

ICLR 2025 • arXiv:2402.02933
10 citations

JanusDNA: A Powerful Bi-directional Hybrid DNA Foundation Model

Qihao Duan, Bingding Huang, Zhenqiao Song et al.

NEURIPS 2025 • arXiv:2505.17257
3 citations

Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings

Yehya Farhat, Hamza ElMokhtar Shili, Fangshuo Liao et al.

NEURIPS 2025 • arXiv:2306.08586
3 citations

LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation

Fangxun Shu, Yue Liao, Lei Zhang et al.

ICLR 2025 • arXiv:2408.15881
38 citations

MoE-Gyro: Self-Supervised Over-Range Reconstruction and Denoising for MEMS Gyroscopes

Feiyang Pan, Shenghe Zheng, Chunyan Yin et al.

NEURIPS 2025 • arXiv:2506.06318
2 citations

MoFRR: Mixture of Diffusion Models for Face Retouching Restoration

Jiaxin Liu, Qichao Ying, Zhenxing Qian et al.

ICCV 2025 • arXiv:2507.19770

MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation

Shen Yuan, Yin Zheng, Taifeng Wang et al.

NEURIPS 2025 • arXiv:2506.14436
1 citation

MoRE-Brain: Routed Mixture of Experts for Interpretable and Generalizable Cross-Subject fMRI Visual Decoding

Yuxiang Wei, Yanteng Zhang, Xi Xiao et al.

NEURIPS 2025 • arXiv:2505.15946
5 citations

More Experts Than Galaxies: Conditionally-Overlapping Experts with Biologically-Inspired Fixed Routing

Sagi Shaier, Francisco Pereira, Katharina Kann et al.

ICLR 2025 • arXiv:2410.08003

Multiple Heads are Better than One: Mixture of Modality Knowledge Experts for Entity Representation Learning

Yichi Zhang, Zhuo Chen, Lingbing Guo et al.

ICLR 2025 • arXiv:2405.16869
10 citations

Multi-Task Vehicle Routing Solver via Mixture of Specialized Experts under State-Decomposable MDP

Yuxin Pan, Zhiguang Cao, Chengyang Gu et al.

NEURIPS 2025 • arXiv:2510.21453

NetMoE: Accelerating MoE Training through Dynamic Sample Placement

Xinyi Liu, Yujie Wang, Fangcheng Fu et al.

ICLR 2025
11 citations

No Need to Talk: Asynchronous Mixture of Language Models

Anastasiia Filippova, Angelos Katharopoulos, David Grangier et al.

ICLR 2025 • arXiv:2410.03529
3 citations

PINN Balls: Scaling Second-Order Methods for PINNs with Domain Decomposition and Adaptive Sampling

Andrea Bonfanti, Ismael Medina, Roman List et al.

NEURIPS 2025 • arXiv:2510.21262

Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts

Minh Le, Chau Nguyen, Huy Nguyen et al.

ICLR 2025 • arXiv:2410.02200
14 citations

Routing Experts: Learning to Route Dynamic Experts in Existing Multi-modal Large Language Models

Qiong Wu, Zhaoxi Ke, Yiyi Zhou et al.

ICLR 2025
7 citations

SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts

Gengze Zhou, Yicong Hong, Zun Wang et al.

ICCV 2025 • arXiv:2412.05552
5 citations

Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts

Junmo Kang, Leonid Karlinsky, Hongyin Luo et al.

ICLR 2025 • arXiv:2406.12034
19 citations

SkySense V2: A Unified Foundation Model for Multi-modal Remote Sensing

Yingying Zhang, Lixiang Ru, Kang Wu et al.

ICCV 2025 • arXiv:2507.13812
7 citations

Swift Hydra: Self-Reinforcing Generative Framework for Anomaly Detection with Multiple Mamba Models

Hoang Khoi Nguyen Do, Truc Nguyen, Malik Hassanaly et al.

ICLR 2025 • arXiv:2503.06413
2 citations

The Omni-Expert: A Computationally Efficient Approach to Achieve a Mixture of Experts in a Single Expert Model

Sohini Saha, Mezisashe Ojuba, Leslie Collins et al.

NEURIPS 2025

Time-MoE: Billion-Scale Time Series Foundation Models with Mixture of Experts

Xiaoming Shi, Shiyu Wang, Yuqi Nie et al.

ICLR 2025 • arXiv:2409.16040
194 citations

Towards Accurate and Efficient 3D Object Detection for Autonomous Driving: A Mixture of Experts Computing System on Edge

Linshen Liu, Boyan Su, Junyue Jiang et al.

ICCV 2025 • arXiv:2507.04123
1 citation

Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders

James Oldfield, Shawn Im, Sharon Li et al.

NEURIPS 2025 • arXiv:2505.21364
1 citation

VA-MoE: Variables-Adaptive Mixture of Experts for Incremental Weather Forecasting

Hao Chen, Tao Han, Song Guo et al.

ICCV 2025 • arXiv:2412.02503
3 citations

Wasserstein Distances, Neuronal Entanglement, and Sparsity

Shashata Sawmya, Linghao Kong, Ilia Markov et al.

ICLR 2025 • arXiv:2405.15756
5 citations

Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts

Onur Celik, Aleksandar Taranovic, Gerhard Neumann

ICML 2024 • arXiv:2403.06966
16 citations

A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts

Mohammed Nowaz Rabbani Chowdhury, Meng Wang, Kaoutar El Maghraoui et al.

ICML 2024 • arXiv:2405.16646
10 citations

BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation

Daeun Lee, Jaehong Yoon, Sung Ju Hwang

ICML 2024 • arXiv:2402.08712
20 citations

Boost Your NeRF: A Model-Agnostic Mixture of Experts Framework for High Quality and Efficient Rendering

Francesco Di Sario, Riccardo Renzulli, Marco Grangetto et al.

ECCV 2024 • arXiv:2407.10389
5 citations

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters

Yuhang Zhou, Zhao Zihua, Siyuan Du et al.

ICML 2024 • arXiv:2406.09679
8 citations

Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?

Huy Nguyen, Pedram Akbarian, Nhat Ho

ICML 2024 • arXiv:2401.13875
20 citations

Merging Multi-Task Models via Weight-Ensembling Mixture of Experts

Anke Tang, Li Shen, Yong Luo et al.

ICML 2024 • arXiv:2402.00433
84 citations

Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection

Alireza Ganjdanesh, Yan Kang, Yuchen Liu et al.

ECCV 2024 • arXiv:2409.15557
12 citations

MoAI: Mixture of All Intelligence for Large Language and Vision Models

Byung-Kwan Lee, Beomchan Park, Chae Won Kim et al.

ECCV 2024 • arXiv:2403.07508
34 citations

MoDE: CLIP Data Experts via Clustering

Jiawei Ma, Po-Yao Huang, Saining Xie et al.

CVPR 2024 • arXiv:2404.16030
25 citations

MoEAD: A Parameter-efficient Model for Multi-class Anomaly Detection

Shiyuan Meng, Wenchao Meng, Qihang Zhou et al.

ECCV 2024
14 citations

Multi-Task Dense Prediction via Mixture of Low-Rank Experts

Yuqi Yang, Peng-Tao Jiang, Qibin Hou et al.

CVPR 2024 • arXiv:2403.17749
60 citations

Norface: Improving Facial Expression Analysis by Identity Normalization

Hanwei Liu, Rudong An, Zhimeng Zhang et al.

ECCV 2024 • arXiv:2407.15617
14 citations

On Least Square Estimation in Softmax Gating Mixture of Experts

Huy Nguyen, Nhat Ho, Alessandro Rinaldo

ICML 2024 • arXiv:2402.02952
23 citations