Poster "partial observability" Papers

18 papers found

COMBO: Compositional World Models for Embodied Multi-Agent Cooperation

Hongxin Zhang, Zeyuan Wang, Qiushi Lyu et al.

ICLR 2025 · arXiv:2404.10775
37 citations

DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation

Jiangran Lyu, Ziming Li, Xuesong Shi et al.

ICCV 2025 · arXiv:2503.16806
14 citations

Exponential Topology-enabled Scalable Communication in Multi-agent Reinforcement Learning

Xinran Li, Xiaolu Wang, Chenjia Bai et al.

ICLR 2025 · arXiv:2502.19717
5 citations

Mixture of Attentions For Speculative Decoding

Matthieu Zimmer, Milan Gritta, Gerasimos Lampouras et al.

ICLR 2025 · arXiv:2410.03804
14 citations

Multi-Environment POMDPs: Discrete Model Uncertainty Under Partial Observability

Eline M. Bovy, Caleb Probine, Marnix Suilen et al.

NeurIPS 2025 · arXiv:2510.23744

On Evaluating Policies for Robust POMDPs

Merlijn Krale, Eline M. Bovy, Maris F. L. Galesloot et al.

NeurIPS 2025

On Minimizing Adversarial Counterfactual Error in Adversarial Reinforcement Learning

Roman Belaire, Arunesh Sinha, Pradeep Varakantham

ICLR 2025
1 citation

Predictive Coding Enhances Meta-RL To Achieve Interpretable Bayes-Optimal Belief Representation Under Partial Observability

Po-Chen Kuo, Han Hou, Will Dabney et al.

NeurIPS 2025 · arXiv:2510.22039

Quantifying Generalisation in Imitation Learning

Nathan Gavenski, Odinaldo Rodrigues

NeurIPS 2025 · arXiv:2509.24784

Real-World Reinforcement Learning of Active Perception Behaviors

Edward Hu, Jie Wang, Xingfang Yuan et al.

NeurIPS 2025 · arXiv:2512.01188

Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits

Ziyi Zhang, Yorie Nakahira, Guannan Qu

NeurIPS 2025
1 citation

Student-Informed Teacher Training

Nico Messikommer, Jiaxu Xing, Elie Aljalbout et al.

ICLR 2025 · arXiv:2412.09149
6 citations

Trajectory-Class-Aware Multi-Agent Reinforcement Learning

Hyungho Na, Kwanghyeon Lee, Sumin Lee et al.

ICLR 2025 · arXiv:2503.01440
1 citation

A Sparsity Principle for Partially Observable Causal Representation Learning

Danru Xu, Dingling Yao, Sébastien Lachapelle et al.

ICML 2024 · arXiv:2403.08335
22 citations

How to Explore with Belief: State Entropy Maximization in POMDPs

Riccardo Zamboni, Duilio Cirino, Marcello Restelli et al.

ICML 2024 · arXiv:2406.02295
6 citations

Learning to Play Atari in a World of Tokens

Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou

ICML 2024 · arXiv:2406.01361
6 citations

Model-based Reinforcement Learning for Confounded POMDPs

Mao Hong, Zhengling Qi, Yanxun Xu

ICML 2024

Rethinking Transformers in Solving POMDPs

Chenhao Lu, Ruizhe Shi, Yuyao Liu et al.

ICML 2024 · arXiv:2405.17358
9 citations