"partial observability" Papers
27 papers found
COMBO: Compositional World Models for Embodied Multi-Agent Cooperation
Hongxin Zhang, Zeyuan Wang, Qiushi Lyu et al.
DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation
Jiangran Lyu, Ziming Li, Xuesong Shi et al.
Exponential Topology-enabled Scalable Communication in Multi-agent Reinforcement Learning
Xinran Li, Xiaolu Wang, Chenjia Bai et al.
Forecasting in Offline Reinforcement Learning for Non-stationary Environments
Suzan Ece Ada, Georg Martius, Emre Ugur et al.
Mixture of Attentions For Speculative Decoding
Matthieu Zimmer, Milan Gritta, Gerasimos Lampouras et al.
Multi-Environment POMDPs: Discrete Model Uncertainty Under Partial Observability
Eline M. Bovy, Caleb Probine, Marnix Suilen et al.
On Evaluating Policies for Robust POMDPs
Merlijn Krale, Eline M. Bovy, Maris F. L. Galesloot et al.
On Minimizing Adversarial Counterfactual Error in Adversarial Reinforcement Learning
Roman Belaire, Arunesh Sinha, Pradeep Varakantham
On Shallow Planning Under Partial Observability
Randy Lefebvre, Audrey Durand
Predictive Coding Enhances Meta-RL To Achieve Interpretable Bayes-Optimal Belief Representation Under Partial Observability
Po-Chen Kuo, Han Hou, Will Dabney et al.
Quantifying Generalisation in Imitation Learning
Nathan Gavenski, Odinaldo Rodrigues
Real-World Reinforcement Learning of Active Perception Behaviors
Edward Hu, Jie Wang, Xingfang Yuan et al.
REVECA: Adaptive Planning and Trajectory-Based Validation in Cooperative Language Agents Using Information Relevance and Relative Proximity
SeungWon Seo, SeongRae Noh, Junhyeok Lee et al.
Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives
Marius Belly, Nathanaël Fijalkow, Hugo Gimbert et al.
Stabilizing LTI Systems under Partial Observability: Sample Complexity and Fundamental Limits
Ziyi Zhang, Yorie Nakahira, Guannan Qu
Student-Informed Teacher Training
Nico Messikommer, Jiaxu Xing, Elie Aljalbout et al.
To Distill or Decide? Understanding the Algorithmic Trade-off in Partially Observable RL
Yuda Song, Dhruv Rohatgi, Aarti Singh et al.
Trajectory-Class-Aware Multi-Agent Reinforcement Learning
Hyungho Na, Kwanghyeon Lee, Sumin Lee et al.
A Sparsity Principle for Partially Observable Causal Representation Learning
Danru Xu, Dingling Yao, Sébastien Lachapelle et al.
Constrained Bayesian Optimization under Partial Observations: Balanced Improvements and Provable Convergence
Shengbo Wang, Ke Li
FoX: Formation-Aware Exploration in Multi-Agent Reinforcement Learning
Yonghyeon Jo, Sunwoo Lee, Junghyuk Yum et al.
How to Explore with Belief: State Entropy Maximization in POMDPs
Riccardo Zamboni, Duilio Cirino, Marcello Restelli et al.
Learning the Causal Structure of Networked Dynamical Systems under Latent Nodes and Structured Noise
Augusto Santos, Diogo Rente, Rui Seabra et al.
Learning to Play Atari in a World of Tokens
Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou
Model-based Reinforcement Learning for Confounded POMDPs
Mao Hong, Zhengling Qi, Yanxun Xu
Rethinking Transformers in Solving POMDPs
Chenhao Lu, Ruizhe Shi, Yuyao Liu et al.
Task Planning for Object Rearrangement in Multi-Room Environments
Karan Mirakhor, Sourav Ghosh, Dipanjan Das et al.