
Tensor low-rank models for reinforcement learning

November 6 @ 4:00 pm - 5:00 pm CST

Abstract: Reinforcement Learning (RL) has emerged as a promising paradigm for addressing sequential optimization problems when the dynamics of the underlying systems are unknown. The primary objective in RL is to learn a policy that maximizes expected future rewards, as captured by value functions. This is typically achieved by learning the optimal value functions or, alternatively, the optimal policy. The performance of RL algorithms is often limited by the choice of model, which depends strongly on the specific problem. However, a common feature of many RL problems is that the optimal value functions and policies tend to be low rank. Motivated by this observation, this talk explores low-rank modeling as a general tool for RL problems. Specifically, we demonstrate how low-rank matrix and tensor models can approximate both value functions and policies. Additionally, we show how low-rank models can be applied to alternative setups, such as multi-task RL. This approach results in parsimonious algorithms that balance the rapid convergence of simple linear models with the high reward potential of neural networks.
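To make the general idea concrete, the following is a minimal illustrative sketch of low-rank value-function modeling, not the speaker's algorithm: the Q-function over a finite state-action space is parameterized as a rank-r matrix factorization Q(s, a) ≈ U[s] · V[a], and the two factors are updated with stochastic TD steps instead of maintaining a full Q-table. All sizes, names, and step sizes here are assumptions for illustration.

```python
import numpy as np

# Hypothetical toy sizes (not from the talk): |S| states, |A| actions, rank r.
S, A, r = 50, 5, 3
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((S, r))   # state factors, |S| x r
V = 0.1 * rng.standard_normal((A, r))   # action factors, |A| x r
alpha, gamma = 0.05, 0.95               # assumed step size and discount factor

def q(s, a):
    """Low-rank estimate of Q(s, a) as an inner product of factors."""
    return U[s] @ V[a]

def td_update(s, a, reward, s_next):
    """One stochastic Q-learning step on the factors, O(r) parameters per update."""
    target = reward + gamma * max(q(s_next, a2) for a2 in range(A))
    delta = target - q(s, a)            # TD error
    u_old = U[s].copy()                 # keep old value; U[s] is updated first
    U[s] += alpha * delta * V[a]        # gradient of Q(s, a) w.r.t. U[s] is V[a]
    V[a] += alpha * delta * u_old       # gradient of Q(s, a) w.r.t. V[a] is U[s]
```

The same pattern extends from matrices to tensors when the state is itself multi-dimensional, with the factorization applied along each mode; the point of the sketch is only that the number of learned parameters scales with r(|S| + |A|) rather than |S| x |A|.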
Co-sponsored by: Rice University ECE Department Seminar
Speaker(s): Antonio G. Marques
Agenda:
Presentation from 4:00 pm to 5:00 pm CST
Room: 1064, Bldg: Duncan Hall, Rice University, 6100 Main Street, Houston, Texas, United States, 77005