Reinforcement Learning Across Realistic Communication Channels

Speaker: Vijay Gupta
Affiliation: Professor, Electrical and Computer Engineering

Abstract: Federated and distributed reinforcement learning have both been proposed to reduce the data hunger of traditional reinforcement learning algorithms. However, as we know from the traditional distributed and networked control literature, constraints on information availability and on sharing data across realistic communication channels can significantly affect closed-loop stability and performance. We consider such effects in reinforcement learning. We show that information patterns known to be tractable in traditional distributed control, such as partially nested patterns, remain tractable in reinforcement learning. On the other hand, while imperfect communication generally degrades the performance of RL algorithms, we also establish (perhaps surprisingly) that under suitable conditions on the channel, one can design coding schemes that incur no loss in performance.

Bio: