Deep multi-agent Reinforcement Learning for cost-efficient distributed load frequency control

Abstract
The rise of microgrid-based architectures is significantly modifying the energy control landscape in distribution systems, making distributed control mechanisms necessary to ensure reliable power system operation. In this article, the use of Reinforcement Learning techniques is proposed to implement load frequency control (LFC) without requiring a central authority. To this end, a detailed model of power system dynamic behaviour is formulated, representing individual generator dynamics, generator rate and network constraints, renewable-based generation, and realistic load realisations. The LFC problem is recast as a Markov Decision Process, and the Multi-Agent Deep Deterministic Policy Gradient algorithm is used to approximate the optimal solution of all LFC layers, that is, primary, secondary and tertiary. The proposed LFC framework operates through centralised learning and distributed implementation. In particular, there is no information exchange between generating units during operation; thus, no communication infrastructure is necessary and the information privacy of each unit is preserved. The proposed framework is validated through numerical results, which show that it can implement LFC in a distributed and cost-efficient manner.
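The centralised-learning, distributed-implementation pattern described above can be illustrated with a minimal sketch of the MADDPG structure: each generating unit holds a decentralised actor that acts on local measurements only, while a centralised critic, used only during training, observes the joint state and actions. All class names, dimensions, and the linear function approximators below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of MADDPG's centralised-critic / decentralised-actor
# structure in a toy LFC setting. Names and models are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Decentralised policy: maps an agent's LOCAL observation
    (e.g. its area's frequency deviation and power imbalance)
    to a bounded control action (e.g. a set-point adjustment)."""
    def __init__(self, obs_dim, act_dim):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs):
        return np.tanh(self.W @ obs)  # action bounded in [-1, 1]

class CentralCritic:
    """Centralised value function: during TRAINING it evaluates the
    JOINT observations and actions of all agents. It is discarded at
    execution time, so no runtime communication between units is needed."""
    def __init__(self, joint_dim):
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def q_value(self, joint_obs, joint_act):
        return float(self.w @ np.concatenate([joint_obs, joint_act]))

n_agents, obs_dim, act_dim = 3, 2, 1
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents * (obs_dim + act_dim))

# Execution phase: each generator acts on its local measurements only.
local_obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
actions = [a.act(o) for a, o in zip(actors, local_obs)]

# Training phase: the critic scores the joint state-action pair,
# providing the gradient signal that shapes each local policy.
q = critic.q_value(np.concatenate(local_obs), np.concatenate(actions))
```

The key design point is that the critic's joint view exists only offline: once training ends, each unit keeps just its own actor, which is why no communication infrastructure is required and unit-level information privacy is preserved.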
