Deep Reinforcement Learning-Based Voltage Control to Deal with Model Uncertainties in Distribution Networks

Abstract
This paper addresses the voltage control problem in medium-voltage distribution networks. The objective is to cost-efficiently maintain the voltage profile within a safe range, in the presence of uncertainties in both the future operating conditions and the physical parameters of the system. Indeed, the voltage profile depends not only on the fluctuating renewable-based power generation and load demand, but also on the physical parameters of the system components. In reality, the characteristics of loads, lines and transformers are subject to complex and dynamic dependencies, which are difficult to model. In such a context, the quality of the control strategy depends on the accuracy of the power flow representation, which requires capturing the non-linear behavior of the power network. Relying on detailed analytical models (which are themselves subject to uncertainties) imposes a high computational burden that is incompatible with the real-time constraints of the voltage control task. To address this issue, while avoiding arbitrary modeling approximations, we leverage a deep reinforcement learning model to provide autonomous grid operational control. Results show that the proposed model-free approach offers a promising alternative for finding a compromise between computation time, conservativeness and economic performance.
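The model-free principle invoked in the abstract can be illustrated with a deliberately simplified sketch: an RL agent learns a transformer tap setting that keeps the voltage near 1.0 p.u. without ever seeing the underlying network model. Everything below (the toy environment, the 0.02 p.u./tap sensitivity, the reward shaping) is invented for illustration and is not the paper's actual formulation, which applies deep RL to a full medium-voltage distribution network.

```python
import random

# Hypothetical toy environment: one controllable transformer tap.
# The voltage "physics" is hidden from the agent (model-free setting).
class ToyVoltageEnv:
    TAPS = [-2, -1, 0, 1, 2]  # discrete tap positions (the action space)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def step(self, tap):
        # Hidden dynamics: voltage rises ~0.02 p.u. per tap step, plus
        # unmodeled load/generation noise the agent never observes.
        noise = self.rng.uniform(-0.01, 0.01)
        voltage = 0.96 + 0.02 * tap + noise
        # Reward: stay close to 1.0 p.u.; small penalty on extreme taps.
        reward = -abs(voltage - 1.0) - 0.001 * abs(tap)
        return voltage, reward


def train(env, episodes=2000, alpha=0.1, eps=0.2):
    """Epsilon-greedy value learning on a single-state (bandit-like) problem."""
    q = {tap: 0.0 for tap in env.TAPS}
    for _ in range(episodes):
        if env.rng.random() < eps:
            tap = env.rng.choice(env.TAPS)   # explore
        else:
            tap = max(q, key=q.get)          # exploit current estimate
        _, reward = env.step(tap)
        q[tap] += alpha * (reward - q[tap])  # incremental value update
    return q


env = ToyVoltageEnv()
q_values = train(env)
best_tap = max(q_values, key=q_values.get)   # tap 2 drives voltage toward 1.0 p.u.
```

The agent converges on the tap that best compensates the hidden voltage drop purely from observed rewards, which is the essence of the model-free compromise between accuracy and computation time the abstract describes; the paper replaces this toy tabular update with a deep network over a realistic grid state.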

This publication has 34 references indexed in Scilit.