Reinforcement Learning for Voltage Control-based Ancillary Service using Thermostatically Controlled Loads


dc.contributor.author Lukianykhin, Oleh
dc.date.accessioned 2020-02-21T14:47:11Z
dc.date.available 2020-02-21T14:47:11Z
dc.date.issued 2020
dc.identifier.citation Lukianykhin, Oleh. Reinforcement Learning for Voltage Control-based Ancillary Service using Thermostatically Controlled Loads : Master Thesis : manuscript rights / Oleh Lukianykhin ; Supervisor Dr. Tetiana Bogodorova ; Ukrainian Catholic University, Department of Computer Sciences. – Lviv : [s.n.], 2020. – 53 p. : ill. uk
dc.identifier.uri http://er.ucu.edu.ua/handle/1/2038
dc.language.iso en uk
dc.subject Reinforcement Learning uk
dc.subject Voltage Control-based Ancillary Service uk
dc.subject Q-learning uk
dc.title Reinforcement Learning for Voltage Control-based Ancillary Service using Thermostatically Controlled Loads uk
dc.type Preprint uk
dc.status Published for the first time uk
dc.description.abstracten Advances in demand response for energy imbalance management (EIM) ancillary services can change future power systems. These changes are a subject of research in academia and industry. Although a promising part of this research is the application of machine learning methods to shape the future power systems domain, the domain has not yet fully benefited from this application. Thus, the main objective of the presented project is to investigate and assess opportunities for applying reinforcement learning (RL) to achieve such advances by developing an intelligent voltage control-based ancillary service that uses thermostatically controlled loads (TCLs). Two stages of the project are presented: a proof of concept (PoC) and extensions. The PoC includes modelling and training of a voltage controller utilising Q-learning, chosen for its efficiency achieved without unnecessary sophistication. The simplest power system relevant for demand response, consisting of 20 TCLs providing the ancillary service, is considered in the experiments. The power system model is developed with Modelica tools. The extensions aim to exceed PoC performance by applying advanced RL methods: a Q-learning modification that uses a window of environment states as an input (WIQL), smart discretisation strategies for the environment's continuous state space, and a deep Q-network (DQN) with experience replay. To investigate particularities of the developed controller, modifications to the experimental setup, such as testing the controller for longer than it was trained and varying the simulation start time, are considered. An improvement of 4% in median performance is achieved compared to the competing analytical approach: optimal constant control chosen using a simulation of the whole time interval for the same voltage controller design. The presented results and corresponding discussions can be useful both for further work on RL-driven voltage controllers for EIM and for other applications of RL in the power system domain using Modelica models. uk
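The Q-learning approach named in the abstract can be sketched as a tabular Bellman update over a discretised state space with epsilon-greedy exploration. The following is a minimal illustrative sketch only: the function names, hyperparameter values, and integer state/action encoding are assumptions for demonstration and do not reproduce the thesis's Modelica-based power system model or controller design.

```python
import random


def q_learning_step(q, state, action, reward, next_state, n_actions,
                    alpha=0.1, gamma=0.99):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    `q` is a dict mapping (state, action) pairs to values; unseen pairs
    default to 0.0. States and actions are assumed to be discretised
    to integers (hypothetical encoding, not the thesis's).
    """
    best_next = max(q.get((next_state, a), 0.0) for a in range(n_actions))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)


def epsilon_greedy(q, state, n_actions, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q.get((state, a), 0.0))
```

In a controller of this kind, each simulation step would discretise the observed voltage into `state`, select an action (e.g. a control set-point) with `epsilon_greedy`, and apply `q_learning_step` with the resulting reward; the WIQL variant mentioned in the abstract would instead use a tuple of the last few states as the key.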

