open access publication

Article, 2024

Data-driven hierarchical energy management in multi-integrated energy systems considering integrated demand response programs and energy storage system participation based on MADRL approach

SUSTAINABLE CITIES AND SOCIETY, ISSN 2210-6707, Volume 103, 10.1016/j.scs.2024.105264

Contributors

Khodadadi, Amin (0000-0003-2118-0496) [1] [2]; Adinehpour, Sara [1] [2]; Sepehrzad, Reza (0000-0001-7911-581X, Corresponding author) [3]; Al-Durra, Ahmed [4]; Anvari-Moghadam, Amjad (0000-0002-5505-3252) [5]

Affiliations

  1. [1] Arman Niroo Hormozgan Co, Dept Elect Engn, Bandar Abbas, Iran [NORA names: Iran; Asia, Middle East]
  2. [2] Islamic Azad Univ, Dept Elect Engn, Sci & Res Branch, Tehran, Iran [NORA names: Iran; Asia, Middle East]
  3. [3] Politecn Milano Univ, Dept Elect Engn, Milan, Italy [NORA names: Italy; Europe, EU; OECD]
  4. [4] Khalifa Univ, Adv Power & Energy Ctr, EECS Dept, Abu Dhabi, U Arab Emirates [NORA names: United Arab Emirates; Asia, Middle East]
  5. [5] Aalborg Univ, Dept Energy AAU Energy, DK-9220 Aalborg, Denmark [NORA names: AAU Aalborg University; University; Denmark; Europe, EU; Nordic; OECD]

Abstract

This study presents an intelligent, data-driven hierarchical energy management approach for the multi-integrated energy system (MIES) structure that considers the optimal participation of renewable energy resources (RERs) and energy storage systems (ESSs), together with the execution of integrated demand response (IDR) programs based on wholesale and retail market signals. The proposed objective function is formulated at four levels: minimizing operating costs, minimizing environmental pollution costs, minimizing risk costs, and reducing the destructive effects of cyberattacks such as false data injection (FDI). The approach is implemented in a central-controller and local-controller structure and is based on multi-agent deep reinforcement learning (MADRL). The MADRL model is formulated with Markov decision process equations and solved by multi-agent soft actor-critic and deep Q-learning algorithms in two stages: offline training and online operation. Results across different scenarios show reductions of 19.51 % in operation cost, 19.69 % in risk cost, 24 % in cyber-security cost, and 20.24 % in pollution cost. The proposed approach takes an important step toward meeting the challenges and requirements of smart cities, offering fast response, high accuracy, and reduced computational time and burden.
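For illustration only, the sketch below shows how one local-controller agent could be trained with deep Q-learning on a discretized ESS dispatch task during an offline training stage. The state features, action discretization, reward, and network sizes are assumptions made for this example and are not taken from the paper's MADRL formulation, which also employs multi-agent soft actor-critic agents.

```python
# Hypothetical, minimal deep Q-learning sketch for a single local-controller
# agent dispatching an ESS; all dimensions and the reward are placeholders.
import random
import torch
import torch.nn as nn

STATE_DIM = 6   # assumed features, e.g. load, RER output, prices, state of charge
N_ACTIONS = 5   # assumed discretized charge/discharge levels
GAMMA = 0.95    # discount factor of the Markov decision process

class QNetwork(nn.Module):
    """Maps an observed system state to Q-values over ESS dispatch actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy action selection used during offline training."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    """One temporal-difference step toward the Bellman target."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        bootstrap = 0.0 if done else GAMMA * float(target_net(next_state).max())
    target = torch.tensor(reward + bootstrap)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Illustrative single transition; random data stands in for the MIES environment.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a = select_action(s)
td_update(s, a, reward=-1.0, next_state=s_next, done=False)
```

In the hierarchical scheme described in the abstract, such trained agents would act online at the local-controller level while a central controller coordinates them; the coordination logic is not shown here.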

Keywords

Intelligent and hierarchical energy management, Multi-agent deep reinforcement learning, Multi-integrated energy system, Smart cities

Data Provider: Clarivate