
Timilsina A., Khamesi A.R., Agate V., Silvestri S. (2021). A Reinforcement Learning Approach for User Preference-aware Energy Sharing Systems. IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, 5(3), 1138-1153 [10.1109/TGCN.2021.3077854].

A Reinforcement Learning Approach for User Preference-aware Energy Sharing Systems

Agate V.
2021-09-01

Abstract

Energy Sharing Systems (ESS) are envisioned to be the future of power systems. In these systems, consumers equipped with renewable energy generation capabilities can participate in an energy market to sell their energy. This paper proposes an ESS that, unlike previous works, takes into account the consumers’ preferences, engagement, and bounded rationality. The problem of maximizing the energy exchange under such user modeling is formulated and shown to be NP-hard. To learn user behavior, two heuristics are proposed: a Reinforcement Learning-based algorithm, which provides bounded regret, and a more computationally efficient heuristic, named BPT-K, with guaranteed termination and correctness. A comprehensive experimental analysis is conducted against state-of-the-art solutions using realistic datasets. Results show that incorporating user modeling and learning yields significant performance improvements over state-of-the-art approaches. Specifically, the proposed algorithms achieve 25% higher efficiency and 27% more transferred energy. Furthermore, the learning algorithms converge to within 5% of the optimal solution in less than 3 months of learning.
Sep 2021
Sector ING-INF/05 - Information Processing Systems
Files in this product:

A_Reinforcement_Learning_Approach_for_User_Preference-Aware_Energy_Sharing_Systems.pdf

Restricted to archive administrators

Description: Main article
Type: Publisher's version
Size: 1.92 MB
Format: Adobe PDF
10317839.pdf

Open access

Type: Pre-print
Size: 3.75 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/517594
Citations
  • Scopus: 12
  • Web of Science: 3