Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/108689
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Building Environment and Energy Engineering | - |
| dc.creator | Xu, W | - |
| dc.creator | Li, Y | - |
| dc.creator | He, G | - |
| dc.creator | Xu, Y | - |
| dc.creator | Gao, W | - |
| dc.date.accessioned | 2024-08-27T04:40:01Z | - |
| dc.date.available | 2024-08-27T04:40:01Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/108689 | - |
| dc.language.iso | en | en_US |
| dc.publisher | MDPI AG | en_US |
| dc.rights | © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
| dc.rights | The following publication Xu W, Li Y, He G, Xu Y, Gao W. Performance Assessment and Comparative Analysis of Photovoltaic-Battery System Scheduling in an Existing Zero-Energy House Based on Reinforcement Learning Control. Energies. 2023; 16(13):4844 is available at https://doi.org/10.3390/en16134844. | en_US |
| dc.subject | Battery storage | en_US |
| dc.subject | Energy cost | en_US |
| dc.subject | PV consumption | en_US |
| dc.subject | Reinforcement learning | en_US |
| dc.subject | Reward design | en_US |
| dc.title | Performance assessment and comparative analysis of photovoltaic-battery system scheduling in an existing zero-energy house based on reinforcement learning control | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 16 | - |
| dc.identifier.issue | 13 | - |
| dc.identifier.doi | 10.3390/en16134844 | - |
| dcterms.abstract | The development of distributed renewable energy resources and smart energy management are efficient approaches to decarbonizing building energy systems. Reinforcement learning (RL) is a data-driven control algorithm that learns a control policy from large amounts of training data. However, this learning process is generally inefficient when trained on real-world stochastic data. To address this challenge, this study proposes a model-based RL approach to optimize the operation of existing zero-energy houses, considering PV generation consumption and energy costs. The model-based approach exploits knowledge of the system dynamics, which improves learning efficiency. A reward function is designed that accounts for the physical constraints of battery storage, photovoltaic (PV) production feed-in profit, and energy cost. Measured data from a zero-energy house are used to train and test the proposed RL agents, including Q-learning, deep Q network (DQN), and deep deterministic policy gradient (DDPG) agents. The results show that the proposed RL agents achieve fast convergence during training. In comparison with a rule-based strategy, test cases verify the cost-effectiveness of the proposed RL approaches in scheduling the hybrid energy system under different scenarios. The comparative analysis of test periods shows that the DQN agent delivers better energy cost savings than Q-learning, while the Q-learning agent controls the battery more flexibly in response to real-time electricity price fluctuations. The DDPG algorithm achieves the highest PV self-consumption ratio, 49.4%, with a self-sufficiency ratio of 36.7%, and outperforms rule-based operation by 7.2% in energy cost during test periods. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Energies, July 2023, v. 16, no. 13, 4844 | - |
| dcterms.isPartOf | Energies | - |
| dcterms.issued | 2023-07 | - |
| dc.identifier.scopus | 2-s2.0-85164771142 | - |
| dc.identifier.eissn | 1996-1073 | - |
| dc.identifier.artn | 4844 | - |
| dc.description.validate | 202408 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | China National Key R&D Program ‘Research on the Energy Efficiency and Health Performance Improvement of Building Operations based on Lifecycle Carbon Emissions Reduction’; Shandong Natural Science Foundation ‘Research on Flexible District Integrated Energy System under High Penetration Level of Renewable Energy’; Xiangjiang Plan ‘Development of Smart Building Management Technologies Towards Carbon Neutrality’ | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
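The abstract describes tabular Q-learning for battery scheduling with a reward built from battery constraints, PV feed-in profit, and energy cost. A minimal sketch of that kind of agent is shown below; the state/action discretization, tariff, PV profile, and reward weights are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SOC = 11            # battery state of charge discretized into 11 bins
ACTIONS = (-1, 0, 1)  # discharge / idle / charge one SoC bin per step
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_SOC, len(ACTIONS)))

def reward(soc, action, price, pv_surplus):
    """Toy reward: grid purchases cost `price`, charging from surplus PV
    is rewarded, and physically infeasible actions are penalized."""
    nxt = soc + action
    if nxt < 0 or nxt >= N_SOC:      # physical battery constraint
        return -10.0
    if action == 1 and pv_surplus:   # charge from free PV surplus
        return 1.0
    if action == 1:                  # charge from the grid
        return -price
    if action == -1:                 # discharge avoids a grid purchase
        return price
    return 0.0

def step(soc, action):
    return int(np.clip(soc + action, 0, N_SOC - 1))

for episode in range(500):
    soc = int(rng.integers(N_SOC))
    for t in range(24):                       # one day, hourly steps
        price = 1.0 if 8 <= t < 20 else 0.3   # toy peak/off-peak tariff
        pv = 10 <= t < 15                     # toy midday PV surplus
        a = int(rng.integers(3)) if rng.random() < EPS else int(np.argmax(Q[soc]))
        r = reward(soc, ACTIONS[a], price, pv)
        nxt = step(soc, ACTIONS[a])
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[soc, a] += ALPHA * (r + GAMMA * Q[nxt].max() - Q[soc, a])
        soc = nxt
```

After training, the greedy policy avoids discharging an empty battery (the constraint penalty dominates) and learns to charge when PV surplus is free; the paper's DQN and DDPG agents replace the table `Q` with neural-network function approximators over continuous states.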
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| energies-16-04844.pdf | | 8.89 MB | Adobe PDF |
Page views: 114 (as of Feb 9, 2026)
Downloads: 42 (as of Feb 9, 2026)
Scopus™ citations: 13 (as of May 8, 2026)
Web of Science™ citations: 3 (as of Feb 13, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.