Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116822
Title: Energy-efficient LLM training in GPU datacenters with immersion cooling systems
Authors: Zhu, S 
Wang, D 
Issue Date: 2025
Source: In E-ENERGY '25: Proceedings of the 16th ACM International Conference on Future and Sustainable Energy Systems, pp. 407-414. New York, NY: The Association for Computing Machinery, 2025
Abstract: With the growth of AI applications, the energy consumption of datacenters that run AI jobs is rising rapidly. A datacenter's overall energy consumption is closely linked to that of its cooling system. Recently, immersion cooling technologies have advanced significantly, allowing servers to be directly immersed in dielectric cooling liquid (coolant). However, there is a lack of understanding of how the performance of AI jobs is affected by immersion cooling systems. While the physics behind immersion cooling is well understood, in this paper we observe two key restricting factors: (1) the boiling state of the coolant, and (2) the coolant's heat removal rate may not match the heat generation rate of the GPUs, triggering the GPUs' thermal-throttle mechanisms. In this paper, we study energy-efficient, delay-ensured computing of large language model (LLM) training jobs over a cluster of GPUs in immersion cooling systems. We model the thermal characteristics of the system (e.g., heat generation, heat removal, and temperature) and develop an algorithm with workload assignment and frequency scaling to avoid the delay incurred by thermal-throttle mechanisms and to execute workloads at energy-efficient frequencies. In our evaluation, we simulate the computational fluid dynamics (CFD) of the immersion cooling systems using Ansys Fluent. We show that we outperform baseline algorithms by up to 53.2% in energy consumption and 22.5% in delay.
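The core idea of frequency scaling under a cooling constraint can be illustrated with a minimal sketch. This is not the paper's algorithm; the cubic power model, the frequency set, and all numbers below are illustrative assumptions. The sketch picks the most energy-efficient GPU frequency that finishes a job by its deadline while keeping steady-state heat generation below the coolant's heat-removal rate, so thermal throttling is never triggered:

```python
def dynamic_power(freq_ghz, c=50.0):
    """Illustrative power model: dynamic power grows roughly
    cubically with frequency (P ~ C * f^3)."""
    return c * freq_ghz ** 3

def pick_frequency(work_gcycles, deadline_s, removal_rate_w,
                   freqs_ghz=(1.0, 1.2, 1.4, 1.6, 1.8, 2.0)):
    """Return (frequency, energy) for the most energy-efficient
    feasible frequency, or None if no frequency is feasible.

    A frequency is feasible if the job finishes by the deadline AND
    heat generation does not exceed what the coolant can remove
    (so the GPU never thermally throttles).
    """
    best = None
    for f in sorted(freqs_ghz):
        runtime = work_gcycles / f          # seconds to finish the job
        power = dynamic_power(f)            # watts generated at this frequency
        if runtime <= deadline_s and power <= removal_rate_w:
            energy = power * runtime        # joules consumed by the job
            if best is None or energy < best[1]:
                best = (f, energy)
    return best

# Example: a job of 1000 Gcycles, a 700 s deadline, and a coolant
# that removes at most 300 W. Low frequencies miss the deadline;
# the highest frequency exceeds the removal rate; 1.6 GHz wins.
print(pick_frequency(1000, 700, 300))
```

In practice the trade-off is exactly this shape: slower frequencies save energy (power falls cubically while runtime grows only linearly) but risk missing the deadline, while faster frequencies risk exceeding the coolant's heat-removal capacity and triggering throttling, which the paper's algorithm is designed to avoid.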
Keywords: Immersion Cooling
LLM Training
Thermal Control
Publisher: The Association for Computing Machinery
ISBN: 979-8-4007-1125-1
DOI: 10.1145/3679240.3734609
Description: 16th ACM International Conference on Future and Sustainable Energy Systems, Rotterdam, Netherlands, June 17-20, 2025
Rights: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0).
©2025 Copyright held by the owner/author(s).
The following publication, Zhu, S., & Wang, D. (2025). Energy-efficient LLM training in GPU datacenters with immersion cooling systems. In Proceedings of the 16th ACM International Conference on Future and Sustainable Energy Systems, is available at https://doi.org/10.1145/3679240.3734609.
Appears in Collections:Conference Paper

Files in This Item:
3679240.3734609.pdf (3.15 MB, Adobe PDF)
Open Access Information:
Status: open access
File Version: Version of Record