Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/110995
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.contributor | Mainland Development Office | - |
| dc.creator | Guo, S | - |
| dc.creator | Wang, H | - |
| dc.creator | Zhan, Y | - |
| dc.date.accessioned | 2025-02-17T01:35:01Z | - |
| dc.date.available | 2025-02-17T01:35:01Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/110995 | - |
| dc.language.iso | zh | en_US |
| dc.publisher | 中华人民共和国国家知识产权局 (China National Intellectual Property Administration) | en_US |
| dc.rights | Assignee: 香港理工大学深圳研究院 (Shenzhen Research Institute, The Hong Kong Polytechnic University) | en_US |
| dc.title | Distributed machine learning method and device, terminal equipment and storage medium | en_US |
| dc.type | Patent | en_US |
| dc.description.otherinformation | Inventor name used in this publication: 郭嵩 | en_US |
| dc.description.otherinformation | Inventor name used in this publication: 王号召 | en_US |
| dc.description.otherinformation | Inventor name used in this publication: 詹玉峰 | en_US |
| dc.description.otherinformation | Title in Traditional Chinese: 一種分布式機器學習方法、裝置、終端設備及存儲介質 | en_US |
| dcterms.abstract | The invention relates to the field of computer technology and provides a distributed machine learning method, a device, terminal equipment, and a storage medium. The method comprises: training the initial model of the current round to obtain a first gradient for that model; determining whether global parameters sent by the parameter server are received within a first time period, where the first time period is the period in which the initial model is trained; and, if no global parameters are received within the first time period, deriving a candidate model from the first gradient and using it as the initial model for the next round of training. By deriving a candidate model from the first gradient when no global parameters have arrived, and continuing to train that candidate model, the worker node remains in a training state while the parameter server computes the global parameters and transmits them to the worker node, instead of having to wait for the global parameters before resuming training. This saves model training time and makes training faster. | - |
| dcterms.abstract | 本申请适用于计算机技术领域,提供了一种分布式机器学习方法、装置、终端设备及存储介质,该方法包括:对本次训练的初始模型进行训练,得到初始模型对应的第一梯度;判断在第一时间段内是否接收到参数服务器发送的全局参数,其中,第一时间段为本次训练初始模型的时间段;若在第一时间段内未接收到全局参数,则基于第一梯度,得到候选模型,并将候选模型作为下一次训练的初始模型;本申请在没有接收到全局参数时使用第一梯度得到候选模型,并对候选模型继续训练,使参数服务器在计算全局参数和向工作节点传输全局参数的时间内,工作节点一直处于训练的状态,不用必须接收到全局参数后再继续训练,节约了模型训练的时间,使模型训练速度更快。 | - |
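The worker-side loop described in the abstract (train, check non-blockingly for global parameters, and fall back to a self-derived candidate model when none have arrived) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the toy gradient, the learning rate, and the use of a `queue.Queue` to stand in for the parameter server channel are all assumptions.

```python
import queue

def compute_gradient(model, batch):
    # Toy per-parameter gradient of 0.5 * (w - x)^2; stands in for the
    # "first gradient" obtained by training the initial model.
    return [w - x for w, x in zip(model, batch)]

def worker_loop(model, batches, param_queue, lr=0.5):
    """One worker node; param_queue stands in for the parameter server link."""
    for batch in batches:
        grad = compute_gradient(model, batch)  # first gradient for this round
        try:
            # Non-blocking check: did global parameters arrive during the
            # first time period (i.e., while this round was training)?
            model = param_queue.get_nowait()
        except queue.Empty:
            # No global parameters yet: build a candidate model from the
            # first gradient and use it as the next round's initial model,
            # so the worker never sits idle waiting for the server.
            model = [w - lr * g for w, g in zip(model, grad)]
    return model
```

For example, with no global parameters ever arriving, the worker keeps refining its own candidate model each round; when the queue holds server-sent parameters, the worker adopts them and continues from there.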
| dcterms.accessRights | open access | en_US |
| dcterms.alternative | 一种分布式机器学习方法、装置、终端设备及存储介质 | - |
| dcterms.bibliographicCitation | China Patent ZL 202011016938.4 | - |
| dcterms.issued | 2024-09-10 | - |
| dc.description.country | China | - |
| dc.description.validate | 202502 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | NA | en_US |
| Appears in Collections: | Patent | |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| ZL202011016938.4.pdf |  | 1.45 MB | Adobe PDF | View/Open |
Page views: 4 (as of Apr 14, 2025)
Downloads: 44 (as of Apr 14, 2025)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.