Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/97822
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Industrial and Systems Engineering | - |
dc.creator | Li, S | - |
dc.creator | Zheng, P | - |
dc.creator | Fan, J | - |
dc.date.accessioned | 2023-03-23T03:33:08Z | - |
dc.date.available | 2023-03-23T03:33:08Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/97822 | - |
dc.language.iso | zh | en_US |
dc.publisher | 中华人民共和国国家知识产权局 (China National Intellectual Property Administration) | en_US |
dc.rights | Assignee: 香港理工大学深圳研究院 (Shenzhen Research Institute of The Hong Kong Polytechnic University) | en_US |
dc.title | Man-machine cooperation method and system based on multi-modal behavior online prediction | en_US |
dc.type | Patent | en_US |
dc.description.otherinformation | Inventor name used in this publication: 李树飞 | en_US |
dc.description.otherinformation | Inventor name used in this publication: 郑湃 | en_US |
dc.description.otherinformation | Inventor name used in this publication: 范峻铭 | en_US |
dc.description.otherinformation | Title in Traditional Chinese: 一種基於多模態行為在線預測的人機協作方法和系統 | en_US |
dcterms.abstract | The invention discloses a man-machine cooperation method and system based on multi-modal behavior online prediction. The method comprises: obtaining video data; determining, from the video data, the visual semantic deep features and human body posture features corresponding to the operator's behavior, where the visual semantic deep features reflect the spatio-temporal semantic information of that behavior in the time-sequence visual modality; determining the operator's target behavior intention from the visual semantic deep features and the posture features; and determining the execution operation and moving path of the mobile collaborative robot from the target behavior intention (an illustrative sketch of this pipeline follows the metadata table below). This addresses the problem in the prior art that manual assembly consumes a large amount of assembly time and is difficult to adapt to an industrial development mode of progressively shorter product life cycles and ever-faster product innovation. | - |
dcterms.abstract | 本发明公开了一种基于多模态行为在线预测的人机协作方法和系统,所述方法包括:获取视频数据;根据所述视频数据,确定与作业人员的人体行为所对应的视觉语义深层特征和人体姿态特征;所述视觉语义深层特征用于反映所述人体行为在时序性视觉模式下的时空间语义信息;根据所述视觉语义深层特征和所述人体姿态特征,确定所述作业人员对应的目标人体行为意图;根据所述目标人体行为意图确定移动式协作机器人对应的执行操作和移动路径。解决了现有技术中手工装配模式需要耗费大量的装配时间,难以适应工业技术体系中生命周期逐渐缩短、产品创新日益加快的发展模式的问题。 | - |
dcterms.accessRights | open access | en_US |
dcterms.alternative | 一种基于多模态行为在线预测的人机协作方法和系统 | - |
dcterms.bibliographicCitation | Chinese patent ZL202110692988.2 | - |
dcterms.issued | 2022-08-12 | - |
dc.description.country | China | - |
dc.description.validate | 202303 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.description.pubStatus | Published | en_US |
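The abstract above outlines a four-stage pipeline: capture video, extract two feature streams (visual semantic deep features and human body posture features), predict the operator's behavior intention, and derive the cooperative robot's action and moving path. The following is a minimal Python sketch of that pipeline under stated assumptions; all names (`FrameFeatures`, `extract_features`, `predict_intention`, `plan_robot_response`) and the trivial threshold rule are hypothetical placeholders for illustration, not the patented implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# video frames -> (visual semantic deep features, pose features)
# -> predicted intention -> robot action and coarse path.
from dataclasses import dataclass
from typing import List, Tuple
import random


@dataclass
class FrameFeatures:
    visual_semantic: List[float]  # deep spatio-temporal semantic features (placeholder)
    pose_keypoints: List[float]   # flattened human body posture features (placeholder)


def extract_features(video_frames: List[bytes]) -> List[FrameFeatures]:
    """Stand-in for the two feature extractors; a real system would use
    pretrained video-understanding and pose-estimation networks."""
    return [
        FrameFeatures(
            visual_semantic=[random.random() for _ in range(8)],
            pose_keypoints=[random.random() for _ in range(8)],
        )
        for _ in video_frames
    ]


def predict_intention(features: List[FrameFeatures]) -> str:
    """Fuse both modalities and map them to a behavior intention.
    An arbitrary threshold rule stands in for a learned online predictor."""
    fused = [sum(f.visual_semantic) + sum(f.pose_keypoints) for f in features]
    score = sum(fused) / max(len(fused), 1)
    return "reach_for_part" if score > 8.0 else "idle"


def plan_robot_response(intention: str) -> Tuple[str, List[str]]:
    """Map the predicted intention to a mobile cooperative robot's execution
    operation and a coarse waypoint path (placeholder for a real planner)."""
    if intention == "reach_for_part":
        return "deliver_part", ["home", "parts_bin", "workbench"]
    return "hold_position", ["home"]


if __name__ == "__main__":
    frames = [b"frame"] * 16  # stand-in for acquired video data
    feats = extract_features(frames)
    intent = predict_intention(feats)
    action, path = plan_robot_response(intent)
    print(intent, action, path)
```

The sketch only fixes the data flow between the stages named in the abstract; the feature extractors, the intention classifier, and the motion planner are the components the patent actually specifies and are stubbed out here.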
Appears in Collections: Patent
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
ZL202110692988.2.PDF | | 1.09 MB | Adobe PDF |