Please use this identifier to cite or link to this item:
Title: Efficient and robust photo-based methods for precise shape and pose modeling of human subjects
Authors: Zhu, Shuaiyin
Advisors: Mok, P. Y. Tracy (ITC)
Keywords: Computer animation; Human body -- Computer simulation; Fashion design -- Data processing
Issue Date: 2017
Publisher: The Hong Kong Polytechnic University
Abstract: Accurate modeling of human subjects of diverse shapes and sizes in arbitrary poses is vitally important in many research applications, for example in the development of fashion products, anthropometric studies, and computer graphics. Different methods, including scan-based, image-based and example-based approaches, have been developed over the years. However, for customizing an individual subject's shape, these methods have known limitations. Scan-based methods require expensive scanners, and subjects must be scanned in special clothing at specific locations. Image-based reconstructive methods suffer uncontrollable 3D shape errors due to oversimplified 2D-to-3D approximation. Example-based reconstructive methods generate models with a realistic appearance, but the size accuracy of the resulting models is questionable; they may not model the local shape characteristics of individuals well, and the output models often have an 'average' shape. This project proposes new and efficient methods for modeling individuals of customized sizes and shapes in arbitrary dynamic poses. The size measurements and shapes of the resulting models must be accurate enough to fulfill the specific requirements of the clothing industry for fashion applications. In addition to accurate shape modeling, methods are developed to deform the customized models into various poses in real time. A total of five methods/systems are developed in this study to realize automatic shape modeling and dynamic pose deformation.
The first method, Automatic Shape Customization of Human subjects in tight-fitting clothing ('ASCHt'), is a complete automatic pipeline for extracting body shape features from input images and customizing 3D human models. The inputs of ASCHt are two orthogonal-view photographs of the subject, and the output is a customized model of the photographed subject with precise size measurements. ASCHt requires the subjects to be photographed in tight-fitting clothing. The second method, 'ASCHa', removes this restriction on clothing type and realizes automatic shape customization for human subjects in arbitrary clothing, whether tight-fitting, normal-fitting or even loose-fitting. ASCHa incorporates an intelligent algorithm that predicts the under-the-clothes body profiles of subjects from input images in which the body profiles are covered; the subject's 3D body model is then customized according to the predicted profiles. The third method, 'ASCHp', performs automatic shape customization based on cutting-edge human parsing technology, improving the robustness, efficiency and accuracy of shape modeling of individuals. All three methods are comprehensively evaluated by experiments. The results show that the proposed methods can customize 3D models of individuals from two input images; the output models have accurate size and shape details, and their size accuracy is comparable to that of scanned models. The fourth development of this study is a system that deploys the above shape modeling methods on a client-server architecture: the shape modeling methods are implemented on the server end, which serves requests from different clients such as mobile apps, websites and standalone systems. We demonstrate this architecture in a mobile-server application. The fifth method, developed for pose modeling, is called rapid automatic pose deformation (RAPD).
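The abstract does not specify the server API, payload format, or endpoint names; a minimal sketch of the client-server pattern it describes, with a stub standing in for the server-side shape modeling pipeline, might look as follows. All names, the hex-encoded payload, and the stub `customize_shape` function are illustrative assumptions, not the thesis implementation.

```python
# Sketch of a client-server deployment: the shape modeling pipeline runs on
# the server; clients (mobile apps, websites, standalone systems) POST the
# two orthogonal-view photographs and receive the resulting model data.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def customize_shape(front_view: bytes, side_view: bytes) -> dict:
    # Placeholder for a server-side shape modeling method (e.g. ASCHp);
    # here it only reports the sizes of the two received images.
    return {"front_bytes": len(front_view), "side_bytes": len(side_view)}


class ShapeModelingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Two orthogonal-view photographs, hex-encoded for this sketch.
        model = customize_shape(bytes.fromhex(payload["front"]),
                                bytes.fromhex(payload["side"]))
        body = json.dumps(model).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass


def make_server(port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port.
    return HTTPServer(("127.0.0.1", port), ShapeModelingHandler)
```

A client would then POST its photographs to the server and render the returned model locally; keeping the heavy computation server-side is what allows thin clients such as mobile apps to use the same pipeline.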
RAPD deforms human models of various body shapes into a series of dynamic poses. It incorporates a new skeleton embedding algorithm that quickly embeds a skeleton into any customized model. With the skeleton information, customized models can be deformed into different poses based on given motion data. To correct the skin surface deformation errors of this rigid deformation, RAPD learns pose-induced non-rigid surface deformation from a dataset of registered scan models in diverse poses. By integrating RAPD with the shape modeling method ASCHp, an individual's body shape model can be deformed into various dynamic poses in real time. The proposed shape and pose modeling methods can provide competitive advantages to the fashion industry. They allow a customized model to be created fully automatically within seconds. These customized models can support the fashion industry in efficient product development, enabling seamless collaboration between design houses and off-shore manufacturing facilities. In addition, the customized models can be rapidly deformed into various poses with a realistic appearance, enabling more comprehensive fit evaluation in the development of high-performance clothing such as sportswear and functional garments. Moreover, the output models can be applied in online stores, allowing customers to visualize try-on effects before purchase, and they ease the difficulty of taking body measurements, helping customers with size selection in online clothing purchases. The technology can also be applied to niche markets such as bespoke tailoring, and to other domains such as medical and fitness applications.
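The abstract does not detail the skinning model used for the rigid, skeleton-driven deformation step. A standard basis for such deformation is linear blend skinning (LBS), in which each vertex is moved by a weighted blend of per-bone transforms; the sketch below illustrates that general technique with toy data, and is not taken from RAPD itself.

```python
import numpy as np


def linear_blend_skinning(vertices, weights, bone_transforms):
    """Rigidly deform a mesh from per-bone transforms (standard LBS).

    vertices:        (V, 3) rest-pose vertex positions
    weights:         (V, B) skinning weights, each row summing to 1
    bone_transforms: (B, 4, 4) homogeneous per-bone transforms
    """
    V = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    vh = np.hstack([vertices, np.ones((V, 1))])
    # Each bone's transform applied to every vertex: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, vh)
    # Blend the per-bone results by the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]


# Toy example: two vertices, two bones; bone 1 translates by +1 along x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])  # vertex i bound fully to bone i
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
posed = linear_blend_skinning(verts, w, T)
```

Plain LBS is known to produce skin artifacts such as volume loss near joints, which is exactly the kind of error the learned non-rigid corrective deformation described above would be needed to fix.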
Description: xx, 228 pages : color illustrations
PolyU Library Call No.: [THS] LG51 .H577P ITC 2017 Zhu
URI: http://hdl.handle.net/10397/70361
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
991021965754903411_link.htm | For PolyU Users | 167 B | HTML
991021965754903411_pira.pdf | For All Users (Non-printable) | 7.76 MB | Adobe PDF
Citations as of Jun 18, 2018
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.