Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105695
DC Field: Value
dc.contributor: Department of Computing
dc.creator: Wang, F
dc.creator: Zuo, W
dc.creator: Lin, L
dc.creator: Zhang, D
dc.creator: Zhang, L
dc.date.accessioned: 2024-04-15T07:35:57Z
dc.date.available: 2024-04-15T07:35:57Z
dc.identifier.isbn: 978-1-4673-8851-1 (Electronic)
dc.identifier.isbn: 978-1-4673-8852-8 (Print on Demand (PoD))
dc.identifier.uri: http://hdl.handle.net/10397/105695
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.rights: The following publication F. Wang, W. Zuo, L. Lin, D. Zhang and L. Zhang, "Joint Learning of Single-Image and Cross-Image Representations for Person Re-identification," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 1288-1296 is available at https://doi.org/10.1109/CVPR.2016.144.
dc.title: Joint learning of single-image and cross-image representations for person re-identification
dc.type: Conference Paper
dc.identifier.spage: 1288
dc.identifier.epage: 1296
dc.identifier.doi: 10.1109/CVPR.2016.144
dcterms.abstract: Person re-identification has usually been solved as either the matching of single-image representations (SIR) or the classification of cross-image representations (CIR). In this work, we exploit the connection between these two categories of methods and propose a joint learning framework that unifies SIR and CIR using a convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network needs to be computed only once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is kept minimal to reduce the computational burden. Therefore, the two types of representation can be jointly optimized to pursue better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method achieves favorable accuracy compared with the state of the art.
dcterms.accessRights: open access
dcterms.bibliographicCitation: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 26 June - 1 July 2016, Las Vegas, Nevada, p. 1288-1296
dcterms.issued: 2016
dc.identifier.scopus: 2-s2.0-84986260197
dc.relation.conference: IEEE Conference on Computer Vision and Pattern Recognition [CVPR]
dc.description.validate: 202402 bcch
dc.description.oa: Accepted Manuscript
dc.identifier.FolderNumber: COMP-1384
dc.description.fundingSource: RGC
dc.description.pubStatus: Published
dc.identifier.OPUS: 13932674
dc.description.oaCategory: Green (AAM)
Appears in Collections: Conference Paper

Files in This Item:
File: Wang_Joint_Learning_Single-Image.pdf
Description: Pre-Published version
Size: 2.6 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Page views: 17 (as of May 12, 2024)
Downloads: 2 (as of May 12, 2024)
SCOPUS™ Citations: 368 (as of May 17, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.