Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105695
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | - |
dc.creator | Wang, F | en_US |
dc.creator | Zuo, W | en_US |
dc.creator | Lin, L | en_US |
dc.creator | Zhang, D | en_US |
dc.creator | Zhang, L | en_US |
dc.date.accessioned | 2024-04-15T07:35:57Z | - |
dc.date.available | 2024-04-15T07:35:57Z | - |
dc.identifier.isbn | 978-1-4673-8851-1 (Electronic) | en_US |
dc.identifier.isbn | 978-1-4673-8852-8 (Print on Demand (PoD)) | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/105695 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.rights | The following publication F. Wang, W. Zuo, L. Lin, D. Zhang and L. Zhang, "Joint Learning of Single-Image and Cross-Image Representations for Person Re-identification," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 1288-1296 is available at https://doi.org/10.1109/CVPR.2016.144. | en_US |
dc.title | Joint learning of single-image and cross-image representations for person re-identification | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 1288 | en_US |
dc.identifier.epage | 1296 | en_US |
dc.identifier.doi | 10.1109/CVPR.2016.144 | en_US |
dcterms.abstract | Person re-identification has usually been solved as either the matching of single-image representations (SIR) or the classification of cross-image representations (CIR). In this work, we exploit the connection between these two categories of methods and propose a joint learning framework to unify SIR and CIR using a convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network needs to be computed only once for each image (in both the probe and gallery sets), and the CIR sub-network is kept shallow to reduce the computational burden. Therefore, the two types of representation can be jointly optimized to pursue better matching accuracy at moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method achieves favorable accuracy compared with the state of the art. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 26 June - 1 July 2016, Las Vegas, Nevada, p. 1288-1296 | en_US |
dcterms.issued | 2016 | - |
dc.identifier.scopus | 2-s2.0-84986260197 | - |
dc.relation.conference | IEEE Conference on Computer Vision and Pattern Recognition [CVPR] | - |
dc.description.validate | 202402 bcch | - |
dc.description.oa | Accepted Manuscript | en_US |
dc.identifier.FolderNumber | COMP-1384 | - |
dc.description.fundingSource | RGC | en_US |
dc.description.pubStatus | Published | en_US |
dc.identifier.OPUS | 13932674 | - |
dc.description.oaCategory | Green (AAM) | en_US |
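
The abstract above describes the joint SIR/CIR architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released code: a shared convolutional trunk, a single-image representation (SIR) head trained with a triplet margin, and a shallow cross-image representation (CIR) head that classifies image pairs as same/different person. All layer sizes, the margin, and the loss weight `alpha` are assumptions made for illustration.

```python
# Hedged sketch of a joint SIR/CIR network (not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSIRCIR(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared sub-network: a small convolutional trunk (placeholder depth).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # SIR head: computed once per image, so gallery features can be cached.
        self.sir_head = nn.Linear(64 * 4 * 4, embed_dim)
        # CIR head: kept shallow, applied to each probe-gallery pair.
        self.cir_head = nn.Sequential(
            nn.Linear(2 * 64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits for same / different person
        )

    def features(self, x):
        return self.shared(x).flatten(1)

    def sir(self, x):
        # L2-normalized single-image embedding.
        return F.normalize(self.sir_head(self.features(x)), dim=1)

    def cir_logits(self, xa, xb):
        # Cross-image representation from concatenated shared features.
        return self.cir_head(torch.cat([self.features(xa), self.features(xb)], dim=1))

def joint_loss(model, anchor, positive, negative, margin=0.3, alpha=1.0):
    """Triplet objective on SIRs plus pairwise classification on CIRs (weights assumed)."""
    fa, fp, fn_ = model.sir(anchor), model.sir(positive), model.sir(negative)
    triplet = F.relu((fa - fp).pow(2).sum(1) - (fa - fn_).pow(2).sum(1) + margin).mean()
    same = F.cross_entropy(model.cir_logits(anchor, positive),
                           torch.ones(anchor.size(0), dtype=torch.long))
    diff = F.cross_entropy(model.cir_logits(anchor, negative),
                           torch.zeros(anchor.size(0), dtype=torch.long))
    return triplet + alpha * (same + diff)

if __name__ == "__main__":
    model = JointSIRCIR()
    a, p, n = (torch.randn(4, 3, 128, 64) for _ in range(3))
    print(joint_loss(model, a, p, n).item())
```

Under these assumptions, the SIR embeddings of the gallery can be precomputed once and only the shallow CIR head is evaluated per probe-gallery pair, which is the trade-off the abstract credits for keeping matching cost moderate.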
Appears in Collections: | Conference Paper |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Wang_Joint_Learning_Single-Image.pdf | Pre-Published version | 2.6 MB | Adobe PDF | View/Open |
Page views: 17 (as of May 12, 2024)
Downloads: 2 (as of May 12, 2024)
Scopus citations: 368 (as of May 17, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.