Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105461
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Li, M | en_US
dc.creator | Li, S | en_US
dc.creator | Li, L | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-04-15T07:34:30Z | -
dc.date.available | 2024-04-15T07:34:30Z | -
dc.identifier.isbn | 978-1-6654-4509-2 (Electronic) | en_US
dc.identifier.isbn | 978-1-6654-4510-8 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105461 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication M. Li, S. Li, L. Li and L. Zhang, "Spatial Feature Calibration and Temporal Fusion for Effective One-stage Video Instance Segmentation," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 11210-11219 is available at https://doi.org/10.1109/CVPR46437.2021.01106. | en_US
dc.title | Spatial feature calibration and temporal fusion for effective one-stage video instance segmentation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 11210 | en_US
dc.identifier.epage | 11219 | en_US
dc.identifier.doi | 10.1109/CVPR46437.2021.01106 | en_US
dcterms.abstract | Modern one-stage video instance segmentation networks suffer from two limitations. First, convolutional features are aligned neither with anchor boxes nor with ground-truth bounding boxes, reducing the mask sensitivity to spatial location. Second, a video is directly divided into individual frames for frame-level instance segmentation, ignoring the temporal correlation between adjacent frames. To address these issues, we propose a simple yet effective one-stage video instance segmentation framework based on spatial calibration and temporal fusion, named STMask. To ensure spatial feature calibration with ground-truth bounding boxes, we first predict regressed bounding boxes around the ground-truth bounding boxes and extract features from them for frame-level instance segmentation. To further exploit the temporal correlation among video frames, we add a temporal fusion module that infers instance masks from each frame to its adjacent frames, which helps our framework handle challenging videos with motion blur, partial occlusion and unusual object-to-camera poses. Experiments on the YouTube-VIS validation set show that the proposed STMask with a ResNet-50/-101 backbone obtains 33.5% / 36.8% mask AP while achieving 28.6 / 23.4 FPS on video instance segmentation. The code is available at https://github.com/MinghanLi/STMask. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19-25 June 2021, p. 11210-11219 | en_US
dcterms.issued | 2021 | -
dc.identifier.scopus | 2-s2.0-85121125797 | -
dc.relation.conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition [CVPR] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | COMP-0048 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 56309707 | -
dc.description.oaCategory | Green (AAM) | en_US
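The temporal fusion idea described in the abstract — aggregating features from adjacent frames to stabilize per-frame mask inference — can be illustrated with a minimal NumPy sketch. This is not the STMask implementation (which learns the alignment and fusion inside the network); all names and the fixed blending weight `alpha` are hypothetical, chosen only to show the data flow over (C, H, W) feature maps:

```python
import numpy as np

def fuse_adjacent_features(prev_feat, curr_feat, next_feat, alpha=0.5):
    """Illustrative temporal fusion: blend the current frame's feature
    map with the average of its adjacent frames' feature maps.

    STMask learns this fusion; here a fixed weight `alpha` stands in
    purely to demonstrate the aggregation across neighboring frames.
    """
    neighbor = 0.5 * (prev_feat + next_feat)   # aggregate adjacent frames
    return alpha * curr_feat + (1.0 - alpha) * neighbor

# Toy feature maps for frames t-1, t, t+1: 4 channels on an 8x8 grid
f_prev = np.full((4, 8, 8), 1.0)
f_curr = np.full((4, 8, 8), 2.0)
f_next = np.full((4, 8, 8), 3.0)

fused = fuse_adjacent_features(f_prev, f_curr, f_next)
print(fused.shape)            # (4, 8, 8)
print(float(fused[0, 0, 0]))  # 2.0 = 0.5*2 + 0.5*((1+3)/2)
```

In the toy case the neighbors average back to the current value, so the output is unchanged; with real features, degraded frames (motion blur, occlusion) would be pulled toward their cleaner neighbors.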
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Li_Spatial_Feature_Calibration.pdf | Pre-Published version | 4.13 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 96 (last week: 9), as of Nov 9, 2025
Downloads: 83, as of Nov 9, 2025

Scopus citations: 50, as of Dec 19, 2025
Web of Science citations: 46, as of Dec 18, 2025


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.