Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/109857
Title: | Exploring spatiotemporal consistency and unified frameworks for video segmentation |
Authors: | Li, Minghan |
Degree: | Ph.D. |
Issue Date: | 2024 |
Abstract: | Video segmentation (VS) divides a video into segments, enabling applications such as video understanding, region-guided video generation, interactive video editing, and augmented reality. This thesis presents four studies exploring three essential aspects of the video segmentation field: spatiotemporal consistency, weak supervision, and unified frameworks. The first three studies improve video instance segmentation (VIS) in various scenarios and reduce annotation costs for segmentation datasets. The fourth study introduces a unified model that handles all six video segmentation tasks concurrently, boosting generalization capabilities. In Chapter 1, we introduce common image and video segmentation tasks, review representative methods, and discuss the contributions and organization of this thesis. In Chapter 2, we introduce spatiotemporal feature consistency into one-stage CNN-based segmentation methods by incorporating spatial feature calibration and temporal feature fusion modules. In Chapter 3, we propose to mine discriminative object embeddings by leveraging inter-frame object query association for transformer-based methods. This significantly improves instance segmentation performance on challenging videos, such as those with similar-looking objects, complex object trajectories, and mutually occluded objects. In Chapter 4, to reduce the cost of pixel-wise mask annotation for video segmentation datasets, we adapt state-of-the-art pixel-supervised VIS models to a box-supervised VIS baseline and design a box-center guided spatial-temporal pairwise affinity loss to promote better spatial and temporal consistency. In Chapter 5, we tackle the challenge of unifying all six video segmentation tasks within a single transformer-based framework, yielding a video segmentation framework with greater generalization capabilities. The proposed unified framework achieves a commendable balance between performance and universality on more than 10 challenging video segmentation benchmarks, covering generic segmentation, language-guided segmentation, and visual prompt-guided segmentation. In summary, this thesis contributes to the advancement of video segmentation techniques by addressing specific challenges and proposing novel approaches to enhance performance, reduce annotation effort, and establish a unified framework for tackling multiple video segmentation tasks. |
Subjects: | Image processing -- Digital techniques; Digital video; Hong Kong Polytechnic University -- Dissertations |
Pages: | xxi, 163 pages : color illustrations |
Appears in Collections: | Thesis |
Access: | View full-text via https://theses.lib.polyu.edu.hk/handle/200/13243 |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.