Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/9328
Title: Higher level segmentation: detecting and grouping of invariant repetitive patterns
Authors: Cai, Y
Baciu, G 
Keywords: Computer vision
Geometry
Image registration
Image segmentation
Image texture
Issue Date: 2012
Publisher: IEEE
Source: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 16-21 June 2012, Providence, RI, pp. 694-701
Abstract: The efficient and robust extraction of invariant patterns from an image is a long-standing problem in computer vision. Invariant structures are often related to repetitive or near-repetitive patterns. The perception of repetitive patterns in an image is strongly linked to the visual interpretation and composition of textures. Repetitive patterns are products of both repetitive structures and repetitive reflections or color patterns. In other words, patterns that exhibit near-stationary behavior provide rich information about objects, their shapes, and their textures in an image. In this paper, we propose a new algorithm for repetitive pattern detection and grouping. The algorithm follows the classical region-growing image segmentation scheme. It utilizes mean-shift-like dynamics to group local image patches into clusters. It exploits a continuous joint alignment to (a) match similar patches and (b) refine the subspace grouping. The result of higher-level grouping of image patterns can be used to infer the geometry of object surfaces and estimate the general layout of a crowded scene.
URI: http://hdl.handle.net/10397/9328
ISBN: 978-1-4673-1226-4
978-1-4673-1227-1 (E-ISBN)
ISSN: 1063-6919
DOI: 10.1109/CVPR.2012.6247738
Appears in Collections: Conference Paper
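
Note: the abstract above describes grouping local image patches with mean-shift-like dynamics. As a rough illustration only, and not the authors' implementation, a minimal mean-shift clustering of patch descriptors could look like the sketch below; the function name, bandwidth value, and toy descriptors are all assumptions made for the example.

import numpy as np

def mean_shift_cluster(features, bandwidth=0.5, n_iter=100, tol=1e-4):
    # Shift every patch descriptor toward the mode of the local density,
    # using a flat (uniform) kernel with the given bandwidth.
    data = np.asarray(features, dtype=float)
    modes = data.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, x in enumerate(modes):
            dist = np.linalg.norm(data - x, axis=1)
            neighbors = data[dist < bandwidth]
            shifted[i] = neighbors.mean(axis=0) if len(neighbors) else x
        moved = np.max(np.linalg.norm(shifted - modes, axis=1))
        modes = shifted
        if moved < tol:
            break
    # Descriptors whose modes coincide (within half a bandwidth) form one cluster.
    labels = np.full(len(modes), -1, dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)

# Toy usage: two groups of 3-D patch descriptors (e.g., mean patch colour).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = np.vstack([rng.normal(0.2, 0.02, (40, 3)),
                         rng.normal(0.8, 0.02, (40, 3))])
    labels, centers = mean_shift_cluster(patches, bandwidth=0.2)
    print("clusters found:", len(centers))  # expected: 2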

