Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/11275
Title: Visual orientation inhomogeneity based scale-invariant feature transform
Authors: Zhong, SH
Liu, Y 
Chen, QC
Keywords: Least discriminability
Orientation inhomogeneity
Real-world distribution
Scale-invariant feature transform
Issue Date: 2015
Publisher: Pergamon Press
Source: Expert systems with applications, 2015, v. 42, no. 13, p. 5658-5667
Journal: Expert systems with applications 
Abstract: Scale-invariant feature transform (SIFT) is an algorithm for detecting and describing local features in images. Over the last fifteen years, SIFT has played a very important role in multimedia content analysis, such as image classification and retrieval, because of its attractive invariance properties. This paper explores a new path for SIFT research by making use of findings from neuroscience. We propose a more efficient and compact scale-invariant feature detector and descriptor that simulates the inhomogeneity of visual orientation in the human visual system. We validate that visual orientation inhomogeneity SIFT (V-SIFT) achieves better, or at least comparable, performance with fewer computational resources and lower time cost in various computer vision tasks under real-world conditions, such as image matching and object recognition. This work also points to a wider range of opportunities for integrating the inhomogeneity of visual orientation with other local position-dependent detectors and descriptors.
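
The image-matching task the abstract evaluates V-SIFT on is conventionally run with standard SIFT; a minimal OpenCV sketch of that baseline follows. This is plain SIFT, not the paper's V-SIFT (the V-SIFT detector has no public OpenCV implementation), and the image file names are placeholders.

import cv2

# Load the two images to match; "scene_a.jpg" / "scene_b.jpg" are placeholder names.
img1 = cv2.imread("scene_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-D SIFT descriptors for each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} matches survive the ratio test")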
URI: http://hdl.handle.net/10397/11275
ISSN: 0957-4174
EISSN: 1873-6793
DOI: 10.1016/j.eswa.2015.01.012
Appears in Collections: Journal/Magazine Article

Scopus citations: 7 (as of Aug 18, 2017)
Web of Science citations: 6 (as of Aug 15, 2017)
