Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/66120
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Yao, M | -
dc.creator | Jia, K | -
dc.creator | Siu, W | -
dc.date.accessioned | 2017-05-22T02:09:42Z | -
dc.date.available | 2017-05-22T02:09:42Z | -
dc.identifier.issn | 0254-0037 | -
dc.identifier.uri | http://hdl.handle.net/10397/66120 | -
dc.language.iso | zh | en_US
dc.publisher | 北京工業大學學報編輯部 | en_US
dc.rights | © 2016 中国学术期刊电子杂志出版社。本内容的使用仅限于教育、科研之目的。 | en_US
dc.rights | © 2016 China Academic Journal Electronic Publishing House. It is to be used strictly for educational and research purposes. | en_US
dc.subject | Feature description | en_US
dc.subject | Feature matching | en_US
dc.subject | Gradient binarization | en_US
dc.subject | Scene recognition | en_US
dc.title | Orientation and scale invariant scene matching with high speed and performance | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1634 | -
dc.identifier.epage | 1642 | -
dc.identifier.volume | 42 | -
dc.identifier.issue | 11 | -
dc.identifier.doi | 10.11936/bjutxb2016020007 | -
dcterms.abstract | To address the high false-matching rate of real-time scene-matching algorithms typified by binary robust independent elementary features (BRIEF), a feature description algorithm based on local gradient binarization is proposed. The algorithm normalizes the feature description region by the direction of the centroid vector, which guarantees the orientation invariance of the feature descriptor, and it fuses regional texture information obtained from local gradient binarization to reduce the feature-matching error rate. The algorithm was validated on internationally used benchmark datasets; experimental results show that the average matching accuracy of the proposed scene-matching algorithm is 44.59% higher than that of the BRIEF algorithm and that the method is highly robust. | -
dcterms.abstract | For real-time scene feature extraction and matching, conventional binary descriptors such as binary robust independent elementary features (BRIEF), which rely only on pixel-intensity comparisons, speed up descriptor generation and matching but suffer from a high false-matching rate. To solve this problem, an improved binary descriptor was proposed in this paper, which preserves not only the pixel-intensity information but also the local texture information based on gradient values. Additionally, the orientation of the centroid vector is used in the descriptor calculation, so that the binary descriptors are orientation invariant. The Image Sequences dataset was used to evaluate the performance of the proposed method, and its average matching accuracy was 44.59% higher than that of the BRIEF algorithm. Experimental results show that the proposed descriptors achieve high accuracy and robustness under image rotation and scale transformation. (An illustrative code sketch of these ideas follows the metadata listing below.) | -
dcterms.accessRights | open access | en_US
dcterms.alternative | 旋轉尺度不變的實時高精度場景匹配算法 (Rotation- and scale-invariant real-time high-accuracy scene matching algorithm) | -
dcterms.bibliographicCitation | 北京工業大學學報 (Journal of the Beijing Polytechnic University), 2016, v. 42, no. 11, p. 1634-1642 | -
dcterms.isPartOf | 北京工業大學學報 (Journal of the Beijing Polytechnic University) | -
dcterms.issued | 2016 | -
dc.identifier.scopus | 2-s2.0-85012295453 | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_IR/PIRA | en_US
dc.description.pubStatus | Published | en_US
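
The abstracts above describe two mechanisms: normalizing the descriptor's sampling region by the direction of the intensity-centroid vector to obtain orientation invariance, and augmenting BRIEF-style pixel-intensity comparisons with binary tests on local gradient information to lower the false-matching rate. The Python sketch below is only a minimal illustration of those two ideas under stated assumptions, not the authors' published algorithm: the patch size, the sampling-pair layout, and the use of gradient-magnitude comparisons as a stand-in for the paper's local gradient binarization are assumptions, and every function name is hypothetical.

import numpy as np

def centroid_orientation(patch):
    # Angle (radians) of the vector from the patch centre to its intensity centroid,
    # used to normalize the orientation of the sampling pattern (ORB-style assumption).
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = float(patch.sum())
    if m00 == 0.0:
        return 0.0
    cx = (xs * patch).sum() / m00 - (w - 1) / 2.0
    cy = (ys * patch).sum() / m00 - (h - 1) / 2.0
    return float(np.arctan2(cy, cx))

def binary_descriptor(patch, pairs):
    # 2N-bit descriptor from N sampling pairs: one pixel-intensity test (BRIEF-style)
    # and one gradient-magnitude test per pair, with all offsets rotated by the
    # centroid orientation so the descriptor is orientation-normalized.
    # `patch` is a 2-D grayscale array; `pairs` has shape (N, 2, 2), holding two
    # (dx, dy) offsets from the patch centre per pair.
    patch = patch.astype(np.float32)
    gy, gx = np.gradient(patch)      # image gradients along y and x
    grad = np.hypot(gx, gy)          # gradient magnitude (stand-in texture cue)

    theta = centroid_orientation(patch)
    c, s = np.cos(theta), np.sin(theta)
    h, w = patch.shape
    cy0, cx0 = (h - 1) / 2.0, (w - 1) / 2.0

    def sample(img, dx, dy):
        # Rotate the offset by theta, then read the nearest pixel inside the patch.
        rx = c * dx - s * dy
        ry = s * dx + c * dy
        x = int(np.clip(np.rint(cx0 + rx), 0, w - 1))
        y = int(np.clip(np.rint(cy0 + ry), 0, h - 1))
        return img[y, x]

    bits = []
    for (dx1, dy1), (dx2, dy2) in pairs:
        bits.append(sample(patch, dx1, dy1) < sample(patch, dx2, dy2))  # intensity test
        bits.append(sample(grad, dx1, dy1) < sample(grad, dx2, dy2))    # gradient test
    return np.array(bits, dtype=np.uint8)

# Example with hypothetical parameters: a 31 x 31 patch and 128 random test pairs
# give a 256-bit descriptor; descriptors are then compared by Hamming distance.
rng = np.random.default_rng(0)
patch_a = rng.integers(0, 256, size=(31, 31))
patch_b = np.rot90(patch_a)                         # same content, rotated 90 degrees
pairs = rng.integers(-13, 14, size=(128, 2, 2))
desc_a = binary_descriptor(patch_a, pairs)
desc_b = binary_descriptor(patch_b, pairs)
hamming = int(np.count_nonzero(desc_a != desc_b))   # smaller distance = more similar

Nearest-neighbour search over such Hamming distances is the usual way binary descriptors like this are matched; the paper's actual sampling pattern, binarization rule, and matching thresholds are not reproduced here.
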
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Yao_Orientation_Scale_Invariant.pdf | | 1.03 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.