Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/79433
Title: A fully-convolutional framework for semantic segmentation
Authors: Jiang, Y 
Chi, Z 
Keywords: Complement models
Deep learning
Neural network complexity
Over-fitting
Semantic segmentation
Issue Date: 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Source: DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications, 2017, v. 2017-December, p. 1-7
Abstract: In this paper, we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from over-dependence on a single modality as well as a lack of training data. We make three contributions to improve performance. First, we adopt two complementary models in our framework to enrich fields-of-view and features, making segmentation more reliable. Second, we repurpose datasets from other tasks for segmentation by training the two models in our framework on different datasets; this brings the benefits of data augmentation while saving the cost of image annotation. Third, the number of parameters in our framework is minimized to reduce the framework's complexity and to avoid over-fitting. Experimental results show that our framework significantly outperforms current state-of-the-art methods with fewer parameters and better generalization ability.
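The abstract gives no implementation details, but the central idea of combining two complementary fully-convolutional branches with different fields of view can be illustrated. Below is a minimal PyTorch sketch; the branch design, the use of dilation to widen one branch's field of view, and the logit-averaging fusion are all illustrative assumptions, not the paper's published architecture.

import torch
import torch.nn as nn


class TinyFCN(nn.Module):
    """A deliberately small fully-convolutional branch: two conv layers
    followed by a 1x1 classifier head, upsampled to input resolution."""

    def __init__(self, num_classes: int, width: int = 32, dilation: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # Dilation controls the field of view without adding parameters.
            nn.Conv2d(width, width, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        logits = self.classifier(self.features(x))
        # Upsample per-pixel class scores back to the input resolution.
        return nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False
        )


class ComplementaryFusion(nn.Module):
    """Fuse two branches with different fields of view by averaging their
    per-pixel class logits (one plausible fusion rule; the paper's may differ).
    Each branch could be trained on a different dataset, as the abstract
    suggests, before fusing their predictions."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.branch_a = TinyFCN(num_classes, dilation=1)  # narrow field of view
        self.branch_b = TinyFCN(num_classes, dilation=4)  # wide field of view

    def forward(self, x):
        return 0.5 * (self.branch_a(x) + self.branch_b(x))


if __name__ == "__main__":
    model = ComplementaryFusion(num_classes=21)
    image = torch.randn(1, 3, 128, 128)
    print(model(image).shape)  # torch.Size([1, 21, 128, 128])

Averaging logits is the simplest way to let the wide-field branch correct the narrow-field branch (and vice versa) at every pixel; learned fusion weights or concatenation followed by a 1x1 convolution are common alternatives.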
Description: 2017 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2017, Sydney, Australia, 29 Nov - 1 Dec 2017
URI: http://hdl.handle.net/10397/79433
ISBN: 9781538628393
DOI: 10.1109/DICTA.2017.8227388
Appears in Collections: Conference Paper
