Title: The ℓ2,1-norm stacked robust autoencoders for domain adaptation
Authors: Jiang, W; Gao, H; Chung, FL; Huang, H
Issue Date: 2016
Publisher: AAAI press
Source: 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016, p. 1723-1729
Abstract: Recently, deep learning methods that employ stacked denoising autoencoders (SDAs) have been successfully applied to domain adaptation. Remarkable performance on multi-domain sentiment analysis datasets has been reported, making deep learning a promising approach to domain adaptation problems. SDAs are distinguished by learning robust data representations for recovering original features that have been artificially corrupted with noise. This idea has been further exploited to marginalize out the random corruptions by a state-of-the-art method called mSDA. In this paper, a deep learning method for domain adaptation called ℓ2,1-norm stacked robust autoencoders (ℓ2,1-SRA) is proposed to learn useful representations for domain adaptation tasks. Each layer of ℓ2,1-SRA consists of two steps: a robust linear reconstruction step based on ℓ2,1 robust regression, and a non-linear squashing transformation step. The experimental results demonstrate that the proposed method is very effective on multiple cross-domain classification datasets, including the Amazon review dataset, the spam dataset from the ECML/PKDD Discovery Challenge 2006, and the 20 Newsgroups dataset.
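The two per-layer steps named in the abstract can be sketched in NumPy. The solver below is a generic iteratively reweighted least squares (IRLS) scheme for ℓ2,1-norm regression, not the paper's exact algorithm; the tanh squashing function, the ridge term `lam`, and the iteration count are all assumptions added for a self-contained, numerically stable illustration.

```python
import numpy as np

def l21_reconstruction(X, n_iter=20, lam=1e-2):
    """Robust linear reconstruction via l2,1-norm regression (IRLS sketch).

    Learns W that approximately minimizes ||X - X W||_{2,1} + lam ||W||_F^2,
    where the l2,1 norm sums the l2 norms of the residual rows, so outlier
    samples are down-weighted instead of dominating the least-squares fit.
    The ridge term lam is an assumption added for numerical stability.
    """
    n, d = X.shape
    w = np.ones(n)                       # per-sample IRLS weights
    W = np.eye(d)
    for _ in range(n_iter):
        # Weighted ridge step: (X^T D X + lam I) W = X^T D X, with D = diag(w)
        XtDX = X.T @ (w[:, None] * X)
        W = np.linalg.solve(XtDX + lam * np.eye(d), XtDX)
        # IRLS weight update: w_i = 1 / (2 ||x_i - x_i W||_2)
        resid_norms = np.linalg.norm(X - X @ W, axis=1)
        w = 1.0 / (2.0 * np.maximum(resid_norms, 1e-8))
    return W

def sra_layer(X, **kwargs):
    """One layer: robust linear reconstruction, then non-linear squashing."""
    W = l21_reconstruction(X, **kwargs)
    return np.tanh(X @ W)                # tanh is an assumed squashing choice
```

Layers built this way can be stacked by feeding each layer's squashed output as the next layer's input, mirroring the stacked-autoencoder construction the abstract describes.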
Description: 30th AAAI Conference on Artificial Intelligence, AAAI 2016, Phoenix, US, 12-17 February 2016
URI: http://hdl.handle.net/10397/66238
ISBN: 9781577357605
Appears in Collections: Conference Paper
