Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107265
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Cheung, CC | -
dc.creator | Xu, SS | -
dc.creator | Ng, SC | -
dc.date.accessioned | 2024-06-13T01:04:59Z | -
dc.date.available | 2024-06-13T01:04:59Z | -
dc.identifier.isbn | 978-1-5090-0620-5 (Electronic) | -
dc.identifier.isbn | 978-1-5090-0621-2 (Print on Demand (PoD)) | -
dc.identifier.uri | http://hdl.handle.net/10397/107265 | -
dc.description | 2016 International Joint Conference on Neural Networks (IJCNN), 24-29 July 2016, Vancouver, BC, Canada | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | ©2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Chi-Chung Cheung, Sean Shensheng Xu and Sin-Chun Ng, "A systematic algorithm to escape from local minima in training feed-forward neural networks," 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 2016, pp. 396-402 is available at https://doi.org/10.1109/IJCNN.2016.7727226. | en_US
dc.title | A systematic algorithm to escape from local minima in training feed-forward neural networks | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 396 | -
dc.identifier.epage | 402 | -
dc.identifier.doi | 10.1109/IJCNN.2016.7727226 | -
dcterms.abstract | A learning process is easily trapped in a local minimum when training multi-layer feed-forward neural networks. An algorithm called Wrong Output Modification (WOM) was proposed to help a learning process escape from local minima, but it cannot completely solve the local minimum problem. Moreover, there is no performance analysis showing that learning converges to a global solution with higher probability when this algorithm is used. Additionally, the generalization performance of this algorithm was not investigated when the early stopping method of training is applied. Motivated by these limitations of WOM, we propose a new algorithm that ensures the learning can escape from local minima, and we analyze its performance. We also test the generalization performance of this new algorithm when the early stopping method of training is applied. (An illustrative sketch of the escape idea follows this record.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of 2016 International Joint Conference on Neural Networks (IJCNN), 24-29 July 2016, Vancouver, BC, Canada | -
dcterms.issued | 2016 | -
dc.identifier.scopus | 2-s2.0-85007207577 | -
dc.relation.conference | International Joint Conference on Neural Networks [IJCNN] | -
dc.description.validate | 202403 bckw | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0842 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The Hong Kong Polytechnic University Competitive Research Grants for Newly Recruited Junior Academic Staff 2007-2008; The Open University of Hong Kong Research Grant | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20269912 | en_US
dc.description.oaCategory | Green (AAM) | en_US
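The abstract above describes escaping from local minima during feed-forward network training, but this record does not include the details of WOM or of the paper's new algorithm. The following Python sketch is therefore only a minimal, generic illustration of the stall-detect-then-escape idea on a toy problem: a small network is trained with plain backpropagation, and when the loss stops improving the weights are perturbed. The network size, stall threshold, and noise scale are all assumed values; the perturbation step is a generic heuristic, not the paper's method (WOM instead modifies the wrong output values systematically).

```python
# Illustrative sketch only -- NOT the algorithm from the paper.
# A 2-2-1 feed-forward network trained on XOR with backpropagation,
# plus a generic "escape" heuristic: if the loss stalls (a possible
# local minimum), kick the weights with random noise and continue.
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for a 2-2-1 network (sizes are assumed for the demo)
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

lr = 0.5
best_loss = np.inf
stall = 0            # epochs since the loss last improved
STALL_LIMIT = 200    # assumed threshold for declaring a stall

for epoch in range(20000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)   # hidden activations
    O = sigmoid(H @ W2 + b2)   # network outputs
    loss = np.mean((O - Y) ** 2)

    # Backward pass (MSE loss, sigmoid derivatives)
    dO = 2 * (O - Y) / len(X) * O * (1 - O)
    dW2 = H.T @ dO; db2 = dO.sum(axis=0)
    dH = dO @ W2.T * H * (1 - H)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

    # Stall detection: count epochs without meaningful improvement
    if loss < best_loss - 1e-6:
        best_loss, stall = loss, 0
    else:
        stall += 1

    # Escape heuristic (illustrative): perturb weights to leave the basin
    if stall >= STALL_LIMIT:
        W1 += rng.normal(scale=0.3, size=W1.shape)
        W2 += rng.normal(scale=0.3, size=W2.shape)
        stall = 0

print("final loss:", loss)
```

The key design point the sketch shares with the paper's setting is the two-phase loop: ordinary gradient descent while progress is being made, and an explicit escape action once training is judged to be trapped; the paper's contribution is a systematic choice of that escape action, with a performance analysis and early-stopping generalization tests.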
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Cheung_Systematic_Algorithm_Escape.pdf | Pre-Published version | 960.71 kB | Adobe PDF
Open Access Information
Status | open access
File Version | Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.