Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107969
DC Field: Value [Language]
dc.contributor: Department of Electrical and Electronic Engineering [en_US]
dc.creator: Cosmo, L [en_US]
dc.creator: Minello, G [en_US]
dc.creator: Bicciato, A [en_US]
dc.creator: Bronstein, MM [en_US]
dc.creator: Rodolà, E [en_US]
dc.creator: Rossi, L [en_US]
dc.creator: Torsello, A [en_US]
dc.date.accessioned: 2024-07-22T02:44:41Z
dc.date.available: 2024-07-22T02:44:41Z
dc.identifier.issn: 2162-237X [en_US]
dc.identifier.uri: http://hdl.handle.net/10397/107969
dc.language.iso: en [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers [en_US]
dc.rights: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [en_US]
dc.rights: The following publication L. Cosmo et al., "Graph Kernel Neural Networks," in IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 4, pp. 6257-6270, April 2025 is available at https://doi.org/10.1109/TNNLS.2024.3400850. [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Graph kernel [en_US]
dc.subject: Graph neural network (GNN) [en_US]
dc.title: Graph kernel neural networks [en_US]
dc.type: Journal/Magazine Article [en_US]
dc.identifier.spage: 6257 [en_US]
dc.identifier.epage: 6270 [en_US]
dc.identifier.volume: 36 [en_US]
dc.identifier.issue: 4 [en_US]
dc.identifier.doi: 10.1109/TNNLS.2024.3400850 [en_US]
dcterms.abstract: The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging due to their irregular structure. In this article, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows us to plug in any type of graph kernel and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similar to what happens for convolutional masks in traditional convolutional neural networks (CNNs). We perform an extensive ablation study to investigate the impact of the model hyperparameters and show that our model achieves competitive performance on standard graph classification and regression datasets. [en_US]
dcterms.accessRights: open access [en_US]
dcterms.bibliographicCitation: IEEE transactions on neural networks and learning systems, Apr. 2025, v. 36, no. 4, p. 6257-6270 [en_US]
dcterms.isPartOf: IEEE transactions on neural networks and learning systems [en_US]
dcterms.issued: 2025-04
dc.identifier.eissn: 2162-2388 [en_US]
dc.description.validate: 202407 bcch [en_US]
dc.description.oa: Accepted Manuscript [en_US]
dc.identifier.FolderNumber: a3057
dc.identifier.SubFormID: 49303
dc.description.fundingSource: Self-funded [en_US]
dc.description.pubStatus: Published [en_US]
dc.description.oaCategory: Green (AAM) [en_US]
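As a rough illustration of the mechanism the abstract above describes (a convolution whose patch-filter dot product is replaced by a graph kernel evaluated between a node's local substructure and a set of structural masks), the following minimal Python sketch uses a simple random-walk kernel and fixed masks. All names (gk_conv, ego_net, walk_kernel) and the kernel choice are illustrative assumptions, not the authors' implementation.

import numpy as np

# Hypothetical sketch, not the paper's code: node v's j-th output channel is
# the kernel similarity between v's 1-hop ego-network and mask graph j,
# playing the role of the patch-filter dot product in a standard CNN.

def walk_kernel(A1, A2, steps=3):
    # Simple random-walk graph kernel: counts walks of length 1..steps that
    # the two graphs share, via powers of their Kronecker-product adjacency.
    Ax = np.kron(A1, A2)
    P = np.eye(Ax.shape[0])
    total = 0.0
    for _ in range(steps):
        P = P @ Ax
        total += P.sum()
    return total

def ego_net(A, v):
    # Adjacency matrix of the subgraph induced by v and its neighbours.
    idx = [v] + np.flatnonzero(A[v]).tolist()
    return A[np.ix_(idx, idx)]

def gk_conv(A, masks, steps=3):
    # One graph-kernel "convolution" pass over all nodes of the input graph.
    n = A.shape[0]
    out = np.zeros((n, len(masks)))
    for v in range(n):
        sub = ego_net(A, v)
        for j, M in enumerate(masks):
            out[v, j] = walk_kernel(sub, M, steps)
    return out

# Toy usage: a 4-cycle convolved against a path mask and a triangle mask.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
masks = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float),  # path
         np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)]  # triangle
print(gk_conv(A, masks))  # shape (4, 2): one response per node per mask

In the paper the structural masks are learned during training, which requires a kernel differentiable with respect to the masks; fixed masks and a plain walk count are used here only to keep the sketch self-contained.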
Appears in Collections: Journal/Magazine Article
Files in This Item:
  File: Cosmo_Graph_Kernel_Neural.pdf
  Description: Pre-Published version
  Size: 4.66 MB
  Format: Adobe PDF
Open Access Information
  Status: open access
  File Version: Final Accepted Manuscript
Page views: 44 (as of Apr 13, 2025)
Web of Science citations: 2 (as of Apr 24, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.