Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113650
Title: Efficient multi-view discrete co-clustering with learned graph
Authors: Nie, J
Qiang, Q 
Zhang, JC 
Hao, F 
Issue Date: Dec-2025
Source: Pattern recognition, Dec. 2025, v. 168, 111811
Abstract: Graph-based multi-view clustering typically involves constructing view-specific similarity graphs, fusing the graphs from multiple views, and performing two-step spectral clustering. However, several challenges arise: (1) constructing similarity graphs is computationally expensive, (2) balancing and integrating information across views is complex, and (3) two-step clustering leads to information loss, deviated solutions, and high computational cost. To address these issues, we present an efficient multi-view discrete co-clustering framework. It automatically learns a multi-view consistent anchor similarity matrix, dynamically weighting the contribution of each view based on the original data structure and the evolving cluster indicators. The learned anchor similarity matrix serves as the weight matrix of a bipartite graph, enabling efficient co-clustering of raw samples and anchors. In addition, we introduce a time-economical optimization algorithm that solves for the discrete indicator matrix directly. Extensive experiments demonstrate that the proposed method outperforms multiple competing methods in both clustering quality and efficiency. The code is available at https://github.com/caccode/EMDC.
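
For readers who want a concrete picture of the anchor/bipartite-graph pipeline the abstract describes, below is a minimal Python sketch of generic anchor-based bipartite-graph co-clustering. It uses random anchors, Gaussian-kernel anchor similarities, uniform view fusion, and a standard spectral relaxation followed by k-means; all function names and parameters are illustrative assumptions, and this is not the authors' EMDC implementation (EMDC learns the view weights jointly and solves for the discrete indicators directly; see the linked GitHub repository for the official code).

    # Hypothetical sketch of anchor-based bipartite-graph co-clustering,
    # the general technique the abstract builds on; NOT the EMDC algorithm.
    import numpy as np
    from numpy.linalg import svd
    from sklearn.cluster import KMeans

    def anchor_similarity(X, anchors, sigma=1.0):
        """n x m Gaussian-kernel similarities between samples and anchors,
        row-normalized so each row is a soft anchor assignment."""
        d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
        Z = np.exp(-d2 / (2 * sigma ** 2))
        return Z / Z.sum(axis=1, keepdims=True)

    def bipartite_co_cluster(views, n_anchors=50, n_clusters=3, seed=0):
        """Fuse per-view anchor graphs with uniform weights (EMDC instead
        learns these weights), then co-cluster samples and anchors from
        the spectral embedding of the resulting bipartite graph."""
        rng = np.random.default_rng(seed)
        Zs = []
        for X in views:
            idx = rng.choice(len(X), n_anchors, replace=False)  # random anchors
            Zs.append(anchor_similarity(X, X[idx]))
        Z = sum(Zs) / len(Zs)                     # uniform view fusion
        # Normalized bipartite adjacency: D_r^{-1/2} Z D_c^{-1/2}
        dr, dc = Z.sum(axis=1), Z.sum(axis=0)
        Zn = Z / np.sqrt(dr)[:, None] / np.sqrt(dc)[None, :]
        U, _, Vt = svd(Zn, full_matrices=False)   # top singular pairs
        emb = np.vstack([U[:, :n_clusters], Vt[:n_clusters].T])
        # k-means rounds the relaxed embedding; EMDC avoids this
        # relaxation/rounding step by optimizing discrete indicators.
        labels = KMeans(n_clusters, n_init=10, random_state=seed).fit_predict(emb)
        return labels[:len(Z)], labels[len(Z):]   # sample labels, anchor labels

A toy usage, again purely illustrative: with views = [rng.normal(size=(200, 8)), rng.normal(size=(200, 5))], calling bipartite_co_cluster(views) returns cluster labels for the 200 samples and for the 50 anchors jointly, which is what "co-clustering of raw data and anchors" refers to.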
Keywords: Anchor similarity graph
Co-clustering
Discrete indicator matrix
Graph-based clustering
Multi-view clustering
Publisher: Elsevier
Journal: Pattern recognition 
ISSN: 0031-3203
EISSN: 1873-5142
DOI: 10.1016/j.patcog.2025.111811
Appears in Collections: Journal/Magazine Article

Open Access Information
Status: Embargoed access
Embargo End Date: 2027-12-30

Scopus Citations: 1 (as of Dec 19, 2025)

