Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112909
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | -
dc.creator | Zhu, D | en_US
dc.creator | Shan, X | en_US
dc.creator | Wu, C | en_US
dc.creator | Yung, K | en_US
dc.creator | Ip, AWH | en_US
dc.date.accessioned | 2025-05-15T06:58:55Z | -
dc.date.available | 2025-05-15T06:58:55Z | -
dc.identifier.issn | 1552-6283 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/112909 | -
dc.language.iso | en | en_US
dc.publisher | IGI Global | en_US
dc.rights | This article is published as an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the author of the original work and the original publication source are properly credited. | en_US
dc.rights | The following publication Zhu, D., Shan, X., Wu, C., Yung, K., & Ip, A. W. (2024). Multi Frame Obscene Video Detection With ViT: An Effective for Detecting Inappropriate Content. International Journal on Semantic Web and Information Systems (IJSWIS), 20(1), 1-18 is available at https://dx.doi.org/10.4018/IJSWIS.359768. | en_US
dc.subject | Computer Vision | en_US
dc.subject | Deep Learning | en_US
dc.subject | Obscene Video Detection | en_US
dc.subject | Pornography Classification | en_US
dc.subject | Self Attention Mechanism | en_US
dc.subject | Video Analysis | en_US
dc.subject | Video Classification | en_US
dc.subject | Vision Transformer | en_US
dc.subject | ViT-based Models | en_US
dc.title | Multi frame obscene video detection with ViT : an effective for detecting inappropriate content | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 18 | en_US
dc.identifier.volume | 20 | en_US
dc.identifier.issue | 1 | en_US
dc.identifier.doi | 10.4018/IJSWIS.359768 | en_US
dcterms.abstract | With the development of the Internet, people are surrounded by many types of information daily, including obscene videos. The quantity of such videos is increasing daily, making the detection and filtering of this content a crucial step in preventing its spread. However, a significant challenge remains in detecting obscene content in obscure scenarios, such as indecent behavior performed in ordinary clothing, which can cause serious harm, particularly to children. To address this issue, the authors propose an innovative multi-frame obscene video detection method based on ViT, which automatically detects and filters obscene content in videos. Extensive experiments on the public NPDI dataset demonstrate that this method outperforms existing state-of-the-art methods, achieving 96.2% accuracy. It also achieves satisfactory classification accuracy on a dataset of obscure obscene videos. This provides a powerful tool for future video censorship and helps protect minors and the general public. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | International journal on semantic web and information systems, Jan.-Dec. 2024, v. 20, no. 1, p. 1-18 | en_US
dcterms.isPartOf | International journal on semantic web and information systems | en_US
dcterms.issued | 2024-12 | -
dc.identifier.scopus | 2-s2.0-85210775042 | -
dc.identifier.eissn | 1552-6291 | en_US
dc.description.validate | 202505 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Guangdong Provincial Demonstrative Industrial College for Artificial Intelligence and Robotics Education | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
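Note on the method described in the abstract above: the record carries no implementation details, so the following is only a minimal sketch of the general technique (multi-frame video classification with a ViT backbone), not the authors' code. The torchvision ViT-B/16 backbone, the eight-frames-per-clip sampling, and the mean pooling of per-frame features are all assumptions made for illustration.

# Minimal sketch of multi-frame video classification with a ViT backbone.
# NOT the paper's implementation: the backbone choice (torchvision ViT-B/16),
# frame count (8), and mean-pooling aggregation are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


class MultiFrameViT(nn.Module):
    def __init__(self, num_classes: int = 2, num_frames: int = 8):
        super().__init__()
        # Pretrained weights download on first use; pass weights=None to skip.
        self.backbone = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
        self.backbone.heads = nn.Identity()  # expose the 768-d CLS feature
        self.num_frames = num_frames
        self.head = nn.Linear(768, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 3, 224, 224), preprocessed as the
        # backbone expects
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w))  # (b*t, 768)
        feats = feats.reshape(b, t, -1).mean(dim=1)            # pool frames
        return self.head(feats)                                # class logits


if __name__ == "__main__":
    model = MultiFrameViT().eval()
    clip = torch.randn(2, 8, 3, 224, 224)  # two clips of 8 sampled frames
    with torch.no_grad():
        print(model(clip).shape)  # torch.Size([2, 2])

Mean pooling is the simplest way to fuse per-frame features; the record's "Self Attention Mechanism" subject keyword suggests the published model may aggregate frames differently.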
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Zhu_Multi_Frame_Obscene.pdf | - | 1.28 MB | Adobe PDF (View/Open)
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.