Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/81909
Title: Artificial intelligence in music : composition and emotion
Authors: Chen, Gong
Advisors: Liu, Yan (COMP)
Keywords: Artificial intelligence -- Musical applications
Computer music
Computer composition (Music)
Emotions in music
Issue Date: 2019
Publisher: The Hong Kong Polytechnic University
Abstract: Music is a universal feature of all human societies. People compose music to express their minds; in turn, music evokes emotions in listeners. This thesis investigates new artificial intelligence techniques for music composition and emotion analysis. Algorithmic composition enables a computer to compose music much as a human musician does. We apply deep learning, a powerful artificial intelligence technology, to algorithmic composition. To achieve both good musicality and novelty in machine-composed music, we propose a musicality-novelty generative adversarial network (MNGAN) model. Two games, a musicality game and a novelty game, are designed and optimized alternately. Specifically, the musicality game makes the output music samples follow the distribution of human-composed examples, while the novelty game pushes each output sample away from its nearest human-composed example. A series of experiments validates the effectiveness of the proposed algorithmic composition technique. To provide a more convincing evaluation of machine-composed music, we propose analyzing the brainwaves of the audience using electroencephalography (EEG). A new psychological paradigm employing auditory stimuli with differing degrees of musicality is designed. Based on this paradigm, we propose a novel computational model to evaluate the musicality of machine-composed music. Several empirical experiments are performed, and the results validate the effectiveness of the proposed method. Some interesting neuroscience observations are also reported. For music emotion analysis, we present two pieces of work. The first is a novel multi-label emotion classification model named the quantum convolutional neural network (QCNN), which classifies music according to its audio signal. Inspired by the concepts of superposition states and collapse in quantum mechanics, we model the emotion measurement process with a newly designed convolutional neural network.
Empirical experiments validate that our method achieves good classification results by learning from human-provided labels. The second piece of work on music emotion analysis is based on the EEG of the audience. Considering that EEG-based music emotion classification can be influenced by the inter-trial effect, we design a novel single-trial music listening paradigm that avoids this effect on the EEG data. To address the severe inter-subject differences in the single-trial setting, we propose a novel computational model named resting-state alignment, which aims to eliminate differences between subjects while identifying the features that contribute to emotion recognition. Empirical experiments demonstrate the performance of the proposed technique, and the observations are consistent with previous neuroscience studies. Future work will explore algorithmic composition that evokes specific emotions, which has both academic value and commercial potential.
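The musicality and novelty games described in the abstract pull in opposite directions: one rewards closeness to the distribution of human-composed music, while the other rewards distance from the single nearest human example, so the generator cannot score well by copying training data. The toy sketch below illustrates that tension with plain Euclidean distances on short feature vectors; the function names, scoring rules, and data are illustrative assumptions, not the thesis's actual adversarial losses.

```python
# Toy sketch of the musicality/novelty trade-off described in the abstract.
# The real MNGAN trains adversarial networks; here, music samples are stand-in
# feature vectors and both "games" are scored with Euclidean distances
# (an assumption for illustration only).
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def musicality_score(sample, human_examples):
    # Musicality game: reward samples close to the human-music distribution,
    # approximated here by the (negated) mean distance to human examples.
    return -sum(distance(sample, h) for h in human_examples) / len(human_examples)

def novelty_score(sample, human_examples):
    # Novelty game: reward samples far from the NEAREST human example,
    # so plagiarising a training piece earns a score of zero.
    return min(distance(sample, h) for h in human_examples)

human = [[0.0, 0.0], [1.0, 1.0]]
candidates = {
    "copy":        [0.0, 0.0],  # duplicates a human piece: zero novelty
    "interpolant": [0.5, 0.5],  # close to both examples, identical to neither
    "outlier":     [9.0, 9.0],  # highly novel but far from any human music
}
for name, s in candidates.items():
    print(name,
          round(musicality_score(s, human), 3),
          round(novelty_score(s, human), 3))
```

Alternating the two games, as the abstract describes, steers generation toward samples like the "interpolant" above: plausible under the human-music distribution yet not a copy of any single example.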
Description: xviii, 111 pages : color illustrations
PolyU Library Call No.: [THS] LG51 .H577P COMP 2019 Chen
URI: http://hdl.handle.net/10397/81909
Rights: All rights reserved.
Appears in Collections:Thesis

Files in This Item:
File                           Description      Size     Format
991022289507603411_link.htm    For PolyU Users  168 B    HTML
991022289507603411.pdf         For All Users    6.38 MB  Adobe PDF

