dc.contributor: Department of Computing
dc.creator: Chen, Gong
dc.title: Artificial intelligence in music : composition and emotion
dcterms.abstract: Music is a universal feature of all human societies. People compose music to express their minds, and in turn, music evokes people's emotions. This thesis investigates new artificial intelligence techniques for music composition and emotion analysis. Algorithmic composition enables the computer to compose music like a human musician. We utilize deep learning, a powerful artificial intelligence technology, for algorithmic composition. To achieve both good musicality and novelty in machine-composed music, we propose a musicality-novelty generative adversarial network (MNGAN) model. Two games, a musicality game and a novelty game, are designed and optimized alternately. Specifically, the musicality game makes the output music samples follow the distribution of human-composed examples, while the novelty game pushes the output samples away from the nearest human-composed example. A series of experiments validates the effectiveness of the proposed algorithmic composition technique. To evaluate machine-composed music more convincingly, we propose to analyze the audience's brainwaves using electroencephalography (EEG). A new psychological paradigm employing auditory stimuli with varying degrees of musicality is designed. Based on this paradigm, we propose a novel computational model to evaluate the musicality of machine-composed music. Several empirical experiments are performed, and the results validate the effectiveness of the proposed method. Some interesting neuroscience observations are also reported. For music emotion analysis, we present two pieces of work. The first is a novel multi-label emotion classification model named the quantum convolutional neural network (QCNN), which classifies music according to its audio signal. Inspired by the concepts of superposition states and collapse in quantum mechanics, we model the emotion measurement process with a newly designed convolutional neural network. Empirical experiments validate that our method achieves good classification results by learning from labels provided by humans. The second work on music emotion analysis is based on the EEG of the audience. Considering that EEG-based music emotion classification can be influenced by the inter-trial effect, we design a novel single-trial music listening paradigm that avoids this effect on the EEG data. To address the severe inter-subject differences in the single-trial setting, we propose a novel computational model named resting-state alignment, which aims to eliminate the differences between subjects while finding the features that contribute to emotion recognition. Empirical experiments demonstrate the performance of the proposed technique, and the observations are consistent with previous neuroscience studies. Future work will explore algorithmic composition that evokes specific emotions, which has both academic value and commercial potential.
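The novelty game described in the abstract rewards samples that lie far from their nearest human-composed example. A minimal toy sketch of that nearest-neighbor novelty term is given below; the function name, the feature-vector representation of music, and the Euclidean metric are all illustrative assumptions, not the thesis's actual loss.

```python
import numpy as np

def novelty_penalty(generated, human_examples):
    """Distance from each generated sample to its nearest human-composed
    example; a novelty game would push this quantity up while the
    musicality game keeps samples on the human data distribution.
    Illustrative sketch only (L2 metric and toy vectors are assumptions)."""
    # pairwise Euclidean distances, shape (n_generated, n_human)
    diffs = generated[:, None, :] - human_examples[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # nearest-neighbor distance for each generated sample
    return dists.min(axis=1)

# toy feature vectors standing in for encoded music fragments
human = np.array([[0.0, 0.0], [1.0, 1.0]])
gen = np.array([[0.0, 0.1], [3.0, 3.0]])
print(novelty_penalty(gen, human))
# the first sample sits close to a human example (low novelty),
# the second sits far from both (high novelty)
```

In a full adversarial setup this term would be traded off against the musicality objective, since maximizing novelty alone would simply push samples off the data manifold.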
dcterms.accessRights: open access
dcterms.extent: xviii, 111 pages : color illustrations
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
dcterms.LCSH: Artificial intelligence -- Musical applications
dcterms.LCSH: Computer music
dcterms.LCSH: Computer composition (Music)
dcterms.LCSH: Emotions in music
Appears in Collections: Thesis
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.