DagSemProc.06171.3.pdf
Search and retrieval of specific musical content (e.g. emotion, melody) has become an important aspect of system development, but little of the research has been user-oriented. The success of music information retrieval technology depends primarily on assessing and meeting the needs of its users. Potential users of music information retrieval systems, however, draw on many different ways of expressing themselves. Who, then, are the potential users of MIR systems, and how would they describe the qualities of music? High-level concepts contribute to the definition of meaning in music. How can we measure meaning and emotion in music? How can we define the higher-order understanding of musical features that average users share? Information on listeners' perception of the qualities of music is needed to make automated access to music content attractive to system users.

The emphasis of our investigation is on a user-oriented approach to the semantic description of music. We report the results of an experiment that explores how users perceive affect in music and which structural descriptions of music best characterize their understanding of musical expression. Seventy-nine potential users of music information retrieval systems rated different sets of adjectives while listening to 160 pieces of real music. The subjects were recruited from among 774 participants in a large survey on the musical background, habits and interests, preferred genres, taste and favourite titles of people willing to use interactive music systems. Moreover, the stimuli reflected the musical taste of the average participant in that survey.

The study reveals that perceived qualities of music are affected by the profile of the user. Significant subject dependencies are found for age, musical expertise, musicianship, broadness of taste and familiarity with classical music. Furthermore, interesting relationships are discovered between expressive and structural features. Analyses show that the targeted population agrees most unanimously on loudness and tempo, whilst less unanimity was found for timbre and articulation. Finally, our findings are tested and validated by means of a demo of a semantic music recommender system prototype that supports querying a music database by semantic descriptors for affect, structure and motion. The system, which recommends music from a relational database containing the quality ratings provided by the participants, illustrates the potential of user-dependent, emotion-based retrieval of music.
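The recommender prototype is only demonstrated, not specified, in this abstract. Purely as an illustration of the kind of query such a system supports, the short Python sketch below retrieves pieces from a relational table of listener quality ratings that match a set of requested semantic descriptors. The table name, columns and example data are hypothetical and are not taken from the actual prototype.

    # Minimal sketch: emotion-based retrieval from a relational table of
    # listener quality ratings (hypothetical schema and data).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE ratings (
        piece_id INTEGER, title TEXT, descriptor TEXT, mean_rating REAL)""")
    conn.executemany(
        "INSERT INTO ratings VALUES (?, ?, ?, ?)",
        [
            (1, "Piece A", "tender",   4.2),
            (1, "Piece A", "slow",     4.5),
            (2, "Piece B", "cheerful", 4.1),
            (2, "Piece B", "fast",     4.6),
        ],
    )

    def recommend(descriptors, limit=5):
        """Return pieces whose mean listener ratings best match all requested
        semantic descriptors (affect, structure or motion)."""
        placeholders = ",".join("?" * len(descriptors))
        rows = conn.execute(
            f"""SELECT title, AVG(mean_rating) AS score
                FROM ratings
                WHERE descriptor IN ({placeholders})
                GROUP BY piece_id, title
                HAVING COUNT(DISTINCT descriptor) = ?
                ORDER BY score DESC
                LIMIT ?""",
            (*descriptors, len(descriptors), limit),
        ).fetchall()
        return rows

    print(recommend(["tender", "slow"]))   # -> [('Piece A', 4.35)]

In this sketch a piece is returned only if listeners rated it on every requested descriptor, and candidates are ranked by their average rating; a system like the one described above could additionally weight descriptors or condition the ranking on user-profile dependencies such as age or musical expertise.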