License
When quoting this document, please refer to the following:
DOI: 10.4230/DFU.Vol3.11041.157
URN: urn:nbn:de:0030-drops-34711
URL: http://drops.dagstuhl.de/opus/volltexte/2012/3471/

Grosche, Peter; Müller, Meinard; Serrà, Joan

Audio Content-Based Music Retrieval



Abstract

The rapidly growing corpus of digital audio material requires novel retrieval strategies for exploring large music collections. Traditional retrieval strategies rely on metadata that describe the actual audio content in words. When such textual descriptions are not available, one requires content-based retrieval strategies that utilize only the raw audio material. In this contribution, we discuss content-based retrieval strategies that follow the query-by-example paradigm: given an audio query, the task is to retrieve from a music collection all documents that are somehow similar or related to the query. Such strategies can be loosely classified according to their "specificity", which refers to the degree of similarity between the query and the database documents. Here, high specificity refers to a strict notion of similarity, whereas low specificity refers to a rather vague one. Furthermore, we introduce a second classification principle based on "granularity", which distinguishes between fragment-level and document-level retrieval. Using a classification scheme based on specificity and granularity, we identify various classes of retrieval scenarios, comprising "audio identification", "audio matching", and "version identification". For these three important classes, we give an overview of representative state-of-the-art approaches, which also illustrate the sometimes subtle but crucial differences between the retrieval scenarios. Finally, we give an outlook on a user-oriented retrieval system that combines the various retrieval strategies in a unified framework.

BibTeX - Entry

@InCollection{grosche_et_al:DFU:2012:3471,
  author =	{Peter Grosche and Meinard M{\"u}ller and Joan Serr{\`a}},
  title =	{{Audio Content-Based Music Retrieval}},
  booktitle =	{Multimodal Music Processing},
  pages =	{157--174},
  series =	{Dagstuhl Follow-Ups},
  ISBN =	{978-3-939897-37-8},
  ISSN =	{1868-8977},
  year =	{2012},
  volume =	{3},
  editor =	{Meinard M{\"u}ller and Masataka Goto and Markus Schedl},
  publisher =	{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{http://drops.dagstuhl.de/opus/volltexte/2012/3471},
  URN =		{urn:nbn:de:0030-drops-34711},
  doi =		{10.4230/DFU.Vol3.11041.157},
  annote =	{Keywords: music retrieval, content-based, query-by-example, audio identification, audio matching, cover song identification}
}

Keywords: music retrieval, content-based, query-by-example, audio identification, audio matching, cover song identification
Seminar: Multimodal Music Processing
Issue date: 2012
Date of publication: 27.04.2012

