ACE: A Framework for optimizing music classification
1
ACE: A Framework for optimizing music
classification
  • Cory McKay
  • Rebecca Fiebrink
  • Daniel McEnnis
  • Beinan Li
  • Ichiro Fujinaga
  • Music Technology Area
  • Faculty of Music
  • McGill University

2
Goals
  • Highlight limitations of existing pattern
    recognition software when applied to MIR
  • Present solutions to these limitations
  • Stress importance of standardized classification
    and feature extraction software
  • Ease of use, portability and extensibility
  • Present the ACE software framework
  • Uses meta-learning
  • Uses classification ensembles

3
Existing music classification systems
  • Systems often implemented with specific tasks in
    mind
  • Not extensible to general tasks
  • Often difficult to use for those not involved in
    project
  • Need standardized systems for a variety of MIR
    problems
  • No need to reimplement existing algorithms
  • More reliable code
  • More usable software
  • Facilitates comparison of methodologies
  • Important foundations
  • Marsyas (Tzanetakis and Cook 1999)
  • M2K (Downie 2004)

4
Existing general classification systems
  • Available general-purpose systems
  • PRTools (van der Heijden et al. 2004)
  • Weka (Witten and Frank 2005)
  • Other meta-learning systems
  • AST (Lindner and Studer 1999)
  • Metal (www.metal-kdd.org)

5
Problems with existing systems
  • Distribution problems
  • Proprietary software
  • Not open source
  • Limited licence
  • Music-specific systems are often limited
  • None use meta-learning
  • Classifier ensembles rarely used
  • Interfaces not oriented towards end users
  • General-purpose systems not designed to meet the
    particular needs of music

6
Special needs of music classification (1)
  • Assign multiple classes to individual recordings
  • A recording may belong to multiple genres, for
    example
  • Allow classification of sub-sections and of
    overall recordings
  • Audio features often windowed
  • Useful for segmentation problems
  • Maintain logical grouping of multi-dimensional
    features
  • Musical features often consist of vectors (e.g.
    MFCCs)
  • This relatedness can provide classification
    opportunities

7
Special needs of music classification (2)
  • Maintain identifying meta-data about instances
    (a minimal data-structure sketch follows this list)
  • Title, performer, composer, date, etc.
  • Take advantage of hierarchically structured
    taxonomies
  • Humans often organize music hierarchically
  • Can provide classification opportunities
  • Interface for any user
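
A minimal Java sketch of an instance record that could meet the needs on
these two slides: multiple labels, windowed feature vectors kept as one
logical group, identifying meta-data, and a position in a hierarchical
taxonomy. All class and field names here are hypothetical illustrations,
not ACE's actual data structures.

    import java.util.List;
    import java.util.Map;

    // Hypothetical types illustrating the requirements above;
    // NOT ACE's actual classes.
    class MusicInstance {
        Map<String, String> metadata;   // title, performer, composer, date, ...
        List<String> labels;            // a recording may belong to several classes
        List<double[]> mfccWindows;     // one MFCC vector per analysis window,
                                        // kept together as one logical feature
        TaxonomyNode taxonomyPosition;  // where the instance sits in a hierarchy
    }

    class TaxonomyNode {
        String name;                    // e.g. "Jazz", with child "Bebop"
        TaxonomyNode parent;
        List<TaxonomyNode> children;
    }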

8
Standardized file formats
  • Existing formats such as Weka's ARFF cannot
    represent the needed information
  • Important to enable classification systems to
    communicate with arbitrary feature extractors
  • Four XML file formats that meet the above needs
    are described in the proceedings paper

9
The ACE framework
  • ACE (Autonomous Classification Engine) is a
    classification framework that can be applied to
    arbitrary types of music classification
  • Meets all requirements presented above
  • Java implementation makes ACE portable and easy
    to install

10
ACE and meta-learning
  • Many classification methodologies available
  • Each has different strengths and weaknesses
  • Uses meta-learning to experiment with a variety
    of approaches (see the sketch after this list)
  • Finds approaches well suited to each problem
  • Makes powerful pattern recognition tools
    available to non-experts
  • Useful for benchmarking new classifiers and
    features
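
The core meta-learning loop can be pictured with Weka, the library ACE
builds on (see the next slides). This is an illustrative sketch only, not
ACE's own code; the candidate list and the file name features.arff are
assumptions.

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class MetaLearnSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("features.arff"); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        // A few of the candidate classifier families listed on slide 12.
        Classifier[] candidates = {
            new weka.classifiers.trees.J48(),        // induction tree
            new weka.classifiers.bayes.NaiveBayes(), // naive Bayes
            new weka.classifiers.lazy.IBk(3),        // k-nearest neighbour
            new weka.classifiers.functions.SMO()     // support vector machine
        };

        // Cross-validate each candidate and keep the best performer.
        Classifier best = null;
        double bestAcc = -1.0;
        for (Classifier c : candidates) {
          Evaluation eval = new Evaluation(data);
          eval.crossValidateModel(c, data, 10, new Random(1)); // 10-fold CV
          if (eval.pctCorrect() > bestAcc) {
            bestAcc = eval.pctCorrect();
            best = c;
          }
        }
        System.out.println("Selected " + best.getClass().getSimpleName()
            + " at " + bestAcc + "% correct");
      }
    }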

11
(No transcript: slide contains only an image)
12
Algorithms used by ACE
  • Uses Weka class libraries
  • Makes it easy to add or develop new algorithms
  • Candidate classifiers
  • Induction trees, naive Bayes, k-nearest
    neighbour, neural networks, support vector
    machines
  • Classifier parameters are also varied
    automatically (see the sketch after this list)
  • Dimensionality reduction
  • Feature selection using genetic algorithms,
    principal component analysis, exhaustive searches
  • Classifier ensembles
  • Bagging, boosting
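
A sketch of two of these ideas in Weka terms: varying a classifier
parameter (k for k-NN) and wrapping a classifier so that dimensionality
reduction runs before training. Class names are from the Weka API; the
data file is hypothetical and this is not ACE's actual code.

    import java.util.Random;
    import weka.attributeSelection.PrincipalComponents;
    import weka.attributeSelection.Ranker;
    import weka.classifiers.Evaluation;
    import weka.classifiers.lazy.IBk;
    import weka.classifiers.meta.AttributeSelectedClassifier;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ReductionSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("features.arff"); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        // Automatic parameter variation: try several k values for k-NN.
        for (int k : new int[] {1, 3, 5, 7}) {
          Evaluation eval = new Evaluation(data);
          eval.crossValidateModel(new IBk(k), data, 10, new Random(1));
          System.out.println("k=" + k + ": " + eval.pctCorrect() + "% correct");
        }

        // Dimensionality reduction: apply principal component analysis
        // before training (one of the reduction strategies listed above).
        AttributeSelectedClassifier pcaKnn = new AttributeSelectedClassifier();
        pcaKnn.setEvaluator(new PrincipalComponents());
        pcaKnn.setSearch(new Ranker());
        pcaKnn.setClassifier(new IBk(3));
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(pcaKnn, data, 10, new Random(1));
        System.out.println("PCA + 3-NN: " + eval.pctCorrect() + "% correct");
      }
    }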

13
Classifier ensembles
  • Multiple classifiers operating together to arrive
    at final classifications
  • e.g. AdaBoost (Freund and Schapire 1996); a
    minimal sketch follows this list
  • Success rates in many MIR areas are behaving
    asymptotically (Aucouturier and Pachet 2004)
  • Classifier ensembles could provide some
    improvement
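
A minimal AdaBoost sketch using Weka's AdaBoostM1: each boosting round
reweights the training instances so the next weak learner (here a
one-level decision stump) concentrates on the examples earlier rounds
misclassified, and the final prediction is a weighted vote. The data file
name is an assumption; this is not ACE's own code.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.AdaBoostM1;
    import weka.classifiers.trees.DecisionStump;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class EnsembleSketch {
      public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("features.arff"); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1);

        // Boost 50 decision stumps; later stumps focus on earlier errors.
        AdaBoostM1 boost = new AdaBoostM1();
        boost.setClassifier(new DecisionStump());
        boost.setNumIterations(50);

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(boost, data, 10, new Random(1));
        System.out.println("Boosted stumps: " + eval.pctCorrect() + "% correct");
      }
    }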

14
Musical evaluation experiments
  • Achieved a 95.6% success rate in a five-class
    beatbox recognition experiment (Sinyor et al.
    2005)
  • Repeated Tindale's percussion recognition
    experiment (2004)
  • ACE achieved 96.3% success, compared to
    Tindale's best rate of 94.9%
  • A 27.5% reduction in error rate (the error fell
    from 5.1% to 3.7%, and 1.4/5.1 ≈ 27.5%)

15
General evaluation experiments
  • Applied ACE to six commonly used UCI datasets
  • Compared results to recently published algorithm
    (Kotsiantis and Pintelas 2004)

16
Results of UCI experiments (1)
Data Set     ACE's Selected Classifier   Kotsiantis' Success Rate (%)   ACE's Success Rate (%)
autos        AdaBoost                    81.70                          86.30
diabetes     Naïve Bayes                 76.60                          78.00
ionosphere   AdaBoost                    90.70                          94.30
iris         FF Neural Net               95.60                          97.30
labor        k-NN                        93.40                          93.00
vote         Decision Tree               96.20                          96.30
17
Results of UCI experiments (2)
  • ACE performed very well
  • Statistical uncertainty makes it difficult to say
    that ACE's results are inherently superior
  • ACE can perform at least as well as a
    state-of-the-art algorithm with no tweaking
  • ACE achieved these results using only one minute
    per learning scheme for training and testing

18
Results of UCI experiments (3)
  • Different classifiers performed better on
    different datasets
  • Supports ACE's experimental meta-learning
    approach
  • Effectiveness of AdaBoost (chosen 2 times out of
    6) demonstrates strength of classifier ensembles

19
Feature extraction
  • ACE not tied to any particular feature extraction
    system
  • Reads Weka ARFF as well as ACE XML files
  • Two powerful and extensible feature extractors
    are bundled with ACE
  • Both write Weka ARFF as well as ACE XML files

20
jAudio
  • Reads
  • .mp3
  • .wav
  • .aiff
  • .au
  • .snd

21
jSymbolic
  • Reads MIDI
  • Uses 111 Bodhidharma features

22
ACE's interface
  • Graphical interface
  • Includes an on-line manual
  • Command-line interface
  • Batch processing
  • External calls
  • Java API
  • Open source
  • Well documented
  • Easy to extend

23
Current status of ACE
  • In alpha release
  • Full release scheduled for January 2006
  • Finalization of GUI
  • User constraints on training, classification and
    meta-learning times
  • Feature weighting
  • Expansion of candidate algorithms
  • Long-term
  • Distributed processing, unsupervised learning,
    blackboard systems, automatic cross-project
    optimization

24
Conclusions
  • Need standardized classification software able to
    deal with the special needs of music
  • Techniques such as meta-learning and classifier
    ensembles can lead to improved performance
  • ACE designed to address these issues

25
  • Web site
  • coltrane.music.mcgill.ca/ACE
  • E-mail
  • cory.mckay@mail.mcgill.ca