
Modern Methods for Musicology: Abstracts

Computer-Representation of Music in the Research Environment

Geraint Wiggins, Goldsmiths College, University of London.

Even before the advent of electronics, Ada, Lady Lovelace (1843) had imagined the possibility of processing or generating music by means of computers – by which she meant mechanical tools like Charles Babbage's (1864) proposed Analytical Engine.

However, in order to process or generate any artifact, physical or otherwise, by computer, it is first necessary to design an adequate representation of that thing, in much the same way that an engineer must have a clear understanding of the materials that she is using in order to build a physical structure. What is more, that understanding needs to be expressed in standard mathematical parlance, so that it can be understood by others, but also, crucially, so that standard mathematical methods may be applied to it. In this way, for example, it becomes possible to calculate the maximum load-bearing capacity of a bridge, or the thickness of a column necessary to support a roof, without actually building the structure. In other words, it becomes possible to use mathematical and formal-logical methods to infer information about the structures.

If we take the position that music is organised sound, the analogy with engineering structures is direct. The experience of listeners is determined both by the detailed and broader structure of the notes in relation to each other (e.g., rhythms, melodies, chords, verses, choruses) and by the sound of the notes (or sound-objects) themselves (e.g., pianos, flutes, combinations of instruments, non-instrumental sounds).

However, this structuralist view, which is itself a simplification, is not the only valid meaning of the word "music" in academia. Milton Babbitt (1965) argues for the existence of three domains, each of which is one aspect of music, but each of which is equally referred to as "music": the graphemic domain of notation, the acoustic domain of physical sound, and the auditory domain of human perceptual response to sound. Each of these may be viewed as a different representation of the same (abstract and ineffable) thing; each has its own properties, and the relationships between them are not simple.

In order to gain a toe-hold on the forbidding slopes of the problem of music representation, it is necessary to take the engineering approach, not to the physical or aesthetic properties of music, but to the formal properties of the systems most commonly used to describe it. For example, Regener (1973) describes the system of common Western notation in strict mathematical terms; Lewin (1987) is interested in modelling tonal intervals and operations which may be applied to them, whether musically familiar or not; whilst Wiggins et al. (1993) use a very low-level representation of constant-pitch notes placed in the context of hierarchical structures representing musical grouping. In this last case, the pivotal concept is abstraction (Smaill et al., 1993), which allows mathematical manipulation of the represented music using words and concepts familiar to musicians while maintaining mathematical rigour and thus admitting automated logical inference. This is the approach on which I expand in this paper.
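
(By way of illustration only, and not the authors' own formalism: a minimal sketch in Python of a constant-pitch note placed within a hierarchical grouping structure, with one abstract operation, transposition, defined uniformly over both. The names Note, Group and transpose are hypothetical.)

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Note:
        """A constant-pitch event: pitch as a MIDI note number, onset and duration in beats."""
        pitch: int
        onset: float
        duration: float

    @dataclass
    class Group:
        """A hierarchical constituent: an ordered collection of notes and sub-groups."""
        label: str
        members: List[Union[Note, "Group"]]

    def transpose(constituent, interval):
        """One operation, stated in musicians' terms (transpose by an interval in
        semitones) but applied uniformly at every level of the hierarchy."""
        if isinstance(constituent, Note):
            return Note(constituent.pitch + interval, constituent.onset, constituent.duration)
        return Group(constituent.label, [transpose(m, interval) for m in constituent.members])

    # A two-note motif grouped into a phrase, then the whole phrase transposed up a fifth.
    motif = Group("motif", [Note(60, 0.0, 1.0), Note(64, 1.0, 1.0)])
    phrase = Group("phrase", [motif, transpose(motif, 2)])
    print(transpose(phrase, 7))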

Digital Critical Editions of Music: A Multidimensional Model

Frans Wiering, University of Utrecht.

The aim of this paper is to think through some of the implications that ICT may have for critical editing and scholarly editions of music. These implications go beyond already accepted practices such as the use of music notation software for the preparation of scores, the online distribution of music in PDF format, or even the interchange of score data in some encoded format. In each of these, the visual aspect of the score is crucial; the function of the underlying encoding is only to make storage, display, and manipulation of the score possible. I propose to reverse the relationship between the two, and to regard the encoding itself as the most important component of an edition, containing the information from which a number of concrete, visual representations can be generated according to the user’s wishes. In this manner, the edition is no longer confined to what can be represented on the two-dimensional page, allowing it to be enriched with information that is not directly meant as part of the printed score but is important to the researcher.

Jerome McGann (1995) has argued that the process of textual transmission can be represented in book format only at the expense of user-friendliness, and this is all the more true of music, where not only scores but also performances form an integral part of the transmission of a work. McGann’s solution is what he calls ‘HyperEditing,’ resulting ideally in a ‘fully networked hypermedia archive.’ For the present discussion, we could substitute the expression ‘digital critical edition of music.’ Such an edition can be imagined as a collection of digitised and enriched source materials, modelled in a multidimensional space. At the user’s request, two-dimensional slices through this space are generated that may correspond to certain stages in the work’s transmission history or to an editor’s interpretation of the work, or that visualise the transmission process itself. Certain views may be frozen for future reference, but the edition as such would be dynamic and collaborative, so that the editing process can be incremental and no effort is wasted on redoing the groundwork for the next edition.
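
(A deliberately simple sketch in Python, not the ECOLM encoding, may make the idea concrete: each encoded event carries readings from several sources, and a two-dimensional ‘slice’, a score view for one witness, is generated from the encoding on request. The names Event, Edition and view are hypothetical.)

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Event:
        """One note position in the encoded edition, with a reading per source."""
        position: int
        readings: Dict[str, str]          # source id -> reading (e.g. a pitch/duration token)
        annotation: Optional[str] = None  # editorial commentary, kept apart from the score

    @dataclass
    class Edition:
        """The encoding is primary; concrete score views are generated from it on request."""
        events: List[Event] = field(default_factory=list)

        def view(self, source: str, fallback: str) -> List[str]:
            """Generate one 'slice': each event's reading in the chosen source,
            falling back to another witness where that source is silent."""
            return [e.readings.get(source, e.readings.get(fallback, "-")) for e in self.events]

    # Two witnesses (A = autograph, P = first print) diverge at position 1.
    ed = Edition([
        Event(0, {"A": "c'4", "P": "c'4"}),
        Event(1, {"A": "d'8", "P": "e'8"}, annotation="P follows a later correction"),
    ])
    print(ed.view("A", fallback="P"))  # ["c'4", "d'8"]
    print(ed.view("P", fallback="A"))  # ["c'4", "e'8"]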

In literary studies, the ‘critical edition in the digital age’ has been an issue of debate for at least ten years, and McGann’s views are by no means uncontested. Yet it is curious to observe that there has been no comparable debate about digital critical editions of music, and that only a handful of experiments have been done. I will discuss a number of cases where such an edition may help solve problems encountered in traditional scholarly editions. At the same time, these will exemplify some of the dimensions of the editorial space, such as transcription, normalisation, emendation, annotation, variant readings, and intertextual relations. Several of my examples involve lute music, as my particular research is in developing an encoding method for critical editions of this repertoire for use in ECOLM (Electronic Corpus of Lute Music). The practical results of some experiments will also be shown. Hopefully these cases will serve as a starting-point for a debate about whether HyperEditing of music ought to be part of musicology in the twenty-first century at all, and if so, how we can ensure it will be.

The Online Chopin Variorum Edition: Music and Musicology in New Perspectives

John Rink, Royal Holloway, University of London.

Although numerous variorum projects exist in the field of textual studies, many of which exploit sophisticated technologies with regard to image manipulation and collation/cross-referencing across discrete filiation chains, musicology has only begun to exploit the application of such technologies to complicated source networks such as those pertaining to Chopin as well as to Bach, Mozart, Beethoven, and other composers. The aim of the Online Chopin Variorum Edition (OCVE) – funded by the Andrew W. Mellon Foundation – is to capitalize upon emerging technical capacities for text/image comparison and new music-recognition technologies that allow unprecedented manipulation and comparison of diverse musical elements. Its primary scholarly goal is to facilitate and enhance comparative analysis of three categories of source material: manuscripts (sketches, autographs, scribal copies, glosses in student copies, etc.); first impressions of the first editions; and later impressions of the first editions which contain variants attributable either to the composer or to others involved in the editorial process. This is being achieved by unprecedented juxtapositional techniques with potential application to the music of a wide range of composers.

This paper will first offer some general remarks on the use of digital technologies in musicological research before undertaking more detailed discussion of OCVE as well as another ongoing project – Chopin’s First Editions Online (CFEO), funded by the AHRC. Demonstrations of the CFEO prototype will be followed by a similar presentation of the innovative OCVE website, with comments on its implications for understanding musical works in potentially more dynamic ways, and on the new approaches that may be taken to editing and performing them as a result of this project.

The Music Map: Towards a Mapping of ICT in Creative Music Practice

Celia Duffy, Royal Scottish Academy of Music and Drama.

Creative music practice (encompassing performance and composition) seems not to sit comfortably within the apparently narrow confines of the title of this seminar: Modern Methods for Musicology. Music is a very broad discipline and, with such a diversity of teaching, learning and research in music as a backdrop, this paper proposes that there is a need for a better understanding of ICT applications across the whole field. In the particular context of the AHRC and the UK research establishment, there is a case for paying special attention to creative practice.

The rationale behind the proposed mapping activity is simple: in the same way that Willard McCarty and Harold Short’s landmark Intellectual Map for Humanities Computing (2002) signalled a recognition of the maturity of humanities computing, the time is now ripe for a similar exercise in music. We need to understand what’s happening right across our discipline and think about: the various types of ICT tools, applications and approaches, what they’re used for and why, what could be further developed and how, how best to support those developments, and what the relationships are between the various constituent parts both within the broad field of music and outside in other disciplines (e.g. computing science, engineering, information science). A way of starting to draw the map is to mark a boundary between the use of ICT in assisting the study of musical texts (which until recently, and perhaps with the honourable exception of the sub-discipline of ethnomusicology, has been musicology’s main concern) and the study and production of musical sounds (the performer’s main concern). A separate stake can be claimed for the use of ICT in composition. So far so good, but Leigh Landy’s work in providing useful working classifications of the various genres of electroacoustic music indicates that this is complex terrain. Adding in the recent explosion of networked audio and creative interactivity via readily-available music software further complicates the picture.

This paper will concentrate on mapping ICT in performance contexts including both conventional performance training and new creative opportunities provided by ICT tools in enhancing or inventing new instruments and ways of working. It is hoped that its tentative proposals for a preliminary mapping and taxonomy of one area might lead to contributions to a wider debate and better understanding of the role of ICT across the discipline.

The Computer and the Singing Voice

David Howard, University of York.

The advent and ubiquity of cheap multi-media computers capable of analysing acoustic parameters in real time have seen the emergence of various kinds of software for voice analysis, some of which is freeware. Many singers, actors, voice teachers, and professional voice users are becoming very interested in the potential offered by such software for voice analysis, voice training and for enhancing vocal performances. This paper will describe and demonstrate (where appropriate) different types of software that are available for the singing voice, and discuss their application and reliability in terms of how well their algorithms can quantify aspects of human singing voice production. It will also review research on the singing voice that makes use of computers for data gathering and quantification.
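
(As an indication of the kind of algorithm at issue, and not the implementation of any particular package: a generic Python sketch estimating fundamental frequency, one of the most commonly reported singing-voice parameters, from a single voiced frame by autocorrelation.)

    import numpy as np

    def estimate_f0(frame: np.ndarray, sample_rate: int,
                    fmin: float = 80.0, fmax: float = 1000.0) -> float:
        """Estimate the fundamental frequency (Hz) of a voiced frame by autocorrelation.
        Deliberately simple; practical tools add voicing detection, windowing and
        smoothing across frames."""
        frame = frame - np.mean(frame)
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(sample_rate / fmax)   # shortest plausible period in samples
        lag_max = int(sample_rate / fmin)   # longest plausible period in samples
        lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
        return sample_rate / lag

    # Sanity check on a synthetic 220 Hz tone.
    sr = 44100
    t = np.arange(0, 0.05, 1 / sr)
    print(estimate_f0(np.sin(2 * np.pi * 220 * t), sr))  # approximately 220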

Filling Gaps between Current Musicological Practice and Computer Technology at IRCAM

Michael Fingerhut, IRCAM, Paris.

Knowledge build-up is a process which involves complex interactions between intellectual pursuits and the tools used to examine reality. While the interdependence between research and its instruments is more readily apparent in fields such as (say) neurophysics or microbiology, it is usually obscured in musicology, where the nature of the knowledge produced is rarely explicitly correlated with the devices which allow for its emergence. Computers provide new means to relate, organise, process, ascribe meaning to, and reuse a wide variety of musical information, namely that which lends itself to digitisation (from traces of the compositional process such as sketches and notes, to computer “patches”, musical scores, books and other publications about the work, recordings of live events and information about them, etc.), on a scale that is both massive in depth and broad in scope, and thus cannot but have a major impact on contemporary musicology. Their use addresses a multiplicity of related domains (acoustical, perceptual, musical, technological, historical, social, legal, etc.) and levels of interpretation (physical, symbolic, semantic, cognitive, etc.). At the crossroads of the musical creative process, production and performance on the one hand, and research and development in the related sciences and technologies on the other, IRCAM holds a particular place which allows these interdependences to be examined in conjunction with the development of specific tools. In this paper, we will attempt to present the utopian vision of the musicologist, his or her ideal instrumentation, emerging from this reflection, as well as some of the concepts and tools which are already in use or in the course of realisation.

Audio Tools for Music Discovery and Structural Analysis

Michael Casey, Goldsmiths College, University of London.

An open problem in music research is to establish the lineage, or evolution, of musical ideas between different works, possibly by different artists. In this paper I will present new tools that assist in research processes that look for inter-work similarities in large recorded collections. Our system is built using the Goldsmiths MPEG-7 Toolkit, an implementation of the Low Level Audio Descriptors from the MPEG-7 International Standard for Multimedia Content Description (ISO 15938). The toolkit rapidly extracts musical features from thousands (or millions) of digital recordings. I will demonstrate that these features correspond with intuitive notions about music, such as harmony and timbre; in other words, they are perceptual features. This means they can assist in identifying latent musical relationships in large collections: for example, passages in different works that exhibit a strong degree of similarity in these perceptual attributes, such as similar chord sequences or melodies.
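
(For readers who wish to experiment, a rough analogue of this feature-extraction step can be sketched in Python with the open-source librosa library; this is not the Goldsmiths MPEG-7 Toolkit, and the descriptors chosen here, chroma for harmony and MFCCs for timbre, are only broadly comparable to the MPEG-7 Low Level Audio Descriptors.)

    import numpy as np
    import librosa  # open-source audio analysis library, standing in for the MPEG-7 toolkit

    def extract_features(path: str) -> np.ndarray:
        """Return a frames-by-25 matrix pairing a harmony-related descriptor (chroma)
        with a timbre-related one (MFCCs) for a single recording."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # 12 x frames: pitch-class energy
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 x frames: spectral envelope
        return np.vstack([chroma, mfcc]).T

    def best_frame_match(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity of the best-matching pair of frames between two recordings,
        a crude stand-in for proper inter-work passage matching."""
        a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
        b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
        return float(np.max(a @ b.T))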

Finding such connections between works involves an enormous amount of computation. For collections of more than a few works, this computation becomes intractable because a bottleneck is formed by the exhaustive pair-wise comparison between all entries in the database. The second tool we introduce is Musically Sensitive Hashing, a method for finding close matches in a large database without having to explicitly compare a fragment against all parts of all works. Instead, we use the inherent numerical properties of similar works in the feature space to define a region of interest, and we only perform searching within that narrow region. This speeds up similarity computations by several orders of magnitude, thereby making the methods usable for research on large audio collections. I will give examples of matching in relatively large databases of audio and present some recent results from audio and music data mining experiments on contrasting collections.
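
(The details of Musically Sensitive Hashing are the authors' own; the underlying principle, looking up a small region of the feature space instead of comparing against every entry, can be illustrated with a generic locality-sensitive hash based on random projections. A sketch only, with hypothetical names.)

    import numpy as np
    from collections import defaultdict

    class RandomProjectionIndex:
        """Nearby feature vectors tend to fall on the same side of random hyperplanes,
        so they tend to share a bucket key; a query then compares against one small
        bucket instead of the whole database."""

        def __init__(self, dim: int, n_bits: int = 12, seed: int = 0):
            self.planes = np.random.default_rng(seed).normal(size=(n_bits, dim))
            self.buckets = defaultdict(list)

        def _key(self, v: np.ndarray) -> tuple:
            return tuple(((self.planes @ v) > 0).tolist())

        def add(self, item_id, v: np.ndarray) -> None:
            self.buckets[self._key(v)].append(item_id)

        def candidates(self, v: np.ndarray) -> list:
            return self.buckets[self._key(v)]

    # Index 10,000 random feature vectors, then query with a slightly perturbed copy of one.
    rng = np.random.default_rng(1)
    index = RandomProjectionIndex(dim=25)
    vectors = rng.normal(size=(10000, 25))
    for i, v in enumerate(vectors):
        index.add(i, v)
    probe = vectors[42] + 0.01 * rng.normal(size=25)
    print(index.candidates(probe))  # a handful of candidates, 42 almost certainly among them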

ICT Tools for Searching, Annotation and Analysis of Audio-Visual Media

Adam Lindsay, University of Lancaster.

This talk will outline the "ICT Tools for Searching, Annotation and Analysis of Audio-Visual Media" (hereafter, ICT4AV) technology consultation project, some preliminary results, and other musings in the area of applying ICT tools and activities to music research.

The ICT4AV project’s goal is to survey the tools, technologies, and research being explored by technologists, and the needs and requirements of humanities researchers. The domains are time-based multimedia: primarily music, speech, and video. Much of the project’s dissemination is being performed in the open: we are collecting data and immediately publishing it as a weblog, which has some interesting side-effects. Our plan for phase two of the project is to approach open-minded researchers and conduct directed interviews about their needs, from a requirements engineering point of view.

We foresee a few challenges in communicating about ICT with those who have not incorporated it into their research. There may be resistance to ICT tools in general, or the tendency to ascribe magical properties to ICT. It is hard to explain how computer technology may really affect humanities research. Our best approach so far is to spout a few Minsky-esque aphorisms: the simple is hard, and hard things are easy - ICT tools work best at a much smaller or much larger scale than people perceive.

In our explorations so far, we have benefited from sharing information between music, speech, and video research. In addition to the expected research from conferences like ICMC, ISMIR, DAFx, NIME, and ICAD, we are finding conjunctions with other areas, such as video surveillance (the importance of synchronising multiple, parallel media streams) and commercial interests (market forces, especially in digital music, will strongly affect how researchers carry out their research). In addition, digital rights management and other means of enforcing copyright at the expense of fair use have the potential to stifle research for a whole generation.