Funding for the Methods Network ended March 31st 2008. The website will be preserved in its current state.

New Protocols in Electroacoustic Music Analysis Workshop Report

Report by Professor Leigh Landy

Electroacoustic music is a form of music that relies on technology, and most of it today relies to a large extent on digital technology. Any workshop concerning this body of work therefore takes for granted that those present are highly ‘computer literate’.

The analysis of this new corpus of music need not involve digital technology. For example, an analyst may listen to a given recording of an electroacoustic work and transcribe what (s)he hears using some form of evocative transcription technique; such notation can be created by hand. Nevertheless, a good deal of analysis of this repertoire is dependent on digital technology. A transcription can, for example, also be created by digitally analysing a work’s sound and translating that sonic analytical information into some kind of readable form.

The event that took place at De Montfort University on 12 June 2007 was intended to span the space of ICT-based areas related to analysis. The concluding plenary served to identify where we are in terms of development and to identify future needs.

The workshop co-ordinator did not have a great deal of difficulty in selecting workshop speakers, as he and a colleague at the MTI Research Centre, Prof. Simon Emmerson, had been involved in the preparation of an AHRC large grant submission entitled ‘New and Evolving Forms of Electroacoustic Music Analysis’. In that application, a consortium of specialists was created to act as an advisory group. All consortium members were able to participate in the event except one who was unavailable on the day (Prof. Michael Casey, currently at Goldsmiths but soon to move to Dartmouth College in the US). In short, all areas specified in the application would be represented at the workshop, with the exception of computational analysis of sound types/data mining. At the plenary, the subject of sound classification inevitably came up, thus closing the loop.

The six subjects and their representatives were:

  • Interactive Composition/Sybil Software – Prof. Michael Clarke (Huddersfield University)
  • Production Documentation/Hypermedia Publications – Prof. Barry Truax (Simon Fraser University, Burnaby, B. C., Canada)
  • Computational Analysis – Prof. Eduardo Miranda (University of Plymouth: he ended up discussing modelling neurological means of listening and hearing)
  • Multi-media Presentation of Analysis/Evocative Transcription/Acousmographe Software – Yann Geslin (INA/GRM, Paris)
  • Hypermedia Publication of Analytical Results – Dr. Pierre Couprie (MTI, MINT – Sorbonne, Paris and the e-journal Musimédiane)
  • Intention/Reception Analysis: The Music Psychology Point of View – New ICT-based Metrics – Prof. Leigh Landy (MTI/DMU) and Dr. Kate Stevens (MARCS Auditory Laboratory of the University of Western Sydney)

All six gave twenty-minute presentations (each with ten minutes for questions) during the morning session. They then offered hands-on demonstrations during the afternoon session before the workshop’s closing plenary. All presenters will be submitting papers and documentation related to their work; these will be published in the online publications section of the ElectroAcoustic Resource Site (EARS: <>).

This being the case, the presentations will not be discussed in depth here, as the authors can express their work better than the author of this report can. The following paragraphs nevertheless summarise the different raisons d’être for the approaches.

Jean-Jacques Nattiez, in his 1990 Princeton University Press book, Music and Discourse: Toward a Semiology of Music, citing Jean Molino and others, spoke of three forms of musical analysis: poietic (e.g. from the composer’s point of view), aesthesic (e.g. from the listener’s point of view) and neutral (e.g. a computer-based rendition of recorded sound). All three were represented in one form or another during this workshop.

Clearly, there are many ways of investigating the poiesis of electroacoustic works. The goal in all cases is to help others understand the composer’s intentions and the procedures involved in the creation of a given work. The two contributions focused mainly on poiesis were those of Clarke and Truax. Truax believes that a composer can offer extremely valuable information by documenting how sounds are created, manipulated and structured in a work, thus allowing others to understand the minutiae of the digital means of its creation. In a sense, making this information available gives an interested party the opportunity to understand both the left-hand margin of a traditional score (i.e. which ‘instruments’ can be called upon) and the contents of the score itself (when is that ‘instrument’ playing; what is it playing; and how is it playing it). When presented alongside dramaturgical information (the ‘why’ of a work, something Truax has not prioritised thus far), such documentation offers in-depth insight into the entire compositional process. It is totally dependent on ICT methods related to electroacoustic composition.

Michael Clarke is also involved in pioneering work in that he has created sound creation and manipulation tools to be included in the analytical process. Clarke offers his users the opportunity to work interactively within his analytical framework using his specially created Sybil software. His approach is, simply stated, learning by doing: Clarke offers the analyst the opportunity to re-create sounds, or to create sounds in a similar manner to the composer, by providing a situation analogous to the one the composer used whilst creating a work. His treatment of Jonathan Harvey’s Mortuos Plango, Vivos Voco is a superb example of interactive analysis, offering a variety of means to create and re-create sounds as Harvey did at IRCAM in Paris when he created this digital composition.

The neutral level is the basis of Geslin’s work or, better said, of the work undertaken at the Groupe de Recherches Musicales (GRM) in Paris, where the acousmographe software was developed. The twist here, and this is something this rapporteur agrees with, is that the output of neutral-level analysis, an FFT diagram, sonogram or the like, is often not as helpful as many contend. The issue is that the ear can combine acoustic signals into a texture, something not always easily seen on a sonogram; furthermore, the sonogram may display acoustic information that is inaudible to the listener. The acousmographe takes FFT information as input and allows the user to create colourful evocative scores that highlight salient characteristics of an electroacoustic piece as the score scrolls along with the sound. In other words, neutral-level information is used to create an aesthesic score, as the listener chooses what those characteristics are and how they are best notated. The acousmographe is an invaluable tool, particularly for listeners less experienced with this still relatively young body of work, as it offers them something visual, and by analogy aural, to hold on to whilst listening to electroacoustic works. It can obviously also be used for more traditional approaches to, say, segmentation analysis. (Please note that Casey’s work on sound classification and segmentation is equally highly reliant on neutral-level approaches.)
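For readers unfamiliar with what such neutral-level data looks like, the following Python sketch computes the raw material of a sonogram, a short-time Fourier transform, from a signal. This is only an illustration of the general technique; the function name and parameters are the author of this sketch’s own and are not taken from the GRM’s acousmographe software.

```python
import numpy as np

def magnitude_spectrogram(signal, frame_size=1024, hop=256):
    """Return one magnitude spectrum per analysis frame:
    an array of shape (n_frames, frame_size // 2 + 1)."""
    window = np.hanning(frame_size)  # taper each frame to reduce spectral leakage
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))  # magnitude of the FFT
    return np.array(frames)

# Synthetic example: one second of a 440 Hz sine at a 44.1 kHz sample rate.
sr = 44100
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))

# The strongest bin in the first frame sits near 440 Hz. A listener hears
# a single steady tone; the analyst sees a grid of time-frequency values,
# which is exactly the gap between neutral-level data and aural experience
# that evocative transcription tries to bridge.
peak_hz = np.fft.rfftfreq(1024, 1 / sr)[spec[0].argmax()]
```

Tools such as the acousmographe begin from data of this kind and let the listener layer interpretive, aesthesic annotation on top of it.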

Couprie and Miranda represented aesthesis in terms of new means of analysis. The former is internationally known for his work related to the acousmographe project. In the new e-journal Musimédiane, the results of electroacoustic analysis, much of it highly reliant on aesthesic approaches, lead to the publication of hypermedia-based analyses, some involving interactive options similar to Clarke’s, the likes of which could never appear in a paper-based journal. The combination of movie, still-image and audio files with sound-based software and text files forms the basis of the journal’s articles, some of which, incidentally, address note-based music. What is of interest here is the break with the page-by-page analytical paradigm that analysts have relied upon throughout the centuries. Couprie’s demonstration delineated the types of hypermedia used thus far in this new effort.

Miranda’s talk reflected a recent shift in his research: from artificial intelligence modelling, aimed at understanding formalised aspects of certain types of electroacoustic music, to neurological approaches. His view is that by simulating how the ears and brain work in tandem, we can better understand how the listening experience works. As most electroacoustic music exists without a score, according to Miranda, understanding the act of listening is essential to understanding how electroacoustic music is appreciated.

The final presentation, by the co-ordinator and Stevens, was ‘the odd one out’ for two reasons: firstly, their collaboration is yet to begin, although both have worked in the area of intention/reception in recent years; secondly, the relevant ICT methods have been tested in other areas but have yet to be applied to electroacoustic music analysis. This was the one presentation that brought together poiesis and aesthesis. The story begins with Landy’s announcement at the 2001 International Computer Music Conference (Havana) that the MTI Research Centre was to embark on a long-term project designed to investigate two questions: a) to what extent is the composer’s intention (when articulated) for a given electroacoustic work received by listeners with different levels of experience of this repertoire; and b) to what extent do inexperienced listeners find this music accessible, in particular when offered the composer’s intention information? The project’s history (including the PhD work of Dr Rob Weale) was introduced during the first part of the talk and its startling results were shared: in all works investigated (all of which included real-world sounds), the majority of inexperienced listeners were interested in hearing more such music, and a large majority found being introduced to this music particularly rewarding when offered intention information from the composers. The intention/reception loop, and how this methodology might be applied within an action research project, were not presented at length during the workshop. At this point, Stevens took over and presented relevant aspects of a recent Australian project on intention and reception in contemporary dance involving inexperienced audiences. In her case, techniques adopted from psychology formed part of the testing.
In other words, alongside the participants’ verbal responses, similar to the DMU project, other measures were tracked, such as eye movement, heart rate and even sweat responses. The idea, if funding is achieved, is to link the two projects’ methodologies to form part of a new analytical strategy, as well as to support a curriculum for people of all ages in which the intention/reception loop forms the basis of access to, appreciation of and thus understanding of sound-based electroacoustic composition.

The plenary discussion was interesting, as gaps were inevitably filled. For example, questions concerning the preservation of older (and current) works came up time and again during the day. This concerns issues such as our ever-changing technological landscape: works reliant on instruments, software, controllers and the like that are now obsolete need to be kept alive somehow. Archives for the preservation of works, and the databases containing the information needed to make those archives accessible to the widest user group, are developing, albeit slowly. (Works on magnetic tape are all ‘turning to vinegar’ as this medium degrades after a number of years.) As the number of works that could potentially be archived is immense, the question was raised whether everything needs to be archived and, if not, which criteria of valorisation might be called upon to assist in choosing what is archived. In short, migration strategies and preservation are points of concern for this repertoire.

The role of notation was discussed, although there will never be a consensus as to how relevant post-scriptive notations can be. That said, Truax’s documentation forms an exceptional type of notation, one that is fairly complete in its detail. Following from the sixth presentation, the role of reception was emphasised as a relatively under-researched topic (as was the desire that musicians provide listeners with better programme notes). In this case, no ICT-based method other than those presented was offered as a specific way forward. A discussion took place concerning the extent to which ‘recipe’ or ‘technological’ listening (the listener’s focusing on how a piece was made) is relevant. For obvious reasons, no consensus was possible given the different goals of composers (and listeners) regarding electroacoustic music.

One participant was concerned about the extent to which electronica or electronic music (as the terms are used by today’s youth) was being excluded. The co-ordinator countered this worry by stating that the separation exists only in the mind of the beholder and that DMU’s EARS project in no way encourages that synthetic separation: electroacoustic music and electronica are not mutually exclusive; the latter forms part of the former. Related to this was a discussion concerning access. Some felt comfortable with some electroacoustic music remaining a minority interest; others clearly want to use the results of electroacoustic music analysis as a means of widening participation (in terms of both appreciation and creative endeavour).

Last, but by no means least, the issue of the classification of sounds, gestures and works arose. Without an appropriate terminology and appropriate means of classification, much of the excellent work represented during this successful workshop will not receive the attention and credit it deserves.

This workshop was held on the same day as the opening of the three-and-a-half-day Electroacoustic Music Studies (EMS07) conference. Feedback for both was excellent. Although I did not do a head count on the day, I believe that around 60 participants were present for the workshop and about 120 for the conference, an excellent turnout in the evolving field of electroacoustic music studies.

The author gladly ends this report with his thanks to all involved, in particular the AHRC ICT Methods Network for supporting this unique event. The day demonstrated clearly how much exciting work has been achieved thus far focusing on new and evolving ICT methods and suggested clearly where the key foci for future research can be identified.

Prof. Leigh Landy, Workshop Co-ordinator; Director, Music, Technology and Innovation Research Centre

24 June 2007.

AHDS Methods Taxonomy Terms

This item has been catalogued using a discipline and methods taxonomy.


  • Music


  • Practice-led Research - Digital sound generation
  • Practice-led Research - Digital sound recording
  • Practice-led Research - Music composition
  • Practice-led Research - Sound editing
  • Practice-led Research - User Interface design
  • Data Analysis - Content-based sound searching/retrieval
  • Data Analysis - Sound analysis
  • Data Analysis - Visual analysis/visualisation
  • Data publishing and dissemination - Audio-based collaborative publishing
  • Data publishing and dissemination - Streaming audio