The sonic arts have long been fascinated with accessing and revealing phenomena unavailable to direct aural perception: seeking above, below, or altogether beyond our scope of auditory perception – pursuing the cries of bats, tectonic rumble, and electromagnetic fields – or merely those sounds inaccessible due to the constraints of time and space, such as vast distances or long intervals. Navigating a world of sound, in awe of the most discreet.
1. Wonderful and unfaithful listening
With the above in mind, one can even wonder about sonic paradigm shifts such as the invention of telephony in the late 1800s. Reliable long-distance communication is undoubtedly a very practical aim, but could it also have been driven by the urge to synchronously overhear beyond the spatial confines of the immediate shared acoustic room? Two voices connecting beyond the physically imposed limits of time and space, freedom through disembodied communion, souls reaching beyond the limited range of the bodies – and other such florid formulations? In the case of telephony, the mediating technology seeks to capture and reconstruct, at both ends of the conversation, voices and sounds in a way that comes close to what the unmediated aural experience would be if both listeners were in the same room. Telephony, audio recordings, and most other sonic approaches where a microphone is at one end, close to the source, and a speaker is at the other, close to the listener, cannot help but tangle with the issue of “fidelity” – either embracing it, rejecting it, or something in-between. Etymologically, fidelity can be traced to the Latin fidelis (“faithful, sincere, trustworthy”) and, in sonic terms, this notion is implicitly grounded in the comparison between an original and its reproduction. Hence, it entails two forms of access: the direct (between one’s own ears and the phenomenon) and the indirect (between one’s own ears and the technologically assisted reproduction of the phenomenon). Yet, when it comes to listening to phenomena which are beyond the scope of our naked ears, is fidelity not irrelevant? In this case, might we not argue that what is sought is a kind of listening that is both wonderful and unfaithful?
In a poem written in 1858 when he was 11 years old, Alexander Graham Bell (1) – yes, that Alexander Graham Bell, is it any surprise an inventor starts out a poet? – muses about the Aurora Borealis: “a wavering stream of coloured light / was in the dark blue sky at night / when nature was asleep”. Nature is often asleep, or so it seems; however, it does occasionally snore – waking us abruptly. In 2012, a team of scientists from Aalto University captured for the first time the reputed sound of the Aurora Borealis (2), more specifically the sonic reflections of the electromagnetic sparks generated by the compression between thermal layers of air, occurring around 70 metres above the ground. It sounded like a mix between clapping and two clave sticks hitting each other. Who would have thought that such a psychedelia-inducing colourful cosmic pageant would, while spiralling and sweeping, have the voice of a percussionist? One would expect harps and chimes, or at the very least a few slide guitar glissandos with a bit of wah and fuzz distortion at the business end. This gap between the phenomenon and the sounds it produces is stirring, unnerving, awe-inspiring; it sounds weird, it is cognitively dissonant, it breeds new potentials.
2. Sonification as both displacement and re-entanglement
The tool most often used when capturing sounds is some kind of microphone. These are instruments which translate variations in pressure waves traversing a medium (gaseous, liquid or solid) into an electric signal, which is then converted back into mechanical vibrations at the speaker end of the chain. As anyone who has ever been in a garage band will know, microphone placement is essential: its position, the distance from the source, where it is aimed. As anyone who has ever been a vocalist in a punk garage band will know, the right placement for the microphone is inside the mouth and, if possible, down the throat… But where do we place a microphone if we want to listen to brainwaves (3), to the worldwide fluctuation of the value of cryptocurrency (4), to the abundance of microbial species and the seasonal changes in algae population (5), or to the extinction of a species (6)?
This is where not only fidelity but auditory access itself fails us, and sonification comes in. There is no direct sonic path between our ears and the phenomena listed above, but there are diverse indirect ones, through data collected by means other than sonic, which can then be converted into sound. Technically the possibilities are endless: any phenomenon whose occurrence can be measured, by either digital or analogue means, can be sonified. Temperature, light, humidity, vibration, moisture, conductivity, acceleration, and so on, in any imaginable combination – all of these can be converted from sensor input to data output. Once gathered, this data output will then be processed through a system of rules, variables, behaviours and instructions – in other words, through algorithms – whose final outcome will be tonal, rhythmic and resonant – in short, sound.
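The core of such a chain – sensor input scaled and mapped onto a sonic parameter – can be sketched in a few lines. The following is a minimal illustration only, not any particular project's method; the value ranges and the temperature readings are hypothetical:

```python
# Minimal sonification sketch: a raw sensor reading is normalised to 0-1,
# then mapped by a rule onto an audible frequency range.

def normalise(value, lo, hi):
    """Clamp and scale a raw sensor reading into the 0-1 range."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def to_frequency(norm, f_min=110.0, f_max=880.0):
    """Map a normalised value linearly onto a frequency range in hertz."""
    return f_min + norm * (f_max - f_min)

# Hypothetical temperature readings (°C); warmer readings sound higher.
readings = [18.2, 21.5, 24.9, 30.0]
frequencies = [to_frequency(normalise(r, 15.0, 35.0)) for r in readings]
print(frequencies)
```

Every design choice here – the clamping, the linear mapping, the chosen frequency band – is already a compositional decision, which is precisely where the artistic leverage of sonification lies.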
In its most pragmatic form sonification aims at the “intuitive audio representation of complex, multi-dimensional data” (7) – in other words, simple translation from ungraspable complexity to a pattern one can engage with, learn from or respond to quickly and effectively. This is clear in examples such as a heart rate monitor, where accuracy in the real-time representation of a bodily process is the goal, or in the sonic cues of an aeroplane’s cockpit dashboard, meant to keep the pilot up-to-date while taking up the least possible amount of brain processing power and concentration. In situations such as these, sonification works like an auditory prosthesis dedicated to optimising pattern recognition and information processing, reshaping intricate systems into something that can be instantly heard and understood, like a familiar tune. However, in hybrid artistic research contexts, sonification can move beyond the merely pragmatic and embrace new entanglements between the original phenomena, the data gathered and the sound manifested, exploring uncharted associations and creating new correspondences, new sonic matrices that do not simply illustrate or translate, but generate a new kind of meaning.
Take one of the examples quoted above (and linked to in note 6 below), the “Extinction Gong” (2017) project by artists Crystelle Vu (FR) and Julian Oliver (NZ). This nomadic installation consists of a traditional Chinese gong which is occasionally struck by a mechanical mallet, and is inspired by the “statistic representation of the rhythm of species extinction, estimated by biologist E.O. Wilson to be about 27000 losses a year, or once every 19 minutes”. The setup includes an embedded computer with a 3G link so that “should biologists declare a new species extinct while the Extinction Gong is active it will receive an update and perform a special ceremony: four strikes in quick succession alongside a text-to-speech utterance of the Latin Name of the species lost”. Let us consider the sonification chain in this (literally!) striking piece: the original phenomenon (a species’ extinction), the data (the scientific surveying of a species, focused on the quantitative assessment of its remaining population) and the sound (the gong and the voice synthesiser intoning the species’ Latin name). This piece combines the pragmatic function of sonification – translating a complex phenomenon into an immediate percussive call-to-attention – with the rich associative entanglements of mourning and funereal rites (the gong strike) and an arguably slightly ironic performance of scientific nomenclature, an explicitly artificial and dryly detached eulogy of sorts. The sounds produced do not correspond to the auditory nature of the phenomenon in any direct way – what sound would a species’ extinction even make that could be grasped by the naked ear? A last gasp of air from the last remaining member, a fading pulse, the bubbling and gurgling of organic decomposition, mere silence standing for the newly opened sonic gap, an empty acoustic niche in a given ecosystem?
In any case, one thing is certain: where once there was life, there is no decay without sonic decay – the very vanishing of any given sound produced, held as vibration in a medium for a while and then lost to stillness.
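The gong's base rhythm follows from simple arithmetic on the figure quoted by the artists – about 27,000 losses a year – which works out close to their stated “once every 19 minutes”:

```python
# Deriving the Extinction Gong's strike interval from the quoted estimate.
minutes_per_year = 365 * 24 * 60      # 525,600 minutes in a (non-leap) year
losses_per_year = 27_000              # E. O. Wilson's estimate, as cited by the project
interval = minutes_per_year / losses_per_year
print(round(interval, 1))             # roughly one strike every 19.5 minutes
```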
Setting aside for a while its intricate weaving of techniques and processes, as well as the constantly renewing questions it raises, one might think of sonification essentially as a kind of displacement, a bridging between phenomena and ear, a process dedicated to in-betweenness, intrinsically relational. Through sonification one creates connections and sounds them out, driven by affinities elective rather than mimetic – a formulation tributary to both Goethe and Benjamin, if slightly askew(!). Connections and relationships which move us beyond the question – “what does it actually sound like?” – into stranger realms where the phenomena, our understanding of them, and their sensorial manifestation are encouraged to reconfigure and re-entangle: “what kinds of relationships does sound activate?”.
3. Living Room – a sonorous metabolic ecosystem
As a sound artist collaborating in the collective experimental environment which is the Living Room, sonification is the approach I am most interested in deploying in order to explore the relational aspect of the processes of decay taking place. In one of the ecosystems/boxes being prepared, wax worms will be the agents of decomposition, given their ability to digest even matter usually thought to be inedible, such as certain plastics (8). The worms and a few selected museum objects will become a dynamic metabolic system inside the box, which will be monitored by a sensor array sensitive to environmental factors such as temperature, humidity, electrical conductivity, sound level, and vibration.
The data output produced by these analog sensors will be routed via an Arduino controller through Max/MSP, a visual programming language, and processed by a DAW (Digital Audio Workstation), in this case Ableton Live, where it will trigger different sonic variables and “behaviours”, manipulating and composing a real-time soundscape which will be experienced by the Living Room’s visitors. Apart from the technical pathway, this project is built upon the kinds of conceptual and structural questions that are common to most sonification projects:
- What kinds of sound will be used and how do they relate to the original phenomena?
- What kinds of rules and behaviours are to be set up, and how do they create a meaningful experience?
- What are the aesthetic goals of the composition? How do these relate to the kind of scientific knowledge at stake?
- How will the sounds change over time? What are the temporal, rhythmic and tonal parameters of the sonification?
- What kind of listening experience is to be created? In what position will the installation place the listener and what kind of dialogue does it motivate?
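The actual signal chain described above runs from the Arduino through Max/MSP into Ableton Live, but the kind of rule-and-behaviour logic at stake can be sketched in ordinary code. The thresholds, sensor names and “behaviour” names below are hypothetical placeholders, not the project's actual mappings:

```python
# Hypothetical sonification rules: one frame of normalised (0-1) sensor
# readings is gated and routed to named sonic behaviours, roughly the way
# a Max/MSP patch might dispatch triggers to a DAW.

def sonify_step(sensors):
    """Return the (behaviour, intensity) events triggered by one sensor frame."""
    events = []
    if sensors["vibration"] > 0.6:        # worms actively moving
        events.append(("grain_burst", sensors["vibration"]))
    if sensors["humidity"] > 0.8:         # condensation building up
        events.append(("low_drone", sensors["humidity"]))
    if sensors["conductivity"] < 0.2:     # substrate drying out
        events.append(("sparse_clicks", 1.0 - sensors["conductivity"]))
    return events

frame = {"vibration": 0.7, "humidity": 0.5, "conductivity": 0.1}
print(sonify_step(frame))
```

Even in this toy form, the questions listed above reappear as concrete decisions: which readings gate which sounds, where the thresholds sit, and how intensity is passed on to the composition.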
At this early stage of the project, most of the energy has gone into getting a working technical signal chain. The next phase will be dedicated to the physical installation, where premises and early potentials will be manifested and tested, and where ideas and approaches will rise and fall through collaborative trial-and-error, since the creative practice is no less metabolic than the processes happening inside the boxes. As a matter of fact, the sonification is not to be interpreted merely as an external intervention, or as a collection (both noun and verb meant here!) of information, but as another facet of the Living Room as an ecosystem dedicated to cyclical transformation, and to prioritising renewed engagement, curiosity and learning over static preservation.
- https://www.loc.gov/resource/magbell.39100202
- http://www.sci-news.com/othersciences/geophysics/sounds-northern-lights-03980.html
- http://news.bbc.co.uk/2/hi/science/nature/8016869.stm
- https://www.bsomusic.org/stories/an-instrument-is-turning-cryptocurrency-fluctuations-into-music/
- https://www.livescience.com/23626-microbe-music-algae-songs.html
- https://extinctiongong.com/
- BEN-TAL, O. & BERGER, J. 2004. Creative Aspects of Sonification. Leonardo, 37, 229-232.
- BOMBELLI, P., HOWE, C. J. & BERTOCCHINI, F. 2017. Polyethylene bio-degradation by caterpillars of the wax moth Galleria mellonella. Current Biology, 27, R292-R293.