I will be very much present at the upcoming SuperCollider Symposium in London. Below you find the abstracts and general descriptions of all my activities.
Anemos Sonore (Till Bovermann)
Anemos Sonore investigates a hypothetical culture in which sound and dynamics are considered the primary source of evidence in science and research. One of the fundamental differences to our common understanding (which is often based on constant values, averaged and computed from the actual measurements) is that sound naturally offers context in the form of the past and future gestalt in which its current state is embedded. This circumstance has several implications for research methodology in such a culture. Most prominently:
- Signal analysis is most likely done with the sense that is best suited for it: hearing.
- As context is considered valuable, there would be at least a tendency to recognise and analyse the world in a more holistic way than we do.
- Not only would the design of displays have taken an alternate path, but also that of the sensory elements, starting with the very basic elements used to measure our direct environment.
To investigate this world, I focused on the creation of tools and methods for a relatively simple research interest: the understanding of local wind phenomena. For this, I devised two different scenarios. The first is based on the invention of scientific tools for locative wind investigations within an evolving scientific culture; the second aims for site-specific installations that alter the sonic and visual experience of site visitors, offering them an alternate environmental experience. Its intention is to add tightly connected, semi-artificial keynote sounds (devised after R. Murray Schafer, The Soundscape: Our Sonic Environment and the Tuning of the World) to the natural environment.
Both scenarios are based on the same sensory element: a tensioned ribbon with a magnetic, weighted centre element arranged between two electromagnetic coils. Used in a windy location, the ribbon starts to flutter and induces a constantly changing electromagnetic field, creating an electric signal that is tightly coupled to the wind surrounding it. This signal is captured and carefully filtered to eventually create a sonic experience for the researcher or visitor, respectively.
Site-specific installation: As known from film, a soundscape can dramatically change the perception of an otherwise unaltered environment. The Anemos Sonore installation invites visitors to stay for a while, listen to the wind-caused sounds, and experience their surroundings in an unusual way. For the London SuperCollider Symposium, I intend to set up three stationary installations where visitors plug in either their own or provided headphones. SuperCollider is used as the central element for filtering the captured vibrations.
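A minimal sketch of what such a filtering stage could look like in SuperCollider. All specifics here are assumptions for illustration: the input bus, the filter frequencies, and the keynote layer are invented, not the actual installation patch.

```supercollider
(
// Hypothetical sketch: read the coil signal from the first audio input,
// emphasise the wind-induced flutter with a band-pass filter, and add a
// semi-artificial keynote layer before sending it to the headphones.
{
    var coil    = SoundIn.ar(0);                  // signal induced by the fluttering ribbon
    var flutter = BPF.ar(coil, 300, 0.5);         // assumed centre frequency and bandwidth
    var keynote = Resonz.ar(coil, [220, 330], 0.02).sum; // assumed keynote resonances
    Limiter.ar((flutter + keynote) * 4, 0.9) ! 2  // gentle gain and safety limiting
}.play;
)
```

In practice the filter parameters would be tuned to the resonances of the ribbon and the character of each site.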
Photos, videos, and a more detailed description can be found here.
BetaBlocker performance (Till Bovermann, Dave Griffiths)
A meta live coding performance, TOPLAP-compliant, with BetaBlocker in its various incarnations.
BetaBlocker is a fictional CPU with 256 bytes of memory, running as homebrew software on a Nintendo DS and as demand-rate UGens on scsynth. It is a substantial part of several diverse projects, such as a live coding environment in which one is able to perform while drunk, and a research tool for investigating the materiality of digital media. Dave uses it to slow down the process of computation for live coding malleability, while Till speeds it up to the Nyquist frequency to explore algorithms tangibly and hear their execution.
Rocking the ambience with our BetaBlocker engines, this performance will feature algorithms programming themselves, code as data as code as sound, genetic algorithms, SuperCollider, homebrew Nintendo DS software, and a serious sound system.
Modality (Miguel Negrão, Marije Baalman, Till Bovermann)
Modality is a toolkit that simplifies the creation of your own very personal instruments with SuperCollider, using hardware controllers of any kind. It provides a high-level electro-instrumental model that can be assembled in a wide variety of ways. Users develop instruments based on the generalised internal model by using the Modality GUIs, by writing configuration files directly, or via direct access to objects in the SuperCollider environment and third-party extensions and plugins.
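To give an impression of the controller-to-synth mapping this enables, here is a hedged sketch along the lines of Modality's MKtl interface. The controller lookup name, element keys, and method names are assumptions and vary with the installed controller descriptions and toolkit version.

```supercollider
(
// Hypothetical sketch: bind one fader of an (assumed) nanoKONTROL
// description to the filter cutoff of a simple synth.
var k = MKtl('nnkn0');   // assumed short name for a Korg nanoKONTROL
var synth = { |cutoff = 800|
    RLPF.ar(Saw.ar(110) * 0.1, cutoff.lag(0.1), 0.3) ! 2
}.play;
k.elementAt(\sl, 0).action = { |el|   // assumed key for the first slider
    synth.set(\cutoff, el.value.linexp(0, 1, 200, 8000));
};
)
```

The point of the toolkit is that the same mapping code keeps working when the controller is swapped for a different device.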
Tangible musical interfaces with SuperCollider (Anna Xambó, Gerard Roma, Till Bovermann)
This workshop will allow participants to discover how to build low-cost real-time tangible musical interfaces (TMI) using SuperCollider along with the open source computer vision engine reacTIVision. We will draw upon the ideas of the d-touch project for assembling a low-cost DIY individual portable musical environment based on interactions with physical objects.
BetaBlocker: further adventures in live coding (Till Bovermann, Dave Griffiths and Tom Hall)
In this talk, we want to tell a story about how SuperCollider provided an environment for investigation, starting its journey in live coding and ending up in making computing tangible.
It is a case study, Dave vs. Till, of the invention, creation, and adaptation of a fictional CPU with 256 bytes of memory. Beginning as a Scheme implementation in Fluxus, then running as homebrew software on a Nintendo DS, it was recently reborn as a demand-rate UGen in the world of scsynth, featuring a high-level control mechanism written in the SuperCollider language. In its various incarnations, it is a substantial part of several diverse projects, such as a live coding environment in which one is able to perform while drunk, and a research tool for investigating the materiality of digital media. Dave uses it to slow down the process of computation for live coding malleability, while Till speeds it up to the Nyquist frequency to explore algorithms tangibly and hear their execution.
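To give a flavour of the idea, here is a deliberately tiny sclang sketch of a 256-byte machine. The four opcodes are invented for this illustration and are not BetaBlocker's actual instruction set; the point is merely how code-as-data lives in a small, self-modifiable circular memory.

```supercollider
(
// Hypothetical toy machine: 256 bytes of memory, a program counter, and one
// accumulator. Invented opcodes: 0 = NOP, 1 = ADD next byte,
// 2 = STORE accumulator at the address in the next byte, 3 = JUMP.
var mem = Array.fill(256, { 256.rand });
var pc = 0, acc = 0;
var step = {
    switch(mem[pc % 256] % 4,
        0, { pc = pc + 1 },
        1, { acc = (acc + mem[(pc + 1) % 256]) % 256; pc = pc + 2 },
        2, { mem[mem[(pc + 1) % 256]] = acc; pc = pc + 2 }, // self-modification!
        3, { pc = mem[(pc + 1) % 256] }
    );
    acc
};
// run a few hundred steps; the accumulator stream could be mapped to sound
200.do { step.value.postln };
)
```

Run slowly, such a machine is a live coding surface; run at audio rate, its accumulator becomes a sample stream.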
We will tell the story of BetaBlocker, featuring it as an artistic project, as an interpreted language added to scsynth, and as a technical as well as mental challenge. Its tale will be accompanied by plenty of sound and noisy code examples.
A sound recording can be found here.
Modality (Jeff Carey, Marije Baalman, Miguel Negrão, Alberto de Campo, Till Bovermann, Hannes Hölzl, Robert van Heumen, Wouter Snoei and Bjørnar Habbestad)
Modality is a toolkit that simplifies the creation of your own very personal instruments with SuperCollider, using controllers of any kind. It provides a high-level electro-instrumental model that can be assembled in a wide variety of ways. Users develop instruments based on the generalised internal model by using the Modality GUIs, by writing configuration files directly, or via direct access to objects in the SuperCollider environment and third-party extensions and plugins.
Vowel Synthesis with SuperCollider (Till Bovermann, Florian Grond)
We present insights into the design and implementation of the Vowel quark, a high-level representation of vowel sounds for the SuperCollider environment. Its primary objective is to provide an intuitive, efficient, yet scalable interface for the creation, manipulation, and morphing of formants to form vowels. Since the perception of timbre is affected not only by spectral properties but also by their evolution in time, we extended Vowel's first iteration (presented and released to the public at ICAD 2011) to simplify the control of how spectral properties evolve over time.
Originating in FormantLib, a class that is part of the Republic quark (Alberto de Campo, Julian Rohrhuber), we designed and implemented the class Vowel, which handles the formant-related parameters. Prompted by the SuperCollider Symposium, we further extended its feature set without changing its overall interface. This particularly covers the addition of interface options for the convenient creation and storage of new and alternative formant combinations and their seamless integration into the synthesis and control process, as well as a more flexible design of the synthesis chain options to allow for a broad variety of usages. In addition to the language-side interface, we designed a custom interface for high-level server-side control of vowel synthesis.
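A hedged usage sketch of the language-side interface. `Vowel(\a, \bass)` follows the quark's constructor style; the pseudo-UGen name `Formants` and the method `brightenExp` are assumptions based on the quark's public description and may differ between versions.

```supercollider
(
// Hypothetical sketch: an 'a' vowel in a bass register, rendered with a
// formant-based pseudo-UGen. Names are illustrative, not guaranteed.
{
    var vowel = Vowel(\a, \bass).brightenExp(2); // emphasise upper formants
    Formants.ar(100, vowel) * 0.1 ! 2            // fundamental at 100 Hz
}.play;
)
```

Morphing between two such Vowel objects over time is where the extensions described above come into play.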
In the talk, we give an introduction to Vowel and present various pseudo-UGens designed for vowel-based additive and subtractive synthesis. Their description will be accompanied by several code and sound examples incorporating the introduced classes, particularly examples for data sonification, spatialisation, and additive and subtractive synthesis. We will wrap up with examples of broader synthesis aspects, which unfold the Vowel building blocks into a powerful toolset for controlling the shape of the spectral contour of existing sounds.