Proceedings of the International Conference on New Interfaces for Musical Expression 2011, Oslo, Norway

Dan Overholt.
The overtone fiddle: an actuated acoustic instrument.
(Pages 4-7). [ bib | pdf ]

Abstract: The Overtone Fiddle is a new violin-family instrument that incorporates electronic sensors, integrated DSP, and physical actuation of the acoustic body. An embedded tactile sound transducer creates extra vibrations in the body of the Overtone Fiddle, allowing performer control and sensation via both traditional violin techniques and extended playing techniques that incorporate shared man/machine control of the resulting sound. A magnetic pickup system is mounted to the end of the fiddle's fingerboard in order to detect the signals from the vibrating strings, deliberately not capturing vibrations from the full body of the instrument. This focused sensing approach allows less restrained use of DSP-generated feedback signals, as there is very little direct leakage from the actuator embedded in the body of the instrument back to the pickup.

Keywords: Actuated Musical Instruments, Hybrid Instruments, Active Acoustics, Electronic Violin

Matthew Montag, Stefan Sullivan, Scott Dickey, and Colby Leider.
A low-cost and low-latency multi-touch table with haptic feedback for musical applications.
(Pages 8-13). [ bib | pdf ]

Abstract: During the past decade, multi-touch surfaces have emerged as valuable tools for collaboration, display, interaction, and musical expression. Unfortunately, they tend to be costly and often suffer from two drawbacks for music performance: (1) relatively high latency owing to their sensing mechanism, and (2) lack of haptic feedback. We analyze the latency present in several current multi-touch platforms, and we describe a new custom system that reduces latency to an average of 30 ms while providing programmable haptic feedback to the user. The paper concludes with a description of ongoing and future work.

Keywords: multi-touch, haptics, frustrated total internal reflection, music performance, music composition, latency, DIY

Greg Shear and Matthew Wright.
The electromagnetically sustained Rhodes piano.
(Pages 14-17). [ bib | pdf ]

Abstract: The Electromagnetically Sustained Rhodes Piano is an augmentation of the original instrument with additional control over the amplitude envelope of individual notes. This includes slow attacks and infinite sustain while preserving the familiar spectral qualities of this classic electromechanical piano. These additional parameters are controlled with aftertouch on the existing keyboard, extending standard piano technique. Two sustain methods were investigated, driving the actuator first with a pure sine wave, and second with the output signal of the sensor. A special isolation method effectively decouples the sensors from the actuators and tames unruly feedback in the high-gain signal path.

Keywords: Rhodes, keyboard, electromagnetic, sustain, augmented instrument, feedback, aftertouch

Laurel Pardue, Christine Southworth, Andrew Boch, Matt Boch, and Alex Rigopulos.
Gamelan Elektrika: An electronic Balinese gamelan.
(Pages 18-23). [ bib | pdf ]

Abstract: This paper describes the motivation and construction of Gamelan Elektrika, a new electronic gamelan modeled after a Balinese Gong Kebyar. The first of its kind, Elektrika consists of seven instruments acting as MIDI controllers accompanied by traditional percussion and played by 11 or more performers following Balinese performance practice. Three main percussive instrument designs were executed using a combination of force sensitive resistors, piezos, and capacitive sensing. While the instrument interfaces are designed to play interchangeably with the original, the sound and travel possibilities they enable are tremendous. MIDI enables a massive new sound palette with new scales beyond the quirky traditional tuning and non-traditional sounds. It also allows simplified transcription for an aurally taught tradition. Significantly, it reduces the transportation challenges of a previously large and heavy ensemble, creating opportunities for wider audiences to experience Gong Kebyar's enchanting sound. True to the spirit of oneness in Balinese music, as one of the first large all-MIDI ensembles, Elektrika challenges performers to trust silent instruments and develop an understanding of highly intricate and interlocking music not through the sound of the individual, but through the sound of the whole.

Keywords: Bali, gamelan, musical instrument design, MIDI ensemble

Jeong-Seob Lee and Woon Seung Yeo.
Sonicstrument: A musical interface with stereotypical acoustic transducers.
(Pages 24-27). [ bib | pdf ]

Abstract: This paper introduces Sonicstrument, a sound-based interface that traces the user's hand motions. Sonicstrument utilizes stereotypical acoustic transducers (i.e., a pair of earphones and a microphone) for transmission and reception of acoustic signals whose frequencies lie in the highest region of the human hearing range, where they are rarely perceived by most people. Being simpler in structure and easier to implement than typical ultrasonic motion detectors with special transducers, this system is robust and offers precise results without introducing any undesired sonic disturbance to users. We describe the design and implementation of Sonicstrument, evaluate its performance, and present two practical applications of the system in music and interactive performance.

Keywords: Stereotypical transducers, audible sound, Doppler effect, hands-free interface, musical instrument, interactive performance
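The abstract above describes motion tracking via Doppler shift of a near-inaudible tone played through ordinary earphones and picked up by a microphone. As an illustration of that principle only (not the authors' implementation; the sample rate, pilot frequency, and frame size below are assumptions), a minimal Python sketch:

import numpy as np

FS = 44100      # sample rate (Hz), assumed
F0 = 20000.0    # pilot tone near the top of the audible range (Hz), assumed
C = 343.0       # speed of sound (m/s)

def doppler_velocity(frame):
    """Estimate reflector (hand) velocity from the received pilot tone."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / FS)
    # Find the strongest spectral peak near the pilot frequency.
    band = (freqs > F0 - 400) & (freqs < F0 + 400)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    # A reflector moving at velocity v shifts the echo by roughly 2*v*F0/C.
    return C * (f_peak - F0) / (2.0 * F0)

# Synthetic check: a +60 Hz shift should read as roughly 0.5 m/s of approach.
t = np.arange(4096) / FS
print(round(doppler_velocity(np.sin(2 * np.pi * (F0 + 60.0) * t)), 2))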

Scott Smallwood.
Solar sound arts: Creating instruments and devices powered by photovoltaic technologies.
(Pages 28-31). [ bib | pdf ]

Abstract: This paper describes recent developments in the creation of sound-making instruments and devices powered by photovoltaic (PV) technologies. With the rise of more efficient PV products in diverse packages, the possibilities for creating solar-powered musical instruments, sound installations, and loudspeakers are becoming increasingly realizable. This paper surveys past and recent developments in this area, including several projects by the author, and demonstrates how the use of PV technologies can influence the creative process in unique ways. In addition, this paper discusses how solar sound arts can enhance the aesthetic direction taken by recent work in soundscape studies and acoustic ecology. Finally, this paper will point towards future directions and possibilities as PV technologies continue to evolve and improve in terms of performance, and become more affordable.

Keywords: Solar Sound Arts, Circuit Bending, Hardware Hacking, Human-Computer Interface Design, Acoustic Ecology, Sound Art, Electroacoustics, Laptop Orchestra, PV Technology

Niklas Klügel, Marc René Frieß, Georg Groh, and Florian Echtler.
An approach to collaborative music composition.
(Pages 32-35). [ bib | pdf ]

Abstract: This paper provides a discussion of how the solely IT-based composition and performance of electronic music can be supported in real time with a collaborative application on a tabletop interface, mediating between single-user music composition tools and co-located collaborative music improvisation. After elaborating on the theoretical background and prerequisites of co-located collaborative tabletop applications, as well as the common paradigms in music composition/notation, we review related work on novel IT approaches to music composition and improvisation. Subsequently, we present our prototypical implementation and the results.

Keywords: Tabletop Interface, Collaborative Music Composition, Creativity Support

Nicolas Gold and Roger Dannenberg.
A reference architecture and score representation for popular music human-computer music performance systems.
(Pages 36-39). [ bib | pdf ]

Abstract: Popular music (characterized by improvised instrumental parts, beat and measure-level organization, and steady tempo) poses challenges for human-computer music performance (HCMP). Pieces of music are typically rearrangeable on-the-fly and involve a high degree of variation from ensemble to ensemble, and even between rehearsal and performance. Computer systems aiming to participate in such ensembles must therefore cope with a dynamic high-level structure in addition to the more traditional problems of beat-tracking, score-following, and machine improvisation. There are many approaches to integrating the components required to implement dynamic human-computer music performance systems. This paper presents a reference architecture designed to allow the typical sub-components (e.g. beat-tracking, tempo prediction, improvisation) to be integrated in a consistent way, allowing them to be combined and/or compared systematically. In addition, the paper presents a dynamic score representation particularly suited to the demands of popular music performance by computer.

Keywords: Live Performance, Software Design, Popular Music
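The entry above argues that popular-music HCMP needs a score whose section order can change mid-performance while beat and measure positions stay well defined. A minimal sketch of such a structure (illustrative only; the names and fields are assumptions, not the paper's actual representation):

from dataclasses import dataclass

@dataclass
class Section:
    name: str        # e.g. "verse", "chorus"
    measures: int    # length in measures
    beats_per_measure: int

class DynamicScore:
    def __init__(self, sections, arrangement):
        self.sections = {s.name: s for s in sections}
        self.arrangement = list(arrangement)  # mutable ordering of sections

    def insert_cue(self, position, name):
        """Rearrange on the fly, e.g. a bandleader cueing an extra chorus."""
        self.arrangement.insert(position, name)

    def beat_of(self, arrangement_index):
        """Absolute beat at which a given arrangement slot starts."""
        return sum(self.sections[n].measures * self.sections[n].beats_per_measure
                   for n in self.arrangement[:arrangement_index])

score = DynamicScore(
    [Section("verse", 8, 4), Section("chorus", 8, 4)],
    ["verse", "chorus", "verse"],
)
score.insert_cue(2, "chorus")               # cue an extra chorus mid-performance
print(score.arrangement, score.beat_of(2))  # ['verse','chorus','chorus','verse'] 64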

Mark Bokowiec.
V'Oct (Ritual): An interactive vocal work for Bodycoder System and 8-channel spatialization.
(Pages 40-43). [ bib | pdf ]

Abstract: V'OCT(Ritual) is a work for solo vocalist/performer and Bodycoder System, composed in residency at Dartington College of Arts (UK) at Easter 2010. This paper looks at the technical and compositional methodologies used in the realization of the work, in particular the choices made with regard to the mapping of sensor elements to various spatialization functions. Kinaesonics will be discussed in relation to the coding of real-time one-to-one mapping of sound to gesture and its expression in terms of hardware and software design. Four forms of expressivity arising out of interactive work with the Bodycoder system will be identified. The paper discusses how sonic (electro-acoustic), programmed, gestural (kinaesonic), and vocal expressivities are constructed as pragmatic and tangible elements within the compositional practice of V'Oct(Ritual), and exposes the subsequent importance of collaboration with a performer.

Keywords: Bodycoder, Kinaesonics, Expressivity, Gestural Control, Interactive Performance Mechanisms, Collaboration

Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani, and Yuichi Sato.
First person shooters as collaborative multiprocess instruments.
(Pages 44-47). [ bib | pdf ]

Abstract: First Person Shooters are among the most played computer video games. They combine navigation, interaction and collaboration in 3D virtual environments using simple input devices, i.e. mouse and keyboard. In this paper, we study the possibilities brought by these games for musical interaction. We present the Couacs, a collaborative multiprocess instrument which relies on interaction techniques used in FPS together with new techniques adding the expressiveness required for musical interaction. In particular, the Faders For All game mode allows musicians to perform pattern-based electronic compositions.

Keywords: the couacs, fps, first person shooters, collaborative, 3D interaction, multiprocess instrument

Tilo Hähnel and Axel Berndt.
Studying interdependencies in music performance: An interactive tool.
(Pages 48-51). [ bib | pdf ]

Abstract: Musicians tend to model different performance parameters intuitively, and listeners seem to perceive them, to a certain degree, unconsciously. This is a problem for the development of synthetic performance models, for they are built upon detailed assumptions about several parameters like timing, loudness, and duration, as well as about their interdependencies. This paper describes an interactive performance synthesis tool that allows the analysis of listeners' preferences across multiple performance features. Using the tool in a study of eighth-note inégalité, a relationship between timing and loudness was found.

Keywords: Synthetic Performance, Notes Inégales, Timing, Articulation, Duration, Loudness, Dynamics

Sinan Bokesoy and Patrick Adler.
1city 1001vibrations: development of an interactive sound installation with robotic instrument performance.
(Pages 52-55). [ bib | pdf ]

Abstract: “1city1001vibrations” is a sound installation project by Sinan Bökesoy. It continuously analyzes live sounds from microphones installed at significant places along the Bosphorus in Istanbul. The transmitted sounds are accompanied by an algorithmic composition derived from this content analysis, which controls two Kuka industrial robot arms performing on percussion instruments installed around them, creating a metaphor through an intelligent composition/performance system. This paper focuses on the programming strategies taken to develop a musical instrument out of an industrial robot.

Keywords: Sound installation, robotic music, interactive systems

Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns, and Mark D. Plumbley.
The medium is the message: Composing instruments and performing mappings.
(Pages 56-59). [ bib | pdf ]

Abstract: Many performers of novel musical instruments find it difficult to engage audiences beyond those in the field. Previous research points to a failure to balance complexity with usability, and a loss of transparency due to the detachment of the controller and sound generator. The issue is often exacerbated by an audience's lack of prior exposure to the instrument and its workings. However, we argue that there is a conflict underlying many novel musical instruments in that they are intended to be both a tool for creative expression and a creative work of art in themselves, resulting in incompatible requirements. By considering the instrument, the composition and the performance together as a whole with careful consideration of the rate of learning demanded of the audience, we propose that a lack of transparency can become an asset rather than a hindrance. Our approach calls for not only controller and sound generator to be designed in sympathy with each other, but composition, performance and physical form too. Identifying three design principles, we illustrate this approach with the Serendiptichord, a wearable instrument for dancers created by the authors.

Keywords: Performance, composed instrument, transparency, constraint

Seunghun Kim, Luke Keunhyung Kim, Songhee Jeong, and Woon Seung Yeo.
Clothesline as a metaphor for a musical interface.
(Pages 60-63). [ bib | pdf ]

Abstract: In this paper, we discuss the use of the clothesline as a metaphor for designing a musical interface called Airer Choir. This interactive installation is based on the function of an ordinary object that is not a traditional instrument, and hanging articles of clothing is literally the gesture used to play the interface. Based on this metaphor, a musical interface with high transparency was designed. Using the metaphor, we explored the possibilities for recognizing input gestures and creating sonic events by mapping data to sound. Thus, four different types of Airer Choir were developed. By classifying the interfaces, we concluded that various musical expressions are possible using the same metaphor.

Keywords: musical interface, metaphor, clothesline installation

Pietro Polotti and Maurizio Goina.
EGGS in action.
(Pages 64-67). [ bib | pdf ]

Abstract: In this paper, we discuss the results obtained by means of the EGGS (Elementary Gestalts for Gesture Sonification) system in terms of artistic realizations. EGGS was introduced in a previous edition of this conference. The works presented include interactive installations in the form of public art and interactive onstage performances. In all of the works, the EGGS principles of simplicity based on the correspondence between elementary sonic and movement units, and of organicity between sound and gesture are applied. Indeed, we study both sound as a means for gesture representation and gesture as embodiment of sound. These principles constitute our guidelines for the investigation of the bidirectional relationship between sound and body expression with various strategies involving both educated and non-educated executors.

Keywords: Gesture sonification, Interactive performance, Public art

Berit Janssen.
A reverberation instrument based on perceptual mapping.
(Pages 68-71). [ bib | pdf ]

Abstract: The present article describes a reverberation instrument that is based on cognitive categorization of reverberating spaces. Different techniques for artificial reverberation are covered. A multidimensional scaling experiment was conducted on impulse responses in order to determine how humans acoustically perceive spatiality. This research seems to indicate that the perceptual dimensions are related to early energy decay and timbral qualities. These results are applied to a reverberation instrument based on delay lines. It can be contended that such an instrument can be controlled more intuitively than other delay-line reverberation tools, which often provide a confusing range of parameters that have a physical rather than perceptual meaning.

Keywords: Reverberation, perception, multidimensional scaling, mapping
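As background to the entry above: a delay-line reverberator of the kind such instruments build on can be reduced to parallel feedback combs followed by series allpass filters (the classic Schroeder topology), with the comb gains derived from a single perceptually meaningful control such as decay time. A naive, loop-based Python sketch under those assumptions (not the instrument's actual mapping; delay lengths are typical illustrative values):

import numpy as np

FS = 44100

def comb(x, delay, decay_time):
    # Feedback gain chosen so this comb decays 60 dB in `decay_time` seconds.
    g = 10 ** (-3.0 * delay / (FS * decay_time))
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + g * (y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g=0.5):
    # Standard Schroeder allpass: y[n] = -g*x[n] + x[n-D] + g*y[n-D]
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def reverb(x, decay_time=1.5):
    """One perceptual knob (decay time) instead of four raw feedback gains."""
    wet = sum(comb(x, d, decay_time) for d in (1557, 1617, 1491, 1422))
    return allpass(allpass(wet, 225), 556)

impulse = np.zeros(FS)
impulse[0] = 1.0
tail = reverb(impulse, decay_time=2.0)  # impulse response with ~2 s decay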

Lauren Hayes.
Feedback-assisted performance.
(Pages 72-75). [ bib | pdf ]

Abstract: When performing digital music it is important to be able to acquire a level of sensitivity and control comparable to what can be achieved with acoustic instruments. By examining the links between sound and touch, new compositional and performance strategies start to emerge for performers using digital instruments. These involve technological implementations utilizing haptic information channels, offering insight into how our tacit knowledge of the physical world can be introduced to the digital domain, reinforcing the view that sound is a `species of touch'. This document illustrates why vibrotactile interfaces, which offer physical feedback to the performer, may be viewed as an important approach to addressing the limitations of current physical dynamic systems used to mediate the digital performer's control of various sorts of musical information. It examines one such method used for performing in two different settings: with piano and live electronics, and with laptop alone; in both cases, feedback is artificially introduced to the performer's hands, offering different information about what is occurring musically. The successes of this heuristic research will be assessed, along with a discussion of future directions of experimentation.

Keywords: Vibrotactile feedback, human-computer interfaces, digital composition, real-time performance, augmented instruments

Daichi Ando.
Improving user-interface of interactive EC for composition-aid by means of shopping basket procedure.
(Pages 76-79). [ bib | pdf ]

Abstract: The use of Interactive Evolutionary Computation (IEC) is well suited to the development of art-creation aid systems for beginners. This is because of important features of IEC, such as the ability to optimize with ambiguous evaluation measures and the lack of any requirement for special knowledge about art-creation. With the popularity of Consumer Generated Media, many beginners at art-creation are interested in creating their own original artworks. Thus, the development of a useful IEC system for musical creation is an urgent task. However, the user-assist functions for IEC proposed in past works decrease the possibility of getting good unexpected results, which is an important feature of art-creation with IEC. In this paper, the author proposes a new IEC evaluation process named the “Shopping Basket” procedure. In the procedure, a user-assist function called Similarity-Based Reasoning allows for natural evaluation by the user. The function reduces the user's burden without reducing the possibility of unexpected results. The author performs an experiment in which subjects use the new interface in order to validate it. As a result of the experiment, the author concludes that the new interface is better at motivating users to compose with an IEC system than the old interface.

Keywords: Interactive Evolutionary Computation, User-Interface, Composition Aid

Ryan McGee, Yuan-Yi Fan, and Reza Ali.
Biorhythm: a biologically-inspired audio-visual installation.
(Pages 80-83). [ bib | pdf ]

Abstract: BioRhythm is an interactive bio-feedback installation controlled by the cardiovascular system. Data from a photoplethysmograph (PPG) sensor controls sonification and visualization parameters in real-time. Biological signals are obtained using the techniques of Resonance Theory in Hemodynamics and mapped to audiovisual cues via the Five Element Philosophy. The result is a new media interface utilizing sound synthesis and spatialization with advanced graphics rendering. BioRhythm serves as an artistic exploration of the harmonic spectra of pulse waves.

Keywords: bio-feedback, bio-sensing, sonification, spatial audio, spatialization, FM synthesis, Open Sound Control, visualization, parallel computing

Jon Pigott.
Vibration and volts and sonic art: A practice and theory of electromechanical sound.
(Pages 84-87). [ bib | pdf ]

Abstract: This paper explores the creative appropriation of loudspeakers and other electromechanical devices in sonic arts practice. It is proposed that this is an identifiable area of sonic art worthy of its own historical and theoretical account. A case study of an original work by the author, titled Infinite Spring, is presented within the context of works by other practitioners from the 1960s to the present day. The notion of the `prepared speaker' is explored alongside theories of media archeology, cracked media and acoustic ecology.

Keywords: Electromechanical sonic art, kinetic sound art, prepared speakers, Infinite Spring

George Sioros and Carlos Guedes.
Automatic rhythmic performance in Max/MSP: the kin.rhythmicator.
(Pages 88-91). [ bib | pdf ]

Abstract: We introduce a novel algorithm for automatically generating rhythms in real time in a certain meter. The generated rhythms are "generic" in the sense that they are characteristic of each time signature without belonging to a specific musical style. The algorithm is based on a stochastic model in which various aspects and qualities of the generated rhythm can be controlled intuitively and in real time. Such qualities are the density of the generated events per bar, the amount of variation in generation, the amount of syncopation, the metrical strength, and of course the meter itself. The kin.rhythmicator software application was developed to implement this algorithm. During a performance with the kin.rhythmicator the user can control all aspects of the performance through descriptive and intuitive graphic controls.

Keywords: automatic music generation, generative, stochastic, metric indispensability, syncopation, Max/MSP, Max4Live
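The generation idea the abstract above describes can be illustrated with a toy model: each metrical position gets an onset probability shaped by its metrical strength, a density control, and a syncopation control that shifts preference toward weak positions. The weights below are illustrative assumptions, not kin.rhythmicator's actual tables:

import random

# Metrical strength of each 16th-note position in a 4/4 bar (strong to weak).
STRENGTH = [4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def generate_bar(density=0.5, syncopation=0.0):
    """Return a 16-step bar; 1 = onset. syncopation in [0,1] flips the
    preference from metrically strong positions towards weak ones."""
    max_s = max(STRENGTH)
    pattern = []
    for s in STRENGTH:
        strong_bias = s / max_s                        # favors strong steps
        weak_bias = 1.0 - strong_bias + 1.0 / max_s    # favors weak steps
        w = (1 - syncopation) * strong_bias + syncopation * weak_bias
        pattern.append(1 if random.random() < density * w else 0)
    return pattern

print(generate_bar(density=0.6, syncopation=0.0))  # mostly on-beat
print(generate_bar(density=0.6, syncopation=0.9))  # mostly off-beat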

André Gonçalves.
Towards a voltage-controlled computer - control and interaction beyond an embedded system.
(Pages 92-95). [ bib | pdf ]

Abstract: The importance of embedded devices as new arrivals to the field of Voltage-Controlled Synthesizers is examined. Emphasis is directed towards understanding the role of such devices in Voltage-Controlled Synthesizers, introducing the Voltage-Controlled Computer as a new paradigm. Specifications for hardware interfacing and programming techniques are described based on real prototypes. Implementations and successful results are reported.

Keywords: Voltage-controlled synthesizer, embedded systems, voltage-controlled computer, computer driven control voltage generation

Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, and Shigeki Sagayama.
Polyhymnia: An automatic piano performance system with statistical modeling of polyphonic expression and musical symbol interpretation.
(Pages 96-99). [ bib | pdf ]

Abstract: We developed an automatic piano performance system called Polyhymnia that is able to generate expressive polyphonic piano performances from music scores, so that it can be used as a computer-based tool for expressive performance. The system automatically renders expressive piano music by means of automatic musical symbol interpretation and statistical models of structure-expression relations regarding polyphonic features of piano performance. Experimental results indicate that the generated performances of various piano pieces with diverse trained models had polyphonic expression and sounded expressive. In addition, the models trained on different performance styles reflected the styles observed in the training performances, and they were well distinguishable by human listeners. Polyhymnia won the first prize in the autonomous section of the Performance Rendering Contest for Computer Systems (Rencon) 2010.

Keywords: performance rendering, polyphonic expression, statistical modeling, conditional random fields

Juan Pablo Carrascal and Sergi Jorda.
Multitouch interface for audio mixing.
(Pages 100-103). [ bib | pdf ]

Abstract: Audio mixing is the adjustment of relative volumes, panning and other parameters corresponding to different sound sources, in order to create a technically and aesthetically adequate sound sum. To do this, audio engineers employ “panpots” and faders, the standard controls in audio mixers. The design of such devices has remained practically unchanged for decades since their introduction. At the time, no usability studies seem to have been conducted on such devices, so one could question if they are really optimized for the task they are meant for. This paper proposes a new set of controls that might be used to simplify and/or improve the performance of audio mixing tasks, taking into account the spatial characteristics of modern mixing technologies such as surround and 3D audio and making use of multitouch interface technologies. A preliminary usability test has shown promising results.

Keywords: audio mixing, multitouch, control surface, touchscreen
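One way to picture the control scheme proposed above: each source becomes a draggable point on a 2D stage, and conventional fader/panpot values are derived from its position. The mapping below is a hypothetical example, not the authors' design:

import math

def stage_to_gain_pan(x, y, listener=(0.5, 0.0)):
    """x, y in [0,1]: x is left-right, y is distance upstage.
    Returns (gain, pan) with pan in [-1, 1]."""
    dx, dy = x - listener[0], y - listener[1]
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + 3.0 * distance)      # closer sources are louder
    pan = max(-1.0, min(1.0, 2.0 * dx))      # lateral offset maps to pan
    return gain, pan

def pan_gains(pan):
    """Equal-power panning: pan in [-1,1] -> (left, right) channel gains."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

g, p = stage_to_gain_pan(0.8, 0.3)
print(round(g, 2), [round(v, 2) for v in pan_gains(p)])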

Nate Derbinsky and Georg Essl.
Cognitive architecture in mobile music interactions.
(Pages 104-107). [ bib | pdf ]

Abstract: This paper explores how a general cognitive architecture can pragmatically facilitate the development and exploration of interactive music interfaces on a mobile platform. To this end we integrated the Soar cognitive architecture into the mobile music meta-environment urMus. We develop and demonstrate four artificial agents which use diverse learning mechanisms within two mobile music interfaces. We also include details of the computational performance of these agents, evincing that the architecture can support real-time interactivity on modern commodity hardware.

Keywords: mobile music, machine learning, cognitive architecture

Benjamin D. Smith and Guy E. Garnett.
The self-supervising machine.
(Pages 108-111). [ bib | pdf ]

Abstract: Supervised machine learning enables complex many-to-many mappings and control schemes needed in interactive performance systems. One of the persistent problems in these applications is generating, identifying and choosing input-output pairings for training. This poses problems of scope (limiting the realm of potential control inputs), effort (requiring significant pre-performance training time), and cognitive load (forcing the performer to learn and remember the control areas). We discuss the creation and implementation of an automatic “supervisor,” using unsupervised machine learning algorithms to train a supervised neural network on the fly. This hierarchical arrangement enables network training in real time based on the musical or gestural control inputs employed in a performance, aiming at freeing the performer to operate in a creative, intuitive realm, making the machine control transparent and automatic. Three implementations of this self-supervised model driven by iPod, iPad, and acoustic violin are described.

Keywords: NIME, machine learning, interactive computer music, machine listening, improvisation, adaptive resonance theory

Aaron Albin, Sertan Senturk, Akito Van Troyer, Brian Blosser, Oliver Jan, and Gil Weinberg.
Beatscape, a mixed virtual-physical environment for musical ensembles.
(Pages 112-115). [ bib | pdf ]

Abstract: A mixed media tool was created that promotes ensemble virtuosity through tight coordination and interdependence in musical performance. Two different types of performers interact with a virtual space using Wii Remotes and tangible interfaces built with the reacTIVision toolkit [11]. One group of performers uses a tangible tabletop interface to place and move sound objects in a virtual environment. The sound objects are represented by visual avatars and have audio samples associated with them. A second set of performers makes use of Wii Remotes to create triggering waves that can collide with those sound objects. Sound is only produced upon collision of the waves with the sound objects. What results is a performance in which users must negotiate a physical and virtual space and are positioned to work together to create musical pieces.

Keywords: reacTIVision, processing, ensemble, mixed media, virtualization, tangible, sample

Marco Fabiani, Gaël Dubus, and Roberto Bresin.
Moodifierlive: Interactive and collaborative expressive music performance on mobile devices.
(Pages 116-119). [ bib | pdf ]

Abstract: This paper presents MoodifierLive, a mobile phone application for interactive control of rule-based automatic music performance. Five different interaction modes are available, of which one allows for collaborative performances with up to four participants, and two let the user control the expressive performance using expressive hand gestures. Evaluations indicate that the application is interesting, fun to use, and that the gesture modes, especially the one based on data from free expressive gestures, allow for performances whose emotional content matches that of the gesture that produced them.

Keywords: Expressive performance, gesture, collaborative performance, mobile phone

Benjamin Schroeder, Marc Ainger, and Richard Parent.
A physically based sound space for procedural agents.
(Pages 120-123). [ bib | pdf ]

Abstract: Physically based sound models have a “natural” setting in dimensional space: a physical model has a shape and an extent and can be given a position relative to other models. In our experimental system, we place procedurally animated agents in a world of spatially situated physical models. The agents move in the same space as the models and can interact with them, playing the models and changing their configuration. The result is an ever-varying audiovisual landscape. This can be seen as purely generative - as a method for creating algorithmic music - or as a way to create instruments that change autonomously as a human plays them. A third perspective is in between these two: agents and humans can cooperate or compete to produce a gamelike or interactive experience.

Keywords: Physically based sound, behavioral animation, agents

Francisco Garcia, Leny Vinceslas, Esteban Maestre, and Josep Tubau.
Acquisition and study of blowing pressure profiles in recorder playing.
(Pages 124-127). [ bib | pdf ]

Abstract: This paper presents a study of blowing pressure profiles acquired from recorder playing. Blowing pressure signals are captured from real performance by means of a low-intrusiveness acquisition system constructed around commercial pressure sensors based on piezoelectric transducers. An alto recorder was mechanically modified by a luthier to allow the measurement and connection of sensors while preserving playability and minimizing intrusiveness. A multi-modal database including aligned blowing pressure and sound signals is constructed from real practice, covering the performance space by considering different fundamental frequencies, dynamics, articulations and note durations. Once signals were pre-processed and segmented, a set of temporal envelope features were defined as a basis for studying and constructing a simplified model of blowing pressure profiles in different performance contexts.

Keywords: Instrumental gesture, recorder, wind instrument, blowing pressure, multi-modal data

Anders Friberg and Anna Källblad.
Experiences from video-controlled sound installations.
(Pages 128-131). [ bib | pdf ]

Abstract: This is an overview of three installations: Hoppsa Universum, CLOSE and Flying Carpet. They were all designed as choreographed sound and music installations controlled by the visitors' movements. The perspective is that of an artistic goal and vision in combination with the technical challenges and possibilities. All three installations were realized with video cameras in the ceiling registering the users' position or movement. The video analysis then controlled different types of interactive software audio players. Different aspects such as narrativity, user control, and technical limitations are discussed.

Keywords: Gestures, dance, choreography, music installation, interactive music

Nicolas d'Alessandro, Roberto Calderon, and Stefanie Müller.
ROOM#81 - agent-based instrument for experiencing architectural and vocal cues.
(Pages 132-135). [ bib | pdf ]

Abstract: ROOM#81 is a digital art installation which explores how visitors can interact with architectural and vocal cues to intimately collaborate. The main space is split into two distinct areas separated by a soft wall, i.e. a large piece of fabric tensed vertically. Movement within these spaces and interaction with the soft wall are captured by various kinds of sensors. People's activity is constantly used by an agent to predict their actions. Machine learning is then employed by this agent to incrementally modify the nature of light in the room and some laryngeal aspects of synthesized vocal spasms. The combination of people closely collaborating together, light changes and vocal responses creates an intimate experience of touch, space and sound.

Keywords: Installation, instrument, architecture, interactive fabric, motion, light, voice synthesis, agent, collaboration

Yasuo Kuhara and Daiki Kobayashi.
Kinetic particles synthesizer using multi-touch screen interface of mobile devices.
(Pages 136-137). [ bib | pdf ]

Abstract: We developed a kinetic particles synthesizer for mobile devices with multi-touch screens, such as tablet PCs and smartphones. This synthesizer generates music based on the kinetics of particles under a two-dimensional physics engine. The particles move on the screen to synthesize sounds according to their own physical properties, which are shape, size, mass, linear and angular velocity, friction, restitution, etc. If a particle collides with others, a percussive sound is generated. A player can play music by the simple operation of touching or dragging on the screen of the device. Using a three-axis acceleration sensor, a player can also perform music by shuffling or tilting the device. Each particle sounds just a simple tone, but a large number of varied particles plays attractive music by aggregating their sounds. This concept has been inspired by natural sounds made from an assembly of simple components, for example, rustling leaves or falling rain. For a novice who has no experience of playing a musical instrument, it is easy to learn how to play instantly and enjoy performing music with intuitive operation. Our system can be used as a musical instrument for interactive music entertainment.

Keywords: Particle, Tablet PC, iPhone, iPod touch, iPad, Smart phone, Kinetics, Touch screen, Physics engine
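A toy version of the mechanism described above, with straight-line particle motion and wall bounces standing in for a full 2D physics engine; the pitch and level mappings are assumptions:

import random

class Particle:
    """A sounding particle with simple physical properties."""
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.vx = random.uniform(-0.2, 0.2)
        self.vy = random.uniform(-0.2, 0.2)
        self.size = random.uniform(0.02, 0.1)   # maps to pitch

def step(particles, dt=0.02):
    """Advance the simulation; return percussive events from wall impacts."""
    events = []
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt
        for pos_name, vel_name in (("x", "vx"), ("y", "vy")):
            pos = getattr(p, pos_name)
            if pos < 0.0 or pos > 1.0:           # hit a screen edge
                setattr(p, pos_name, min(max(pos, 0.0), 1.0))
                setattr(p, vel_name, -getattr(p, vel_name))
                events.append({
                    "pitch_hz": 20.0 / p.size,   # smaller particle = higher
                    "level": min(1.0, 5.0 * abs(getattr(p, vel_name))),
                })
    return events

swarm = [Particle() for _ in range(30)]
for _ in range(500):                  # ~10 s at 50 steps/s
    for event in step(swarm):
        pass                          # hand each event to a percussion voice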

Christopher Carlson, Eli Marschner, and Hunter Mccurry.
The sound flinger: A haptic spatializer.
(Pages 138-139). [ bib | pdf ]

Abstract: The Sound Flinger is an interactive sound spatialization instrument that allows users to touch and move sound. Users record audio loops from an mp3 player or other external source. By manipulating four motorized faders, users can control the locations of two virtual “sound objects” around a circle corresponding to the perimeter of a quadraphonic sound field. Physical models that simulate a spring-like interaction between each fader and the virtual sound objects generate haptic and aural feedback, allowing users to literally touch, wiggle, and fling sound around the room.

Keywords: NIME, CCRMA, haptics, force feedback, sound spatialization, multi-channel audio, linux audio, jack, Arduino, BeagleBoard, Pure Data (Pd), Satellite CCRMA
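The spring-like coupling described above can be sketched as a damped spring between each motorized fader and its virtual sound object: the same force term drives the fader motor (haptics) and can excite the audio feedback. Constants are illustrative, not the Sound Flinger's tuning:

K = 40.0    # spring stiffness, assumed
C = 2.0     # damping, assumed
DT = 0.001  # 1 kHz haptic update rate, assumed

def fader_force(fader_pos, fader_vel, object_pos):
    """Force to send to the motorized fader (damped spring toward object)."""
    return -K * (fader_pos - object_pos) - C * fader_vel

# Simulate releasing the fader one unit away from the sound object.
pos, vel = 1.0, 0.0
for _ in range(3000):                    # 3 seconds
    f = fader_force(pos, vel, object_pos=0.0)
    vel += f * DT                        # unit mass
    pos += vel * DT
print(round(pos, 3))                     # oscillates, then settles near 0.0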

Ravi Kondapalli and Benzhen Sung.
Daft Datum: an interface for producing music through foot-based interaction.
(Pages 140-141). [ bib | pdf ]

Abstract: Daft Datum is an autonomous new media artefact that takes input from movement of the feet (i.e. tapping/stomping/stamping) on a wooden surface, underneath which is a sensor sheet. The sensors in the sheet are mapped to various sound samples and synthesized sounds. Attributes of the synthesized sound, such as pitch and octave, can be controlled using the Nintendo Wii Remote. It also facilitates switching between modes of sound and recording/playing back a segment of audio. The result is music generated by dancing on the device that is further modulated by a hand-held controller.

Keywords: Daft Datum, Wii, Dance Pad, Feet, Controller, Bluetooth, Musical Interface, Dance, Sensor Sheet

Charles Martin and Chi-Hsia Lai.
Strike on stage: a percussion and media performance.
(Pages 142-143). [ bib | pdf ]

Abstract: This paper describes Strike on Stage, an interface and corresponding audio-visual performance work developed and performed in 2010 by percussionists and media artists Chi-Hsia Lai and Charles Martin. The concept of Strike on Stage is to integrate computer visuals and sound into an improvised percussion performance. A large projection surface is positioned directly behind the performers, while a computer vision system tracks their movements. The setup allows computer visualisation and sonification to be directly responsive and unified with the performers' gestures.

Keywords: percussion, media performance, computer vision

Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell, and Nicolas Misdariis.
Gestural embodiment of environmental sounds: an experimental study.
(Pages 144-148). [ bib | pdf ]

Abstract: In this paper we present an experimental study concerning gestural embodiment of environmental sounds in a listening context. The presented work is part of a project aiming at modeling movement-sound relationships, with the end goal of proposing novel approaches for designing musical instruments and sounding objects. The experiment is based on sound stimuli corresponding to “causal” and “non-causal” sounds. It is divided into a performance phase and an interview. The experiment is designed to investigate possible correlations between the perception of the “causality” of environmental sounds and different gesture strategies for the sound embodiment. In analogy with the perception of the sounds' causality, we propose to distinguish gestures that “mimic” a sound's cause from gestures that “trace” a sound's morphology by following temporal sound characteristics. Results from the interviews show that, first, our causal sounds led to consistent descriptions of the action at the origin of the sound, and participants mimicked this action. Second, non-causal sounds led to inconsistent metaphoric descriptions of the sound, and participants made gestures following sound “contours”. Quantitatively, the results show that gesture variability is higher for causal sounds than for non-causal sounds.

Keywords: Embodiment, Environmental Sound Perception, Listening, Gesture Sound Interaction

Sebastian Mealla, Aleksander Valjamae, Mathieu Bosi, and Sergi Jorda.
Listening to your brain: Implicit interaction in collaborative music performances.
(Pages 149-154). [ bib | pdf ]

Abstract: The use of physiological signals in Human Computer Interaction (HCI) is becoming popular and widespread, mostly due to sensor miniaturization and advances in real-time processing. However, most studies that use physiology-based interaction focus on single-user paradigms, and its usage in collaborative scenarios is still in its infancy. In this paper we explore how interactive sonification of brain and heart signals, and its representation through physical objects (physiopucks) in a tabletop interface, may enhance motivational and controlling aspects of music collaboration. A multimodal system is presented, based on an electrophysiology sensor system and the Reactable, a musical tabletop interface. Performance and motivation variables were assessed in an experiment involving a test “Physio” group (N=22) and a control “Placebo” group (N=10). Pairs of participants used two methods for sound creation: implicit interaction through physiological signals, and explicit interaction by means of gestural manipulation. The results showed that pairs in the Physio group declared less difficulty, higher confidence and more symmetric control than the Placebo group, where no real-time sonification was provided, as subjects were unknowingly using a pre-recorded physiological signal. These results support the feasibility of introducing physiology-based interaction in multimodal interfaces for collaborative music generation.

Keywords: Music, Tabletops, Physiopucks, Physiological Computing, BCI, HCI, Collaboration, CSCW, Multimodal Interfaces

Dan Newton and Mark Marshall.
Examining how musicians create augmented musical instruments.
(Pages 155-160). [ bib | pdf ]

Abstract: This paper examines the creation of augmented musical instruments by a number of musicians. Equipped with a system called the Augmentalist, 10 musicians created new augmented instruments based on their traditional acoustic or electric instruments. This paper discusses the ways in which the musicians augmented their instruments, examines the similarities and differences between the resulting instruments and presents a number of interesting findings resulting from this process.

Keywords: Augmented Instruments, Instrument Design, Digital Musical Instruments, Performance

Zachary Seldess and Toshiro Yamada.
Tahakum: A multi-purpose audio control framework.
(Pages 161-166). [ bib | pdf ]

Abstract: We present “Tahakum”, an open source, extensible collection of software tools designed to enhance workflow on multichannel audio systems within complex multi-functional research and development environments. Tahakum aims to provide critical functionality required across a broad spectrum of audio system usage scenarios, while at the same time remaining sufficiently open to easily support modifications and extensions via third-party hardware and software. Features provided in the framework include software for custom mixing/routing and audio system preset automation, software for network message routing/redirection and protocol conversion, and software for dynamic audio asset management and control.

Keywords: Audio Control Systems, Audio for VR, Max/MSP, Spatial Audio

Dawen Liang, Guangyu Xia, and Roger Dannenberg.
A framework for coordination and synchronization of media.
(Pages 167-172). [ bib | pdf ]

Abstract: Computer music systems that coordinate or interact with human musicians exist in many forms. Often, coordination is at the level of gestures and phrases without synchronization at the beat level (or perhaps the notion of “beat” does not even exist). In music with beats, fine-grain synchronization can be achieved by having humans adapt to the computer (e.g. following a click track), or by computer accompaniment in which the computer follows a predetermined score. We consider an alternative scenario in which improvisation prevents traditional score following, but where synchronization is achieved at the level of beats, measures, and cues. To explore this new type of human-computer interaction, we have created new software abstractions for synchronization and coordination of music and interfaces in different modalities. We describe these new software structures, present examples, and introduce the idea of music notation as an interactive musical interface rather than a static document.

Keywords: Real-time, Interactive, Music Display, Popular Music, Automatic Accompaniment, Synchronization

Edgar Berdahl and Wendy Ju.
Satellite ccrma: A musical interaction and sound synthesis platform.
(Pages 173-178). [ bib | pdf ]

Abstract: This paper describes a new Beagle Board-based platform for teaching and practicing interaction design for musical applications. The migration from desktop and laptop computer-based sound synthesis to a compact and integrated control, computation and sound generation platform has enormous potential to widen the range of computer music instruments and installations that can be designed, and improves the portability, autonomy, extensibility and longevity of designed systems. We describe the technical features of the Satellite CCRMA platform and contrast it with personal computer-based systems used in the past as well as emerging smart phone-based platforms. The advantages and tradeoffs of the new platform are considered, and some project work is described.

Keywords: NIME, Microcontrollers, Music Controllers, Pedagogy, Texas Instruments OMAP, Beagle Board, Arduino, PD, Linux, open-source

Nicholas J. Bryan and Ge Wang.
Two turntables and a mobile phone.
(Pages 179-184). [ bib | pdf ]

Abstract: A novel method of digital scratching is presented as an alternative to currently available digital hardware interfaces and time-coded vinyl (TCV). Similar to TCV, the proposed method leverages existing analog turntables as a physical interface to manipulate the playback of digital audio. To do so, however, an accelerometer/gyroscope-equipped smart phone is firmly attached to a modified record, placed on a turntable, and used to sense a performer's movement, resulting in a wireless sensing-based scratching method. The accelerometer and gyroscope data is wirelessly transmitted to a computer to manipulate the digital audio playback in real-time. The method provides the benefit of digital audio and storage, requires minimal additional hardware, accommodates familiar proprioceptive feedback, and allows a single interface to control both digital and analog audio. In addition, the proposed method provides numerous additional benefits including real-time graphical display, multi-touch interaction, and untethered performance (e.g., “air-scratching”). Such a method turns a vinyl record into an interactive surface and enhances traditional scratching performance by affording new and creative musical interactions. Informal testing shows this approach to be viable, responsive, and robust.

Keywords: Digital scratching, mobile music, digital DJ, smartphone, turntable, turntablism, record player, accelerometer, gyroscope, vinyl emulation software
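The core sensing idea above reduces to mapping the record's measured angular velocity to an audio playback rate, with nominal platter speed as the reference. A sketch under that assumption (the smoothing constant is a made-up tuning value):

import math

NOMINAL_RPM = 33.333                        # standard LP speed
NOMINAL_RAD_S = NOMINAL_RPM * 2 * math.pi / 60.0

def playback_rate(gyro_z_rad_s, smoothing_state, alpha=0.2):
    """Map gyroscope z-axis angular velocity to an audio playback rate
    (1.0 = normal speed, negative = backwards scratch). A one-pole
    smoother tames sensor noise."""
    smoothed = alpha * gyro_z_rad_s + (1 - alpha) * smoothing_state
    return smoothed / NOMINAL_RAD_S, smoothed

state = 0.0
for omega in [3.49, 3.49, -7.0, 0.0]:       # spin forward, yank back, stop
    rate, state = playback_rate(omega, state)
    print(round(rate, 2))                   # rate eases toward each gesture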

Nick Kruge and Ge Wang.
Madpad: A crowdsourcing system for audiovisual sampling.
(Pages 185-190). [ bib | pdf ]

Abstract: MadPad is a networked audiovisual sample station for mobile devices. Twelve short video clips are loaded onto the screen in a grid and playback is triggered by tapping anywhere on the clip. This is similar to tapping the pads of an audio sample station, but extends that interaction to add visual sampling. Clips can be shot on-the-fly with a camera-enabled mobile device and loaded into the player instantly, giving the performer the ability to quickly transform his or her surroundings into a sample-based, audiovisual instrument. Samples can also be sourced from an online community in which users can post or download content. The recent ubiquity of multitouch mobile devices and advances in pervasive computing have made this system possible, providing for a vast amount of content only limited by the imagination of the performer and the community. This paper presents the core features of MadPad and the design explorations that inspired them.

Keywords: mobile music, networked music, social music, audiovisual, sampling, user-generated content, crowdsourcing, sample station, iPad, iPhone

Patrick O'Keefe and Georg Essl.
The visual in mobile music performance.
(Pages 191-196). [ bib | pdf ]

Abstract: Visual information integration in mobile music performance is an area that has not been thoroughly explored and current applications are often individually designed. From camera input to flexible output rendering, we discuss visual performance support in the context of urMus, a meta-environment for mobile interaction and performance development. The use of cameras, a set of image primitives, interactive visual content, projectors, and camera flashes can lead to visually intriguing performance possibilities.

Keywords: Mobile performance, visual interaction, camera phone, mobile collaboration

Ge Wang, Jieun Oh, and Tom Lieber.
Designing for the ipad: Magic fiddle.
(Pages 197-202). [ bib | pdf ]

Abstract: This paper describes the origin, design, and implementation of Smule's Magic Fiddle, an expressive musical instrument for the iPad. Magic Fiddle takes advantage of the physical aspects of the device to integrate game-like and pedagogical elements. We describe the origin of Magic Fiddle, chronicle its design process, discuss its integrated music education system, and evaluate the overall experience.

Keywords: Magic Fiddle, iPad, physical interaction design, experiential design, music education

Benjamin Knapp and Brennon Bortz.
MobileMuse: Integral music control goes mobile.
(Pages 203-206). [ bib | pdf ]

Abstract: This paper describes a new interface for mobile music creation, the MobileMuse, which introduces the capability of using physiological indicators of emotion as a new mode of interaction. Combining both kinematic and physiological measurement in a mobile environment creates the possibility of integral music control, the use of both gesture and emotion to control sound creation, where it has never been possible before. This paper reviews the concept of integral music control and describes the motivation for creating the MobileMuse, its design, and future possibilities.

Keywords: Affective computing, physiological signal measurement, mobile music performance

Stephen Beck, Chris Branton, Sharath Maddineni, Brygg Ullmer, and Shantenu Jha.
Tangible performance management of grid-based laptop orchestras.
(Pages 207-210). [ bib | pdf ]

Abstract: Laptop Orchestras (LOs) have recently become a very popular mode of musical expression. They engage groups of performers to use ordinary laptop computers as instruments and sound sources in the performance of specially created music software. Perhaps the biggest challenge for LOs is the distribution, management and control of software across heterogeneous collections of networked computers. Software must be stored and distributed from a central repository, but launched on individual laptops immediately before performance. The GRENDL project leverages proven grid computing frameworks and approaches the Laptop Orchestra as a distributed computing platform for interactive computer music. This allows us to readily distribute software to each laptop in the orchestra depending on the laptop's internal configuration, its role in the composition, and the player assigned to that computer. Using the SAGA framework, GRENDL is able to distribute software and manage system and application environments for each composition. Our latest version includes tangible control of the GRENDL environment for a more natural and familiar user experience.

Keywords: laptop orchestra, tangible interaction, grid computing

Smilen Dimitrov and Stefania Serafin.
AudioArduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos.
(Pages 211-216). [ bib | pdf ]

Abstract: A contemporary PC user typically expects a sound card to be a piece of hardware that can be manipulated by 'audio' software (most typically exemplified by 'media players') and that allows interfacing of the PC to audio reproduction and/or recording equipment. As such, a 'sound card' can be considered to be a system that encompasses design decisions on both hardware and software levels, which also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through the implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller; and nothing more than headphones (and a couple of capacitors). The main contribution of this paper is to bring a holistic aspect to the discussion of soundcard implementation by referring to open-source driver code, microcontroller code and test methods, and to outline a complete implementation of an open yet functional soundcard system.

Keywords: Sound card, Arduino, audio, driver, ALSA, Linux
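For intuition about the data path described above: at 8 bits and 44.1 kHz mono, the audio stream is just 44,100 bytes per second, which fits easily in a fast FTDI serial link. A host-side Python sketch of such a byte stream (an illustration only, not the paper's ALSA driver; the port name and baud rate are assumptions):

import numpy as np
import serial  # pyserial

FS = 44100

# One second of a 440 Hz sine, quantized to unsigned 8-bit samples.
t = np.arange(FS) / FS
samples = (127.5 + 127.0 * np.sin(2 * np.pi * 440.0 * t)).astype(np.uint8)

port = serial.Serial("/dev/ttyUSB0", baudrate=2000000)  # hypothetical port/baud
try:
    # 44,100 bytes/s fits comfortably in a 2 Mbaud link (~200 kB/s payload).
    for offset in range(0, len(samples), 1024):
        port.write(samples[offset:offset + 1024].tobytes())
finally:
    port.close()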

Seunghun Kim and Woon Seung Yeo.
Musical control of a pipe based on acoustic resonance.
(Pages 217-219). [ bib | pdf ]

Abstract: In this paper, we introduce a pipe interface that recognizes touch on tone holes by means of the resonances in the pipe instead of touch sensors. This work is based on the acoustic principles of woodwind instruments, avoiding complex sensors and electronic circuits in order to develop a simple and durable interface. The measured signals were analyzed to show that different fingerings generate distinct sounds. The audible resonance signal in the pipe interface can be used as a sonic event for musical expression by itself and also as an input parameter for mapping to different sounds.

Keywords: resonance, mapping, pipe

Anne-Marie Hansen, Hans Jørgen Andersen, and Pirkko Raudaskoski.
Play fluency in music improvisation games for novices.
(Pages 220-223). [ bib | pdf ]

Abstract: In this paper a collaborative music game for two pen tablets is studied in order to see how two people with no professional music background negotiate musical improvisation. In an initial study of what constitutes play fluency in improvisation, a music game was designed and evaluated through video analysis: a qualitative view of mutual action describes the social context of music improvisation, i.e. how two people negotiate individual and joint action with speech, laughter, gestures, postures and pauses. The objective behind the design of the game application was to support players in some aspects of their mutual play. Results show that even though players activated additional sound feedback as a result of their mutual play, they also engaged in forms of mutual play that the game engine did not account for. These forms of mutual play are described further, along with suggestions for how to direct future designs of collaborative music improvisation games towards them.

Keywords: Collaborative interfaces, improvisation, interactive music games, social interaction, play, novice

Izzi Ramkissoon.
The bass sleeve: A real-time multimedia gestural controller for augmented electric bass performance.
(Pages 224-227). [ bib | pdf ]

Abstract: The Bass Sleeve uses an Arduino board with a combination of buttons, switches, flex sensors, force sensing resistors, and an accelerometer to map the ancillary movements of a performer to sampling, real-time audio and video processing including pitch shifting, delay, low pass filtering, and onscreen video movement. The device was created to augment the existing functions of the electric bass and explore the use of ancillary gestures to control the laptop in a live performance. In this research it was found that incorporating ancillary gestures into a live performance could be useful when controlling the parameters of audio processing, sound synthesis and video manipulation. These ancillary motions can be a practical solution to gestural multitasking, allowing independent control of computer music parameters while performing with the electric bass. The process of performing with the Bass Sleeve resulted in a greater amount of laptop control, an increase in the amount of expressiveness using the electric bass in combination with the laptop, and an improvement in the interactivity on both the electric bass and laptop during a live performance. The design uses various gesture-to-sound mapping strategies to accomplish a compositional task during an electroacoustic multimedia performance piece.

Keywords: Interactive Music, Interactive Performance Systems, Gesture Controllers, Augmented Instruments, Electric Bass, Video Tracking

Ajay Kapur, Michael Darling, James Murphy, Jordan Hochenbaum, Dimitri Diakopoulos, and Trimpin.
The KarmetiK NotomotoN: A new breed of musical robot for teaching and performance.
(Pages 228-231). [ bib | pdf ]

Abstract: This paper describes the KarmetiK NotomotoN, a new musical robotic system for performance and education. A long-standing goal of the authors has been to provide users with a plug-and-play, highly expressive and highly portable musical robot system. This paper presents the technical details of the NotomotoN and discusses its use in performance and educational scenarios. Detailed tests performed to optimize technical aspects of the NotomotoN are described, highlighting usability and performance specifications for electronic musicians and educators.

Keywords: Musical Robotics, Music Technology, Robotic Performance, NotomotoN, KarmetiK

Adrian Barenca and Giuseppe Torre.
The manipuller: Strings manipulation and multi-dimensional force sensing.
(Pages 232-235). [ bib | pdf ]

Abstract: The Manipuller is a novel Gestural Controller based on strings manipulation and multi-dimensional force sensing technology. This paper describes its motivation, design and operational principles along with some of its musical applications. Finally the results of a preliminary usability test are presented and discussed.

Keywords: Gestural Controller, Strings, Manipulation, Force Sensing.

Alain Crevoisier and Cécile Picard-Limpens.
Mapping objects with the surface editor.
(Pages 236-239). [ bib | pdf ]

Abstract: The Surface Editor is a software tool for creating control interfaces and mapping input actions to OSC or MIDI actions very easily and intuitively. Originally conceived to be used with a tactile interface, the Surface Editor has been extended to support the creation of graspable interfaces as well. This paper presents a new framework for the generic mapping of user actions with graspable objects on a surface. We also present a system for detecting touch on thin objects, allowing for extended interactive possibilities. The Surface Editor is not limited to a particular tracking system though, and the generic mapping approach for objects can have a broader use with various input interfaces supporting touch and/or objects.

Keywords: NIME, mapping, interaction, user-defined interfaces, tangibles, graspable interfaces

Jordan Hochenbaum and Ajay Kapur.
Adding z-depth and pressure expressivity to tangible tabletop surfaces.
(Pages 240-243). [ bib | pdf ]

Abstract: This paper presents the SmartFiducial, a wireless tangible object that facilitates additional modes of expressivity for vision-based tabletop surfaces. Using infrared proximity sensing and resistive force sensors, the SmartFiducial affords users unique and highly gestural inputs. Furthermore, the SmartFiducial incorporates additional customizable pushbutton switches. Using XBee radio-frequency (RF) wireless transmission, the SmartFiducial establishes bidirectional communication with a host computer. This paper describes the design and implementation of the SmartFiducial, as well as an exploratory use in a musical context.

Keywords: Fiducial, Tangible Interface, Multi-touch, Sensors, Gesture, Haptics, Bricktable, Proximity Sensing

Andrew Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl, and Simon Holland.
Hex player: A virtual musical controller.
(Pages 244-247). [ bib | pdf ]

Abstract: In this paper, we describe a playable musical interface for tablets and multi-touch tables. The interface is a generalized keyboard, inspired by the Thummer, and consists of an array of virtual buttons. On a generalized keyboard, any given interval always has the same shape (and therefore fingering); furthermore, the fingering is consistent over a broad range of tunings. Compared to a physical generalized keyboard, a virtual version has some advantages; notably, the spatial location of the buttons can be transformed by shears and rotations, and their colouring can be changed to reflect their musical function in different scales. We exploit these flexibilities to facilitate the playing not just of conventional Western scales but also of a wide variety of microtonal generalized diatonic scales known as moment-of-symmetry, or well-formed, scales. A user can choose such a scale, and the buttons are automatically arranged so that their spatial height corresponds to their pitch, and buttons an octave apart are always vertically above each other. Furthermore, the most numerous scale steps run along rows, while buttons within the scale are light-coloured and those outside are dark or removed. These features can aid beginners; for example, the chosen scale might be the diatonic, in which case the piano's familiar white and black colouring of the seven diatonic and five chromatic notes is used, but only one scale fingering need ever be learned (unlike on a piano, where every key needs a different fingering). Alternatively, it can assist advanced composers and musicians seeking to explore the universe of unfamiliar microtonal scales.

Keywords: generalized keyboard, isomorphic layout, multi-touch surface, tablet, musical interface design, iPad, microtonality
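
As an illustration of the isomorphic-layout idea described in this abstract, the sketch below (in Python, not from the paper) computes the pitch of a button from its lattice coordinates using two generator intervals, so that any given interval has the same shape anywhere on the grid; the generator sizes shown (a tempered fifth and an octave) are assumptions chosen for illustration.

    # Minimal sketch of a generalized (isomorphic) keyboard mapping.
    # Each button at lattice coordinates (col, row) gets a pitch that is a
    # fixed linear combination of two generator intervals, so every musical
    # interval has the same geometric shape anywhere on the grid.

    def button_pitch_hz(col, row, base_hz=261.63,
                        gen1_cents=700.0, gen2_cents=1200.0):
        """Pitch of the button at (col, row).

        gen1_cents and gen2_cents are the two generators (here an equal-
        tempered fifth and an octave -- illustrative values only); retuning
        the whole layout amounts to changing these two numbers.
        """
        cents = col * gen1_cents + row * gen2_cents
        return base_hz * 2 ** (cents / 1200.0)

    # The interval from (0, 0) to (1, 0) is a fifth everywhere on the grid:
    assert abs(button_pitch_hz(1, 0) / button_pitch_hz(0, 0)
               - 2 ** (700 / 1200)) < 1e-9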

Carl Haakon Waadeland.
Rhythm performance from a spectral point of view.
(Pages 248-251). [ bib | pdf ]

Abstract: Basic to both performance and experience of rhythm in music is a connection between musical rhythm and patterns of body movements. A main focus in this study is to investigate possible relations between movement categories and rhythmic expression. An analytical approach to this task is to regard a musician's various ways of moving when playing an instrument as an expression of timbral aspects of rhythm, and to apply FFT to empirical data of the musician's movements in order to detect spectral components that are characteristic of the performance. In the present paper we exemplify this approach by reporting some findings from empirical investigations of jazz drummers' movements in performances of swing groove. In particular we show that performances of the groove in three different tempi (60, 120, 300 bpm) yield quite different spectral characteristics of the movements. This spectral approach to rhythm performance might suggest alternative ways of constructing syntheses and models of rhythm production, and could also be of interest for the construction of interfaces based on detecting spectral properties of body movements.

Keywords: Rhythm performance, movement, gesture, spectral analysis, swing
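
A minimal sketch of the spectral approach described above, assuming a one-dimensional movement trajectory sampled at a known rate; the 240 Hz rate and the synthetic signal below are illustrative, not the author's data or code.

    # Sketch: FFT-based spectral analysis of a movement trajectory,
    # e.g. the vertical position of a drummer's hand sampled at 240 Hz.
    import numpy as np

    def movement_spectrum(trajectory, sample_rate_hz):
        """Return (frequencies, magnitudes) for a 1-D movement signal."""
        x = np.asarray(trajectory, dtype=float)
        x = x - x.mean()                     # remove the static posture offset
        mags = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
        return freqs, mags

    # A synthetic "swing" movement at 2 Hz (120 bpm) shows a clear peak:
    t = np.arange(0, 10, 1 / 240)
    freqs, mags = movement_spectrum(np.sin(2 * np.pi * 2.0 * t), 240)
    print(freqs[np.argmax(mags)])            # ~2.0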

Josep M Comajuncosas, Enric Guaus, Alex Barrachina, and John O'Connell.
Nuvolet : 3d gesture-driven collaborative audio mosaicing.
(Pages 252-255). [ bib | pdf ]

Abstract: This research presents a 3D gestural interface for collaborative concatenative sound synthesis and audio mosaicing. Our goal is to improve the communication between the audience and performers by means of an enhanced correlation between gestures and musical outcome. Nuvolet consists of a 3D motion controller coupled to a concatenative synthesis engine. The interface detects and tracks the performers' hands in four dimensions (x, y, z, t) and allows them to concurrently explore two- or three-dimensional sound-cloud representations of the units in the sound corpus, as well as to perform collaborative target-based audio mosaicing. Nuvolet is included in the Esmuc Laptop Orchestra catalog for forthcoming performances.

Keywords: concatenative synthesis, audio mosaicing, open-air interface, gestural controller, musical instrument, 3D

Erwin Schoonderwaldt and Alexander Refsum Jensenius.
Effective and expressive movements in a french-canadian fiddler's performance.
(Pages 256-259). [ bib | pdf ]

Abstract: We report on a performance study of a French-Canadian fiddler. The fiddling tradition forms an interesting contrast to classical violin performance in several ways. Distinguishing features include special elements in the bowing technique and the presence of an accompanying foot clogging pattern. These two characteristics are described, visualized and analyzed using video and motion capture recordings as source material.

Keywords: fiddler, violin, French-Canadian, bowing, feet, clogging, motion capture, video, motiongram, kinematics, sonification

Daniel Bisig, Jan Schacher, and Martin Neukom.
Flowspace: A hybrid ecosystem.
(Pages 260-263). [ bib | pdf ]

Abstract: In this paper an audio-visual installation is discussed, which combines interactive, immersive and generative elements. After introducing some of the challenges in the field of Generative Art and placing the work within its research context, conceptual reflections are made about the spatial, behavioural, perceptual and social issues that are raised within the entire installation. A discussion about the artistic content follows, focussing on the scenography and on working with flocking algorithms in general, before addressing three specific pieces realised for the exhibition. Next the technical implementation of both hardware and software is detailed, before the idea of a hybrid ecosystem is discussed and further developments are outlined.

Keywords: Generative Art, Interactive Environment,Immersive Installation, Swarm Simulation, Hybrid Ecosystem

Marc Sosnick and William Hsu.
Implementing a finite difference-based real-time sound synthesizer using gpus.
(Pages 264-267). [ bib | pdf ]

Abstract: In this paper, we describe an implementation of a real-time sound synthesizer using Finite Difference-based simulation of a two-dimensional membrane. Finite Difference (FD) methods can be the basis for physics-based music instrument models that generate realistic audio output. However, such methods are compute-intensive; large simulations cannot run in real time on current CPUs. Many current systems now include powerful Graphics Processing Units (GPUs), which are a good fit for FD methods. We demonstrate that it is possible to use this method to create a usable real-time audio synthesizer.

Keywords: Finite Difference, GPU, CUDA, Synthesis
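
The core of such a simulation is the standard leapfrog finite-difference update for the 2-D wave equation, sketched below in plain NumPy for clarity; the paper's contribution lies in running updates like this on the GPU, which this CPU-only sketch does not attempt.

    # Sketch: leapfrog finite-difference update for a 2-D membrane
    # (wave equation with fixed boundaries), audio read at one grid point.
    import numpy as np

    def fd_step(u_prev, u_curr, c2):
        """Advance the membrane one time step.

        c2 is the squared Courant number (c * dt / dx) ** 2; the scheme
        is stable in 2-D for c2 <= 0.5.
        """
        lap = (u_curr[:-2, 1:-1] + u_curr[2:, 1:-1] +
               u_curr[1:-1, :-2] + u_curr[1:-1, 2:] - 4 * u_curr[1:-1, 1:-1])
        u_next = u_curr.copy()
        u_next[1:-1, 1:-1] = (2 * u_curr[1:-1, 1:-1]
                              - u_prev[1:-1, 1:-1] + c2 * lap)
        return u_next

    n = 64
    u_prev, u_curr = np.zeros((n, n)), np.zeros((n, n))
    u_curr[n // 2, n // 2] = 1.0             # strike the membrane
    samples = []
    for _ in range(1000):
        u_prev, u_curr = u_curr, fd_step(u_prev, u_curr, c2=0.4)
        samples.append(u_curr[n // 4, n // 4])   # pick up the output here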

Axel Tidemann.
An artificial intelligence architecture for musical expressiveness that learns by imitation.
(Pages 268-271). [ bib | pdf ]

Abstract: Interacting with musical avatars has become increasingly popular over the years, with the introduction of games like Guitar Hero and Rock Band. These games provide MIDI-equipped controllers that look like their real-world counterparts (e.g. MIDI guitar, MIDI drumkit) and that users play to control their designated avatar in the game. The performance of the user is measured against a score that must be followed. However, the avatar does not move in response to how the user plays; it follows a predefined movement pattern. If the user plays badly, the game ends with the avatar breaking off the performance (e.g. throwing the guitar on the floor). The gaming experience would improve if the avatar moved in accordance with user input. This paper presents an architecture that couples musical input with body movement. Using imitation learning, a simulated humanoid robot learns to play the drums as human drummers do, both visually and auditorily. Learning data is recorded using MIDI and motion tracking. The system implements imitation learning using an artificial intelligence approach, employing artificial neural networks.

Keywords: Modeling Human Behaviour, Drumming, Artificial Intelligence

Luke Dahl, Jorge Herrera, and Carr Wilkerson.
Tweetdreams: Making music with the audience and the world using real-time twitter data.
(Pages 272-275). [ bib | pdf ]

Abstract: TweetDreams is an instrument and musical composition which creates real-time sonification and visualization of tweets. Tweet data containing specified search terms is retrieved from Twitter and used to build networks of associated tweets. These networks govern the creation of melodies associated with each tweet and are displayed graphically. Audience members participate in the piece by tweeting, and their tweets are given special musical and visual prominence.

Keywords: Twitter, audience participation, sonification, data visualization, text processing, interaction, multi-user instrument

Lawrence Fyfe, Adam Tindale, and Sheelagh Carpendale.
Junctionbox: A toolkit for creating multi-touch sound control interfaces.
(Pages 276-279). [ bib | pdf ]

Abstract: JunctionBox is a new software toolkit for creating multitouch interfaces for controlling sound and music. More specifically, the toolkit has special features which make it easy to create TUIO-based touch interfaces for controlling sound engines via Open Sound Control. Programmers using the toolkit have a great deal of freedom to create highly customized interfaces that work on a variety of hardware.

Keywords: Multi-touch, Open Sound Control, Toolkit, TUIO

Andrew Johnston.
Beyond evaluation: Linking practice and theory in new musical interface design.
(Pages 280-283). [ bib | pdf ]

Abstract: This paper presents an approach to practice-based research in new musical instrument design. At a high level, the process involves drawing on relevant theories and aesthetic approaches to design new instruments, attempting to identify relevant applied design criteria, and then examining the experiences of performers who use the instruments with particular reference to these criteria. Outcomes of this process include new instruments, theories relating to musician-instrument interaction, and a set of design criteria informed by practice and research.

Keywords: practice-based research, evaluation, Human-Computer Interaction, research methods, user studies

Phillip Popp and Matthew Wright.
Intuitive real-time control of spectral model synthesis.
(Pages 284-287). [ bib | pdf ]

Abstract: Several methods exist for manipulating spectral models, either by applying transformations via higher-level features or by providing in-depth offline editing capabilities. In contrast, our system aims for direct, full, intuitive, real-time control without exposing any spectral-model features to the user. The system extends previous machine-learning work in gesture-synthesis mapping by applying it to spectral models; these are a unique and interesting use case in that they are capable of reproducing real-world recordings, owing to their relatively high data rate and complex, intertwined and synergetic structure. To achieve direct and intuitive control of a spectral model, a method to extract an individualized mapping between Wacom pen parameters and spectral-model synthesis frames is described and implemented as a standalone application. The method works by capturing tablet parameters as the user pantomimes to a synthesized spectral model. A transformation from Wacom pen parameters to gestures is obtained by extracting features from the pen and then transforming those features using Principal Component Analysis. A linear model then maps between gestures and higher-level features of the spectral-model frames, while a k-nearest-neighbour algorithm maps between gestures and normalized spectral-model frames.

Keywords: Spectral Model Synthesis, Gesture Recognition, Synthesis Control, Wacom Tablet, Machine Learning
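
A schematic sketch of the mapping pipeline the abstract describes: pen features are reduced by PCA to a gesture space, and a k-nearest-neighbour model maps gestures to stored normalized spectral frames. It uses scikit-learn and random stand-in data for brevity and is not the authors' implementation (the paper's parallel linear model to higher-level features is omitted here).

    # Sketch: pen features -> gesture space (PCA) -> spectral frames (kNN),
    # trained from a recorded pantomime session (random stand-in data here).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    pen_features = rng.random((500, 6))      # e.g. x, y, pressure, tilt, deltas
    spectral_frames = rng.random((500, 40))  # normalized spectral-model frames

    pca = PCA(n_components=3).fit(pen_features)
    gestures = pca.transform(pen_features)
    knn = KNeighborsRegressor(n_neighbors=5).fit(gestures, spectral_frames)

    def frame_for_pen(sample):
        """Spectral frame to synthesize for one incoming pen sample."""
        return knn.predict(pca.transform(np.atleast_2d(sample)))[0]

    print(frame_for_pen(rng.random(6)).shape)    # (40,)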

Pablo Molina, Martin Haro, and Sergi Jordà .
Beatjockey: A new tool for enhancing dj skills.
(Pages 288-291). [ bib | pdf ]

Abstract: We present BeatJockey, a prototype interface which makes use of audio mosaicing (AM), beat-tracking and machine-learning techniques to support disc jockeys (DJs) by proposing new ways of interacting with the songs on the DJ's playlist. This prototype introduces a new paradigm to DJing in which the user can mix songs by interacting with beat-units that accompany the DJ's mix. For this type of interaction, the system suggests song slices, taken from songs selected from a playlist, which could go well with the beats of whatever master song is being played. In addition, the system allows the synchronization of multiple songs, thus permitting flexible, coherent and rapid progressions in the DJ's mix. BeatJockey uses the Reactable, a musical tangible user interface (TUI), and has been designed to be used by DJs of any level of expertise, as the system helps the novice while bringing new creative opportunities to the expert.

Keywords: DJ, music information retrieval, audio mosaicing, percussion, turntable, beat-mash, interactive music interfaces, realtime, tabletop interaction, reactable

Jan Schacher and Angela Stoecklin.
Traces: Body and motion and sound.
(Pages 292-295). [ bib | pdf ]

Abstract: In this paper the relationship between body, motion and sound is addressed. A comparison with traditional instruments and dance is made with regard to basic types of motion. The difference between gesture and movement is outlined, and some of the models used in dance for structuring motion sequences are described. In order to identify expressive aspects of motion sequences, a test scenario is devised. After a description of the methods and tools used in a series of measurements, two types of data display are shown and then applied in the interpretation. One salient feature is recognized and put into perspective with regard to movement and gestalt perception. Finally the merits of the technical means that were applied are compared, and a model-based approach to motion-sound mapping is proposed.

Keywords: Interactive Dance, Motion and Gesture, Sonification, Motion Perception, Mapping

Grace Leslie and Tim Mullen.
Moodmixer: Eeg-based collaborative sonification.
(Pages 296-299). [ bib | pdf ]

Abstract: MoodMixer is an interactive installation in which participants collaboratively navigate a two-dimensional music space by manipulating their cognitive state and conveying this state via wearable Electroencephalography (EEG) technology. The participants can choose to actively manipulate or passively convey their cognitive state depending on their desired approach and experience level. A four-channel electronic music mixture continuously conveys the participants' expressed cognitive states while a colored visualization of their locations on a two-dimensional projection of cognitive state attributes aids their navigation through the space. MoodMixer is a collaborative experience that incorporates aspects of both passive and active EEG sonification and performance art. We discuss the technical design of the installation and place its collaborative sonification aesthetic design within the context of existing EEG-based music and art.

Keywords: EEG, BCMI, collaboration, sonification, visualization

Ståle A. Skogstad, Kristian Nymoen, Yago De Quay, and Alexander Refsum Jensenius.
Osc implementation and evaluation of the xsens mvn suit.
(Pages 300-303). [ bib | pdf ]

Abstract: The paper presents research about implementing a full body inertial motion capture system, the Xsens MVN suit, for musical interaction. Three different approaches for streaming real time and prerecorded motion capture data with Open Sound Control have been implemented. Furthermore, we present technical performance details and our experience with the motion capture system in realistic practice.
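
As a minimal illustration of streaming motion-capture data with Open Sound Control, the sketch below sends one joint position per OSC message using the python-osc package; the address pattern, host and port are invented for illustration and are not those of the authors' implementation.

    # Sketch: stream motion-capture joint positions over OSC.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # host and port are assumptions

    def send_frame(frame_id, joints):
        """joints: dict mapping joint name -> (x, y, z) position in metres."""
        for name, (x, y, z) in joints.items():
            # One message per joint; the address pattern is this sketch's own.
            client.send_message(f"/mocap/{name}", [frame_id, x, y, z])

    send_frame(0, {"right_hand": (0.42, 1.31, 0.05)})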

Lonce Wyse, Norikazu Mitani, and Suranga Nanayakkara.
The effect of visualizing audio targets in a musical listening and performance task.
(Pages 304-307). [ bib | pdf ]

Abstract: The goal of our research is to find ways of supporting and encouraging musical behavior by non-musicians in shared public performance environments. Previous studies indicated that simultaneous music listening and performance is difficult for non-musicians, and that visual support for the task might be helpful. This paper presents results from a preliminary user study conducted to evaluate the effect of visual feedback on a musical tracking task. Participants generated a musical signal by manipulating a hand-held device with two dimensions of control over two parameters, pitch and density of note events, and were given the task of following a target pattern as closely as possible. The target pattern was a machine-generated musical signal comprising variation over the same two parameters. Visual feedback provided participants with information about the control parameters of the musical signal generated by the machine. We measured task performance under different visual feedback strategies. Results show that single-parameter visualizations tend to improve tracking performance with respect to the visualized parameter, but not the non-visualized parameter. Visualizing two independent parameters simultaneously decreases performance in both dimensions.

Keywords: Mobile phone, Interactive music performance, Listening, Group music play, Visual support

Adrian Freed, John Maccallum, and Andrew Schmeder.
Composability for musical gesture signal processing using new osc-based object and functional programming extensions to max/msp.
(Pages 308-311). [ bib | pdf ]

Abstract: An effective programming style for gesture signal processing is described, using a new library that brings efficient run-time polymorphism and functional and instance-based object-oriented programming to Max/MSP. By introducing better support for generic programming and composability, it makes Max/MSP a more productive environment for managing the growing scale and complexity of gesture sensing systems for musical instruments and interactive installations.

Keywords: Composability, object, Open Sound Control, Gesture Signal Processing, Max/MSP, Functional Programming, Object-Oriented Programming, Delegation

Kristian Nymoen, Ståle A. Skogstad, and Alexander Refsum Jensenius.
Soundsaber - a motion capture instrument.
(Pages 312-315). [ bib | pdf ]

Abstract: The paper presents the SoundSaber, a musical instrument based on motion capture technology. We present technical details of the instrument and discuss the design and development process. The SoundSaber may be used as an example of how high-fidelity motion capture equipment can be used for prototyping musical instruments, and we illustrate this with an example of a low-cost implementation of our motion capture instrument.

Øyvind Brandtsegg, Sigurd Saue, and Thom Johansen.
A modulation matrix for complex parameter sets.
(Pages 316-319). [ bib | pdf ]

Abstract: The article describes a flexible mapping technique realized as a many-to-many dynamic mapping matrix. Digital sound generation is typically controlled by a large number of parameters, and efficient and flexible mapping is necessary to provide expressive control over the instrument. The proposed modulation matrix technique may be seen as a generic and self-modifying mapping mechanism integrated into a dynamic interpolation scheme. It is implemented efficiently by taking advantage of its inherent sparse matrix structure. The modulation matrix is used within the Hadron Particle Synthesizer, a complex granular module with 200 synthesis parameters and a simplified performance control structure with 4 expression parameters.

Keywords: Mapping, granular synthesis, modulation, live performance
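
A minimal sketch of a many-to-many modulation matrix of the kind described above: rows are modulation sources, columns are synthesis parameters, and because most entries are zero a sparse representation keeps the per-block cost proportional to the number of active routings. The dimensions and routings below are invented; this is not the Hadron implementation.

    # Sketch: many-to-many modulation matrix, stored sparsely.
    # rows = modulation sources, columns = synthesis parameters.
    import numpy as np
    from scipy.sparse import csr_matrix

    n_sources, n_params = 4, 200             # 4 expression inputs, 200 params
    routings = {(0, 17): 0.8, (1, 17): -0.3, (3, 120): 1.0}  # (src, dst): depth

    rows, cols = zip(*routings)
    matrix = csr_matrix((list(routings.values()), (rows, cols)),
                        shape=(n_sources, n_params))

    def modulated_params(base_params, sources):
        """base_params: length-200 vector; sources: length-4 modulator vector."""
        return base_params + matrix.T @ sources  # only active routings cost

    params = modulated_params(np.zeros(n_params), np.array([1.0, 0.5, 0.0, 0.2]))
    print(params[17], params[120])           # 0.65 0.2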

Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi, and Hui-Yu Wang.
Sound low fun.
(Pages 320-321). [ bib | pdf ]

Abstract: Sound Low Fun, a large sphere, is an interactive sound installation. The installation produces low-frequency sound ("Low Sound") to help people relax and to create a "Fun" effect ("Fun" is also pronounced similarly to the Chinese word "放", which likewise means to relax); this is the main concern and fundamental idea of the project. The work presents a sense of technology: following the structure of the C60 molecule, the sphere is divided into 32 faces. For the internal circuit design, we employed force sensors and an ADXL335 three-axis accelerometer connected to an Arduino I/O board via the Mux (multiplexer) Shield, so that the sphere can produce different music with different lighting effects through Max/MSP programming. Musically, we use a kind of meditative, long-sustained low-frequency sound, accompanied by transparent high-frequency sounds when the sphere is shaken. When a user presses, hugs or pushes the sphere, soft low sounds and lighting effects are triggered, and the user's stress is eventually relieved.

Keywords: Large-scale, interactive installation, low-frequency sounds, stress relief, Max/MSP computer music programming, Arduino

Edgar Berdahl and Chris Chafe.
Autonomous new media artefacts (autonma).
(Pages 322-323). [ bib | pdf ]

Abstract: The purpose of this brief paper is to revisit the question of longevity in present experimental practice and to coin the term autonomous new media artefacts (AutoNMA) for artefacts which are complete and independent of external computer systems, so that they remain operable for a longer period of time and can be demonstrated at a moment's notice. We argue that platforms for prototyping should promote the creation of AutoNMA, to preserve the devices which will be a part of the future history of new media.

Keywords: autonomous, standalone, Satellite CCRMA, Arduino

Min-Joon Yoo, Jin-Wook Beak, and In-Kwon Lee.
Creating musical expression using kinect.
(Pages 324-325). [ bib | pdf ]

Abstract: Recently, Microsoft introduced a game interface called Kinect for the Xbox 360 video game platform. This interface enables users to control and interact with the game console without touching a controller, greatly increasing their freedom to express emotion. In this paper, we first describe the system we developed to use this interface for sound generation and for controlling musical expression. Skeleton data are extracted from users' motions and translated to pre-defined MIDI data, and we then use the MIDI data to control several applications. To perform the translation, we implemented a simple Kinect-to-MIDI data convertor, which is introduced in this paper. We describe two applications that make music with Kinect: we first generate sound with Max/MSP, and then control ad-lib generation with our own system via the users' body movements.

Keywords: Kinect, gaming interface, sound generation, adlib generation
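
A schematic sketch of the skeleton-to-MIDI translation step: one joint coordinate is scaled to a 0-127 continuous-controller value and sent out as a MIDI message. It uses the mido library, and the controller assignment and joint range are assumptions, not the authors' convertor.

    # Sketch: translate a skeleton joint position into a MIDI control change.
    import mido

    out = mido.open_output()                  # default MIDI output port

    def joint_to_cc(y_metres, y_min=0.0, y_max=2.0, cc=1):
        """Scale a joint height to a 0-127 controller value (CC 1 assumed)."""
        norm = min(max((y_metres - y_min) / (y_max - y_min), 0.0), 1.0)
        return mido.Message('control_change', control=cc, value=int(norm * 127))

    # e.g. the right hand at 1.4 m drives the modulation wheel:
    out.send(joint_to_cc(1.4))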

Staas De Jong.
Making grains tangible: microtouch for microsound.
(Pages 326-328). [ bib | pdf ]

Abstract: This paper proposes a new research direction for the large family of instrumental musical interfaces where sound is generated using digital granular synthesis, and where interaction and control involve the (fine) operation of stiff, flat contact surfaces. First, within a historical context, a general absence of, and clear need for, tangible output that is dynamically instantiated by the grain-generating process itself is identified. Second, to fill this gap, a concrete general approach is proposed, based on the careful construction of non-vibratory and vibratory force pulses in a one-to-one relationship with sonic grains. An informal pilot psychophysics experiment initiating the approach was conducted, which took into account the two main cases of applying forces to the human skin: perpendicular and lateral. Initial results indicate that the force-pulse approach can enable perceivably multidimensional, tangible display of the ongoing grain-generating process. Moreover, it was found that this can be made to happen meaningfully, in real time, on the same timescale as basic sonic grain generation. This is not a trivial property, and it provides an important and positive foundation for further developing this type of enhanced display. It also leads to the exciting prospect of making arbitrary sonic grains actual physical manipulanda.

Keywords: instrumental control, tangible display, tangible manipulation, granular sound synthesis

Baptiste Caramiaux, Frederic Bevilacqua, and Norbert Schnell.
Sound selection by gestures.
(Pages 329-330). [ bib | pdf ]

Abstract: This paper presents a prototypical tool for sound selection driven by users' gestures. Sound selection by gestures is a particular case of "query by content" in multimedia databases. Gesture-to-sound matching is based on computing the similarity between the temporal evolution of gesture and sound parameters. The tool presents three algorithms for matching a gesture query to a sound target. The system leads to several applications in sound design, virtual instrument design and interactive installation.

Keywords: Query by Gesture, Time Series Analysis, Sonic Interaction

Hernán Kerlleñevich, Manuel C. Eguía, and Pablo E. Riera.
An open source interface based on biological neural networks for interactive music performance.
(Pages 331-336). [ bib | pdf ]

Abstract: We propose and discuss an open-source real-time interface that focuses on the vast potential for interactive sound-art creation emerging from biological neural networks, as paradigmatic complex systems for musical exploration. In particular, we focus on networks that are responsible for the generation of rhythmic patterns. The interface relies on the idea of metaphorically relating neural behaviors to notes on electronic and acoustic instruments, by means of flexible mapping strategies. The user can intuitively design network configurations by dynamically creating neurons and configuring their inter-connectivity. The core of the system is based on events emerging from this network design, which functions similarly to real small neural networks. Having multiple signal and data inputs and outputs, as well as standard communication protocols such as MIDI, OSC and TCP/IP, it becomes a unique tool for composers and performers, suitable for different performance scenarios such as live electronics, sound installations and telematic concerts.

Keywords: rhythm generation, biological neural networks, complex patterns, musical interface, network performance

Nicholas Gillian, R. Benjamin Knapp, and Sile O'Modhrain.
Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping.
(Pages 337-342). [ bib | pdf ]

Abstract: This paper presents a novel algorithm that has been specifically designed for the recognition of multivariate temporal musical gestures. The algorithm is based on Dynamic Time Warping and has been extended to classify any N-dimensional signal, automatically compute a classification threshold to reject any data that is not a valid gesture, and be quickly trained with a low number of training examples. The algorithm is evaluated on a database of 10 temporal gestures performed by 10 participants, achieving an average cross-validation result of 99%.

Keywords: Dynamic Time Warping, Gesture Recognition, Musician-Computer Interaction, Multivariate Temporal Gestures
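
The multivariate core of such an algorithm can be sketched in a few lines: classic dynamic time warping where the per-sample cost is the Euclidean distance between N-dimensional feature vectors rather than a scalar difference. This is a textbook baseline only; the paper's algorithm additionally computes automatic rejection thresholds and trains from few examples.

    # Sketch: dynamic time warping between two multivariate gesture recordings;
    # a and b have shape (time, n_dims): one feature vector per sample.
    import numpy as np

    def dtw_distance(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])  # N-dim local cost
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Classification: assign a query to the nearest training template, e.g.
    #   label = min(templates, key=lambda t: dtw_distance(query, t))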

Nicholas Gillian, R. Benjamin Knapp, and Sile O'Modhrain.
A machine learning toolbox for musician computer interaction.
(Pages 343-348). [ bib | pdf ]

Abstract: This paper presents the SARC EyesWeb Catalog (SEC), a machine learning toolbox that has been specifically developed for musician-computer interaction. The SEC features a large number of machine learning algorithms that can be used in real time to recognise static postures, perform regression and classify multivariate temporal gestures. The algorithms within the toolbox have been designed to work with any N-dimensional signal and can be quickly trained with a small number of training examples. We also provide the motivation for designing the algorithms used for the recognition of musical gestures to achieve a low intra-personal generalisation error, as opposed to the inter-personal generalisation error that is more common in other areas of human-computer interaction.

Keywords: Machine learning, gesture recognition, musician-computer interaction, SEC

Elena Jessop, Peter Torpey, and Benjamin Bloomberg.
Music and technology in death and the powers.
(Pages 349-354). [ bib | pdf ]

Abstract: In composer Tod Machover's new opera Death and the Powers, the main character uploads his consciousness into an elaborate computer system to preserve his essence and agency after his corporeal death. Consequently, for much of the opera, the stage and the environment itself come alive as the main character. This creative need brings with it a host of technical challenges and opportunities. In order to satisfy the needs of this storyline, Machover's Opera of the Future group at the MIT Media Lab has developed a suite of new performance technologies, including robot characters, interactive performance capture systems, mapping systems for authoring interactive multimedia performances, new musical instruments, unique spatialized sound controls, and a unified control system for all these technological components. While developed for a particular theatrical production, many of the concepts and design procedures remain relevant to broader contexts including performance, robotics, and interaction design.

Keywords: opera, Death and the Powers, Tod Machover, gestural interfaces, Disembodied Performance, ambisonics

Victor Zappi, Dario Mazzanti, Andrea Brogni, and Darwin Caldwell.
Design and evaluation of a hybrid reality performance.
(Pages 355-360). [ bib | pdf ]

Abstract: In this paper we introduce a multimodal platform for Hybrid Reality live performances: by means of non-invasive Virtual Reality technology, we developed a system to present artists and interactive virtual objects in audio/visual choreographies on the same real stage. These choreographies can include spectators too, providing them with the possibility to directly modify the scene and its audio/visual features. We also introduce the first interactive performance staged with this technology, in which an electronic musician played five tracks live while manipulating the 3D projected visuals. Questionnaires were distributed after the show, and in the last part of this work we discuss the analysis of the collected data, underlining positive and negative aspects of the proposed experience. This paper belongs together with a performance proposal called Dissonance, in which two performers exploit the platform to create a progressive soundtrack along with the exploration of an interactive virtual environment.

Keywords: Interactive Performance, Hybrid Choreographies, Virtual Reality, Music Control

Jérémie Garcia, Theophanis Tsandilas, Carlos Agon, and Wendy Mackay.
Inksplorer : Exploring musical ideas on paper and computer.
(Pages 361-366). [ bib | pdf ]

Abstract: We conducted three studies with contemporary music composers at IRCAM. We found that even highly computer-literate composers use an iterative process that begins with expressing musical ideas on paper, followed by active parallel exploration on paper and in software, prior to final execution of their ideas as an original score. We conducted a participatory design study that focused on the creative exploration phase, to design tools that help composers better integrate their paper-based and electronic activities. We then developed InkSplorer as a technology probe that connects users' hand-written gestures on paper to Max/MSP and OpenMusic. Composers appropriated InkSplorer according to their preferred composition styles, emphasizing its ability to help them quickly explore musical ideas on paper as they interact with the computer. We conclude with recommendations for designing interactive paper tools that support the creative process, letting users explore musical ideas both on paper and electronically.

Keywords: Composer, Creativity, Design Exploration, InkSplorer, Interactive Paper, OpenMusic, Technology Probes

Pedro Lopes, Alfredo Ferreira, and J. A. Madeiras Pereira.
Battle of the djs: an hci perspective of traditional, virtual, hybrid and multitouch djing.
(Pages 367-372). [ bib | pdf ]

Abstract: The DJ culture uses a gesture lexicon strongly rooted in the traditional setup of turntables and a mixer. As novel tools are introduced in the DJ community, this lexicon is adapted to the features they provide. In particular, multitouch technologies can offer a new syntax while still supporting the old lexicon, which is desired by DJs. We present a classification of DJ tools, from an interaction point of view, that divides the previous work into Traditional, Virtual and Hybrid setups. Moreover, we present a multitouch tabletop application, developed with a group of DJ consultants to ensure an adequate implementation of the traditional gesture lexicon. To conclude, we conduct an expert evaluation with ten DJ users in which we compare the three DJ setups with our prototype. The study revealed that our proposal suits the expectations of Club/Radio-DJs, but fails against the mental model of Scratch-DJs, owing to the lack of haptic feedback representing the record's physical rotation. Furthermore, tests show that our multitouch DJ setup reduces task duration when compared with Virtual setups.

Keywords: DJing, Multitouch Interaction, Expert User evaluation, HCI

Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans, and Paul Stapleton.
Designing digital musical interactions in experimental contexts.
(Pages 373-376). [ bib | pdf ]

Abstract: As NIME's focus has expanded beyond the design reports which were pervasive in the early days to include studies and experiments involving music control devices, we report on a particular area of activity that has been overlooked: designs of music devices in experimental contexts. We demonstrate that this is distinct from designing for artistic performances, with a unique set of novel challenges. A survey of methodological approaches to experiments in NIME reveals a tendency to rely on existing instruments or on evaluations of new devices designed for broader creative application. We present two examples from our own studies that reveal the merits of designing purpose-built devices for experimental contexts.

Keywords: Experiment, Methodology, Instrument Design, DMIs

Jonathan Reus.
Crackle: A mobile multitouch topology for exploratory sound interaction.
(Pages 377-380). [ bib | pdf ]

Abstract: This paper describes the design of Crackle, an interactive sound and touch experience inspired by the CrackleBox. We begin by describing a ruleset for Crackle's interaction, derived from the salient interactive qualities of the CrackleBox. An implementation strategy is then described for realizing the ruleset as an application for the iPhone. The paper goes on to consider the potential of using Crackle as an encapsulated interaction paradigm for exploring arbitrary sound spaces, and concludes with lessons learned about designing for multitouch surfaces as expressive input sensors.

Keywords: touchscreen, interface topology, mobile music, interaction paradigm, dynamic mapping, CrackleBox, iPhone

Samuel Aaron, Alan F. Blackwell, Richard Hoadley, and Tim Regan.
A principled approach to developing new languages for live coding.
(Pages 381-386). [ bib | pdf ]

Abstract: This paper introduces Improcess, a novel cross-disciplinary collaborative project focussed on the design and development of tools to structure the communication between performer and musical process. We describe a 3-tiered architecture centering around the notion of a Common Music Runtime, a shared platform on top of which inter-operating client interfaces may be combined to form new musical instruments. This approach allows hardware devices such as the monome to act as an extended hardware interface with the same power to initiate and control musical processes as a bespoke programming language. Finally, we reflect on the structure of the collaborative project itself, which offers an opportunity to discuss general research strategy for conducting highly sophisticated technical research within a performing arts environment such as the development of a personal regime of preparation for performance.

Keywords: Improvisation, live coding, controllers, monome, collaboration, concurrency, abstractions

Jamie Bullock, Daniel Beattie, and Jerome Turner.
Integra live: a new graphical user interface for live electronic music.
(Pages 387-392). [ bib | pdf ]

Abstract: In this paper we describe a new application, Integra Live, designed to address the problems associated with software usability in live electronic music. We begin by outlining the primary usability and user-experience issues relating to the predominance of graphical dataflow languages for the composition and performance of live electronics. We then discuss the specific development methodologies chosen to address these issues, and illustrate how adopting a user-centred approach has resulted in a more usable and humane interface design. The main components and workflows of the user interface are discussed, giving a rationale for key design decisions. User testing processes and results are presented. Finally, a critical evaluation of the application's usability is given based on user testing, with key findings presented for future consideration.

Keywords: software, live electronics, usability, user experience

Jung-Sim Roh, Yotam Mann, Adrian Freed, and David Wessel.
Robust and reliable fabric and piezoresistive multitouch sensing surfaces for musical controllers.
(Pages 393-398). [ bib | pdf ]

Abstract: The design space of fabric multitouch surface interaction is explored with emphasis on novel materials and construction techniques aimed towards reliable, repairable pressure sensing surfaces for musical applications.

Keywords: Multitouch, surface interaction, piezoresistive, fabric sensor, e-textiles, tangible computing, drum controller

Mark Marshall and Marcelo Wanderley.
Examining the effects of embedded vibrotactile feedback on the feel of a digital musical instrument.
(Pages 399-404). [ bib | pdf ]

Abstract: This paper deals with the effects of integrated vibrotactile feedback on the “feel” of a digital musical instrument (DMI). Building on previous work developing a DMI with integrated vibrotactile feedback actuators, we discuss how to produce instrument-like vibrations, compare these simulated vibrations with those produced by an acoustic instrument, and examine how the integration of this feedback affects performer ratings of the instrument. We found that integrated vibrotactile feedback resulted in an increase in performer engagement with the instrument, but in a reduction in the perceived control of the instrument. We discuss these results and their implications for the design of new digital musical instruments.

Keywords: Vibrotactile Feedback, Digital Musical Instruments, Feel, Loudspeakers

Dimitri Diakopoulos and Ajay Kapur.
Hiduino: A firmware for building driverless usb-midi devices using the arduino microcontroller.
(Pages 405-408). [ bib | pdf ]

Abstract: This paper presents a series of open-source firmwares for the latest iteration of the popular Arduino microcontroller platform. A portmanteau of Human Interface Device and Arduino, the HIDUINO project tackles a major problem in designing NIMEs: easily and reliably communicating with a host computer using standard MIDI over USB. HIDUINO was developed in conjunction with a class at the California Institute of the Arts intended to teach introductory-level human-computer and human-robot interaction within the context of musical controllers. We describe our frustration with existing microcontroller platforms and our experiences using the new firmware to facilitate the development and prototyping of new music controllers.

Keywords: Arduino, USB, HID, MIDI, HCI, controllers, microcontrollers
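
Because a board flashed with such a firmware enumerates as a class-compliant USB-MIDI device, the host needs no driver or serial parsing; reading it looks like reading any MIDI port. Below is a minimal host-side sketch using the mido library; the port is simply whichever name the board reports.

    # Sketch: receive MIDI from a class-compliant USB-MIDI device on the host.
    import mido

    print(mido.get_input_names())             # the flashed board appears here

    with mido.open_input() as port:           # or open_input('<port name>')
        for msg in port:                      # blocks, yielding incoming messages
            if msg.type == 'control_change':
                print(msg.control, msg.value)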

Emmanuel Fléty and Côme Maestracci.
Latency improvement in sensor wireless transmission using ieee 802.15.4.
(Pages 409-412). [ bib | pdf ]

Abstract: We present a strategy for improving the latency of wireless sensor data transmission, implemented in two current projects involving gesture-controlled sound interaction. Our platform was designed to accept accessories using a digital bus. The receiver features an IEEE 802.15.4 microcontroller associated with a TCP/IP stack integrated circuit that transmits the received wireless data to a host computer using the Open Sound Control protocol. This paper details how we improved the latency and sample rate of this technology while keeping the device small and scalable.

Keywords: Embedded sensors, gesture recognition, wireless, sound and music computing, interaction, 802.15.4, Zigbee

Jeff Snyder.
The snyderphonics manta and a novel usb touch controller.
(Pages 413-416). [ bib | pdf ]

Abstract: The Snyderphonics Manta controller is a USB touch controller for music and video. It features 48 capacitive touch sensors, arranged in a hexagonal grid, with bi-color LEDs that are programmable from the computer. The sensors send continuous data proportional to the surface area touched, and a velocity-detection algorithm has been implemented to estimate attack velocity from this touch data. In addition to these hexagonal sensors, the Manta has two high-resolution touch sliders (giving 12-bit values) and four assignable function buttons. In this paper, I outline the features of the controller, the available methods for communicating between the device and a computer, and some current uses for the controller.

Keywords: Snyderphonics, Manta, controller, USB, capacitive, touch, sensor, decoupled LED, hexagon, grid, touch slider, HID, portable, wood, live music, live video
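
The velocity-detection idea mentioned in the abstract, estimating attack velocity from how fast the touched surface area grows at note onset, might be sketched as follows; the threshold and scaling are invented for illustration, and this is not Snyderphonics' algorithm.

    # Sketch: estimate attack velocity from the growth of capacitive touch area.
    def attack_velocity(area_samples, onset_threshold=5, scale=3.0):
        """area_samples: successive touch-area readings for one sensor.

        Returns a MIDI-style velocity (1-127) at note onset, else None.
        """
        for prev, curr in zip(area_samples, area_samples[1:]):
            if prev < onset_threshold <= curr:   # the finger has just landed
                jump = curr - prev               # per-sample growth of area
                return max(1, min(127, int(jump * scale)))
        return None

    print(attack_velocity([0, 2, 40, 90, 120]))  # fast press   -> 114
    print(attack_velocity([0, 2, 6, 9, 11, 12])) # gentle press -> 12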

William Hsu.
On movement and structure and abstraction in generative audiovisual improvisation.
(Pages 417-420). [ bib | pdf ]

Abstract: This paper overviews the audiovisual performance systems that form the basis for my recent collaborations with improvising musicians. Simulations of natural processes, such as fluid dynamics and flocking, provide the foundations for "organic"-looking movement and evolution of abstract visual components. In addition, visual components can morph between abstract non-referential configurations and pre-defined images, symbols or shapes. High-level behavioral characteristics of the visual components are influenced by real-time gestural or audio input; each system constitutes a responsive environment that participating musicians interact with during a performance.

Keywords: Improvisation, interactive, generative, animation, audio-visual

Claudia Robles Angel.
Creating interactive multimedia works with bio-data.
(Pages 421-424). [ bib | pdf ]

Abstract: This paper deals with the use of bio-data from performers to create interactive multimedia performances or installations. It situates this type of research within art works produced over the last fifty years (such as Lucier's Music for Solo Performer, from 1965), including two interactive performances of my authorship which use two different types of bio-interface: on the one hand an EMG (electromyography) interface, and on the other an EEG (electroencephalography) interface. The paper explores the interaction between the human body and real-time media (audio and visual) through the use of bio-interfaces. This research builds on the biofeedback investigations pursued by the psychologist Neal E. Miller in the 1960s, which mainly aimed at finding new methods to reduce stress; this article, however, explains and shows examples in which biofeedback research is used for artistic purposes only.

Keywords: Live electronics, Butoh, performance, biofeedback, interactive sound and video

Paula Ustarroz.
Tresnanet: musical generation based on network protocols.
(Pages 425-428). [ bib | pdf ]

Abstract: TresnaNet explores the potential of telematics as a generator of musical expressions. The aim is to sonify the silent flow of information across the network. This is realized through the fabrication of a prototype, with the intention of giving substance to the intangible parameters of our communication. The result may have educational, commercial and artistic applications, because it is a physical and perceptible representation of the transfer of information over the network. This paper describes the design and implementation of TresnaNet, and draws conclusions about it.

Keywords: Interface, musical generation, telematics, network, musical instrument, network sniffer

Matti Luhtala, Tiina Kymäläinen, and Johan Plomp.
Designing a music performance space for persons with intellectual learning disabilities.
(Pages 429-432). [ bib | pdf ]

Abstract: This paper outlines the design and development process of the `DIYSE Music Creation Tool' concept by presenting its key questions, the methodology used, the music instrument prototype development process and the user research activities. The aim of this research is to study how music therapists (or instructors) can utilize novel technologies and explore new performing opportunities in the music therapy context with people who have intellectual learning disabilities. The research applies an action research approach, co-designing new music technologies with the music therapists in situ in order to improve the adoption of novel technologies. The proof-of-concept software utilizes Guitar Hero guitar controllers and allows the music therapist to personalize interaction mappings between the physical and digital instrument components. By means of the guitars, the users are able to participate in various musical activities: they can play prepared musical compositions without extensive training, play together, and perform for others. The user research studies included the evaluation of the tool and research into performance opportunities.

Keywords: Music interfaces, music therapy, modifiable interfaces, design tools, Human-Technology Interaction (HTI), User-Centred Design (UCD), design for all (DfA), prototyping, performance

Tom Ahola, Teemu Ahmaniemi, Koray Tahiroglu, Fabio Belloni, and Ville Ranki.
Raja - a multidisciplinary artistic performance.
(Pages 433-436). [ bib | pdf ]

Abstract: Motion-based interactive systems have long been utilized in contemporary dance performances. These performances bring new insight to sound-action experiences in multidisciplinary art forms. This paper discusses the related technology within the framework of the dance piece, Raja. The performance set up of Raja gives a possibility to use two complementary tracking systems and two alternative choices for motion sensors in real-time audio-visual synthesis.

Keywords: raja, performance, dance, motion sensor, accelerometer, gyro, positioning, sonification, pure data, visualization, Qt

Emmanuelle Gallin and Marc Sirguy.
Eobody3: A ready-to-use pre-mapped & multi-protocol sensor interface.
(Pages 437-440). [ bib | pdf ]

Abstract: Away from the DIY world of Arduino programmers, Eowave has been developing Eobody interfaces, a range of ready-to-use sensor interfaces designed for meta-instruments, music control and interactive installations. With Eobody3, we wanted to create the missing link between the analogue and digital worlds, making it possible to control analogue devices from a digital device and vice versa: for example, controlling a modular synthesizer from an iPad with no computer. With its compatibility with the USB, MIDI, OSC, CV and DMX protocols, Eobody3 is a two-way bridge between the analogue and digital worlds. This paper describes the challenge of designing a ready-to-use, pre-mapped, multi-protocol interface for all types of applications.

Keywords: Controller, Sensor, Computer Music, USB, MIDI, OSC, CV, DMX, A/D Converter, Interface

Rasmus Bååth, Thomas Strandberg, and Christian Balkenius.
Eye tapping: How to beat out an accurate rhythm using eye movements.
(Pages 441-444). [ bib | pdf ]

Abstract: The aim of this study was to investigate how accurately subjects can beat out a rhythm using eye movements, and to establish the most accurate method of doing so. Eighteen subjects participated in an experiment where five different methods were evaluated. A fixation-based method was found to be the most accurate. All subjects were able to synchronize their eye movements with a given beat, but the accuracy was much lower than is usually found in finger-tapping studies. Many parts of the body are used to make music but so far, with a few exceptions, the eyes have been silent. The research presented here provides guidelines for implementing eye-controlled musical interfaces. Such interfaces would enable performers and artists to use eye movement for musical expression and would open up new, exciting possibilities.

Keywords: Rhythm, Eye tracking, Sensorimotor synchronization, Eye tapping

Eric Rosenbaum.
Melodymorph: A reconfigurable musical instrument.
(Pages 445-447). [ bib | pdf ]

Abstract: I present MelodyMorph, a reconfigurable musical instrument designed with a focus on melodic improvisation. It is designed for a touch-screen interface, and allows the user to create “bells” which can be tapped to play a note, and dragged around on a pannable and zoomable canvas. Colors, textures and shapes of the bells represent pitch and timbre properties. “Recorder bells” can store and play back performances. Users can construct instruments that are modifiable as they play, and build up complex melodies hierarchically from simple parts.

Keywords: Melody, improvisation, representation, multi-touch, iPad

Karmen Franinovic.
Flo)(ps: Between habitual and explorative action-sound relationships.
(Pages 448-452). [ bib | pdf ]

Abstract: The perceived affordances of an everyday object guide its user toward habitual movements and experiences. Physical actions that are not immediately associated with established body techniques often remain neglected. Can sound activate those potentials for action that remain latent in the physicality of an object? How can the exploration of underused and unusual bodily movements be fostered? This paper presents the Flo)(ps project, a series of interactive sounding glasses, which aim to foster social interaction by means of habitual and explorative sonic gestures within everyday contexts. We discuss the design process and the qualitative evaluation of collaborative and individual user experience. The results show that social interaction and personal use require different ways of transitioning from habitual to explorative gestures, and point toward possible solutions to be further explored.

Keywords: sonic interaction design, gesture, habit, exploration

Margaret Schedel, Rebecca Fiebrink, and Phoenix Perry.
Wekinating 000000swan: Using machine learning to create and control complex artistic systems.
(Pages 453-456). [ bib | pdf ]

Abstract: In this paper we discuss how the band 000000Swan uses machine learning to parse complex sensor data and create intricate artistic systems for live performance. Using the Wekinator software for interactive machine learning, we have created discrete and continuous models for controlling audio and visual environments using human gestures sensed by a commercially-available sensor bow and the Microsoft Kinect. In particular, we have employed machine learning to quickly and easily prototype complex relationships between performer gesture and performative outcome.

Keywords: Wekinator, K-Bow, Machine Learning, Interactive, Multimedia, Kinect, Motion-Tracking, Bow Articulation, Animation

Carles F. Julià, Daniel Gallardo, and Sergi Jordà.
Mtcf: A framework for designing and coding musical tabletop applications directly in pure data.
(Pages 457-460). [ bib | pdf ]

Abstract: In the past decade we have seen a growing presence of tabletop systems applied to music, lately with some products even becoming commercially available and being used by professional musicians in concerts. The development of this type of application requires expertise in several demanding technical areas, such as input processing, graphical design, real-time sound generation and interaction design, and because of this complexity such systems are usually developed by a multidisciplinary group. In this paper we present the Musical Tabletop Coding Framework (MTCF), a framework for designing and coding musical tabletop applications using the graphical programming language for digital sound processing Pure Data (Pd). With this framework we aim to simplify the creation of this type of interface by removing the need for any programming skills other than those of Pd.

Keywords: Pure Data, tabletop, tangible, framework

David Pirrò and Gerhard Eckel.
Physical modelling enabling enaction: an example.
(Pages 461-464). [ bib | pdf ]

Abstract: In this paper we present research situated in the context of performance-oriented computer music. Our research aims at finding new strategies for the realization of enactive interfaces for performers. We present an approach developed through experimental processes and clarify it by introducing a concrete example. Our method involves physical modelling as an intermediate layer between bodily movement and sound synthesis. The historical and technological context in which this research takes place is outlined, and we describe our approach and the hypotheses on which our investigations are grounded. The piece cornerghostaxis#1 is presented as an example of this approach, and the observations made during its rehearsals and performance are outlined. Drawing on our own and the performers' experiences, we identify the most valuable qualities of this approach and sketch the direction our future experimentation and development will take, pointing out the issues we will concentrate on.

Keywords: Interaction, Physical Modelling, Motion Tracking, Embodiment, Enactive interfaces
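
The core idea — inserting a simulated physical system between tracked movement and synthesis, so the control signal has its own dynamics rather than mirroring the gesture one-to-one — can be sketched with a single mass-spring-damper. The Python model below is a generic illustration under assumed constants, not the model used in cornerghostaxis#1.

    import numpy as np

    K, C, M, DT = 50.0, 2.0, 1.0, 0.01   # stiffness, damping, mass, time step

    def simulate(hand_positions):
        """Drive a virtual mass with a tracked position; return its path."""
        x, v, path = 0.0, 0.0, []
        for target in hand_positions:
            f = K * (target - x) - C * v  # spring toward the hand, damped
            v += (f / M) * DT
            x += v * DT
            path.append(x)                # x would feed a synthesis parameter
        return path

    # The mass lags and overshoots the gesture, adding inertia and
    # resonance that the performer can learn to play with.
    hand = np.sin(2 * np.pi * np.arange(0.0, 1.0, DT))
    control_signal = simulate(hand)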

Thomas Mitchell and Imogen Heap.
Soundgrasp: A gestural interface for the performance of live music.
(Pages 465-468). [ bib | pdf ]

Abstract: This paper documents the first developmental phase of an interface that enables the performance of live music using gestures and body movements. The work presented here focuses on the first step of this project: the composition and performance of live music using hand gestures captured by a single data glove. The paper provides a background to the field, the aim of the project, and a technical description of the work completed so far. This includes the development of a robust posture vocabulary, an artificial neural network-based posture identification process, and a state-based system to map identified postures onto a set of performance processes. The paper closes with qualitative usage observations and a projection of future plans.

Keywords: Music Controller, Gestural Music, Data Glove, Neural Network, Live Music Composition, Looping, Imogen Heap
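
The state-based mapping the paper describes — identified postures trigger transitions between performance processes rather than sounds directly — can be sketched as a small transition table. The posture names and looper actions below are hypothetical stand-ins, not the vocabulary the authors developed.

    # Hypothetical (state, posture) -> (action, next state) table for a
    # glove-controlled looper; the neural network classifier would
    # supply `posture` for each recognized hand shape.
    TRANSITIONS = {
        ("idle", "fist"): ("start_recording", "recording"),
        ("recording", "open_hand"): ("close_loop", "looping"),
        ("looping", "point"): ("start_overdub", "recording"),
        ("looping", "fist"): ("mute_loop", "idle"),
    }

    state = "idle"

    def on_posture(posture):
        """Map an identified posture to a performance process."""
        global state
        action, state = TRANSITIONS.get((state, posture), (None, state))
        return action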

Tim Mullen, Richard Warp, and Adam Jansch.
Minding the (transatlantic) gap: An internet-enabled acoustic brain-computer music interface.
(Pages 469-472). [ bib | pdf ]

Abstract: The use of non-invasive electroencephalography (EEG) in the experimental arts is not a novel concept. Since 1965, EEG has been used in a large number of, sometimes highly sophisticated, systems for musical and artistic expression. However, since the advent of the synthesizer, most such systems have utilized digital and/or synthesized media in sonifying the EEG signals. There have been relatively few attempts to create interfaces for musical expression that allow one to mechanically manipulate acoustic instruments by modulating one's mental state. Moreover, few such systems afford a distributed performance medium, with data transfer and audience participation occurring over the Internet. The use of acoustic instruments and Internet-enabled communication expands the realm of possibilities for musical expression in Brain-Computer Music Interfaces (BCMI), while also introducing additional challenges. In this paper we report and examine a first demonstration (Music for Online Performer) of a novel system for Internet-enabled manipulation of robotic acoustic instruments, with feedback, using a non-invasive EEG-based BCI and low-cost, commercially available robotics hardware.

Keywords: EEG, Brain-Computer Music Interface, Internet, Arduino
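
As a concrete illustration of the kind of signal feature such a BCMI might map to mechanical actuation, the fragment below computes alpha-band power from one EEG channel with Welch's method (SciPy). The sampling rate, band, and threshold mapping are illustrative assumptions, not details of Music for Online Performer.

    import numpy as np
    from scipy.signal import welch

    FS = 256                                   # assumed sampling rate (Hz)
    eeg = np.random.randn(FS * 4)              # placeholder: 4 s, one channel

    freqs, psd = welch(eeg, fs=FS, nperseg=FS)
    alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()

    # A sustained rise in alpha (e.g. relaxed, eyes closed) could be
    # thresholded to trigger a robotic strike on an acoustic instrument.
    if alpha_power > 1.0:                      # arbitrary threshold
        print("trigger actuator")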

Stefano Papetti, Marco Civolani, and Federico Fontana.
Rhythm'n'shoes: a wearable foot tapping interface with audio-tactile feedback.
(Pages 473-476). [ bib | pdf ]

Abstract: A shoe-based interface is presented which enables users to play percussive virtual instruments by tapping their feet. The wearable interface consists of a pair of sandals equipped with four force sensors and four actuators affording audio-tactile feedback. The sensors provide data via wireless transmission to a host computer, where they are processed and mapped to a physics-based sound synthesis engine. Since the system provides OSC and MIDI compatibility, alternative electronic instruments can be used as well. The audio signals are then sent back wirelessly to audio-tactile exciters embedded in the sandals' soles, and optionally to headphones and external loudspeakers. The round-trip wireless communication introduces only a very small latency, thus guaranteeing coherence and unity in the multimodal percept and allowing tight timing while playing.

Keywords: interface, audio, tactile, foot tapping, embodiment, footwear, wireless, wearable, mobile
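
Turning a continuous force-sensor stream into clean percussive triggers is the crux of such an interface; a standard approach is a threshold with hysteresis so one tap cannot fire twice. The sketch below (python-osc, with an assumed address and thresholds) illustrates that idea and is not the authors' firmware.

    from pythonosc.udp_client import SimpleUDPClient

    synth = SimpleUDPClient("127.0.0.1", 57120)   # hypothetical synth host
    ON, OFF = 0.6, 0.3                            # hysteresis thresholds
    armed = True

    def on_sample(force):
        """Handle one normalized (0..1) force reading from a shoe sensor."""
        global armed
        if armed and force > ON:
            synth.send_message("/shoe/tap", force)  # force as velocity
            armed = False
        elif force < OFF:
            armed = True                            # re-arm after release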

Cumhur Erkut, Antti Jylhä, and Reha Disçioğlu.
A structured design and evaluation model with application to rhythmic interaction displays.
(Pages 477-480). [ bib | pdf ]

Abstract: We present a generic, structured model for the design and evaluation of musical interfaces. This model is development-oriented and is based on the fundamental function of musical interfaces, i.e., to coordinate human action and perception for musical expression, subject to human capabilities and skills. To illustrate the particulars of this model and present it in operation, we consider the previous design and evaluation phase of iPalmas, our testbed for exploring rhythmic interaction. Our findings inform the current design phase of the iPalmas visual and auditory displays, where we build on what has resonated with the test users and explore further possibilities based on the evaluation results.

Keywords: Rhythmic interaction, multimodal displays, sonification, UML

Marco Marchini, Panos Papiotis, Alfonso Pérez, and Esteban Maestre.
A hair ribbon deflection model for low-intrusiveness measurement of bow force in violin performance.
(Pages 481-486). [ bib | pdf ]

Abstract: This paper introduces and evaluates a novel methodology for the estimation of bow pressing force in violin performance, aiming at reduced intrusiveness while maintaining high accuracy. The technique is based on a simplified physical model of the hair ribbon's deflection, fed solely with measurements of the spatial position and orientation of the bow and violin. The physical model is both calibrated and evaluated using real force data acquired by means of a load cell.

Keywords: bow pressing force, bow force, pressing force, force, violin playing, bow simplified physical model, 6DOF, hair ribbon ends, string ends
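
The calibration step — fitting a deflection-to-force model against load-cell ground truth — can be illustrated with the simplest possible (linear spring) form. Both the linear form and the numbers below are assumptions for illustration; the paper's actual hair-ribbon model and data differ.

    import numpy as np

    # Hair-ribbon deflection at the string contact point (mm), which the
    # paper derives from 6DOF bow/violin position and orientation data.
    deflection = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
    force_lc = np.array([0.0, 0.21, 0.43, 0.62, 0.85, 1.04])  # N, load cell

    k1, k0 = np.polyfit(deflection, force_lc, 1)   # least-squares line

    def estimate_force(d_mm):
        """Estimate bow pressing force from deflection alone (no load cell)."""
        return k1 * d_mm + k0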

Jonathan Forsyth, Aron Glennon, and Juan Bello.
Random access remixing on the ipad.
(Pages 487-490). [ bib | pdf ]

Abstract: Remixing audio samples is a common technique for the creation of electronic music, and there are a wide variety of tools available to edit, process, and recombine pre-recorded audio into new compositions. However, all of these tools conceive of the timeline of the pre-recorded audio and the playback timeline as identical. In this paper, we introduce a dual time axis representation in which these two timelines are described explicitly. We also discuss the random access remix application for the iPad, an audio sample editor based on this representation. We describe an initial user study with 15 high school students that indicates that the random access remix application has the potential to develop into a useful and interesting tool for composers and performers of electronic music.

Keywords: interactive systems, sample editor, remix, iPad, multi-touch
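
The dual-time-axis idea can be made concrete with a list of segments, each mapping an interval of the playback timeline onto an interval of the source audio, so that playback order is independent of source order. The data layout below is an assumption for illustration, not the application's internal format.

    import numpy as np

    SR = 44100
    source = np.random.randn(SR * 10)        # placeholder: 10 s of audio

    # (playback_start_s, source_start_s, duration_s)
    segments = [(0.0, 4.0, 1.0),             # play source seconds 4-5 first
                (1.0, 0.0, 2.0),             # then source seconds 0-2
                (3.0, 4.0, 1.0)]             # repeat source seconds 4-5

    out = np.zeros(int(4.0 * SR))
    for play_t, src_t, dur in segments:
        n = int(dur * SR)
        i, j = int(play_t * SR), int(src_t * SR)
        out[i:i + n] += source[j:j + n]      # overlaps simply sum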

Erika Donald, Ben Duinker, and Eliot Britton.
Designing the ep trio: Instrument identities, control and performance practice in an electronic chamber music ensemble.
(Pages 491-494). [ bib | pdf ]

Abstract: This paper outlines the formation of the Expanded Performance (EP) trio, a chamber ensemble comprising electric cello with sensor bow, augmented digital percussion, and digital turntable with mixer. Decisions relating to the physical set-ups and control capabilities, sonic identities, and mappings of each instrument, as well as their roles within the ensemble, are explored. The contributions of these factors to the design of a coherent, expressive ensemble and its emerging performance practice are considered. The trio proposes solutions to creation, rehearsal, and performance issues in ensemble live electronics.

Keywords: Live electronics, digital performance, mapping, chamber music, ensemble, instrument identity

A. Cavan Fyans and Michael Gurevich.
Perceptions of skill in performances with acoustic and electronic instruments.
(Pages 495-498). [ bib | pdf ]

Abstract: We present observations from two separate studies of spectators' perceptions of musical performances, one involving two acoustic instruments, the other two electronic instruments. Both studies followed the same qualitative method, using structured interviews to ascertain and compare spectators' experiences. In this paper, we focus on outcomes pertaining to perceptions of the performers' skill, relating to concepts of embodiment and communities of practice.

Keywords: skill, embodiment, perception, effort, control, spectator

Hiroki Nishino.
Cognitive issues in computer music programming.
(Pages 499-502). [ bib | pdf ]

Abstract: Programming languages are the oldest `new interface for musical expression' in computer music history. Both composers and researchers in computer music still have considerable interest in computer music programming environments. However, while many researchers focus on issues such as efficiency, new paradigms, or new features in computer music programming, the cognitive aspects of computer music programming have rarely been discussed. Such `cognitive issues' are important whenever the design or usability of computer music programming must be considered. By contextualizing computer music programming within the psychology of programming, it becomes possible to borrow the technical terms and theoretical framework of previous research in that field, which can help clarify problems related to cognitive ergonomics and also inform the design of new programming environments with better usability for computer music.

Keywords: Computer music, programming language, the psychology of programming, usability

Roland Lamb and Andrew Robertson.
Seaboard: a new piano keyboard-related interface combining discrete and continuous control.
(Pages 503-506). [ bib | pdf ]

Abstract: This paper introduces the Seaboard, a new tangible musical instrument which aims to provide musicians with significant capability to manipulate sound in real-time in a musically intuitive way. It introduces the core design features which make the Seaboard unique, and describes the motivation and rationale behind the design. The fundamental approach to dealing with problems associated with discrete and continuous inputs is summarized.

Keywords: Piano keyboard-related interface, continuous and discrete control, haptic feedback, Human-Computer Interaction (HCI)

Gilbert Beyer and Max Meier.
Music interfaces for novice users: Composing music on a public display with hand gestures.
(Pages 507-510). [ bib | pdf ]

Abstract: In this paper we report on a public display where the audience is able to interact not only with visuals but also with music. Interaction with music in a public setting involves particular challenges: passers-by, as `novice users', engage only momentarily with public displays and often have no musical knowledge. We present a system that allows users to create harmonic melodies without requiring any prior training. Our software enables users to control melodies through their interaction, using a novel technique of algorithmic composition based on soft constraints. The proposed algorithm does not generate music randomly, but ensures that the interactive music is perceived as harmonic at all times. To give the user a degree of control over the music and to ensure it can be controlled in an intuitive way, the algorithm also incorporates preferences derived from user interaction, which may compete with the goal of generating a harmonic melody. To test our concept of controlling music, we developed a prototype of a large public display and conducted a user study exploring how people would control melodies on such a display with hand gestures.

Keywords: Interactive music, public displays, user experience, out-of-home media, algorithmic composition, soft constraints
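
Soft-constraint melody generation of the kind described — never random, always weighing harmonicity against user preferences — can be sketched as a weighted penalty minimization over candidate notes. The penalty terms, weights, and scale below are illustrative assumptions, not the authors' algorithm.

    # Each candidate pitch pays a weighted penalty combining a harmony
    # preference and the user's gestural preference; the cheapest wins.
    C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

    def harmony_penalty(pitch):
        return 0.0 if pitch % 12 in C_MAJOR else 1.0

    def next_note(prev, user_target, w_harm=2.0, w_user=1.0, w_step=0.5):
        candidates = range(48, 85)             # MIDI pitch range
        def cost(p):
            return (w_harm * harmony_penalty(p)
                    + w_user * abs(p - user_target) / 12.0  # follow the hand
                    + w_step * abs(p - prev) / 12.0)        # prefer small steps
        return min(candidates, key=cost)

    melody = [60]
    for target in [64, 67, 71, 66]:            # e.g. derived from hand height
        melody.append(next_note(melody[-1], target))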

Birgitta Cappelen and Anders-Petter Andersson.
Expanding the role of the instrument.
(Pages 511-514). [ bib | pdf ]

Abstract: The traditional role of the musical instrument is to be the working tool of the professional musician: on the instrument, the musician performs music for the audience to listen to. In this paper we present an interactive installation in which we expand the role of the instrument to motivate musicking and co-creation between diverse users. We have made an open installation where users can perform a variety of actions in several situations. By using the abilities of the computer, we have made an installation that can be interpreted to have many roles: it can be an instrument, a co-musician, a communication partner, a toy, a meeting place, or an ambient musical landscape. Users can dynamically shift between roles, based on their abilities, knowledge, and motivation.

Keywords: Role, music instrument, genre, narrative, open, interaction design, musicking, interactive installation, sound art

Todor Todoroff.
Wireless digital/analog sensors for music and dance performances.
(Pages 515-518). [ bib | pdf ]

Abstract: We developed very small and light sensor nodes, each equipped with 3-axis accelerometers, magnetometers, and gyroscopes. These MARG (Magnetic, Angular Rate, and Gravity) sensors allow for a drift-free attitude computation, which in turn makes it possible to recover the skeleton of the body parts that are of interest for the performance, improving the results of gesture recognition and allowing us to obtain the relative position between the extremities of the limbs and the torso of the performer. This opens new possibilities in terms of mapping. We kept the approach previously developed at ARTeM [2]: wireless from the body to the host computer, but wired through a 4-wire digital bus on the body. By removing the need for a transmitter on each sensing node, we could build very light and flat sensor nodes that can be made invisible under the clothes. Smaller sensors, coupled with flexible wires on the body, give dancers more freedom of movement despite the need for cables on the body. And as the weight of each sensor node, box included, is only 5 grams (Figure 1), they can also be placed on the upper arm, lower arm, and hand of a violin or viola player, to retrieve the skeleton from the torso to the hand without adding any weight that would disturb the performer. We used these sensors in several performances with a dancing viola player, including one where she simultaneously controlled gas flames interactively. We are currently applying them to other types of musical performances.

Keywords: wireless, MARG, sensors
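
Drift-free attitude from MARG data typically comes from fusing the gyroscope (fast, but drifting) with the gravity and magnetic-field directions (noisy, but absolute). The fragment below sketches the idea for a single axis with a complementary filter; the real sensors fuse all nine axes, and the coefficient here is an assumption.

    import numpy as np

    ALPHA, DT = 0.98, 0.01   # filter coefficient and sample period (s)
    roll = 0.0

    def update(gyro_x, acc_y, acc_z):
        """One filter step for the roll angle (radians)."""
        global roll
        roll_gyro = roll + gyro_x * DT         # fast but drifts over time
        roll_acc = np.arctan2(acc_y, acc_z)    # noisy but drift-free
        roll = ALPHA * roll_gyro + (1 - ALPHA) * roll_acc
        return roll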

Trond Engum.
Real-time control and creative convolution - exchanging techniques between distinct genres.
(Pages 519-522). [ bib | pdf ]

Abstract: This paper describes an ongoing research project that explores new artistic possibilities opened up by exchanging music-technological methods and techniques between two distinct musical genres. Through my background as a guitarist and composer in an experimental metal band, I have experienced a vast development in music technology over the last 20 years. This development has greatly changed the procedures for composing and producing music within my genre, without necessarily changing the strategies for how the technology is used. The transition from analogue to digital sound technology not only opened up new ways of manipulating and manoeuvring sound; it also posed challenges in how to integrate and control digital sound technology as a seamless part of my musical genre. By using techniques and methods known from electro-acoustic/computer music and adapting them for use within my tradition, this research aims to find new strategies for composing and producing music within my genre.

Keywords: Artistic research, strategies for composition and production, convolution, environmental sounds, real time control
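
The convolution technique at the heart of this exchange can be shown in a few lines with SciPy: filtering a guitar take through an environmental recording so the instrument excites the environment's spectrum. The placeholder signals are assumptions; in practice the inputs would be recorded audio files.

    import numpy as np
    from scipy.signal import fftconvolve

    SR = 44100
    guitar = np.random.randn(SR * 2)          # placeholder guitar take
    environment = np.random.randn(SR // 2)    # placeholder field recording

    wet = fftconvolve(guitar, environment)    # guitar "played through" the
    wet /= np.max(np.abs(wet))                # environment; normalize output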

Andreas Bergsland.
The six fantasies machine: an instrument modelling phrases from paul lansky's six fantasies.
(Pages 523-526). [ bib | pdf ]

Abstract: The Six Fantasies Machine (SFM) is a software instrument that simulates sounds from Paul Lansky's classic computer music piece from 1979, Six Fantasies on a Poem by Thomas Campion. The paper describes the design of the instrument and its user interface, and how it can be used in a methodological approach proposed by Godøy called the epistemology of simulations. By imitating phrases from Lansky's piece and enabling the creation of variants of these phrases, the user can get an experience of the essential traits of the phrases. Moreover, the instrument gives the user hands-on experience with the processing techniques that the composer applied, albeit through a user-friendly interface.

Keywords: LPC, software instrument, analysis, modeling, csound
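
Lansky's piece is built on linear predictive coding (LPC) of a voice reading the poem, and SFM simulates this family of treatments. The fragment below (librosa and SciPy) sketches the basic LPC analysis/resynthesis step with placeholder signals; the filter order and excitation are assumptions, not SFM's parameters.

    import numpy as np
    import librosa
    from scipy.signal import lfilter

    voice = np.random.randn(2048)        # placeholder: one frame of speech
    a = librosa.lpc(voice, order=16)     # all-pole coefficients [1, a1..a16]

    # Drive the estimated vocal-tract filter with a new excitation
    # (noise here; a pulse train would impose a pitch instead).
    excitation = np.random.randn(2048)
    resynth = lfilter([1.0], a, excitation)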

Jan Trutzschler.
Gliss: An intuitive sequencer for the iphone and ipad.
(Pages 527-528). [ bib | pdf ]

Abstract: Gliss is an application for iOS that lets the user sequence five separate instruments and play them back in various ways. Sequences can be created by drawing onto the screen while the sequencer is running. The playhead of the sequencer can be set to randomly deviate from the drawings or can be controlled via the accelerometer of the device. This makes Gliss a hybrid of a sequencer, an instrument and a generative music system.

Keywords: Gliss, iOS, iPhone, iPad, interface, UPIC, music, sequencer, accelerometer, drawing

Jiffer Harriman, Locky Casey, Linden Melvin, and Mike Repper.
Quadrofeelia - a new instrument for sliding into notes.
(Pages 529-530). [ bib | pdf ]

Abstract: This paper describes a new musical instrument inspired by the pedal-steel guitar, along with its motivations and other design considerations. Creating a multi-dimensional, expressive instrument was the primary driving force. For these criteria the pedal-steel guitar proved an apt model, as it allows control over several instrument parameters simultaneously and continuously. The parameters we wanted to control were volume, timbre, release time, and pitch. The Quadrofeelia is played with two hands on a horizontal surface. Single notes and melodies are easily played, as is chordal accompaniment, with a variety of timbres and release times enabling a range of legato and staccato articulations in an intuitive manner on a new yet familiar interface.

Keywords: NIME, pedal-steel, electronic, slide, demonstration, membrane, continuous, ribbon, instrument, polyphony, lead

Johnty Wang, Nicolas D'Alessandro, Sidney Fels, and Bob Pritchard.
Squeezy: Extending a multi-touch screen with force sensing objects for controlling articulatory synthesis.
(Pages 531-532). [ bib | pdf ]

Abstract: This paper describes Squeezy: a low-cost, tangible input device that adds multi-dimensional input to capacitive multi-touch tablet devices. Force input is implemented through force sensing resistors mounted on a rubber ball, which also provides passive haptic feedback. A microcontroller samples and transmits the measured pressure information. Conductive fabric attached to the finger contact area translates the touch to the bottom of the ball, which allows the touchscreen to detect the ball's position and orientation. The addition of a tangible, pressure-sensitive input to a portable multimedia device opens up new possibilities for expressive musical interfaces, and Squeezy is used as a controller in real-time gesture-controlled voice synthesis research.

Keywords: Musical controllers, tangible interfaces, force sensor, multitouch, voice synthesis.

Souhwan Choe and Kyogu Lee.
Swaf: Towards a web application framework for composition and documentation of soundscape.
(Pages 533-534). [ bib | pdf ]

Abstract: In this paper, we suggest a conceptual model of a Web application framework for the composition and documentation of soundscape and introduce corresponding prototype projects, SeoulSoundMap and SoundScape Composer. We also survey the current Web-based sound projects in terms of soundscape documentation.

Keywords: soundscape, web application framework, sound archive, sound map, soundscape composition, soundscape documentation

Norbert Schnell, Frederic Bevilacqua, Nicolas Rasamimanana, Julien Bloit, Fabrice Guedy, and Emmanuel Flety.
Playing the "mo" - gestural control and re-embodiment of recorded sound and music.
(Pages 535-536). [ bib | pdf ]

Abstract: We present a set of applications that have been realized with the MO modular wireless motion capture device and a set of software components integrated into Max/MSP. These applications, created in the context of artistic projects, music pedagogy, and research, allow for the gestural re-embodiment of recorded sound and music. They demonstrate a large variety of different "playing techniques" in musical performance using wireless motion sensor modules in conjunction with gesture analysis and real-time audio processing components.

Keywords: Music, Gesture, Interface, Wireless Sensors, Gesture Recognition, Audio Processing, Design, Interaction

Bruno Zamborlin, Marco Liuni, and Giorgio Partesana.
(land)moves.
(Pages 537-538). [ bib | pdf ]

Abstract: (land)moves is an interactive installation in which the user's gestures control the multimedia processing, with total synergy between audio and video synthesis and treatment.

Keywords: mapping gesture-audio-video, gesture recognition, landscape, soundscape

Bill Verplank and Francesco Georg.
Can haptics make new music? - fader and plank demos.
(Pages 539-540). [ bib | pdf ]

Abstract: Haptic interfaces using active force-feedback have mostly been used for emulating existing instruments and making conventional music. With the right speed, force, precision and software they can also be used to make new sounds and perhaps new music. The requirements are local microprocessors (for low-latency and high update rates), strategic sensors (for force as well as position), and non-linear dynamics (that make for rich overtones and chaotic music).

Keywords: NIME, Haptics, Music Controllers, Microprocessors
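
The paper's requirements — a fast local loop, force as well as position sensing, and non-linear dynamics — come together in the control law. The fragment below sketches a non-linear (cubic) spring that such a loop could apply; the gains and loop structure are assumptions for illustration, not the Fader or Plank firmware.

    K_LIN, K_CUBIC = 0.2, 4.0

    def force_command(position):
        """Non-linear restoring force for a normalized (-1..1) position."""
        return -(K_LIN * position + K_CUBIC * position ** 3)

    # On the local microprocessor this would run at several kHz, e.g.:
    #   while True:
    #       write_force(force_command(read_position()))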