Tutorials


There will be a number of tutorials and workshops in the days leading up to the conference. These are available to both NIME participants and other interested people. Please see below for details about the different workshops and where they take place.

SATURDAY 28 MAY – 09.00 – 12.00

  • Hardware Hacking Workshop Nicolas Collins. info
  • Soft Controller and Synthesizer Workshop (sold out) Lara Grant and Sarah Grant. info
  • Designing Mobile Instruments and Performances in urMus Georg Essl and Patrick O’Keefe. info
  • Optical Motion Capture Technology (part I) Birgitta Burger, Kristian Nymoen, Arve Voldsund and Ståle A. Skogstad. info
  • Integra Live software for performers and composers (part I) Dag Henning Kalvøy and Henrik Sundt. info

SATURDAY 28 MAY – 13.00 – 16.00

  • Hardware Hacking Workshop continued Nicolas Collins.
  • Soft Controller and Synthesizer Workshop continued Lara Grant and Sarah Grant.
  • Designing Mobile Instruments and Performances in urMus continued Georg Essl and Patrick O’Keefe.
  • Optical Motion Capture Technology (part II) Birgitta Burger, Kristian Nymoen, Arve Voldsund and Ståle A. Skogstad.
  • Integra Live software for performers and composers (part II) Dag Henning Kalvøy and Henrik Sundt.
  • Introduction course to some alternative electronic instruments Daniel Schorno and Haraldur Karlsson. info
  • New Interfaces for Live Looping Simon Morris and Richard Widerberg. info

SUNDAY 29 MAY – 09.00 – 12.00

  • NIME Primer: A Gentle Introduction to Creating New Interfaces for Musical Expression Sidney Fels and Michael Lyons. info
  • Mapping Everything Else Workshop Georgios Papadakis, Berit Janssen and Jonathan Reus. info
  • Basic Training for Group Improvisation Luis Alejandro Olarte. info
  • NEXUS: Using Ruby on Rails and HTML5 to Distribute Browser Based Interfaces Jesse Allison. info
  • Auditory Augmentation of Everyday Objects with Near Real-time Data (sold out) Till Bovermann, René Tünnermann and Thomas Hermann. info
  • A Workshop on NIME Education Michael Gurevich, Ben Knapp and Sergi Jordà. info

SUNDAY 29 MAY – 13.00 – 16.00

  • A Workshop on NIME Education continued Michael Gurevich, Ben Knapp and Sergi Jordà.
  • Musical performance with the Karlax controller Tom Mays and Rémi Dury. info
  • Hyperimprovisation Victoria Johnson and Alex Gunia. info
  • Audio-graphic Modeling and Interaction Workshop Roland Cahen, Christian Jacquemin, Diemo Schwarz and Hui Ding. info
  • Mapping Digital Musical Instruments with libmapper Joseph Malloch, Stephen Sinclair and Marcelo Wanderley. info
  • Workshop on Multi-Modal Data Acquisition for Musical Research (sold out) Javier Jaimovich, Nick Gillian, Miguel Angel Ortiz, Paolo Coletta and Esteban Maestre. info


Hardware Hacking Workshop

A hands-on workshop in “handmade electronic music”, tailored for the NIME audience. Assuming no technical background whatsoever, this workshop guides the participants through a series of sound-producing electronic construction projects, including: a) The “Victorian synthesizer” (making an oscillator with just a speaker and a battery). b) “Laying of hands” on a radio circuit board to make the poor man’s Cracklebox. c) Basic, versatile oscillator circuit controlled by a wide range of sensors, including potentiometers, photoresistors, homemade pressure sensors, corroded metal, vegetables, electrodes, etc.  This leads to discussion of techniques for interfacing sensors to microcontrollers.
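The interfacing step at the end of the workshop can be illustrated with a short sketch. The following is a minimal, hypothetical example (not part of the workshop materials): it assumes a microcontroller that prints one sensor reading per line (0–1023) over USB serial, plus the pyserial and python-osc libraries, and forwards the reading as an OSC frequency message to a synth assumed to be listening on localhost port 57120; the serial port name and OSC address are placeholders.

    # Hypothetical sensor-to-sound bridge: read a sensor value from a
    # microcontroller over serial and forward it as an OSC message.
    import serial                                   # pyserial
    from pythonosc.udp_client import SimpleUDPClient

    PORT = "/dev/ttyUSB0"                           # placeholder serial port
    client = SimpleUDPClient("127.0.0.1", 57120)    # placeholder OSC target

    with serial.Serial(PORT, 9600, timeout=1) as board:
        while True:
            line = board.readline().strip()
            if not line:
                continue
            try:
                raw = int(line)                     # e.g. a photoresistor reading
            except ValueError:
                continue
            # Map the 10-bit sensor range to an audible frequency range.
            freq = 100.0 + (raw / 1023.0) * 900.0
            client.send_message("/oscillator/freq", freq)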

About the Workshop Leader:

Nicolas Collins studied with Alvin Lucier, worked with David Tudor, and was Artistic Director of STEIM (Amsterdam). He is a Professor in the Department of Sound at the School of the Art Institute of Chicago, Editor-in-Chief of the Leonardo Music Journal, and author of Handmade Electronic Music – The Art of Hardware Hacking (Routledge 2009).

Participation Information:

Open to everyone; no background is needed. Particularly suited to NIME attendees working with sensor interfacing, circuit benders, and electronics novices looking for an introduction to hardware. Suitable for children as well as adults.

I will bring all of the specialized parts; participants are expected to bring some materials as well.

Maximum number of participants: 25 people

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 1

back


Soft Controller and Synthesizer Workshop (sold out)

Our workshop will begin with a discussion of soft circuitry, including recent research projects completed by Felted Signal Processing (FSP). We will demonstrate fabrication techniques for creating soft circuit controllers, including carding, dry felting, machine and hand sewing with conductive and resistive wools, fabrics and threads, and methods of attaching to hardware. The group will be led through building a simple synthesizer circuit, followed by building the soft circuit controller to interface with the hardware. Participants should also feel free to bring projects they have already built and would like to potentially make a soft controller for. While we will have some digital multimeters and basic tools on-site, please come prepared with diagonal wire cutters and wire strippers. We will have kits available for participants with all the electronic parts and materials needed to create the synthesizer circuit and interface. These kits will include a small raw loudspeaker, 9V battery + battery clip, alligator test leads, solid wire, breadboard, electronic components, 1 dry felting needle, 1 sewing needle, wool and conductive fabrics and threads. We will also have some additional materials on hand in case anyone needs extra supplies.

About the Workshop Leaders:

Sisters Sarah and Lara Grant are both alumnae of NYU’s Interactive Telecommunications Program. Sarah has a background in fiber arts, physical computing and experimental sound and works as a creative technologist in NYC.  Lara has a background in interactive fashion and electronic textile design and has been working in the medium of felt for over 7 years.  She has taught many workshops on felting and soft circuitry and recently joined the Gray Area Foundation For the Arts (GAFFTA) in San Francisco as their Soft Circuit instructor. Together as Felted Signal Processing they design textile-based custom controllers and soft sensors to interface with audio hardware.

Prerequisites and Participation Information:

Each participant is expected to have a basic understanding of working with electronic circuits, identifying and using components, and reading schematics. We assume no prior experience working with soft circuits. Musicians, developers, designers, artists and engineers are all welcome.

Maximum number of participants: 12

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 2

back


Designing Mobile Instruments and Performances in urMus

Mobile smart devices have become widely used and are becoming platforms for interactive music performance. In this workshop we teach the process of building new musical instruments on mobile devices using the meta-environment urMus, which allows performers with modest programming and graphical patching knowledge to realize their ideas on iPhones, iPads or Android devices quickly and with minimal technical distraction. The goal of the workshop is to cover the whole process, so that participants finish having built their own first mobile instrument, including interface look and feel, interaction modeling and sound design, and are ready to play it!

About the Workshop Leaders:

Georg Essl is an Assistant Professor in Electrical Engineering & Computer Science as well as Music at the University of Michigan. He is a long-time participant at the NIME conference and has spent much of the last seven years making mobile devices viable generic platforms for musical expression. He is the primary author and architect of urMus.

Patrick O’Keefe is a doctoral student in Electrical Engineering at the University of Michigan. He is a recipient of an NSF Graduate Fellowship, having received his degree from the University of Miami. His current interest is computer vision on mobile devices. He has led the camera support and integration in urMus.

Prerequisites and Participation Information: Basic programming and familiarity with sound synthesis. The workshop is designed to be accessible to musicians with a technical background, such as working knowledge of Max/MSP, Pd, SuperCollider, Processing or ChucK. Familiarity with UI design and OSC is a plus, but not required.

Maximum number of participants: 20

Location: University of Oslo, Department of Musicology, ZEB building, room: Salen

back



Optical Motion Capture Technology (3+3-hour workshop)

This workshop will explain and demonstrate the current state of the art in optical motion capture technology. It will be divided into two three-hour parts. The first part will familiarize participants with the Qualisys Oqus motion capture system and demonstrate the steps from starting the system to annotating recordings. The second part will cover how this motion capture system can be used in artistic applications. Examples of musical instruments based on real-time streaming of mocap data, in combination with the Xsens motion capture suit, will be presented. Participants will also get the opportunity to develop their own mocap instrument.
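As a rough illustration of what a mocap instrument built on real-time streaming can look like, the sketch below (in Python, not part of the workshop materials) assumes marker positions are already arriving as OSC messages of the form "/marker/1 x y z" on port 7000, and that a synth listens for control messages on port 57120; the addresses, ports and ranges are placeholders, not the Qualisys streaming protocol itself.

    # Map the vertical position of one streamed marker to a filter cutoff.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    synth = SimpleUDPClient("127.0.0.1", 57120)   # placeholder synth address

    def on_marker(address, x, y, z):
        # Assume the marker moves between 0 and 2 metres above the floor.
        height = max(0.0, min(2.0, z))
        cutoff = 200.0 + (height / 2.0) * 4800.0  # 200 Hz .. 5 kHz
        synth.send_message("/filter/cutoff", cutoff)

    dispatcher = Dispatcher()
    dispatcher.map("/marker/1", on_marker)

    BlockingOSCUDPServer(("0.0.0.0", 7000), dispatcher).serve_forever()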

About the Workshop Leaders

Birgitta Burger is a PhD student at the Finnish Center of Excellence in Interdisciplinary Music Research at the University of Jyväskylä, Finland, investigating music-induced movement, such as relationships between acoustical features and movement, synchronization and periodicity.

Kristian Nymoen is a PhD student in the fourMs group at the Department of Informatics at the University of Oslo. His research involves using optical motion capture for studying perceptual relationships between sound and motion.

Ståle A. Skogstad is a PhD student in the fourMs group at the Department of Informatics at the University of Oslo. His research is focused on using real-time full-body motion capture technology for musical interaction. He is currently working with the Xsens MVN inertial sensor suit.

Arve Voldsund holds a combined engineer and research position in the fourMs group at the University of Oslo. He has experience in real-time signal processing, and is currently developing a database solution for the Gesture Description Interchange Format.

Prerequisites and Participation Information:

Composers, researchers, performers, interface designers, and anyone else who is interested in using advanced motion capture technology in musical contexts.

Maximum number of participants: 10-12

Location: University of Oslo, Department of Musicology, ZEB building, room: MoCap Lab

back

Integra Live Workshop for performers and composers

This workshop introduces the Integra Live software for performers and composers. The software is the result of a six-year international collaborative project headed by Birmingham Conservatoire, UK, and supported by the EU’s Culture 2000 program. NOTAM is one of the six research centers that have participated in the project. The software is a work in progress (a beta version has been released), and participants will be invited to provide the workshop leaders with feedback from a user perspective.

About the Workshop Leaders

NOTAM – Norwegian Center for Technology in Music and the Arts – supports composers, musicians, artists and students in related fields. Activities include artistic and scientific research, education, technical and theoretical assistance, as well as support in use of technical resources and production of events for the general public.

Participation Information:

The software is designed especially for musicians and composers, and requires no programming experience. Learning how to set up patches in Integra Live is fairly simple, and participants can expect to have their own patches up and running during the course of the workshop.  There will be time for an improvised jam at the end of the day.

For more information about the workshop: http://www.notam02.no

For more information about the INTEGRA project please visit the Integra website at www.integralive.org

Maximum number of participants: 20.

Location: NOTAM

back

Introduction course to some alternative electronic instruments

This workshop offers a short tutorial on eponymous instrument and interface design and related instrumental methods and strategies for articulation, proliferation, expression and response shaping (based on STEIM technology) for live music and sound art performance, together with a parallel investigation into their application in the modalities of 3D video and sound diffusion. The goal of the workshop is for participants to gain hands-on knowledge of and/or deepen their understanding of the above issues. The provided instruments help to explore some fundamental questions and to address a set of given tasks.

About the Workshop Leaders

Daniel Schorno studied composition in London with Melanie Daiken and electronic and computer music in The Hague with Joel Ryan and Clarence Barlow. Invited by Michel Waisvisz, he led STEIM as Artistic Director until 2005 and is currently STEIM’s composer-in-research and creative project advisor.

Haraldur Karlsson studied multimedia at the art academy in Iceland, media art at AKI in Enschede, and sonology at the Royal Conservatoire in The Hague. He focuses mainly on interactive installations and performances and instrumental computer controllers, and is head of the Video and Sound Studio at the Department of Fine Arts, Iceland Academy of the Arts (IAA).

Prerequisites and Participation Information:

Keywords: musicians, composers, video artists, interaction designers, educators, general public. No maximum number of participants, except for the limitations of the room provided.

Location: Norwegian Academy of Music

back


New Interfaces for Live Looping

This workshop will explore new interfaces for live looping. We will examine various software and hardware devices for looping in live settings. Techniques and methods will be discussed and new interfaces for looping will be put in practice among participants. We will present customized looping visualization software and a variety of experimental controllers designed for the live looping performer.

About the Workshop Leaders

Richard Widerberg is a media artist and musician living in Gothenburg, Sweden. He plays in different constellations ranging from pop to noise, including the experimental pop duo ‘Trapped in a loop’. Richard develops interfaces for musical expression and is a visiting teacher at the Valand School of Fine Arts at the University of Gothenburg and in the Media Lab at the School of Art and Design of Aalto University in Helsinki. www.riwid.net

Simon Morris is a sound artist and programmer living in Gothenburg, Sweden. www.mapletone.net

Prerequisites and Participation Information:

This workshop is open to musicians, sound artists, composers, interaction designers, and programmers. We encourage you to bring any looping devices (software or hardware) with you to the workshop.

Maximum number of participants: 20

Location: Norwegian Academy of Music

back



NIME Primer: A Gentle Introduction to Creating New Interfaces for Musical Expression

This workshop provides a general and gentle introduction to the theory and practice of the design of interactive systems for music creation and performance. Our intended audience is newcomers to the field who are interested in starting research projects or artistic activity in this area, as well as members of the public with a more general interest. Participants will learn key aspects of the theory and practice of musical interface design by studying case studies taken from the first ten years of the NIME conference.

PDF of slides from the tutorial

About the Workshop Leaders:

Sidney Fels is a Professor of Electrical and Computer Engineering at the University of British Columbia, in Vancouver, Canada. He was a co-founder of the New Interfaces for Musical Expression conference.

Michael Lyons is a Professor of Image Arts and Sciences at Ritsumeikan University in Kyoto, Japan. He was a co-founder of the New Interfaces for Musical Expression conference.

Prerequisites and Participation Information:

No specific technical prerequisites or background in music or computer audio are assumed. The workshop is open to researchers and artists who may be new to NIME, as well as interested members of the public who might not be planning to attend the main academic conference. This workshop may also be of interest to those planning to attend the concert program and related events.

Location: University of Oslo, Department of Musicology, ZEB building, room: Salen

back



Mapping Everything Else

The goal of this workshop is to envision musical instruments in a different way. Rather than starting from technical or musical aspects, this workshop provides a chance to investigate what musical mapping means in terms of navigation, exploration and experience through different modalities. Up to 15 participants make metaphorical and tangible models of instruments, which represent the transformation of human interaction to sound in ways that cannot normally be experienced. Other than a profound interest in musical expression, there are no prerequisites for participation.


About the Workshop Leaders:

The STEIM Research Group is an internal research group formed in early 2010 consisting of international researchers, developers, designers and artists who meet weekly to investigate new directions in live electronic music.

Prerequisites and Participation Information:

Up to 15 participants. No particular knowledge needed.

Location: Norwegian Academy of Music



back

Basic Training for Group Improvisation – Unlocking Dionysus to awaken Apollo.

This workshop is about musical tasks, sound exercises and performing games intended to point out the skills desired in collective improvisation. We will explore, with and without instruments, a set of activities concerning awakeness, forbearance, memory, reactivity and risk. These activities can be used in pedagogical or entertainment contexts.


About the Workshop Leader

Luis Alejandro OLARTE (Colombia) studied electroacoustic music, guitar, generative improvisation, musical acoustics and computer music at National University in Colombia, National Conservatory in Paris, Escola Superior de Musica de Cataluña, and at Université de Paris VIII. He is currently preparing doctoral studies in live electronics and pedagogy at the Centre for Music & Technology of the Sibelius Academy, Finland.

Prerequisites and Participation Information:

The workshop is open to anybody playing any musical interface or vocals. Participants don’t need a background in improvisation but experienced improvisers are of course also welcome.

Location: Norwegian Academy of Music



back

Mapping Digital Musical Instruments with libmapper

This workshop will introduce libmapper, an open-source software library for representing input and output signals on a local network, enabling collaborative development of mapping connections between the signals exposed by gestural interfaces and sound synthesis software. The library comes with plugins for various common audio development environments. We will introduce the functionality of the library and demonstrate its usage in several programming languages and environments. We will show examples of simple and complex mappings as well as machine-learning-based connections, using a variety of platforms and controllers. We encourage participants to bring along their laptops and musical interfaces.
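For readers unfamiliar with the idea, a mapping connection essentially rescales a source signal range onto a destination parameter range. The sketch below is a conceptual illustration in plain Python only, not the libmapper API; the signal names and ranges are invented.

    # One mapping connection: linear rescaling with optional clipping.
    class Connection:
        def __init__(self, src_range, dst_range, clip=True):
            self.src_min, self.src_max = src_range
            self.dst_min, self.dst_max = dst_range
            self.clip = clip

        def __call__(self, value):
            # Normalise the incoming value, then rescale to the destination range.
            t = (value - self.src_min) / (self.src_max - self.src_min)
            if self.clip:
                t = max(0.0, min(1.0, t))
            return self.dst_min + t * (self.dst_max - self.dst_min)

    # A bend sensor reporting 0-1023 mapped onto a grain duration of 10-500 ms.
    bend_to_grain = Connection((0, 1023), (10.0, 500.0))
    print(bend_to_grain(512))   # -> roughly 255 ms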


About the Workshop Leaders

The organisers work in the Input Devices and Music Interaction Laboratory (IDMIL) at McGill University, where DMI mapping design and evaluation is a main research focus. In 2005 they started work on the Mapping Tools project, which resulted in the public release of the open-source libmapper in early 2011. Further development of libmapper supporting interaction design with large sensor networks is currently taking place as part of the EMERGE project, a collaboration between the IDMIL (www.idmil.org), labXmodal at Concordia University (xmodal.hexagram.ca), and commercial partners Moment Factory (www.momentfactory.com) and gsmprjct (www.gsmprjct.com).

Prerequisites and Participation Information: Instrument/installation designers, programmers, performers, composers, artists. No maximum number of participants, except for the limitations of the room provided.

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 1

back


Auditory Augmentation of Everyday Objects with Near Real-time Data

Auditory augmentation is a design and development paradigm for the creation of unobtrusive data representation layers that enhance the sonic characteristics of arbitrary physical objects. Its principal idea is to unobtrusively alter the auditory characteristics of object interactions with digitally born data, rather than introducing a completely new soundscape, as is commonly done in sonification environments. After a short introduction to the paradigm, we will conduct a co-design session in which we plan to come up with several alternative augmentation filter designs. These will be implemented just-in-time by one of the workshop leaders.
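The sketch below gives a rough idea of such an augmentation filter, assuming the object's structure-borne sound is available as a NumPy block and that an external data value (here, a temperature reading) has already been fetched; the mapping, ranges and libraries (NumPy, SciPy) are our own illustrative choices, not the workshop's implementation.

    import numpy as np
    from scipy.signal import iirpeak, lfilter

    SR = 44100  # sample rate in Hz

    def augment(block, data_value, data_range=(15.0, 30.0)):
        # Colour the object's own sound with a resonance whose centre
        # frequency follows the incoming data value.
        lo, hi = data_range
        t = float(np.clip((data_value - lo) / (hi - lo), 0.0, 1.0))
        centre = 300.0 + t * 2700.0               # 300 Hz .. 3 kHz
        b, a = iirpeak(centre / (SR / 2), Q=30)   # narrow resonant peak
        return lfilter(b, a, block)

    # Stand-in for a block of contact-microphone input: a short noise burst.
    contact_mic = np.random.randn(SR // 10) * 0.1
    augmented = augment(contact_mic, data_value=22.5)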


About the Workshop Leaders

Till Bovermann is a researcher and media artist working on tangible and auditory interfaces at the Media Lab Helsinki. He received a PhD on Tangible Auditory Interfaces from Bielefeld University, where he also studied informatics, majoring in robotics. His artistic and scientific work deals with the relationship between digital and physical environments.

René Tünnermann is a member of the Ambient Intelligence Group at the Center of Excellence in Cognitive Interaction Technology, Bielefeld University. In 2009, he graduated with a degree in science informatics with a major in robotics. His research fields include tangible interfaces, interactive surfaces and physical computing.

Thomas Hermann is head of the Ambient Intelligence Group at CITEC, the Center of Excellence in Cognitive Interaction Technology, Bielefeld University. He received a PhD in computer science (thesis: Sonification for Exploratory Data Analysis). His research fields include sonification, data mining, ambient intelligence, augmented reality and cognitive interaction technology.

Prerequisites and Participation Information:

Sound designers, developers, musicians, interaction designers. Basic knowledge in audio signal processing is appreciated but not required.

Maximum number of participants: optimally 10, and no more than 15.

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 3

back


A Workshop on NIME Education

As NIME has grown over the years, numerous NIME courses have sprung up at universities around the world. This workshop will provide a structured forum for NIME educators to share their approaches, experiences and perspectives on teaching NIME curricula. It will be focused on identifying the major challenges that NIME educators face and on developing innovative ideas to address them. The workshop is organized around three themes: materials – the technological platforms employed in teaching a NIME course; methods – the teaching and learning activities that constitute NIME courses; and matters – the topics and issues that our courses address.


About the Workshop Leaders

Michael Gurevich and Ben Knapp are researchers and lecturers at SARC, Queen’s University Belfast. Sergi Jordà is a researcher and lecturer in the Music Technology Group at Universitat Pompeu Fabra in Barcelona. Between them they have many years of experience teaching NIME courses and workshops to students of all backgrounds and levels.

Prerequisites and Participation Information

This workshop is intended for 3 groups of participants:

1. NIME educators who are currently instructors of NIME-related classes at any university level. We welcome as many participants from a given institution as want to join, but ask for only one submission/presentation per institution (see below) in order to accommodate as many institutions and viewpoints as possible.

2. Aspiring NIME educators: those who do not currently teach a NIME course but are planning to do so in the future.

3. Students in NIME courses. We encourage participation by students to provide a different perspective on NIME education, but ask that student participation is in conjunction and coordination with a faculty member from the institution they attend.

Note: Participants who are current NIME educators will be asked to prepare and deliver a 5-minute presentation outlining the NIME course(s) they teach. In advance of the workshop, these participants will be asked to submit a short “position paper” describing their courses, which will be disseminated to all workshop attendees. Further details will be provided upon registration.

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 2

back


Musical performance with the Karlax controller

This is a hands-on workshop allowing each participant to explore some of the gestural possibilities of the Karlax controller by playing various synthesis and processing instruments designed for it by composer/performer Tom Mays. Several different approaches will be presented, musically and technically: some of the instruments explored with the Karlax will be “generative”, based on synthesis, samples or sound files, while others will be built on real-time processing of acoustic input (instrument or voice). Two Karlax instruments will be available, so duos will be possible. The workshop will take place within an immersive 4-channel sound system, allowing spatial manipulation with the instrumental gestures.

About the Workshop Leader

Tom Mays: Composer, computer musician and teacher – Associate Professor of New Technologies Applied to Composition at the National Superior Conservatory of Music in Paris, and currently working on a PhD at the University of Paris 8 with Horacio Vaggione. In addition to performing with the Karlax, Tom Mays develops and performs with camera motion tracking control as well as wiimotes – exploring and practicing various methods of creating and performing wireless gestural instruments.

Prerequisites and Participation Information:

This workshop is aimed at (but not restricted to) musicians, composers and performers interested in wireless gestural control. No prior experience with the Karlax is expected, and no specific knowledge of any particular software environment is necessary. Some rudimentary knowledge of MaxMSP might be helpful in understanding the inner workings of the proposed instruments, but this is not central to the workshop.

Any number can observe, but about 10 to 15 will be able to practice the instruments.

Location: Norwegian Academy of Music



back


Hyperimprovisation

Live performances and open discussion based on relevant artistic questions:

How do we interact musically with augmented instruments and electronics? How do we create meaningful musical content in computer/electronic-based improvisations? Man vs. machine: Who is taking control? For what musical reasons are musicians using controllers, software and general electronic equipment? Musical presentations by:

Victoria Johnson, Alex Gunia, tba, tba

About The Workshop Leaders

Victoria Johnson trained as a classical violinist in Oslo, Vienna and London. She has established herself internationally as a soloist, chamber musician and improviser in the field of contemporary, improvised and experimental, cross-disciplinary music and art. In spring 2011 she finishes her artistic PhD project on electric violin and live electronics at the Norwegian Academy of Music.

Alex Gunia is a guitarist, sound designer, producer, label manager and sound engineer working in the open field between jazz and electronic music. He teaches live electronics at the Norwegian State Academy of Music. In 2011 he will start an artistic research project exploring new artistic and practical considerations in creating acting spaces in modern improvised music.

Prerequisites and Participation Information:

Performers, composers, artists, researchers and other scientists working with live electronics, hyperinstruments, improvisation and interfaces.

Location: Norwegian Academy of Music



back

Audio-graphic Modeling and Interaction

This workshop focuses on recent advances and future prospects in modelling and rendering audio-graphic scenes. The convergence of the audio and graphics communities is fostered by the increase in computational resources, cognitive studies on cross-modal perception, and industrial needs for realistic audio scenes. Audio-graphic research is spreading into areas such as games, architecture, urbanism, information visualization and interactive artistic digital media. We will focus on the representation, interaction, rendering, and perception of scenes in which the audio and graphical components are clearly identified and combined (in contrast to standard multimedia video streams). Authors of accepted papers will be invited to submit an extended version to a special issue of the Springer Journal on Multimodal User Interfaces (JMUI).


About the Workshop Leaders

Roland Cahen, ENSCI-les Ateliers, is an electroacoustic music composer, sound designer, researcher, and professor in charge of the sound design studio of ENSCI-les Ateliers (Paris, France). His research covers sound navigation, virtual reality audio, sound space, interaction and gesture control.

Christian Jacquemin, LIMSI-CNRS & University Paris Sud 11, is particularly interested in computer graphics for virtual and augmented reality applied to performing arts, design, and architecture. He coordinates the arts and science theme Virtuality, Interaction, Design, and Art at LIMSI-CNRS and has collaborated on many scientific projects involving sound researchers.

Diemo Schwarz is a researcher and developer in the Real-Time Music Applications (IMTR) team at Ircam, working on sound analysis and interactive corpus-based concatenative synthesis in multiple research and musical projects at the intersection of computer science, music technology, and audio-visual creation.

Hui Ding is a second-year PhD student at Paris-Sud XI University, working at LIMSI (Computer Science Laboratory for Mechanics and Engineering Sciences). She is interested in audio-graphic rendering driven by cross-modal perception, and is presently working on the cross-modal integration of graphical LOD and sound LOD based on human audio-visual perception.

Prerequisites and Participation Information:

Knowledge of the modeling and rendering of audio-graphical scenes; interest in cross-modal issues in cognition. Keywords: researchers in HCI, cross-modality, perception, sound interfaces, multimodal interfaces; architects and urbanists, designers, cartographers, teachers, game designers, web designers, game developers, automotive industry.

Call for Abstracts | Workshop Program

For more information: http://www.topophonie.fr/events

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 3

back

NEXUS: Using Ruby on Rails and HTML5 to Distribute Browser Based Interfaces

NEXUS is a project to leverage the power of canvas-based user interface objects and a Ruby on Rails web application to handle distribution of user interfaces, passing interactions via OSC to and from realtime audio/video processing software. In this way, browser-based interactions can be distributed across a variety of static and mobile devices – making worldwide collaborative creative arts a distinct possibility. This workshop will center on creating a functional distributed performance environment in Rails, adding canvas-based graphical UI objects, and connecting it to an audio rendering engine.
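The forwarding idea can be sketched in a few lines; the example below is a generic illustration (in Python rather than Rails, and not the NEXUS code itself): a browser posts a UI event to an HTTP endpoint, and the server relays it as an OSC message to an audio engine assumed to be listening on localhost port 57120. The paths, ports and JSON format are placeholders.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from pythonosc.udp_client import SimpleUDPClient

    audio_engine = SimpleUDPClient("127.0.0.1", 57120)   # placeholder target

    class UIRelay(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length))
            # e.g. {"address": "/slider/1", "value": 0.42} from a canvas widget
            audio_engine.send_message(event["address"], float(event["value"]))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8000), UIRelay).serve_forever()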


About the Workshop Leader

Jesse Allison is a sonic artist and an inventor who develops technology to expand what is possible in the arts. His artistic work encompasses sonic performance art, interactive installations, virtual and hybrid world interventions, and social art systems. An assistant professor at LSU in Experimental Music & Digital Media, co-founder of Electrotap, and founding member of the Institute of Digital Intermedia Art at BSU, he is actively engaged in furthering this burgeoning field. His work has been exhibited around the globe.

Prerequisites and Participation Information:

Composers/artists, programmers, developers. The workshop is ideal for those interested in collaborative and distributed performance systems who want to explore networked performance possibilities in a manageable way. Participants familiar with basic programming ideas and HTML markup will get a good feel for the possibilities inherent in this approach, while those with more programming experience will find valuable tools for creating and managing a distributed performance system. Bring your own laptop if you’d like to work along on your own machine.

Maximum number of participants: 20

Location: University of Oslo, Department of Musicology, ZEB building, room: Sem 1

back

Workshop on Multi-Modal Data Acquisition for Musical Research (sold out)

We present a workshop on multi-modal measurement and recording of musical performance. The workshop lasts three hours and will include a presentation of current technologies and new techniques for data acquisition and synchronization in music and performing arts research experiments, using Qualisys (motion capture), BioControl/Infusion (physiological sensors), Arduino (general data acquisition), Motu (audio), and Polhemus (motion sensing) systems, among others. We will demonstrate validated techniques developed by our researchers in order to obtain reliable data for the SIEMPRE (Social Interaction using Music PeRformance Experimentation) European project. The workshop will first describe a specific scenario where multi-modal measurement of musical performance is required and outline the various problems raised by that scenario. We will then present some of the solutions developed by the SIEMPRE project and open a collaborative discussion with the participants about their particular needs and possible solutions.
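One simple strategy for keeping heterogeneous streams alignable is to stamp every incoming sample, from every stream, against the same monotonic clock so the recordings can be aligned offline. The sketch below illustrates only that idea, in Python; it is not the SIEMPRE tool chain, and the two read functions are placeholders for real device drivers.

    import csv
    import time

    def read_physiology():
        return 0.0   # placeholder for a physiological sensor reading

    def read_motion():
        return 0.0   # placeholder for a motion capture / inertial sample

    t0 = time.monotonic()
    with open("session.csv", "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["t", "stream", "value"])
        for _ in range(1000):
            log.writerow([time.monotonic() - t0, "physio", read_physiology()])
            log.writerow([time.monotonic() - t0, "motion", read_motion()])
            time.sleep(0.01)   # crude 100 Hz polling loop, for illustration only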

About the Workshop Leaders:

Javier Jaimovich, Nick Gillian and Miguel Ortiz are members of the research staff at the Sonic Arts Research Centre, working with the Music, Sensors and Emotion group. Their research topics range from real-time gesture recognition for Musician-Computer Interaction to the analysis of physiological signals for interactive performance and cinema.

Paolo Coletta joined InfoMus Lab in 1997 and is currently the main software project manager supporting the EyesWeb research project at InfoMus Lab.

Esteban Maestre pursues research in expressive music performance, instrumental gesture analysis, instrumental sound synthesis, gesture-sound relationship, and audio and voice analysis and transformation at the Music Technology Group and CCRMA.

Prerequisites and Participation Information:

The workshop is intended for anyone interested in understanding the technical challenges of multi-modal measurement of musical performance, whether for implementing multi-modal systems for recording and analysis or for live performance practice. It is also intended for researchers in different areas, such as synchronization protocols, file systems and gestural descriptors, who want to learn about the current challenges faced by their peers and discuss what further developments might yield. It is open to novices as well as those who are already experienced in specific techniques but are interested in learning more about working with large multi-modal systems.

Maximum number of participants: 10-12

Location: University of Oslo, Department of Musicology, ZEB building, room: Salen


back