NIME 2011 Proceedings
Proceedings of the International Conference on New Interfaces for Musical Expression

 

30 May - 1 June 2011, Oslo, Norway

 

Editors: Alexander Refsum Jensenius, Anders Tveit, Rolf Inge Godøy, Dan Overholt

Publishers: University of Oslo and Norwegian Academy of Music

ISSN: 2220-4792 (Print), 2220-4806 (Online), 2220-4814 (USB)

ISBN: 978-82-991841-7-5 (Print), 978-82-991841-6-8 (Online)

All copyrights remain with the authors.

 

Paper session A - Monday 30 May 11:00-12:30

Dan Overholt.
The overtone fiddle: an actuated acoustic instrument.
(Pages 4-7). [ bib | pdf | Abstract ]

Matthew Montag, Stefan Sullivan, Scott Dickey, and Colby Leider.
A low-cost and low-latency multi-touch table with haptic feedback for musical applications.
(Pages 8-13). [ bib | pdf | Abstract ]

Greg Shear and Matthew Wright.
The electromagnetically sustained Rhodes piano.
(Pages 14-17). [ bib | pdf | Abstract ]

Laurel Pardue, Christine Southworth, Andrew Boch, Matt Boch, and Alex Rigopulos.
Gamelan Elek Trika: An electronic Balinese gamelan.
(Pages 18-23). [ bib | pdf | Abstract ]

Jeong-Seob Lee and Woon Seung Yeo.
Sonicstrument: A musical interface with stereotypical acoustic transducers.
(Pages 24-27). [ bib | pdf | Abstract ]

 

Poster session B - Monday 30 May 13:30-14:30

Scott Smallwood.
Solar sound arts: Creating instruments and devices powered by photovoltaic technologies.
(Pages 28-31). [ bib | pdf | Abstract ]

Niklas Klügel, Marc René Frieß, Georg Groh, and Florian Echtler.
An approach to collaborative music composition.
(Pages 32-35). [ bib | pdf | Abstract ]

Nicolas Gold and Roger Dannenberg.
A reference architecture and score representation for popular music human-computer music performance systems.
(Pages 36-39). [ bib | pdf | Abstract ]

Mark Bokowiec.
V'Oct (Ritual): An interactive vocal work for Bodycoder system and 8-channel spatialization.
(Pages 40-43). [ bib | pdf | Abstract ]

Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani, and Yuichi Sato.
First person shooters as collaborative multiprocess instruments.
(Pages 44-47). [ bib | pdf | Abstract ]

Tilo Hähnel and Axel Berndt.
Studying interdependencies in music performance: An interactive tool.
(Pages 48-51). [ bib | pdf | Abstract ]

Sinan Bokesoy and Patrick Adler.
1city 1001vibrations: development of an interactive sound installation with robotic instrument performance.
(Pages 52-55). [ bib | pdf | Abstract ]

Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns, and Mark D. Plumbley.
The medium is the message: Composing instruments and performing mappings.
(Pages 56-59). [ bib | pdf | Abstract ]

Seunghun Kim, Luke Keunhyung Kim, Songhee Jeong, and Woon Seung Yeo.
Clothesline as a metaphor for a musical interface.
(Pages 60-63). [ bib | pdf | Abstract ]

Pietro Polotti and Maurizio Goina.
Eggs in action.
(Pages 64-67). [ bib | pdf | Abstract ]

Berit Janssen.
A reverberation instrument based on perceptual mapping.
(Pages 68-71). [ bib | pdf | Abstract ]

Lauren Hayes.
Feedback-assisted performance.
(Pages 72-75). [ bib | pdf | Abstract ]

Daichi Ando.
Improving user-interface of interactive EC for composition-aid by means of shopping basket procedure.
(Pages 76-79). [ bib | pdf | Abstract ]

Ryan Mcgee, Yuan-Yi Fan, and Reza Ali.
Biorhythm: a biologically-inspired audio-visual installation.
(Pages 80-83). [ bib | pdf | Abstract ]

Jon Pigott.
Vibration, volts and sonic art: A practice and theory of electromechanical sound.
(Pages 84-87). [ bib | pdf | Abstract ]

George Sioros and Carlos Guedes.
Automatic rhythmic performance in Max/MSP: the kin.rhythmicator.
(Pages 88-91). [ bib | pdf | Abstract ]

André Gonçalves.
Towards a voltage-controlled computer - control and interaction beyond an embedded system.
(Pages 92-95). [ bib | pdf | Abstract ]

Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, and Shigeki Sagayama.
Polyhymnia: An automatic piano performance system with statistical modeling of polyphonic expression and musical symbol interpretation.
(Pages 96-99). [ bib | pdf | Abstract ]

Juan Pablo Carrascal and Sergi Jorda.
Multitouch interface for audio mixing.
(Pages 100-103). [ bib | pdf | Abstract ]

Nate Derbinsky and Georg Essl.
Cognitive architecture in mobile music interactions.
(Pages 104-107). [ bib | pdf | Abstract ]

Benjamin D. Smith and Guy E. Garnett.
The self-supervising machine.
(Pages 108-111). [ bib | pdf | Abstract ]

Aaron Albin, Sertan Senturk, Akito Van Troyer, Brian Blosser, Oliver Jan, and Gil Weinberg.
Beatscape, a mixed virtual-physical environment for musical ensembles.
(Pages 112-115). [ bib | pdf | Abstract ]

Marco Fabiani, Gaël Dubus, and Roberto Bresin.
Moodifierlive: Interactive and collaborative expressive music performance on mobile devices.
(Pages 116-119). [ bib | pdf | Abstract ]

Benjamin Schroeder, Marc Ainger, and Richard Parent.
A physically based sound space for procedural agents.
(Pages 120-123). [ bib | pdf | Abstract ]

Francisco Garcia, Leny Vinceslas, Esteban Maestre, and Josep Tubau.
Acquisition and study of blowing pressure profiles in recorder playing.
(Pages 124-127). [ bib | pdf | Abstract ]

Anders Friberg and Anna Källblad.
Experiences from video-controlled sound installations.
(Pages 128-131). [ bib | pdf | Abstract ]

Nicolas d'Alessandro, Roberto Calderon, and Stefanie Müller.
Room#81 - agent-based instrument for experiencing architectural and vocal cues.
(Pages 132-135). [ bib | pdf | Abstract ]

 

Demo session C - Monday 30 May 13:30-14:30

Yasuo Kuhara and Daiki Kobayashi.
Kinetic particles synthesizer using multi-touch screen interface of mobile devices.
(Pages 136-137). [ bib | pdf | Abstract ]

Christopher Carlson, Eli Marschner, and Hunter Mccurry.
The sound flinger: A haptic spatializer.
(Pages 138-139). [ bib | pdf | Abstract ]

Ravi Kondapalli and Benzhen Sung.
Daft datum: an interface for producing music through foot-based interaction.
(Pages 140-141). [ bib | pdf | Abstract ]

Charles Martin and Chi-Hsia Lai.
Strike on stage: a percussion and media performance.
(Pages 142-143). [ bib | pdf | Abstract ]

 

Paper session D - Monday 30 May 14:30-15:30

Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell, and Nicolas Misdariis.
Gestural embodiment of environmental sounds: an experimental study.
(Pages 144-148). [ bib | pdf | Abstract ]

Sebastian Mealla, Aleksander Valjamae, Mathieu Bosi, and Sergi Jorda.
Listening to your brain: Implicit interaction in collaborative music performances.
(Pages 149-154). [ bib | pdf | Abstract ]

Dan Newton and Mark Marshall.
Examining how musicians create augmented musical instruments.
(Pages 155-160). [ bib | pdf | Abstract ]

 

Paper session E - Monday 30 May 16:00-17:00

Zachary Seldess and Toshiro Yamada.
Tahakum: A multi-purpose audio control framework.
(Pages 161-166). [ bib | pdf | Abstract ]

Dawen Liang, Guangyu Xia, and Roger Dannenberg.
A framework for coordination and synchronization of media.
(Pages 167-172). [ bib | pdf | Abstract ]

Edgar Berdahl and Wendy Ju.
Satellite CCRMA: A musical interaction and sound synthesis platform.
(Pages 173-178). [ bib | pdf | Abstract ]

 

Paper session F - Tuesday 31 May 09:00-10:50

Nicholas J. Bryan and Ge Wang.
Two turntables and a mobile phone.
(Pages 179-184). [ bib | pdf | Abstract ]

Nick Kruge and Ge Wang.
MadPad: A crowdsourcing system for audiovisual sampling.
(Pages 185-190). [ bib | pdf | Abstract ]

Patrick O'Keefe and Georg Essl.
The visual in mobile music performance.
(Pages 191-196). [ bib | pdf | Abstract ]

Ge Wang, Jieun Oh, and Tom Lieber.
Designing for the iPad: Magic Fiddle.
(Pages 197-202). [ bib | pdf | Abstract ]

Benjamin Knapp and Brennon Bortz.
MobileMuse: Integral music control goes mobile.
(Pages 203-206). [ bib | pdf | Abstract ]

Stephen Beck, Chris Branton, Sharath Maddineni, Brygg Ullmer, and Shantenu Jha.
Tangible performance management of grid-based laptop orchestras.
(Pages 207-210). [ bib | pdf | Abstract ]

 

Poster session G - Tuesday 31 May 13:30-14:30

Smilen Dimitrov and Stefania Serafin.
Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos.
(Pages 211-216). [ bib | pdf | Abstract ]

Seunghun Kim and Woon Seung Yeo.
Musical control of a pipe based on acoustic resonance.
(Pages 217-219). [ bib | pdf | Abstract ]

Anne-Marie Hansen, Hans Jørgen Andersen, and Pirkko Raudaskoski.
Play fluency in music improvisation games for novices.
(Pages 220-223). [ bib | pdf | Abstract ]

Izzi Ramkissoon.
The bass sleeve: A real-time multimedia gestural controller for augmented electric bass performance.
(Pages 224-227). [ bib | pdf | Abstract ]

Ajay Kapur, Michael Darling, James Murphy, Jordan Hochenbaum, Dimitri Diakopoulos, and Trimpin.
The karmetik notomoton: A new breed of musical robot for teaching and performance.
(Pages 228-231). [ bib | pdf | Abstract ]

Adrian Barenca and Giuseppe Torre.
The manipuller: Strings manipulation and multi-dimensional force sensing.
(Pages 232-235). [ bib | pdf | Abstract ]

Alain Crevoisier and Cécile Picard-Limpens.
Mapping objects with the surface editor.
(Pages 236-239). [ bib | pdf | Abstract ]

Jordan Hochenbaum and Ajay Kapur.
Adding z-depth and pressure expressivity to tangible tabletop surfaces.
(Pages 240-243). [ bib | pdf | Abstract ]

Andrew Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl, and Simon Holland.
Hex player: A virtual musical controller.
(Pages 244-247). [ bib | pdf | Abstract ]

Carl Haakon Waadeland.
Rhythm performance from a spectral point of view.
(Pages 248-251). [ bib | pdf | Abstract ]

Josep M Comajuncosas, Enric Guaus, Alex Barrachina, and John O'Connell.
Nuvolet: 3D gesture-driven collaborative audio mosaicing.
(Pages 252-255). [ bib | pdf | Abstract ]

Erwin Schoonderwaldt and Alexander Refsum Jensenius.
Effective and expressive movements in a French-Canadian fiddler's performance.
(Pages 256-259). [ bib | pdf | Abstract ]

Daniel Bisig, Jan Schacher, and Martin Neukom.
Flowspace: A hybrid ecosystem.
(Pages 260-263). [ bib | pdf | Abstract ]

Marc Sosnick and William Hsu.
Implementing a finite difference-based real-time sound synthesizer using GPUs.
(Pages 264-267). [ bib | pdf | Abstract ]

Axel Tidemann.
An artificial intelligence architecture for musical expressiveness that learns by imitation.
(Pages 268-271). [ bib | pdf | Abstract ]

Luke Dahl, Jorge Herrera, and Carr Wilkerson.
Tweetdreams: Making music with the audience and the world using real-time Twitter data.
(Pages 272-275). [ bib | pdf | Abstract ]

Lawrence Fyfe, Adam Tindale, and Sheelagh Carpendale.
Junctionbox: A toolkit for creating multi-touch sound control interfaces.
(Pages 276-279). [ bib | pdf | Abstract ]

Andrew Johnston.
Beyond evaluation: Linking practice and theory in new musical interface design.
(Pages 280-283). [ bib | pdf | Abstract ]

Phillip Popp and Matthew Wright.
Intuitive real-time control of spectral model synthesis.
(Pages 284-287). [ bib | pdf | Abstract ]

Pablo Molina, Martin Haro, and Sergi Jordà.
Beatjockey: A new tool for enhancing DJ skills.
(Pages 288-291). [ bib | pdf | Abstract ]

Jan Schacher and Angela Stoecklin.
Traces: Body, motion and sound.
(Pages 292-295). [ bib | pdf | Abstract ]

Grace Leslie and Tim Mullen.
MoodMixer: EEG-based collaborative sonification.
(Pages 296-299). [ bib | pdf | Abstract ]

Ståle A. Skogstad, Kristian Nymoen, Yago De Quay, and Alexander Refsum Jensenius.
OSC implementation and evaluation of the Xsens MVN suit.
(Pages 300-303). [ bib | pdf | Abstract ]

Lonce Wyse, Norikazu Mitani, and Suranga Nanayakkara.
The effect of visualizing audio targets in a musical listening and performance task.
(Pages 304-307). [ bib | pdf | Abstract ]

Adrian Freed, John Maccallum, and Andrew Schmeder.
Composability for musical gesture signal processing using new OSC-based object and functional programming extensions to Max/MSP.
(Pages 308-311). [ bib | pdf | Abstract ]

Kristian Nymoen, Ståle A. Skogstad, and Alexander Refsum Jensenius.
SoundSaber - a motion capture instrument.
(Pages 312-315). [ bib | pdf | Abstract ]

Øyvind Brandtsegg, Sigurd Saue, and Thom Johansen.
A modulation matrix for complex parameter sets.
(Pages 316-319). [ bib | pdf | Abstract ]

 

Demo session H - Tuesday 31 May 13:30-14:30

Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi, and Hui-Yu Wang.
Sound low fun.
(Pages 320-321). [ bib | pdf | Abstract ]

Edgar Berdahl and Chris Chafe.
Autonomous new media artefacts (autonma).
(Pages 322-323). [ bib | pdf | Abstract ]

Min-Joon Yoo, Jin-Wook Beak, and In-Kwon Lee.
Creating musical expression using Kinect.
(Pages 324-325). [ bib | pdf | Abstract ]

Staas De Jong.
Making grains tangible: microtouch for microsound.
(Pages 326-328). [ bib | pdf | Abstract ]

Baptiste Caramiaux, Frederic Bevilacqua, and Norbert Schnell.
Sound selection by gestures.
(Pages 329-330). [ bib | pdf | Abstract ]

 

Paper session I - Tuesday 31 May 14:30-15:30

Hernán Kerlleñevich, Manuel C. Eguía, and Pablo E. Riera.
An open source interface based on biological neural networks for interactive music performance.
(Pages 331-336). [ bib | pdf | Abstract ]

Nicholas Gillian, R. Benjamin Knapp, and Sile O'Modhrain.
Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping.
(Pages 337-342). [ bib | pdf | Abstract ]

Nicholas Gillian, R. Benjamin Knapp, and Sile O'Modhrain.
A machine learning toolbox for musician computer interaction.
(Pages 343-348). [ bib | pdf | Abstract ]

 

Paper session J - Tuesday 31 May 16:00-17:00

Elena Jessop, Peter Torpey, and Benjamin Bloomberg.
Music and technology in Death and the Powers.
(Pages 349-354). [ bib | pdf | Abstract ]

Victor Zappi, Dario Mazzanti, Andrea Brogni, and Darwin Caldwell.
Design and evaluation of a hybrid reality performance.
(Pages 355-360). [ bib | pdf | Abstract ]

Jérémie Garcia, Theophanis Tsandilas, Carlos Agon, and Wendy Mackay.
Inksplorer: Exploring musical ideas on paper and computer.
(Pages 361-366). [ bib | pdf | Abstract ]

 

Paper session K - Wednesday 1 June 09:00-10:30

Pedro Lopes, Alfredo Ferreira, and J. A. Madeiras Pereira.
Battle of the DJs: an HCI perspective of traditional, virtual, hybrid and multitouch DJing.
(Pages 367-372). [ bib | pdf | Abstract ]

Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans, and Paul Stapleton.
Designing digital musical interactions in experimental contexts.
(Pages 373-376). [ bib | pdf | Abstract ]

Jonathan Reus.
Crackle: A mobile multitouch topology for exploratory sound interaction.
(Pages 377-380). [ bib | pdf | Abstract ]

Samuel Aaron, Alan F. Blackwell, Richard Hoadley, and Tim Regan.
A principled approach to developing new languages for live coding.
(Pages 381-386). [ bib | pdf | Abstract ]

Jamie Bullock, Daniel Beattie, and Jerome Turner.
Integra Live: a new graphical user interface for live electronic music.
(Pages 387-392). [ bib | pdf | Abstract ]

 

Paper session L - Wednesday 1 June 11:00-12:30

Jung-Sim Roh, Yotam Mann, Adrian Freed, and David Wessel.
Robust and reliable fabric, piezoresistive multitouch sensing surfaces for musical controllers.
(Pages 393-398). [ bib | pdf | Abstract ]

Mark Marshall and Marcelo Wanderley.
Examining the effects of embedded vibrotactile feedback on the feel of a digital musical instrument.
(Pages 399-404). [ bib | pdf | Abstract ]

Dimitri Diakopoulos and Ajay Kapur.
HIDUINO: A firmware for building driverless USB-MIDI devices using the Arduino microcontroller.
(Pages 405-408). [ bib | pdf | Abstract ]

Emmanuel Fléty and Côme Maestracci.
Latency improvement in sensor wireless transmission using IEEE 802.15.4.
(Pages 409-412). [ bib | pdf | Abstract ]

Jeff Snyder.
The Snyderphonics Manta and a novel USB touch controller.
(Pages 413-416). [ bib | pdf | Abstract ]

 

Poster session M - Wednesday 1 June 13:30-14:30

William Hsu.
On movement, structure and abstraction in generative audiovisual improvisation.
(Pages 417-420). [ bib | pdf | Abstract ]

Claudia Robles Angel.
Creating interactive multimedia works with bio-data.
(Pages 421-424). [ bib | pdf | Abstract ]

Paula Ustarroz.
Tresnanet: musical generation based on network protocols.
(Pages 425-428). [ bib | pdf | Abstract ]

Matti Luhtala, Tiina Kymäläinen, and Johan Plomp.
Designing a music performance space for persons with intellectual learning disabilities.
(Pages 429-432). [ bib | pdf | Abstract ]

Tom Ahola, Teemu Ahmaniemi, Koray Tahiroglu, Fabio Belloni, and Ville Ranki.
Raja - a multidisciplinary artistic performance.
(Pages 433-436). [ bib | pdf | Abstract ]

Emmanuelle Gallin and Marc Sirguy.
Eobody3: A ready-to-use pre-mapped & multi-protocol sensor interface.
(Pages 437-440). [ bib | pdf | Abstract ]

Rasmus Bååth, Thomas Strandberg, and Christian Balkenius.
Eye tapping: How to beat out an accurate rhythm using eye movements.
(Pages 441-444). [ bib | pdf | Abstract ]

Eric Rosenbaum.
Melodymorph: A reconfigurable musical instrument.
(Pages 445-447). [ bib | pdf | Abstract ]

Karmen Franinovic.
Flo)(ps: Between habitual and explorative action-sound relationships.
(Pages 448-452). [ bib | pdf | Abstract ]

Margaret Schedel, Rebecca Fiebrink, and Phoenix Perry.
Wekinating 000000swan: Using machine learning to create and control complex artistic systems.
(Pages 453-456). [ bib | pdf | Abstract ]

Carles F. Julià, Daniel Gallardo, and Sergi Jordà.
MTCF: A framework for designing and coding musical tabletop applications directly in Pure Data.
(Pages 457-460). [ bib | pdf | Abstract ]

David Pirrò and Gerhard Eckel.
Physical modelling enabling enaction: an example.
(Pages 461-464). [ bib | pdf | Abstract ]

Thomas Mitchell and Imogen Heap.
Soundgrasp: A gestural interface for the performance of live music.
(Pages 465-468). [ bib | pdf | Abstract ]

Tim Mullen, Richard Warp, and Adam Jansch.
Minding the (transatlantic) gap: An internet-enabled acoustic brain-computer music interface.
(Pages 469-472). [ bib | pdf | Abstract ]

Stefano Papetti, Marco Civolani, and Federico Fontana.
Rhythm'n'shoes: a wearable foot tapping interface with audio-tactile feedback.
(Pages 473-476). [ bib | pdf | Abstract ]

Cumhur Erkut, Antti Jylhä, and Reha Disçioğlu.
A structured design and evaluation model with application to rhythmic interaction displays.
(Pages 477-480). [ bib | pdf | Abstract ]

Marco Marchini, Panos Papiotis, Alfonso Pérez, and Esteban Maestre.
A hair ribbon deflection model for low-intrusiveness measurement of bow force in violin performance.
(Pages 481-486). [ bib | pdf | Abstract ]

Jonathan Forsyth, Aron Glennon, and Juan Bello.
Random access remixing on the iPad.
(Pages 487-490). [ bib | pdf | Abstract ]

Erika Donald, Ben Duinker, and Eliot Britton.
Designing the EP trio: Instrument identities, control and performance practice in an electronic chamber music ensemble.
(Pages 491-494). [ bib | pdf | Abstract ]

A. Cavan Fyans and Michael Gurevich.
Perceptions of skill in performances with acoustic and electronic instruments.
(Pages 495-498). [ bib | pdf | Abstract ]

Hiroki Nishino.
Cognitive issues in computer music programming.
(Pages 499-502). [ bib | pdf | Abstract ]

Roland Lamb and Andrew Robertson.
Seaboard: a new piano keyboard-related interface combining discrete and continuous control.
(Pages 503-506). [ bib | pdf | Abstract ]

Gilbert Beyer and Max Meier.
Music interfaces for novice users: Composing music on a public display with hand gestures.
(Pages 507-510). [ bib | pdf | Abstract ]

Birgitta Cappelen and Anders-Petter Andersson.
Expanding the role of the instrument.
(Pages 511-514). [ bib | pdf | Abstract ]

Todor Todoroff.
Wireless digital/analog sensors for music and dance performances.
(Pages 515-518). [ bib | pdf | Abstract ]

Trond Engum.
Real-time control and creative convolution - exchanging techniques between distinct genres.
(Pages 519-522). [ bib | pdf | Abstract ]

Andreas Bergsland.
The Six Fantasies Machine: an instrument modelling phrases from Paul Lansky's Six Fantasies.
(Pages 523-526). [ bib | pdf | Abstract ]

 

Demo session N - Wednesday 1 June 13:30-14:30

Jan Trutzschler.
Gliss: An intuitive sequencer for the iPhone and iPad.
(Pages 527-528). [ bib | pdf | Abstract ]

Jiffer Harriman, Locky Casey, Linden Melvin, and Mike Repper.
Quadrofeelia - a new instrument for sliding into notes.
(Pages 529-530). [ bib | pdf | Abstract ]

Johnty Wang, Nicolas D'Alessandro, Sidney Fels, and Bob Pritchard.
Squeezy: Extending a multi-touch screen with force sensing objects for controlling articulatory synthesis.
(Pages 531-532). [ bib | pdf | Abstract ]

Souhwan Choe and Kyogu Lee.
Swaf: Towards a web application framework for composition and documentation of soundscape.
(Pages 533-534). [ bib | pdf | Abstract ]

Norbert Schnell, Frederic Bevilacqua, Nicolas Rasamimana, Julien Blois, Fabrice Guedy, and Emmanuel Flety.
Playing the "MO" - gestural control and re-embodiment of recorded sound and music.
(Pages 535-536). [ bib | pdf | Abstract ]

Bruno Zamborlin, Marco Liuni, and Giorgio Partesana.
(land)moves.
(Pages 537-538). [ bib | pdf | Abstract ]

Bill Verplank and Francesco Georg.
Can haptics make new music? - fader and plank demos.
(Pages 539-540). [ bib | pdf | Abstract ]