

AUDIO!

Inside this issue
Music made of fire
Sonified zebrafish
Musical keys to romance

Issue 3

MAD about music technology


Musicology for the masses

Some people listen to music, some want to play music, others – 'musicologists' – study it too. They are interested in all sorts of questions. What makes good music and why? What is the history of music? How does it vary across cultures? One branch of musicology is about analysing music, and that is where technology can help...and even spin off whole new industries.

If the audio engineers can come up with programs that make it easy to analyse music, to search it in new ways and to visualise it, then professional musicologists, hobby musicians and music lovers alike can benefit. That is what Queen Mary's 'Musicology for the Masses' project is exploring.


In this issue of Audio! we explore research that is applying musicology to help the masses: technology that will help bands make better recordings; how to create new formats for storing music; how analysing the acoustics of music venues has led to tools for recreating the sounds of any building; a new way to visualise music collections; and how musicologists are even studying what goes on in schools.


Ukulele Hero

Mastered the entire "Rock Band" series? Achieved global fame touring the world on "Guitar Hero"? If so, perhaps it's time for you to crank the volume and wow those imaginary crowds with your fret-melting chops on... "Ukulele Hero"!

The ukulele is a small Hawaiian guitar-like instrument with four strings that originated in the 19th century. Okay, so Ukulele Hero isn't quite as rock 'n' roll as its guitar-based counterparts. It's also not strictly a game, but rather an interactive virtual instrument application that simulates the strings and frets of a real ukulele. It was created by Duncan Menzies, a graduate student on the Media and Arts Technology programme at Queen Mary. Using Duncan's custom-built hardware controller, the player can strum actual chords as though they were playing a genuine acoustic instrument. By connecting the controller to a computer, the software produces both the audio and an on-screen visualisation of the frets and strings being played.

Notes are played by pushing 16 buttons that correspond to the positions behind the frets of each string. The button in the top right corner corresponds to the position behind the first fret of the highest string, and so on. You 'pluck' strings by moving a slider up and down. It is made from a 'potentiometer', which is just a variable resistor that changes the input voltage as you slide it.
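To get a feel for the idea, here is a tiny Python sketch of how a controller like that might interpret its inputs. It is purely illustrative: Duncan's real software is written in C++ with openFrameworks, and the exact button layout and strum threshold here are our assumptions.

```python
# Hypothetical sketch of a Ukulele Hero style controller (not Duncan's
# actual code): 16 buttons = 4 strings x 4 fret positions, plus a slider
# whose changing voltage reading is treated as a strum.

NUM_STRINGS, NUM_FRETS = 4, 4

def button_to_position(button_index):
    """Map a button number (0-15) to (string, fret).

    We assume button 0 is the top-right corner: the position behind the
    first fret of the highest string."""
    string = button_index % NUM_STRINGS   # which string the button sits on
    fret = button_index // NUM_STRINGS    # which fret position it selects
    return string, fret

def detect_strum(previous_reading, current_reading, threshold=0.05):
    """A strum is a big enough change in the slider's voltage reading
    (the potentiometer changes the input voltage as it moves)."""
    return abs(current_reading - previous_reading) > threshold

# Example: button 0 -> string 0 (the highest), fret 0 (the first fret)
print(button_to_position(0))
```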

Ukulele Hero was written using an open source "creative coding" environment called openFrameworks, which is essentially a set of libraries for the programming language C++. OpenFrameworks is a great toolkit for creating interesting applications quickly, even with little or no programming experience, as it provides an easy way to use powerful libraries of code that other programmers have already written. So if you are into both electronics and music, why not have a go yourself – not at playing Ukulele Hero, but at creating your own digital instrument. For a video demonstration of Ukulele Hero, follow the links from www.audio4fn.org.

Musicology in school

One place where people talk about their different ideas of music is secondary school music classes, so that is where the musicology researchers from Queen Mary are going. They are applying a research technique known as 'ethnography': observing classes in depth. The aim is to provide a rich understanding of how people's common-sense understanding of music relates to the concepts taught in class. They will also look at how the ideas that emerge relate to the information that can be analysed and visualised using digital music technologies. The results will then drive the development of new technologies to support music education.

If you want to make a start looking into music files rather than just listening to them, then why not download Sonic Visualiser from www.sonicvisualiser.org? It is being extended as part of the Musicology for the Masses project.

Audio! Action
Former Beatle George Harrison was a ukulele fan and had a big collection of ukes.


Romantic songs and the dating game

If you struggle to get dates, it might just mean that you play the wrong music! A team of French researchers has shown that playing the right music before asking for a date can increase a person's seductive powers.

Many studies have shown that media can affect our behaviour. Violent video games or music with aggressive lyrics can increase aggressive feelings, thoughts and behaviours. Do we also act differently when listening to romantic songs? That is what the team from the Bretagne-Sud and Paris-Sud Universities decided to find out. In a previous experiment, they had shown how romantic songs played as background music in a flower shop encouraged men to spend more money. They decided to design a new experiment to test whether songs with romantic lyrics would influence the dating behaviour of young single women.

First they had to set up the experiment. Based on the answers of participants to music questionnaires, they selected a 'neutral' song ("L'heure du thé" by Vincent Delerm) and a romantic song ("Je l'aime à mourir" by Francis Cabrel) to use in the experiment. They also asked a group of women to rate the physical attractiveness of twelve men. For the next stage, they wanted someone not too attractive, but not too repulsive either, so they chose 'Antoine', the man whose score was nearest the average for the twelve.

Over 80 women aged between 18 and 20 took part in the main experiment. First, each woman listened to background music in a waiting room. She was then invited into a separate room to discuss the differences between two food products with a young man (actually Antoine). This was done in the presence of the experimenters, but they then left, asking the man and woman to wait for a few minutes. This gave Antoine the opportunity to try a straightforward chat-up line: "My name is Antoine, as you know. I find you very good looking and I was wondering if you'd agree to give me your telephone number. I'll call you and we could have a drink together next week." (It presumably sounds very seductive in French!)

Now the intriguing bit. It turned out that when the romantic song had been playing in the waiting room, Antoine's chances of getting a telephone number almost doubled: 52% of the women who heard the Cabrel song agreed to give their number, compared to 28% of those who heard the 'neutral' song by Delerm.

These findings fit a general model proposed back in 2006 that aims to explain the effects of media exposure. It suggests that media exposure in general (not only violent or aggressive media) affects people's internal states. 'Pro-social' media (as opposed to antisocial media) should favour pro-social results.

Why can our behaviours be influenced by music? Certain types of music may induce positive 'affects'. In psychology, an 'affect' is the experience of having a feeling or emotion. We are more receptive to declarations of love after romantic songs because they induce positive affects – they give us a positive experience, and that makes us susceptible to positive behaviour. Is it the romantic meaning of the lyrics or the way the instrumental music was composed that matters most? Answering that will take some new studies. In the meantime, if you are single, then pay attention to the background music and it may well be love at first sight…

Parts of this article were translated from the French at: www.insoliscience.fr/?Les-chansons-romantiques-le-truc




Audio! Action
Mozart once helpfully composed a canon where the second melody was the same as the first but backwards and upside down. That meant it could be read by two players from opposite sides of the music sheet.

Women gave out their telephone numbers more often if they'd been listening to romantic music.



Combing for sound

Your band has played some gigs and now you are going to record a demo. You set up the equipment, scatter microphones around, hit record and off you go. You have a great time and it felt really good. You play back the track and...it sounds horrible! The bass has all but disappeared, and what are those other weird sounds? Surely you weren't playing that badly. Probably not, but maybe you need to learn a bit more about microphones and effects called 'microphone bleed' and 'comb filtering'. With any luck though, some time soon Alice Clifford from the Centre for Digital Music at Queen Mary, University of London will have made the problem a thing of the past.

A microphone is really easy to use. It will reproduce any sound in a specific area around it as an electrical signal, but anything from elsewhere isn't picked up. That means to record an acoustic musical instrument (say, a saxophone) you point the microphone directly at it. If it's an electric instrument (like an electric guitar) you point the microphone at the amplifier. If the sound source falls within the sensitive area of the microphone, the sound of the instrument can be reproduced and so played back through speakers or recorded to hard disk.

There isn't that much that can go wrong with this, but there can be problems. Other instruments could be picked up in the microphone's signal (that's 'microphone bleed'), or there might be some reverberation or random noise on the microphone signal. You solve problems like this using soundproofing and by being careful about where you place the microphones. The idea is to make sure the interfering sounds end up in those areas that the microphone can't 'hear'.

Often you want to record more than one instrument at the same time, or several microphones need to be placed close together to record one instrument, like a drum kit. This leads to a specific problem when more than one microphone picks up the same sound. When the microphone signals are mixed together, copies of that sound are added together. The trouble is that they may have taken different amounts of time to get to the different microphones, so may not be perfectly synchronised.

This is where 'comb filtering' comes in. Why that name? It has nothing to do with bad hair days, even though that's what it makes your music sound like! It is because the electronic signal involved looks like a comb. In signal processing, a 'filter' is a device or process that modifies features of a signal. In this case the process of combining the out-of-time sounds can be thought of as a filter. The sounds interfere with each other in a way that cuts out some groups of frequencies whilst boosting others. Draw a picture of the signal's interference, with its spikes and gaps at different frequencies, and it looks like the teeth of a comb.
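If you fancy seeing the comb for yourself, here is a minimal Python sketch of the effect. It simply works out, frequency by frequency, how loud a signal ends up when it is summed with a delayed copy of itself; the delay length is an arbitrary choice for illustration.

```python
# A minimal comb-filtering demo: mixing a signal with a delayed copy of
# itself cuts some frequencies and boosts others in a comb-shaped pattern.
import numpy as np

sample_rate = 44100
delay_samples = 20                     # ~0.45 ms between the two "microphones"

# Gain at each frequency when x[n] and x[n - delay] are summed:
# |H(f)| = |1 + exp(-2j*pi*f*delay/sample_rate)|
freqs = np.arange(0, sample_rate // 2 + 1)
gain = np.abs(1 + np.exp(-2j * np.pi * freqs * delay_samples / sample_rate))

# The notches (the comb's gaps) sit where the copy arrives out of phase
notches = freqs[gain < 0.05]
print(notches[:3])   # the first few near-cancelled frequencies, in Hz
```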

As well as other weird sounding effects, comb filtering can lead to large groups of frequencies being cut out that should have been kept. For example, you may find your bass guitar recording suddenly has no bass frequencies in it. The effects can be great for giving expression to a guitar solo, but it isn't what you want on a classical harp recording.

You can reduce the problem by paying attention to the positions of microphones in relation to each other and to other instruments. Ideally all microphones would be placed exactly the same distance from each instrument, but that is impractical. A standard rule used by sound engineers is the '3 to 1' rule: always place a second microphone or instrument at least 3 times the distance from the first instrument to the first microphone. The idea is to make the strength of the sound in the second one low enough that it doesn't affect the first one. Basically, the aim is to reduce microphone bleed simply by placing the instruments and microphones a long way from each other.
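A quick back-of-envelope calculation shows why tripling the distance helps, assuming simple inverse-square spreading of sound from a point source:

```python
# Why the '3 to 1' rule works: for a point source, tripling the distance
# drops the level by 20*log10(3) ~ 9.5 dB, quiet enough that the bleed
# barely colours the mix.
import math

def level_drop_db(distance_ratio):
    """Level difference when a point source is `distance_ratio` times farther."""
    return 20 * math.log10(distance_ratio)

print(round(level_drop_db(3), 1))   # ~9.5 dB down at 3x the distance
```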

So next time you are recording your band, you should pay close attention to where you place the microphones. Sound engineers painstakingly position microphones for good reasons! If the audio that comes out sounds different from what you expect, try moving the microphones, as it might improve things. The only trouble is that perfect positioning is not always possible. It would be much better if you could just do some clever audio processing after the sounds have been picked up by the microphone.

This is where Alice Clifford's work comes in. She has set herself the problem of how to analyse audio signals and apply processing to remove the comb filtering effect. One way this can be done is by "nudging" audio regions in a digital recorder so that visually they line up and so will be played back at the same time. Often the comb filtering may not be apparent, but as soon as the audio is nudged into line, the kick drum has more of a thump or a guitar has more presence. Of course, nudging regions by hand may not be very accurate. Alice is aiming to come up with a way that solves the problem completely. If successful, it will mean that in future sound engineers would not have to move the microphones at all.
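One classic way to do the nudging automatically is cross-correlation. The sketch below shows the general technique (it is not Alice's algorithm): estimate how far one microphone's signal lags behind another's, then shift it back into time.

```python
# Estimate the delay between two microphone signals with cross-correlation,
# then "nudge" the late one back into line.
import numpy as np

def estimate_delay(reference, delayed):
    """Return how many samples `delayed` lags behind `reference`."""
    correlation = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(correlation)) - (len(reference) - 1)

# Toy example: the same click arrives 30 samples later at the second mic
signal = np.zeros(1000); signal[100] = 1.0
mic_two = np.roll(signal, 30)
lag = estimate_delay(signal, mic_two)
aligned = np.roll(mic_two, -lag)          # nudge it back into time
print(lag)                                 # 30
```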

So at the moment you have to be extra careful over where you place the microphones when recording your band. With any luck though, Alice will solve the problem, and her signal processing software will mean messing about with the position of microphones before recording will be a thing of the past.





MP3: something's gotta change!

The music industry has gone through a massive upheaval as a result of digital music, and more and more interactive music services, from iTunes and Spotify to sonarflow, appear all the time. If the revolution is to continue at the same pace though, something's gotta change!

The original digital music revolution wasn't just on the back of the idea of recording and storing music digitally, but because everyone agreed on a common music format – the MP3 format. MP3 is just a particular code for storing music digitally – as strings of 1s and 0s. There are lots of different ways of doing that, and if different record companies were to release music using different formats – a different code – it would be chaos. Everyone would need lots of different players and, worse, need to know which one was needed to play which tracks. Confusion would reign. Having a common format means that anyone who has an MP3 player can play any music. Better still, it means anyone with a great new idea can create a new service based on it and everyone else will have the capability to use it. The trouble is there is a lot you can't easily do with MP3, so some great ideas just won't get off the ground if we stick with it.

The thing about MP3 is that it follows the tradition of CDs and records, and only stores the music in its final mixed form. When music is recorded, lots of different tracks are recorded separately. In the simplest situation, the vocals and the music might be recorded and stored separately. In reality many more tracks than that are recorded – each separate instrument might be recorded separately, for example. The producer then mixes the tracks together to give the final music that you download.

Because all the music has been integrated into mixed tracks before it is distributed, digital music isn't very flexible at the moment. If instead all the original tracks were separately available in the stored music file, lots of new applications would be possible. That is what the new 'Interactive Music Application Format', known as 'IM AF', does. It is the brainchild of an organisation known as MPEG – the Moving Picture Experts Group – the same group that came up with MP3 in the first place.

IM AF allows you to store and share music as lots of audio tracks and combine them with other information too. That creates lots of exciting possibilities for new ways of experiencing music. Listeners can listen to different producers' mixes of the same music, for example – just store the mix settings in the IM AF file. This kind of multi-track interactive music format allows you to do more than listen passively though. You can also interact with it. For example, why not experiment with your own mixes? Lots of things become simple to do. To create a karaoke version of a single to sing along to, you just pick out the music tracks and drop the vocals. To practise your musical instrument skills, isolate the instruments you want to play along with – either the one you are playing or some others to accompany you. You could also emphasise the melody, harmony or rhythm of a particular piece of music depending upon your personal tastes. If you don't want to do all this yourself, don't worry, as services will spring up to do it at the touch of a button. Ways to legally share your modified versions with your friends also become a possibility, if some budding entrepreneur wants to set up a service to support it.
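Here is the idea in miniature as a Python sketch. It is illustrative only, not the actual IM AF API: once the stems are stored separately, a karaoke version is just a remix with the vocal track's gain set to zero.

```python
# With stems stored separately, new mixes are one line of code each.
import numpy as np

def mix(stems, gains):
    """Mix named stems (equal-length sample arrays) with per-track gains."""
    return sum(gains.get(name, 1.0) * audio for name, audio in stems.items())

# Toy stems: one second of silence standing in for real recordings
stems = {name: np.zeros(44100) for name in ("vocals", "guitar", "bass", "drums")}

full_mix = mix(stems, {})                   # the producer's default mix
karaoke = mix(stems, {"vocals": 0.0})       # drop the vocals, keep the band
practice = mix(stems, {"guitar": 0.0})      # mute the part you'll play yourself
```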

This simple idea of a change in the way music is stored means not only that music listening becomes more flexible, but that we could all become creative music producers if we want to. Better still, if you want to be a digital music entrepreneur, a whole new space of ideas for services is just around the corner.





Sing in the Royal Albert Hall while staying at home

In the early days of music recording, producers would find the best sounding halls for bands to play in to get the most interesting acoustics on the record. But then the budgets for hiring interesting venues disappeared. Instead, vocalists were put in acoustically 'dry' spaces in the studio that had no character. That led to attempts to add character to recordings using 'artificial reverb'. Early reverb systems used metal plates or springs to emulate the sound of multiple echoes from walls. These didn't give the fine control needed to copy the sound of a particular room though, so 'digital reverb' was born.

Digital reverb involves creating those interesting acoustics on a computer and applying them to the recording afterwards. Building a reverb that sounds exactly like a real room is very difficult, so it's taken some clever engineering to design a way to capture a room's sound. The first step is simple. You just play a known sound through a speaker in the room and record it. The sound used is usually what is called a 'swept sine wave' – a sound wave that goes from the lowest rumbling frequencies up to the high frequencies only dogs can hear. That kind of sound is easy to identify, so we can take it away from what ended up being recorded to obtain a sort of acoustic fingerprint of the room – technically it's called the 'room impulse response'. Doing this slowly isn't too difficult, but some complicated mathematical trickery is needed to do it as the recording is happening, without a noticeable delay. The process of removing the sweeping sine wave, known as 'deconvolution', is a neat trick. It means we can then combine ('convolution') any dry audio recording with the fingerprint of any room, even if the room is on the other side of the world. Our up-and-coming vocalist's new album can sound as if it was recorded in any room we like, even though she stays in that boringly dry studio. This basic idea has also been used by researchers at the Helsinki University of Technology to create a 'Rehearsal Hall with Virtual Acoustic for Symphony Orchestras': a system for musicians to do real-time performance and practising.
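The capture-and-reapply trick fits in a few lines of Python. This is a sketch of the principle under ideal conditions, not production code: recover a room's fingerprint by dividing out the known test sound in the frequency domain, then convolve any dry take with it.

```python
# Deconvolution to capture a room, convolution to reapply it.
import numpy as np
from scipy.signal import fftconvolve

def impulse_response(test_sound, recording):
    """Deconvolve: divide out the known test sound in the frequency domain."""
    n = len(test_sound) + len(recording) - 1
    spectrum = np.fft.rfft(recording, n) / (np.fft.rfft(test_sound, n) + 1e-12)
    return np.fft.irfft(spectrum, n)

def add_room(dry_take, room_ir):
    """Convolve: make a dry studio take sound as if played in the room."""
    return fftconvolve(dry_take, room_ir)

# Toy check: a "room" that is just an echo 0.1 s later at half volume
fs = 44100
room = np.zeros(fs); room[0] = 1.0; room[4410] = 0.5
test_sound = np.random.randn(fs)             # stand-in for a swept sine
recorded = fftconvolve(test_sound, room)     # what the room microphone hears
estimated_ir = impulse_response(test_sound, recorded)
```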

On the other side of the Atlantic, researchers at McGill University in Canada have used the same approach with a different purpose. They've created a system for measuring the musicality of musicians as they rehearse. First the room where the musicians will play is acoustically treated so that it doesn't have any big echoes or sound character of its own anymore. Then lots of loudspeakers are set up around the musicians. Each instrument or instrument group has a microphone, and its sound is combined with the acoustics of a different room. That combined sound is fed back to the individual musician through the loudspeakers. The overall effect is that each musician can play listening to the acoustics from their favourite venue. Experiments have shown that this improves musicality in rehearsals, and a happy musician leads to a more exciting performance.

The technology can also be used to practise in advance as if in the venue where you are going to perform, to help composers understand what their piece will sound like at its premiere, and to let orchestras choose a different venue for each piece. Have you ever wondered what your voice would sound like in the Royal Albert Hall? Install the technology at home and you could practise your singing there from the comfort of your bedroom.




And the beat goes on... forever?

Hip-hop music is usually around 80 beats per minute (bpm). Pop music is faster, around 120 bpm, and dance music can be anything from 120 up to around 200 bpm. But what if you had a beat that kept speeding up forever? How fast could you possibly get without it just becoming a blur?

The composer and computer musician Jean-Claude Risset came up with a way that a rhythm could keep speeding up forever, and yet however much it speeds up it sounds the same. It's actually an audio illusion: even though it sounds like the music is speeding up, you are just listening to the same music on a loop. It's done by combining different versions of the rhythm at different relative speeds (for example, 30 bpm as well as 60 bpm, 120 bpm, 240 bpm...) so that as the beat speeds up, different "layers" can take over as the main rhythm we perceive.
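You can build a bare-bones version of the illusion yourself. The Python sketch below is our own illustration of the idea (not Dan Stowell's code): several click tracks at tempos an octave apart, each faded in as it becomes slow enough to follow and faded out again as it becomes a blur, so the loop joins up seamlessly.

```python
# A minimal Risset rhythm: looping `output` sounds like a beat that
# accelerates forever.
import numpy as np

fs, loop_seconds, layers = 44100, 8.0, 4
t = np.arange(int(fs * loop_seconds)) / fs
phase = t / loop_seconds                      # goes 0 -> 1 over one loop
output = np.zeros_like(t)

for layer in range(layers):
    # Each layer's tempo doubles over the loop, offset one octave per layer
    log_tempo = np.log2(30.0) + layer + phase         # tempo = 2**log_tempo bpm
    beats = np.cumsum(2.0 ** log_tempo / 60.0 / fs)   # running beat count
    clicks = (np.diff(np.floor(beats), prepend=0) > 0).astype(float)
    # Raised-cosine fade: silent at the slow and fast extremes
    gain = 0.5 - 0.5 * np.cos(2 * np.pi * (layer + phase) / layers)
    output += gain * clicks
```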

Music researcher Dan Stowell came up with a way to apply Risset's effect to any breakbeat rhythm. Follow the links from www.audio4fn.org and listen to the beat – tap along and see how fast you can get...


Music, Fire and Smoke

Music is all about strumming strings, hitting things or huffing and puffing into tubes. Fire and smoke don't have much to do with it – well, unless you are at a rock concert where they have decided to set off some pyrotechnic bangs maybe. So it may come as a surprise that people have been making music with fire for at least a century, that smoke may help make better microphones, and that flames give a new way to watch music. Dan Stowell tells us more:

Sound sculpting with fire

Rubens' tube: a six-foot tube with flames pouring out, dancing in response to sound. Music has never been so hot.

The normal way to visualise sound is using an EQ meter. It is just a sound meter with bars of lights driven by the sounds. Each bar is linked to a range of frequencies (essentially a group of notes). The more energy from those frequencies in a sound, the more of the lights in that bar light up.
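What an EQ meter computes is easy to sketch in Python. This simplified version splits the spectrum into a few bands (the band edges here are arbitrary choices) and measures the energy in each:

```python
# More energy in a band -> more of that bar's lights lit.
import numpy as np

def band_energies(samples, fs, bands):
    """Return the signal's energy within each (low, high) frequency band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)            # a single A note
bands = [(0, 200), (200, 800), (800, 3200), (3200, 12800)]
print(band_energies(tone, fs, bands))         # nearly all energy in 200-800 Hz
```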

Rubens' tube does a similar thing. A direct, physical reaction to sound is created by placing a loudspeaker at one end of a horizontal gas-filled tube. Gas comes out of 200 holes in the tube. With no sound it would come out of each hole at the same rate, but sound from the speaker creates standing waves down the tube. They are just waves that stay in the same place. It's a bit like being on a beach where the waves don't rush towards the shore but all just stay put. Now imagine the waves are in a tube. Having a standing wave in a tube means the pressure is greatest where the crests of the wave are, and least in the troughs, because more gas is squeezed into the same space at the crests. The greater the pressure, the higher the flame is forced out at that point. There's no digital trickery involved; it's driven entirely by the physics of acoustic waves in tubes. The effect is that 200 flames dance around in response to changing sounds made into a microphone. It's an EQ meter made out of a gas fire!






Photo CC-BY-NC-SA Stephen J Johnson




Fire Music: the Pyrophone

In Rubens' tube, fire is used to make sound visible. But flames aren't just affected by sound; they can be used to make sound too. A pyrophone is a flame organ that uses small explosions or combustion to generate musical notes.

A "normal" organ uses air blown across large cylindrical pipes to start the air columns vibrating and so create sounds, just as blowing across the top of a glass bottle creates a note. For some people that's not exciting enough though. A pyrophone usually involves connecting a supply of gas to each pipe. A burst of flame in the bottom of a pipe is used to set the air vibrating and create each note.

The idea of making music with flames might seem pretty steampunk, but it's been around a while. The German composer Wendelin Weissheimer was playing a pyrophone in the late 19th century.

To see and hear a pyrophone in action, follow the links from www.audio4fn.org

Photo CC-BY-SA Thomas Goller

A Microphone made of smoke

What good would a microphone made of smoke be? Potentially incredibly sensitive, it turns out. Microphones turn sound – waves of movement of the air – into electrical signals. Most microphones have a moving part inside them, a diaphragm whose movement is used to detect the moving air and so pick up the audio signal. Rather than use a diaphragm, father-and-son team David and Daniel Schwartz have invented a microphone that uses a single plume of smoke instead.

Why? The physical mass of the diaphragm in a normal microphone limits how responsive it can be. Soft sounds may not be strong enough to push the diaphragm, so would not be detected. David Schwartz noticed that smoke rising from an oil lamp moved in response to his voice, and wondered if that movement could be detected using a laser, to create an ultra-lightweight way of sensing sound. Son Daniel wondered if he could actually make one. A few months later the Schwartzes had made a prototype of their new microphone.

The device shines a laser pointer through a thin plume of smoke rising through a tube. Any sound disturbs the smoke and makes it vibrate, and that changes the amount of laser light that can pass through. A sensitive light detector on the other side detects the light and turns it into an electrical signal. Out of the smoke you have a microphone!

To see a video of the smoke microphone, follow the link from the Audio! website: www.audio4fn.org


Zooming in on music

Digital music collections, both personal and professional, are constantly growing. It's getting increasingly difficult to find the track or sample you want amongst the tens of thousands you own, never mind the millions of titles in online music stores. And how on earth do you browse music for particular sounds? Are we doomed to constant frustration, poring through lists and lists of tracks, failing to find songs we know are in there somewhere? Perhaps not.

Spot that tune

At the moment, the main way of searching collections is based on textual information added to the tracks that describes them. This text is known as 'metadata'. It means you can search for keywords in that textual information and get Google-like lists of results back – you are just doing text searches. The trouble is the searches can only be as good as the metadata that has been added.

A more interesting approach is to actually search the sounds by analysing the audio data itself – search the music, not the text that has been written about it. Over the last decade, algorithms have been developed that can automatically detect the rhythm, tempo, genre or mood of a song, for example. This is known as 'semantic audio feature extraction'. Once you can detect these features of music, you can use them to work out how similar two pieces of music are. That can then be the basis of a search system where you search for tracks similar to a sample you already have, browsing through the results. This approach can also be used in music recommendation systems to suggest new tracks similar to ones you already like.
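Here is the idea in miniature as a Python sketch, purely illustrative rather than a real music search system: describe each track by a few numbers computed from its audio, then rank tracks by how close those numbers are.

```python
# 'Semantic audio feature extraction' in miniature.
import numpy as np

def features(samples, fs):
    """A tiny feature vector: loudness, brightness, noisiness."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / fs)
    loudness = np.sqrt(np.mean(samples ** 2))                 # RMS level
    brightness = (freqs * spectrum).sum() / spectrum.sum()    # spectral centroid
    noisiness = np.mean(np.abs(np.diff(np.sign(samples)))) / 2  # zero crossings
    return np.array([loudness, brightness / fs, noisiness])

def similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Real systems use far richer features than these three numbers, but the shape of the approach is the same: tracks whose feature vectors are close together probably sound alike.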

Exploring bubbles

Having broken away from searching text comments, why not get rid of those lists of results too? That is what the company Spectralmind's 'sonarflow' does. Spectralmind is a spinout company of Vienna University of Technology, turning the university's research into useful products by combining work on the analysis of music with innovative ways of visualising it.

In particular, what sonarflow does is provide a way to browse tracks of music visually based on how similar they sound. When you start to browse you are given a visual overview of the available content, shown as "bubbles" that roughly cluster the tracks into groups reflecting musical genres. Similar music is automatically clustered into groups, but those groups are clustered into groups themselves, and so on. So once you have chosen a genre you can zoom into it to reveal ever more bubbles that group the tracks with finer and finer distinctions, until eventually you get to the individual tracks. Any available textual information (i.e., metadata) about the music is also clustered with the bubbles to enrich the browsing experience and provide navigation hints.

Immediate playback is possible at any time and at any level. This means that you can choose between playing individual tracks or entire clusters of music with a similar sound or mood. Sonarflow can also be used to recommend new music, because it is easy to explore the neighbourhood of well-known songs to discover new titles that have a similar sound.

All the problems have not been solved: searching through millions of versions of anything isn't easy, never mind when you can't see the thing you are looking for. But by combining fun ways to view music with clever ways of recognising audio features, searching through those endlessly frustrating lists may soon be a thing of the past.


What's all this 'metadata'?

The Greek prefix meta is used to mean 'about'. So metadata is 'data about data': who produced it, when, why, and so on. Metadata is commonly used in digital audio file formats like MP3 to describe the tunes. A song's name and the artist or band that composed it are examples of metadata typically embedded in music files to identify the tracks. When you browse through alphabetical lists in online music collections you are reading metadata.
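You can peek at this metadata yourself. Here is a short Python example using the third-party mutagen library ("song.mp3" is just a placeholder filename):

```python
# Read the ID3 metadata embedded in an MP3 file.
from mutagen.easyid3 import EasyID3

tags = EasyID3("song.mp3")
print(tags.get("title"), tags.get("artist"))   # data about the data
```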




There are currently sonarflow apps for the web, iPhone and iPad – you can try a version of sonarflow for free from the Apple App Store. See also www.sonarflow.com

Photo courtesy of Spectralmind OG


Audio! Action
Recordings of piano music played backwards sound like they are played on an organ, as the tones start quietly and grow in volume.



Sonifying zebrafish biology

Biologists often analyse data about the cell biology of living animals to understand their development. A large part of this involves looking for patterns in the data to use to refine their understanding of what is going on. The trouble is that patterns can be hard to spot when hidden in the vast amounts of data that are typically collected. Humans are very good at spotting patterns in sound though – after all, that is all music is. So why not turn the data into sound to find these biological patterns?

In hospitals, the heartbeats of critically ill patients are monitored by turning the data from heart monitors into sounds. Under the sea, in (perhaps yellow) submarines, "golden ear" mariners use their listening talent to help with navigation and to detect potential danger for fish and the submarine. They do this by listening to the soundscapes produced by sonar, built up from echoes from the objects round about. This way of using sounds to represent other kinds of data is called 'sonification'. Perhaps similar ideas can help to find patterns in biological data? An interdisciplinary team of researchers from Queen Mary, including biologist Rachel Ashworth, audio experts Mathieu Barthet and Katy Noland, and computer scientist William Marsh, has been trying the idea out on zebrafish.

Why zebrafish? Well, they are used a lot for the study of the development of vertebrates (animals with backbones). In fact the zebrafish is what is called a 'model organism': a creature that lots of people do research on as a way of building a really detailed understanding of its biology. The hope is that what you learn about zebrafish helps you understand the biology of other vertebrates too. Zebrafish make a good model organism because they mature very quickly. Their embryos are also transparent. That is really useful when doing experiments, because it means you can directly see what is going on inside their bodies using special kinds of microscopes.

The particular aspect of zebrafish biology the Queen Mary team has been investigating is the way calcium signals are used by the body. Changes in the concentration of calcium ions are important as they are used inside a cell to regulate its behaviour. These changes can be tracked in zebrafish by injecting fluorescent dyes into cells. Because the zebrafish embryos are transparent, whatever has been fluorescently labelled can then be observed.

The Queen Mary team has developed software that detects calcium changes by automatically spotting the peaks of activity over time. They relied on a technique that is used in music signal processing to detect the start of notes in musical sequences. Finding the peaks in a zebrafish calcium signal and the notes in the Beatles' Day Tripper riff may seem to be light years apart, but from a signal processing point of view the problems are similar. Both involve detecting sudden bursts of energy in the signals. Once the positions of the calcium peaks have been found, they can then be monitored by sonifying the data.
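The general recipe is easy to sketch in Python. This simplified version (not the team's actual software) finds sudden peaks in a signal and then sonifies them by placing a short tone at each one:

```python
# Peak detection plus sonification: every detected burst becomes a beep.
import numpy as np
from scipy.signal import find_peaks

def calcium_peaks(signal, fs):
    """Return the times (in seconds) of sudden peaks of activity."""
    peaks, _ = find_peaks(signal, prominence=0.5)
    return peaks / fs

def sonify(peak_times, audio_fs=44100, pitch=880.0):
    """Place a short decaying beep at each detected peak time."""
    out = np.zeros(int(audio_fs * (max(peak_times) + 0.5)))
    beep_t = np.arange(int(audio_fs * 0.1)) / audio_fs
    beep = np.sin(2 * np.pi * pitch * beep_t) * np.exp(-30 * beep_t)
    for time in peak_times:
        start = int(time * audio_fs)
        out[start:start + len(beep)] += beep
    return out
```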

What the team has found using this approach is that the calcium activity in the muscle cells of zebrafish varies a lot between the early developmental stages of the embryo and the late ones. You can have a go at hearing the difference yourself – follow the links at www.audio4fn.org where you can hear the sonified versions of the data.





Moving forward by modelling the past...

Digital audio has brought the ease of manipulating sounds to everyone with a computer, but it seems musicians want things from the past, not the future: the sound of the first 'Fender Stratocaster' guitar, the 'pre-EB Stingray' bass guitar, the 'Fairchild 670 compressor' and the 'Pultec EQP-1A', for example, all make aficionados drool.

There is a huge market for recreating the sound of these classics, and the most affordable way is using software rather than rebuilding the instruments. Two promising ways are described here. One way, 'physical modelling', is to build a mathematical model in software based on the hardware counterpart. This can be done in varying amounts of detail and sophistication, ranging from modelling a section of a hardware device down to modelling the components used. When building the physical models, the researchers measure how the original alters the signal of various audio sources. After this they create a mathematical formula to use in the software that best matches the performance of the actual hardware. It is important that they also do listening tests to fine-tune the model, as it is the 'sound' of the original that the user wants and not just a matching graph!

Compared to the original, creating an instrument in software with physical modelling has some distinct advantages: you are not limited to the number of pieces of kit you own – just click to add another plugin. Furthermore, software is usually a lot cheaper than hardware and needs none of the latter's hands-on maintenance (well ok, so there might be an update occasionally to install!). There is one drawback to the modelled version in software though. Because the developers are always trying to match the characteristics of the original, there always comes a point where the software designers say "that sounds close enough", but it's never 100% exact.

There is a second method, called 'dynamic convolution', that has been used by the company Focusrite, with a slightly different version used by the company Acustica-Audio. It is based on the idea of taking an acoustic fingerprint of a room – extracting the acoustics of a room so that virtual versions of it can be created (see the Royal Albert Hall article). Dynamic convolution takes a fingerprint not of a room but of the gear in question, for the different combinations of knob position and input level. As there are lots of combinations, this leads to a huge amount of information being stored. Often only a few positions are recorded to save time and file size, but the more combinations stored, the more accurate the result. Settings between those stored can also be reproduced by combining those from the knob positions on either side. There are at least 44100 samples for one second of CD quality sound, and each distinct sample can use a different one of those virtually created settings. This method means that things like distortion and other characteristics of audio hardware can be reproduced better. Computers can only process so much information at once though, so the accuracy of the fingerprints sampled, which in theory are perfect, has to be cut down to a point where it is deemed that people can't hear the difference. This is called 'truncation'. So like the modelling method, there is also a point where the software is as close as possible, but it will never be 100% perfect.
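The spirit of the method fits in a short Python sketch. This is a toy illustration, not Focusrite's or Acustica-Audio's implementation, and the fingerprints here are made-up numbers: store one impulse response per measured knob setting, and blend between the two nearest settings for positions in between.

```python
# Dynamic convolution in miniature: one fingerprint per knob setting.
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical fingerprints measured at knob positions 0.0, 0.5 and 1.0
stored = {0.0: np.array([1.0, 0.1]),
          0.5: np.array([1.0, 0.4, 0.1]),
          1.0: np.array([0.9, 0.6, 0.3])}

def response_for(knob):
    """Interpolate between the stored settings on either side of `knob`."""
    positions = sorted(stored)
    lo = max(p for p in positions if p <= knob)
    hi = min(p for p in positions if p >= knob)
    if lo == hi:
        return stored[lo]
    weight = (knob - lo) / (hi - lo)
    size = max(len(stored[lo]), len(stored[hi]))
    pad = lambda ir: np.pad(ir, (0, size - len(ir)))
    return (1 - weight) * pad(stored[lo]) + weight * pad(stored[hi])

processed = fftconvolve(np.random.randn(44100), response_for(0.75))
```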

Which method is best? There's no clear answer. Dynamic convolution is the most likely to deliver the closest sound to the hardware, as measurements from the exact unit are used by the software, but it can be very CPU-intensive to use – it takes lots of computing power. That's why Focusrite sell it in a piece of hardware rather than as software to run on your own computer. On the other hand, if done well, physical modelling can come extremely close to the real deal and very often doesn't use much computing power at all, so lots of virtual hardware can be used at once. Either way, people always seem impressed and excited at a software version of a hardware classic, even though they have probably never heard (nor could afford!) the original – which means they won't know if it's not 100% perfect anyway!




A soundly engineered waterfall

You don't need mikes, mixing desks and digital music to do audio engineering. In the 17th century they did it just with rainwater. At least that's what a 17th century French audio engineer called Grillet did for the 1st Duke of Devonshire.

Grillet was hired by the Duke to create a water feature for his garden – not the kind of water feature you buy in garden centres nowadays, but a gigantic architectural feature. After all, the garden in question wasn't out the back of a suburban semi, but the hillside garden behind Chatsworth: the grand Derbyshire stately home of the Devonshires. Grillet applied his previous experience working for the Sun King, Louis XIV of France, to create a waterfall with an audio twist.

The resulting 'Cascade' is an artificial stream, dropping a full 200 feet from the hillside above down the garden behind Chatsworth House. It consists of groups of steps, each designed so that as you walk up the cascade the sound of the water subtly changes. Of course this wasn't done with some amazing advanced technology. After all, it was the late 1600s, so all that was available were stonemasons, his engineering know-how and a good dollop of creativity. He created the effect by structuring each group of steps slightly differently. He varied the number of steps in each group, their heights, the distances between them, and also added features like ledges or rounded lips. The result is that the water makes a slightly different sound as it flows over each group. That means that as you walk up the path beside the cascade, you find you are walking through a gentle, natural but designed soundscape. The cascade is much greener than most of our modern audio engineering too. It produces its effect without the need for a power-guzzling energy source, as it is gravity-fed by rainwater from a man-made lake at the top of the hill.

Today's audio artists make use of modern technology to create all kinds of wonderful effects, but it is worth remembering that sound art can be created just by working with nature...even if you then wonder what you could do by combining digital effects with a cascade.



Eye wobbling hauntings

Sound doesn't just vibrate your eardrums; it can also potentially vibrate your eyeballs, causing blurry vision. Some scientists think that this type of inaudible, eye-wobbling, low frequency bass sound, called infrasound, might be responsible for spooky hauntings. The stories of strange figures in a supposedly haunted laboratory in Warwick were, after investigation, discovered to be due to a newly installed extractor fan producing low frequency infrasound waves.


Sounding out a Sensory Garden

When construction of the Norman Jackson Children's Centre in London started, the local council commissioned artists to design a sensory garden full of wonderful sights and sounds so the 3 to 5 year old children using the centre could have fun playing there. Sand pit, water feature, metal tree and willow pods all seemed pretty easy to install and wouldn't take much looking after, but what about sound? How do you bring interesting sound to an outdoor space and make it fun for young children? Nela Brown from Queen Mary's Interactive Media and Communication group was given the job.

After thinking about the problem for a while she came up with an idea for an interactive sound installation. She wanted to entertain any children visiting the centre, but she especially wanted it to benefit children with poor language skills. She wanted it to be informal but to have educational and social value, even though it was outside.

You name it, they press it!

Somewhere around the age of 18 months, children become fascinated with pressing buttons. Toys, TV remotes, light switches, phones: you name it, they want to press it. Given the chance to press all the buttons at the same time in quick succession, that is exactly what young children will do. They will also get bored pretty quickly and move on to something else if their toy just makes lots of noise with little variety or interest.

Nela had to use her experience and understanding of the way children play and learn to work out a suitable 'user interface' for the installation. That is, she had to design how the children would interact with it and be able to experience the effects. The user interface had to look interesting enough to get the attention of the children playing in the garden in the first place. It also obviously had to be easy to use. As part of her preparation to design the installation, Nela watched children playing, both to get ideas and to get a feel for how they learn and play.

Sit on it

She decided to use a panel with buttons that triggered sounds, built into a seat. One important way to make any gadget easier to use is for it to give 'real-time feedback'. That is, it should do something like play sound or change colour as soon as you press any button, so you know immediately that the button press did do something. To achieve this and make them even more interesting, her buttons would both change colour and play sound when they were pressed. She also decided the panel would need to be programmed so children wouldn't do what they usually do: press all of the buttons at once, get bored and walk away. Nela recorded traditional stories, poems and nursery rhymes with parents and children from the local area, and composed music to fit around the stories. She also researched different online sound libraries to find interesting sound effects and soundscapes. Of the three buttons, one played the soundscapes, another played the sound effects and the last played a mixture of stories, poems and nursery rhymes. Nela hoped the variety would make it all more interesting for the children and so keep their attention longer, and by including stories and nursery rhymes she would be helping with language skills.

Can we build it?

Coming up with the ideas was only part of the problem. It then had to be built. It had to be weatherproof, vandal-proof and allow easy access to any parts that might need replacing. As the installation had to avoid disturbing people in the rest of the garden, furniture designer Joe Mellows made two enclosed seats out of cedar wood cladding, each big enough for two children, which could house the installation and keep the sound where only the children playing with it would hear it. A speaker was built into the ceiling and two control panels made of aluminium were built into the side. The bottom panel had a special sensor which could 'sense' when a child was sitting in (or standing on) the seat. It was an ultrasonic range finder – a bit like bat senses, using echoes from high frequency sounds humans can't hear to work out where objects are. The sensor had to be covered with stainless steel mesh so the children couldn't poke their fingers through it and injure themselves or break the sensor. The top panel had three buttons that changed colour and played sound files when pressed.

Interaction designer Gabriel Scapusio did the wiring and the programming. Data from the sensors and buttons was sent via a cable, along with speaker cables, through a pipe underground to a computer and amplifier housed in the Children's Centre. The computer controlling the music and colour changes was programmed using Max/MSP, a special interactive visual programming environment for music, audio and media that has been in use for years by a wide range of people: performers, composers, artists, scientists, teachers and students.
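To give a flavour of the kind of glue code that links panel hardware to sound, here is a Python sketch. It is purely illustrative: the real installation used Max/MSP, and the serial port name and message format here are made up for the example.

```python
# Read button presses sent over serial and trigger the matching sound.
import serial  # the third-party pyserial package

PORT = "/dev/ttyUSB0"                 # placeholder serial port name
SOUNDS = {1: "soundscape.wav", 2: "effects.wav", 3: "stories.wav"}

with serial.Serial(PORT, 9600) as panel:
    while True:
        line = panel.readline().decode().strip()   # e.g. "BUTTON 2" (assumed)
        if line.startswith("BUTTON"):
            button = int(line.split()[1])
            print("play", SOUNDS[button])          # hand off to the audio engine
```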

The next job was to make sure it really did work as planned. The volume from the speakers was tested and adjusted according to the approximate head position of young children, so it was audible enough for comfortable listening without interfering with the children playing in the rest of the garden. Finally it was crunch time. Would the children actually like it and play with it?

The sensory garden is making a difference – the children are having lots of fun playing in it, and within a few days of the opening one boy with poor language skills was not just seen playing with the installation but listening to lots of stories he wouldn't otherwise have heard. Nela's installation has lots of potential to help children like this by provoking and then rewarding their curiosity with something interesting that also has a useful purpose. It is a great example of how, by combining creative and technical skills, projects like these can really make a difference to a child's life.

The panels in each seat were connected to an open-source electronics prototyping platform, Arduino. It's intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments, so is based on flexible, easy-to-use hardware and software.

Audio! Action
It's often claimed that there exists a particular sound frequency that makes humans lose control of their bowels. Appropriately enough, it's called the brown note. However, after repeated experiments, there is no scientific evidence to support this effect. The brown note is pants!



Photo: Nela Brown




How to make Google beatbox?!

Can Google Translate make music? It turns out it can – it can beatbox! Beatboxing is a kind of vocal percussion used in hip hop music. It mainly involves creating drumbeats, rhythm and musical sounds using your mouth, lips, tongue and voice. So how on earth can Google Translate do that? Well, a cunning blogger has worked it out, and it's easy and fun to do. On the Google Translate page, first set it to translate from German into German. Next, type the following into the translate box:

pv zk pv pv zk pv zk kz zk pv pv pv zk pv zk zk pzk pzk pvzkpkzvpvzk kkkkkk bsch

When you click on the "Listen" button, Google Translate will beatbox.

Convincing?! Now you can try to make your own funky beats and have the computer perform them for you…

So how do programs like Google Translate that turn text into speech do it? The technology that makes this possible is called 'speech synthesis': the artificial production of human speech. To synthesise speech from text, words are first mapped to the way they are pronounced using special pronunciation ('phonetic') dictionaries – one for each language you want to speak. The 'Carnegie Mellon University Pronouncing Dictionary' is a dictionary for North American English, for example. It contains over 125,000 words and their phonetic versions. Speech is about more than the sounds of the words though. Rhythm, stress and intonation matter too. To get these right, the way the words are grouped into phrases and sentences has to be taken into account, as the way a word is spoken depends on those around it.

There are several ways to generate synthesised speech given its pronunciation, information about rhythm and so on. One is simply to glue together pieces of pre-recorded speech spoken by a person. Another way uses what are called 'physics-based speech synthesisers'. They model the way sounds are created in the first place. We create different sounds by varying the shape of our vocal tract, and altering the position of our tongue and lips, for example. We can also change the frequency of vibration produced by the vocal cords, which again changes the sound we make. To make a physics-based speech synthesiser, we first create a mathematical model that simulates the way the vocal tract and vocal cords work together. The inputs of the model can then be used to control the different shapes and vibration frequencies that lead to different sounds. We essentially have a virtual world for making sounds. It's not a very big virtual world admittedly – no bigger than a person's mouth and throat! That's big enough to generate the sounds that match the words we want the computer to say, though.

These physics-based speech models give a new way a computer could beatbox. Rather than start from letters and translate them into sounds that correspond to beatboxing effects, a computer could do what the creative beatboxers do and experiment with the positions of its virtual mouth and vocal cords to find new beatboxing sounds.

Beatboxers have long understood that they could take advantage of the complexity of their vocal organs to produce a wide range of sounds mimicking those of musical instruments. Perhaps in the future Artificial Intelligences with a creative bent could be connected to physics-based speech synthesisers and left to invent their own beatboxing sounds.
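You can try the first step of text-to-speech, the phonetic dictionary lookup, yourself. Here is a short Python example using NLTK's copy of the CMU Pronouncing Dictionary (you would need the nltk package installed, and the example downloads its 'cmudict' data):

```python
# Look a word up in a phonetic dictionary, the first step of text-to-speech.
import nltk
from nltk.corpus import cmudict

nltk.download("cmudict")
pronunciations = cmudict.dict()

# Each word maps to one or more lists of phonemes (digits mark stress)
print(pronunciations["music"])
```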

If you are mad about music technology, then visit www.audio4fn.org for more on the fun side of audio engineering.

This issue of Audio! is edited by Paul Curzon, Mathieu Barthet and Simon Dixon, with contributions from Dan Stowell, Martin Morrell, Nela Brown, Peter McOwan, Panos Kudumakis, Emmanouil Benetos, Duncan Menzies, Alice Clifford, Thomas Lidy and François Grandemange. It has been supported by the EPSRC Musicology for the Masses project. Audio! is part of the cs4fn family of magazines on the fun side of science, maths and engineering. cs4fn is funded by EPSRC with support from Google and ARM.
