I am a senior scientist currently working at Pandora in Oakland, CA. I completed my PhD at the University of Auckland in the Forensics and Biometrics (FaB) signal processing lab, where I studied speech enhancement and the statistical modelling of speech and other environmental acoustic signals from both frequentist and Bayesian perspectives. My thesis is available here.

Prior to Pandora, I worked on a number of projects in audio and music technology as a developer and research engineer at several excellent companies, including Tait Communications, Serato and Gracenote. At these companies I worked in digital and statistical signal processing and machine learning while putting a lot of effort into quality software development practices. I believe building extensible and maintainable codebases is an indispensable skill in creating modern technologies that have a strong impact in the real world.

My work, predominantly in the fields of Music Information Retrieval and Audio Signal Processing, has included researching, designing, implementing and improving a number of algorithms such as:

  • Beat Tracking / Tempo Estimation
  • Music Segmentation
  • Automatic Equalisation
  • Time Stretching
  • Noise Reduction
  • Active Noise Cancellation
  • Echo Cancellation
  • Genre Classification
  • Audio Fingerprinting
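To give a flavour of the first item on that list, here is a minimal sketch of one classic approach to tempo estimation: autocorrelating an onset-strength envelope and picking the strongest lag in a plausible tempo range. All names and parameters here are illustrative assumptions, not the implementation used in any of the products mentioned.

```python
# Hypothetical sketch of autocorrelation-based tempo estimation.
# Assumes a 1-D onset-strength envelope sampled at a fixed frame rate;
# real systems add onset detection, windowing, and tempo-prior weighting.

def estimate_tempo(envelope, frame_rate, bpm_min=60.0, bpm_max=180.0):
    """Return an estimated tempo in BPM for an onset-strength envelope."""
    n = len(envelope)
    mean = sum(envelope) / n
    x = [v - mean for v in envelope]  # remove DC so silence doesn't correlate
    # Candidate beat periods (in frames) spanning the plausible tempo range.
    lag_min = int(frame_rate * 60.0 / bpm_max)
    lag_max = int(frame_rate * 60.0 / bpm_min)
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        # Unnormalised autocorrelation at this lag.
        score = sum(x[i] * x[i - lag] for i in range(lag, n))
        if score > best_score:
            best_lag, best_score = lag, score
    return 60.0 * frame_rate / best_lag

# Toy input: impulses every 50 frames at a 100 Hz frame rate, i.e. a
# beat period of 0.5 s, which corresponds to 120 BPM.
frame_rate = 100
envelope = [1.0 if i % 50 == 0 else 0.0 for i in range(400)]
print(estimate_tempo(envelope, frame_rate))  # 120.0
```

Production beat trackers go much further (handling tempo octave ambiguity, drift, and expressive timing), but the autocorrelation peak-picking above is the conceptual core that many of them build on.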

Examples of some of this work exist in the Serato products Pyro and Sample. A brief overview of some of the technological hurdles we wrestled with can be seen below.

Most recently, I have been particularly interested in how modern technology and machine learning may be used in an assistive way to introduce passive music consumers to interactive music experiences and eventually help them become skilled musicians. In modern times, this transition from consumer to musician has never evolved more naturally than in the world of DJs and sampling-based music production, where people took the leap directly from music consumption to production without any additional tools or instruction. A web browser or mobile phone is at least as versatile as the turntable, and so I spend a lot of time experimenting with intuitive ways to create new music from something familiar - familiar content and familiar playback tools and interfaces. You can see some of these experiments on the Applications page.

Web browsers and phones are fairly ubiquitous today, but it is modern machine learning methodologies that will enable completely new music-making workflows and make music creation more accessible to the average consumer. Now more than ever, machine learning can be used to spark creative musical ideas, reduce mistakes, simplify the music-making interface, and leave a varying degree of control in the hands of the user depending on their skill level. There is a lot of work still to do, but the possibilities are exciting, and Music Information Retrieval research will play a significant role in how they are realized. My most recent published research in this area and others can be found on the Publications page.