This is a long read, but it's worth it if you're into this brand of technological geekery. From Boston.com:
Soon after the release of the first iPhone five years ago, an astonishing new ritual began to be performed in cafes and restaurants across the country. It centered on an app called Shazam. When the phone was held up to a radio, Shazam would almost instantly identify whatever song happened to be on, causing any iPhone skeptics in the vicinity to gulp in bewilderment and awe.
There was something unspeakably impressive about a machine that could listen to a snippet of a random hit from 1981, pick out its melody and beat, and somehow cross-reference them against a database that seemed to contain the totality of recorded music. Seeing it happen for the first time was revelatory. By translating a song into a string of numbers, and identifying what made it different from every other song ever written, Shazam forced us to confront the fact that a computer could hear and process music in a way that we humans simply can’t.
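To make the "string of numbers" idea concrete, here is a minimal sketch of spectral-peak fingerprinting, the general family of techniques behind song identification. Everything here, the frame size, the peak pairing, the voting lookup, is a toy illustration of the concept, not Shazam's actual algorithm; all function names and parameters are my own assumptions.

```python
# Toy spectral-peak fingerprinting: NOT Shazam's real algorithm, just the idea.
import math
from collections import Counter

FRAME = 64  # samples per analysis frame (illustrative toy size)

def tone(freq, n, rate=8000):
    """Generate n samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def peak_bin(frame):
    """Return the DFT bin with the largest magnitude (naive O(n^2) DFT)."""
    n = len(frame)
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2):  # skip DC; positive frequencies only
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k

def fingerprint(samples):
    """Turn audio into hashable keys: pairs of consecutive spectral peaks."""
    peaks = [peak_bin(samples[i:i + FRAME])
             for i in range(0, len(samples) - FRAME + 1, FRAME)]
    return [(a, b) for a, b in zip(peaks, peaks[1:])]

# A toy "database" of two songs, each a pair of alternating pure tones.
songs = {
    "song_a": tone(500, 512) + tone(1500, 512),
    "song_b": tone(800, 512) + tone(2500, 512),
}
index = {}  # fingerprint key -> set of song names containing it
for name, samples in songs.items():
    for key in fingerprint(samples):
        index.setdefault(key, set()).add(name)

def identify(snippet):
    """Vote for whichever indexed song shares the most fingerprint keys."""
    votes = Counter()
    for key in fingerprint(snippet):
        for name in index.get(key, ()):
            votes[name] += 1
    return votes.most_common(1)[0][0] if votes else None

# Identify a short excerpt rather than the whole recording.
print(identify(tone(500, 256) + tone(1500, 256)))  # prints: song_a
```

The key insight the article gestures at is visible even in this toy: the lookup never compares audio to audio. Each song is reduced to a bag of small integer keys, so matching a snippet is a handful of dictionary lookups, which is why a real system can search millions of recordings almost instantly.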
That insight is at the heart of a new kind of thinking about music—one built on the idea that by taking massive numbers of songs, symphonies, and sonatas, turning them into cold, hard data, and analyzing them with computers, we can learn things about music that would have previously been impossible to uncover. Using advanced statistical tools and massive collections of data, a growing number of scholars—as well as some music fans—are taking the melodies, rhythms, and harmonies that make up the music we all love, crunching them en masse, and generating previously inaccessible findings about how music works, why we like it, and how individual musicians have fit into mankind’s long march from Bach to the Beatles to Bieber.