In an otherwise boring conversation about some press release or another, a Spotify PR person mentioned to me that an artist who had a big hit on the platform’s Fresh Finds playlist was discovered when one of the curators just happened to see them play a show in Bushwick. I was as surprised as anyone really can be by an email from corporate PR.
Fresh Finds is one of Spotify’s prized products, a weekly playlist crafted from a combination of two different data inputs: it identifies new, possibly interesting music with natural language processing algorithms that crawl hundreds of music blogs, then puts those songs up against the listening patterns of users their data designates “trendsetters.” What’s going to a show in Bushwick have to do with it? I had visions of a bunch of suits using their business cards to get into cool shows for no reason other than to feel like Vinyl-era record execs for a night. It seemed extremely redundant, and more than a little like posturing. Why bother?
“It’s basically their job,” I was told. Okay but, excuse me, how is that a playlist curator’s job? To find out, I asked if I could tag along with a few of them on their nights out. I did not expect the answer to be yes, mostly because I thought it should be obvious that my intention was to point out how weird the whole thing was.
But the answer was yes. So, for three weeks, I went with Spotify playlist curators to live performances in Chinatown, Bushwick, and an infamous club on the Lower East Side. I got dozens of half-answers to the question: Why are you here?
Source: Following Spotify playlist curators around New York’s live music scene – The Verge
Spotify traditionally focused on using data and algorithms to surface new music. Apple Music, when launched, made a big show of their human-curated playlists. With the former’s interest in IRL listening, and the latter’s acceptance that computer-generated playlists can be good at scale, it seems like the differences are receding.
This is fairly extraordinary: an interactive presentation about generative music, in which notes and rhythms are chosen according to a set of pre-determined rules. Fun to learn about and play with the various tools along the way.
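To make the idea concrete, here is a toy generative rule of the kind the presentation plays with. The scale and the step rule are my own illustrative choices, not taken from the presentation: each note is picked by moving at most one degree up or down a pentatonic scale.

```python
import random

# A toy generative-music rule (invented for illustration):
# walk up and down a pentatonic scale one degree at a time.
C_MAJOR_PENTATONIC = ["C", "D", "E", "G", "A"]

def generate_melody(length, seed=None):
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MAJOR_PENTATONIC))
    melody = []
    for _ in range(length):
        melody.append(C_MAJOR_PENTATONIC[idx])
        # Rule: step at most one scale degree, staying within the scale.
        idx = min(max(idx + rng.choice([-1, 0, 1]), 0),
                  len(C_MAJOR_PENTATONIC) - 1)
    return melody

print(generate_melody(8, seed=1))
```

Even a rule this small produces melodies that sound intentional, which is much of the charm the presentation explores.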
In 2003, the Long Winters released their second album into a crowded field of cleverly crafted, melody-driven guitar rock. Given the crop of that particular era—the Shins, Decemberists, New Pornographers, Pernice Brothers, Weakerthans (and lord, can I get a Beulah?)—you would be more than forgiven for not recognizing When I Pretend To Fall as the cream that rises more than a decade later. The album produced neither hit singles nor commercial jingles, and it all but destroyed the fragile league of extraordinary frenemies who created it. It’s the great sound of coming together while everything is simultaneously falling apart. John Roderick, the man at the center of When I Pretend To Fall, was striving: hoping to win back a girl and attempting to make his mark in a microcosmic indie-rock scene.
Source: MAGNET Classics: The Making Of The Long Winters’ “When I Pretend To Fall”
A fine oral history of a great record. John is a big hero of mine and I wish he’d record more.
Is your hearing now pretty good, considering?
This is a weird thing. What scientists discovered in the past five years is that when they look at people who work with sound in a professional capacity, the part of their brain [that processes sound] tends to be about five times bigger. So as people who work with sound get older, they know their hearing isn’t as good, but at the same time, a lot of guys can still do really good work. We don’t hear in any kind of passive, mechanical way. [Sound] interacts with your brain. So when you hear, it’s a bit like when the scientists talk about the nature of reality and how it’s like an illusion in our brain. Everyone has their own reality, in some sense.
My hearing is technically not perfect. In my early 30s, I had a dip from noise damage. But when it comes to music, I still tend to hear faults with equipment or things like that before most people. Because most music exists between a certain frequency range, and my brain is very focused on mid-range. You can have people with technically excellent hearing, but they can’t discern what’s happening because their brain isn’t processing it.
I’ll give you an example. Once, a very long time ago, I got really bad middle-ear damage from doing some live sound. Something happened, and my hearing collapsed, pretty much. It went on for quite a long time. If it’s more than two days, then you’re usually looking at permanent damage, and this was really, really bad.
Then, during this period of bad ear damage, the alarm system went off in the house. I noticed that I could really hear the components of how the alarm was put together in an incredibly detailed way that I never would have heard without my ear nearly being half gone. I could really hear shit that I could never hear before. My brain was essentially still processing whatever it was getting on a pretty high level—or working overtime. After years of practice, you just learn to work hard. Like muscles. So in that respect, I’m very conscious of my hearing at this point in my life.
The MBV gig I went to in 2008 was the loudest thing I’ve ever experienced, so I’ve always been a little curious about how Shields and co’s hearing is holding up. Also: new album on the way!
Source: My Bloody Valentine’s Kevin Shields Dissects His New Loveless Vinyl Remaster, Talks New Album | Pitchfork
These Earworm videos by Vox are great. They’re part Song Exploder, part 33 1/3, part music theory class.
I bought the subject of this episode—Captain Beefheart’s Trout Mask Replica—as a teenager and it has always been a curiosity to me: an album I admire more than I love or am inspired by. Here Estelle Caswell, with the help of Samuel Andreyev, breaks down album opener ‘Frownland’ to better understand its baffling mix of blues, rock and free jazz. It’s made me listen to the album again with fresh ears and notice things I wouldn’t have otherwise.
I’m a sucker for technical dives into Spotify’s Discover Weekly, and this is a great one.
In the article, Sophia Ciocca gives three types of recommendation models that are used to generate the playlists. The first is collaborative filtering: crudely, your friends like this, you might like this too. Digging deeper, the mathematical modelling sounds fascinating. The third is raw audio models: analysis of the audio tracks themselves. This is why Release Radar works so well, despite the tracks not having been played many times.
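For a sense of how collaborative filtering works in miniature, here’s a sketch in plain Python. The users, tracks, and play counts are all invented, and the real system factorizes an enormous implicit-feedback matrix rather than comparing users directly — but the intuition is the same: score the tracks you haven’t heard by how much similar listeners played them.

```python
from math import sqrt

# Hypothetical play counts: user -> {track: plays} (invented for illustration).
plays = {
    "ana":  {"t1": 5, "t2": 3, "t3": 0},
    "ben":  {"t1": 4, "t2": 0, "t3": 1},
    "cara": {"t1": 0, "t2": 4, "t3": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    tracks = set(u) | set(v)
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in tracks)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Score each unheard track by similar users' plays, weighted by similarity."""
    scores = {}
    for other, vec in plays.items():
        if other == user:
            continue
        sim = cosine(plays[user], vec)
        for track, count in vec.items():
            if plays[user].get(track, 0) == 0 and count > 0:
                scores[track] = scores.get(track, 0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # tracks ana hasn't played, weighted by similar listeners
```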
But I didn’t know about the second one, the emphasis Spotify puts on natural language processing, or NLP:
Spotify crawls the web constantly looking for blog posts and other written texts about music, and figures out what people are saying about specific artists and songs — what adjectives and language is frequently used about those songs, and which other artists and songs are also discussed alongside them.
While I don’t know the specifics of how Spotify chooses to then process their scraped data, I can give you an understanding of how the Echo Nest used to work with them. They would bucket them up into what they call “cultural vectors” or “top terms.” Each artist and song had thousands of daily-changing top terms. Each term had a weight associated, which reveals how important the description is (roughly, the probability that someone will describe music as that term).
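The “weight as probability” idea can be sketched very simply. In this toy version — the posts and vocabulary are invented, and the real pipeline is far more sophisticated — a term’s weight is just the fraction of crawled posts about an artist that use that descriptor.

```python
# Toy "top terms": weight = fraction of posts about the artist using a term.
# Posts and vocabulary are invented for illustration.
posts_about_artist = [
    "dreamy shoegaze wall of noise",
    "the most dreamy record of the year",
    "dreamy and lo-fi in the best way",
]
vocabulary = {"dreamy", "shoegaze", "lo-fi", "noise", "trap"}

def top_terms(posts, vocab):
    n = len(posts)
    weights = {}
    for term in vocab:
        hits = sum(1 for p in posts if term in p.lower())
        if hits:
            weights[term] = hits / n
    # Highest-weighted descriptors first.
    return dict(sorted(weights.items(), key=lambda kv: -kv[1]))

print(top_terms(posts_about_artist, vocabulary))
```

Two artists whose top terms overlap heavily — similar descriptors, similar weights — end up near each other in the “cultural vector” space, which is what lets the playlist generator treat them as neighbours.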
Spotify’s Discover Weekly: How machine learning finds your new music
Unlike many others, I’m a fan of the Apple Music UI and implementation. But I’ve not had terrific results with their recommendation engines. The opposite is true for Spotify. It’d be nice to save some money by cancelling one or other of the services, but they do such different things for me that I can’t see that happening any time soon.
A new tool for exploring songs from Song Exploder and Google Creative Lab:
What if you could step inside a song? This is a simple experiment that explores that idea. See and hear the individual layers of music all around you to get a closer look at how music is made.
You’ll have come across the idea of exploring songs by breaking them down into their component tracks. Inside Music uses spatial audio for a VR-esque feeling. It’s open source, too.
An important part of starting a new band is choosing an appropriate name. It is crucial that the name be unique, or you could risk at best confusion, and at worst an expensive lawsuit.
The neural network is here to help.
Prof. Mark Riedl of Georgia Tech, who recently provided the world a dataset of all the stories with plot summaries on Wikipedia (enabling this post on neural net story names), has now used his Wikipedia-extraction skills to produce a list of all the bands with listed discographies – about 84,000 in all.
I gave the list to the Char-rnn neural network framework, and it was soon producing unique band names for a variety of genres. Below are examples of its output at various temperature (i.e. creativity) settings.
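The “temperature (i.e. creativity)” knob is worth unpacking. At each step a char-rnn outputs a score (logit) per character; dividing the logits by the temperature before the softmax sharpens the distribution when the temperature is low (safe, repetitive names) and flattens it when high (weirder names). A minimal sketch, with made-up logits standing in for a real network’s output:

```python
import math
import random

def sample_char(logits, temperature=1.0):
    """Sample one character from temperature-scaled logits via softmax."""
    scaled = {c: l / temperature for c, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {c: math.exp(v - m) for c, v in scaled.items()}
    total = sum(exps.values())
    probs = {c: e / total for c, e in exps.items()}
    r = random.random()
    cum = 0.0
    for c, p in probs.items():
        cum += p
        if r < cum:
            return c
    return c  # guard against floating-point underrun

# Invented logits for three candidate next characters:
logits = {"a": 2.0, "b": 1.0, "z": -1.0}
```

At a temperature near zero the most likely character wins almost every time; crank it up past 1.0 and the low-scoring characters start sneaking in — which is where the funniest band names come from.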
Come for the funny names, stay for the bizarre shark influence.