An Anchor/Spotify Followup - Making A Show

BONUS CONTENT BABY

Hey, fine friends. Yesterday I wrote about Anchor and Spotify adding the ability to use licensed music in a show (it’s not a podcast, LET’S GET THA’ RIGHT [Gordon Ramsay voice].) I think the ability to craft music shows with full-length songs is going to open the floodgates for creators, and I expect some other streaming music platforms to follow suit.

I’ve also been waiting for this to happen because I have been sitting on a show idea for AGES, and I was finally able to do it without being thrown in music jail for life. So I set myself a challenge—I gave myself one hour, no more, to put a pilot together using only Anchor. I thought I’d share that experience with you, because guinea pig.

So, the idea I had was for a show called “Deep Six,” the gist of which is to explore the connective tissue between six songs to see how deep down a rabbit hole I can go from the first song to the last. For the pilot episode, I journeyed from worst to first: I started with an objectively terrible song and ended on Rolling Stone’s number one song of all time.

So I made the show. Currently, you can only listen on Anchor or Spotify, and in order to get full-length songs (i.e., the intended experience) you need Spotify Premium. If you have regular-old-Spotify or Anchor, you get 30-second clips of the songs, which is unsatisfying, though understandable.

Here are my initial thoughts:

  • My initial goal was to use only Anchor’s tools. I plugged an ATR 2100 mic into a Focusrite interface and recorded straight into Anchor. It sounded OK, but going from dry voice to song to dry voice didn’t feel like “a show” or up to my standards (which vastly exceed my production ability.) I found myself wanting to put a music bed under my voice, which currently isn’t possible. So, in service of my own insanity, I cheated: I re-recorded the show using Farrago (a Mac soundboard app) routed into Audio Hijack, combining a music bed from Farrago with the vocals from my mic. This, of course, created extra work, as I had to make that file first and then upload it to Anchor. But it beat listening to a dry vocal, which sapped the momentum of the show. Hopefully Anchor will add music beds in the future (they do have a library of interlude/bumper music, which I did not use.)

  • I finished my vocal tracks (all seven of them) in about half an hour and uploaded them to Anchor. Easy peasy. This particular show called for seven songs, and I found them all quickly and accurately in the Anchor interface. There is a drag-and-drop interface to reorder the vocal tracks and songs, which behaved fairly well.

  • One thing that didn’t behave as well as I would have liked: the built-in audio editor. Having cheated enough in creating my vocal files, I told myself I wouldn’t edit them locally (really, just removing a few seconds of dead air at the beginning of each) but would use Anchor’s browser-based tool instead. I found it very fiddly and slow, and that was just clipping a bit of obviously dead air at the start of each track. Hardcore um-and-ah removal would be very frustrating, I think, at least on my Mac. Next time I’ll edit my files locally.

  • Music beds under vocals would be my top request, but number two would be tighter segues or crossfades. The forward momentum of a great music show comes from moving propulsively from the fading notes of one song straight into voice-over, with the next song coming in before the voice-over ends. For non-radio geeks: when a DJ stops talking the SECOND the vocal starts in a song, that’s called “hitting the post.” I wanted to hit the post. Instead, you have to wait for a song to fade completely to silence, then the voice track plays, and then the next song starts. Ideally you could tighten that up, but it may be a restriction of the licensing.

  • After I uploaded my vocal tracks and assembled the song order, I submitted the episode for review. I was actually pessimistic that it would be accepted, because of the music beds I used. My assumption is that Spotify uses audio fingerprinting to ensure that the vocal tracks contain vocals, and not music they can’t monitor/monetize. The two music tracks I used under my vocal were purchased from Pond5 for royalty-free usage, but I was worried they would cause Spotify to hiccup. They did not, and my episode was accepted in five minutes.

  • It took about 10 minutes from acceptance to actually being able to listen to the show in Spotify. So, if you submit a show, don’t keep reloading. Go make a cup of tea. It’ll be there.

  • I ended up noting in the podcast description and the episode notes that full-length songs can only be heard with Spotify Premium, since that’s not readily apparent to the end-listener. The experience of getting 30-second clips (and not the 30 seconds I would have chosen from each song) is jarring, and it clearly isn’t a “show” at that point. I’d rather they just not publish that version at all, as I think it only leads to a dissatisfied listener. But the full version on Spotify Premium played flawlessly. It loads as a single audio track, though the album cover art and artist/title do update as the track plays.

Ultimately, for version 1.0, I was pretty happy with the experience. I think it is a couple of tweaks away from my dream of truly creating a music show. I will still probably use my own audio tools on the recording side, but assembling and publishing a show is extremely simple, and I plan on more episodes.

If you’d like to hear the fruits of my labor, check out episode one of Deep Six: Worst to First.

Now back to your weekend.

Tom