Holly Herndon explains the ethical implications of the A.I.-generated Travis Scott song

“Travisbott” was constructed using the rapper’s music, without his consent. Is this a sign of things to come?

February 25, 2020
Photo by Boris Camaca

Art created with the help of artificial intelligence is a wide field with mixed offerings. Some of the best-known A.I.-assisted art is made when researchers choose a specific artist and input their works into neural networks to “train” them. (To get usable material, the networks must also be trained in music theory, like chord progressions and rhyme schemes.) The outputs generated by the A.I. are then used as raw material by researchers to create a new work in the chosen artist’s style. Because the researchers are unlikely to be as musically talented as the artists they’re imitating, the discussions around art made with A.I. are often more compelling than the art itself: What are the implications of networks that can complete an unfinished Beethoven symphony?
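As a rough illustration of that train-then-generate loop, and only an illustration (space150 reportedly used a neural network trained on MIDI data and lyrics, not the toy model below), here is a minimal sketch of a word-level Markov chain that learns from a placeholder corpus and samples new lines. The corpus, function names, and model choice are all hypothetical.

```python
# Illustrative sketch only: a toy word-level Markov chain, NOT space150's
# actual model or data. It just demonstrates the "train on an artist's
# material, then generate new material" loop described above.
import random
from collections import defaultdict

def train(lines, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    model = defaultdict(list)
    for line in lines:
        words = ["<s>"] * order + line.split() + ["</s>"]
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, order=2, max_words=12):
    """Sample one new line by walking the chain from the start state."""
    state, out = ("<s>",) * order, []
    while len(out) < max_words:
        nxt = random.choice(model.get(state, ["</s>"]))
        if nxt == "</s>":
            break
        out.append(nxt)
        state = state[1:] + (nxt,)
    return " ".join(out)

if __name__ == "__main__":
    corpus = ["placeholder lyric line one", "placeholder lyric line two"]  # stand-in text
    print(generate(train(corpus)))
```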


Technological distrust is something that most people feel, so stories about A.I. music invariably cause waves on the internet. Earlier this month, an ad agency called space150 shared a song and music video called “Jack Park Canny Dope Man.” It was credited to an A.I. called “Travisbott,” an artificial intelligence model trained on MIDI data based on Travis Scott’s original music and lyrics. There are two important pieces of information to consider here: 1) a real human being rapped the computer-generated lyrics, and more humans were involved in turning it into a listenable song, and 2) the song is not even a close approximation of Travis Scott's music. The lyrics are almost entirely gobbledygook — “She got the crew on top of my chain / Wasted in the street like a pain,” for example — but every now and then a line lands close to something a rapper might actually say, even if that rapper isn't Travis Scott. Then there’s the instrumental, a C-grade Travis Scott type beat you’d skip if you heard it on YouTube. Overall, “Jack Park” hardly makes the case for impending A.I. dominance of the Billboard charts; the song faded away soon after its release, so one could just dismiss it as a minor and forgettable gimmick made for the press it generated (which was not insubstantial).


However, “Jack Park” touches on a serious ethical question that a lot of A.I.-generated art does not: How do we deal with the sampling and reproduction of an existing artist’s musical likeness when someone completely unrelated stands to profit from it, whether financially or through publicity? “Jack Park” is available on streaming services and a vinyl single was announced. [In an email to The FADER after this article was published, a spokesperson for space150 said that it "has not, and never had any intention of, profiting off Travisbott," adding that vinyl copies of the song were made only for employees.]

Holly Herndon, a Berlin-based composer who essentially collaborated with an A.I.-generated "baby" called Spawn on her 2019 album Proto, raised the issue on her Twitter: A.I. is getting better at sounding like human beings, so what will the humans controlling the A.I. do with that power? “Research in voice generation is already at the point where synthesizing singer-a-likes will be commonplace in no time," she wrote. "In private tests we have generated believable results but would never advertise someone’s voice without their consent.” And that applies to Travisbott: space150 confirmed in an email to The FADER that the company did not seek permission to use Scott's likeness.

Spawn, Herndon's "collaborator," was trained on different, consenting human voices ranging from Herndon’s own to that of a 14-piece choir; she distinguishes Travisbott from her own work even outside of ethical considerations. “We used audio material [instead of MIDI], which is actually much harder.” When she spoke with The FADER in 2019 about Proto, Herndon said she saw something like Travisbott coming: “I think we're going to see a flood of automated compositions, people using neural nets to extract the logic from other people's work, and a lot of appropriation. We're going to see big issues around attribution.” Now that this future has arrived, what’s next? Speaking over Skype, Herndon succinctly broke down her concerns while rejecting the “evil robot” vision of the future.


The FADER: Is this the first time a pop star has been used for an A.I. project like this?


I don't know if it has or has not been done with a pop star. It's a pretty traditional approach to using a neural network. [But] no one's really questioning that they're using someone else's likeness without that person's permission. I don't know if Travis has commented on this project. [Note: Travis Scott’s representative did not respond to The FADER’s request for comment]

I wouldn't be surprised if he doesn't want to call attention to it.

I think it's potentially illegal. There's legal precedent against being able to do something like this. So I think the headline here is not the technology. I don't think they're really doing anything that's groundbreaking. I think the headline is that we have now reached a point where we feel entitled to sample. Whatever logical framework sampling began with a hundred years ago, or whatever, it has reached its logical conclusion: we now feel entitled to sample human beings and their likenesses without anyone questioning that. Instead, everyone's like, Oh my God, cool tech. And it's like, Oh my God, you've just stolen someone's likeness. That's pretty astounding to me.


What about the entitlement specific to the streaming age with these new parasitic streaming models, and everything being given to us at the touch of a button?

I think they go hand in hand. My personal research, and what I wrote my PhD thesis about, was looking at sampling culture as a historical precedent for this. It's basically a critique of sampling and consent, which is essentially entitlement towards other people's work. Everyone talks about the power that sampling gives the individual, and no one talks about the guy who played the Amen Break. And so this gets more serious when the tools become ubiquitous so that we can really sample an individual, sample a likeness.

This goes back to Pierre Schaeffer and musique concrète. His whole thing was like, Oh God, we have to liberate a sound from its source. If you could hear a train and just hear it for its aesthetic qualities, and not think about the train, that's an elevated form of listening. That's a really interesting concept, but when you liberate a sound from its source, you also liberate it from its context, and in my opinion, context still matters. It's where something comes from. It's the human beings that made it. It's the cultures that cultivated it. It's the spaces, the physical spaces, that housed it. And yeah, I don't think that those things should so easily be erased. We can have a conversation about an open-source sharing situation.


But then that has to go hand in hand with everything else, because art doesn't happen in a vacuum. We live in a society, and the whole “information wants to be free” thing is very much music to Silicon Valley's ears, because then they can create products, and create value on top of other people's efforts, without ever having to think about remuneration. So yeah, I don't think the answer is some sort of crazy sample-cop situation, or hardcore IP. I mean, old IP law only flatters the owners of a lot of IP, like major labels and things like that. But I think it does ask us to question: what's the logical conclusion of this viewpoint? Is it okay to literally sample someone's personhood? Are we okay with that, as a society? And if we're okay with that, how does that play out within the existing power structures that we already have in society?


On Twitter while discussing Travisbott, you said, "There is a general thirst for dystopia that is boring."

It's so important that artists don't play into that shit. The whole dystopian cyberpunk thing is an '80s and '90s aesthetic, an artistic response to the things forward-thinking authors and artists were seeing at the time. It's still very ubiquitous, but I find it very kitschy and super retro. The onus is on artists to make work that is world-building or option-building.

You’ve urged artists not to lean into the same pop archetypes year after year.


Yeah. I just feel like there's this perpetuation that happens if we just keep rolling around in the comfort of this kind of '80s dystopian nostalgia. I get excited when there's a new idea. It doesn't have to be super optimistic, like, this is how we're going to fix society. I just want to see a proposal, something at stake. There's nothing at stake in just being like, Yeah, everything sucks. I want to see some kind of agency in people's thought process that's not just giving up and reveling in the shitstorm that we have. Things can feel so fixed, but they're not. We get to decide where society goes.

Do you think that one day voice modeling is something that could buttress this tendency to keep revisiting the same pop archetypes?

Oh, 100%. Are you kidding me? There are so many labels with gold mines' worth of master tapes that they're just dying to reanimate. That's definitely going to happen.


We'll see what the public decides that they want, or what they don't want. We also decided as a society that it's okay to not pay for music. So I don't know, maybe we're going to also decide that entertainment trumps everything else. We're going to see a lot of retro fetishism and artistic necrophilia. And maybe some of it's even going to be interesting. But we have to figure out the ethics around it, and I don't think anyone even wants to have that conversation. We're still all excited about the shiny new toy instead of thinking about what it means.

Is it possible for something like Travisbott to exist ethically under existing capitalist structures? And can we even extract the ickiness from it?


I don't know what kind of business model would work in a way that would be fair, but I think that that's even just a start. Being like, You're the person who generated this style, and this is your face. How can we compensate you for this contribution? Even just the fact of not assuming that it's okay to just do it without asking. I think it could lead to really interesting new ways for large scale collaboration, or large scale community creation of creative output. If that's the direction that it could go down, I think it could create really interesting new works that could somehow all filter back to the people who were involved in it.

But looking at human history, that's not what we've ever done before. We've been more extractive in our practices as a society. But if we could find another way, the technology's there. It's not like it's impossible, and I think it could be used to make really interesting, cool stuff. And I'm hoping that we don't just get boring rehashes, but of course we won't. There's tons of people who are super interesting and invested in this stuff that aren't just interested in doing that cheesy jukebox scene from Blade Runner, or something like that.

I think we'll be able to see a lot of really genre-specific work made. We're going to see a lot of mashing up of genres, almost like a Lil Nas X. I feel like that could be A.I.-created, like, Oh, let's mash up these two styles or something. I think we'll see a lot of that, and some of that's going to be really cool, and interesting, and funny. [But] it depends on who's making that decision of what to mash up, and who's making the curatorial decision. You can feed in a bunch of genres and choose two at random, but they're not all going to sound cool. Some human has to be like, This is going to hit. This is going to be really interesting to people, and that's still going through a human filter. We don't have a computer that can quite respond to the world around it in a way that a human can. So as long as human composers are doing that, they won't be replaced. But if human composers are acting like bots, then they will be replaced.


This post was updated on Wednesday, February 26, 2020 to include a statement from space150 regarding profits from Travisbott.
