By Justin Sanders

Artificial intelligence is already impacting how artistic works are created – in ways both exciting and troubling. Soon, it will profoundly affect how we interpret ownership of those works. In some cases, it already has.

As the robots extend their cold mechanical arms across music, publishing, film, and every other creative industry, here are three burning questions about copyright in the age of AI.

1) When is machine learning “reading” and when is it “copying”?

There is no shortage of experiments that showcase the ever-improving ability of computers to create “original” artistic works from scratch. The developers involved with these pursuits have different ways of coaxing their respective machines toward creative enlightenment, but the gist of each experiment is essentially the same: A human feeds existing creative works into the AI, which synthesizes them, deconstructs them, and ultimately learns to make something resembling them – be it a song, book, or other creation.

The question is whether this “learning” process is a matter of the AI “reading” the works or “copying” them. If the machine is retaining whole copies, or large sections, of, say, every song the Beatles ever recorded, and intends to make those copies available for public consumption, then this would be copying and a clear violation of copyright law. If, on the other hand, the machine is deconstructing all of the songs and processing them through a brain-like neural network in order to teach itself how to make a song that sounds like a Beatles track… well, that’s similar to what law-abiding human songwriters do. Our brains “read” information, combine it with whatever “secret sauce” we each possess, and convert all of that into creativity.
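The distinction can be made concrete with a toy example. The sketch below trains a tiny Markov-chain “songwriter” – a hypothetical stand-in for the far larger neural networks at issue, not any real system’s implementation – and illustrates the point at stake: after training, the model holds only word-transition statistics learned from its source text, not a verbatim copy of it.

```python
import random
from collections import defaultdict

# Toy corpus standing in for a training library of song lyrics.
corpus = "love love me do you know I love you".split()

# "Training": record which word tends to follow which.
# The model retains statistics, not the source text itself.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Emit a new word sequence by sampling the learned transitions."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("love"))
```

The output resembles the corpus without reproducing it wholesale – which is exactly why courts and commentators struggle to decide whether the training step was “reading” or “copying.”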

“That said, intent of the computer scientist may be a significant factor,” writes copyright blogger David Newhoff. “For instance, if the training of the AI will have a commercial purpose, this may suggest a requirement to license the works under copyright.”

Watch for this issue to get very sticky as technology companies find increasingly efficient ways for their AIs to synthesize massive libraries and produce works that are indistinguishable from human-made creations. “Tech companies may not use raw silicon for free,” writes Newhoff, “so why should they get to exploit millions of creative works for free, no matter what they’re turning that data into?”

2) Who owns the copyright to work that an AI produces on its own?

In the 2013 movie Her, a futuristic operating system named Samantha (voiced by Scarlett Johansson) compiles letters written by her human companion, Theodore (Joaquin Phoenix), into a book and succeeds in having it published. The movie is concerned with deeper existential questions than licensing deals, but an enterprising creative might ask during this scene, “Who owns the copyright to this publication?” Is it Theodore, the primary writer of the book’s raw materials, or Samantha, the book’s producer – who also happens to be ostensibly non-sentient?

A recent settlement in a court case involving a selfie taken by a monkey, Naruto, is indicative of how the courts currently view this issue. An animal rights group representing Naruto wanted the animal to have the copyright and thereby collect the proceeds generated by the image. The owner of the camera on which the selfie was taken, however, argued that he should hold the copyright. In the end, the court ruled in favor of neither party. The monkey could not hold copyright because, among other things, monkeys have no need for the financial incentivization that copyright affords. And the camera’s owner could not hold copyright because, despite his impassioned argument to the contrary, providing the equipment and circumstances for a photo to occur is not the same thing as actually taking the photo.

In other words, as Plagiarism Today put it in its analysis of the case: “No human, no copyright protection.”

But does this mean that companies that are investing millions of dollars into machines with some capacity to create will not be able to collect revenues on the works their machines produce? After all, paint manufacturers do not own the copyright to paintings made with their product, and Microsoft does not own the copyright to stories written using Word. Why should it be any different with an AI tool built by programmers, no matter how sophisticated that tool’s output becomes – even sophisticated enough to emulate a human consciousness?

As AI evolves to make and release creative works at scale (one AI music startup claims the industry should start preparing “for a world of 10m original songs per day”), look for this issue to heat up in a hurry – and maybe spark fundamental changes to copyright law as we know it.

3) Will AI help artists get paid better?

That 10-million-songs-per-day bounty mentioned above? If it materializes, it is going to generate a ton of revenue – and music revenue, from royalties to residuals to merchandise sales, is already difficult for humans to keep track of in the digital era, even at today’s rate of production. By some estimates, more than $2 billion in annual royalties goes uncollected around the world – and AI startups are stepping in to help.

Exactuals, for example, is a payments company that works with music labels, publishers, and creatives to streamline metadata and make sure the right people get paid fairly for their work. One of its core products, RAI, is an open API that uses machine learning to sift through the data deluge and resolve conflicts in ownership attribution and other areas.
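The kind of conflict resolution such tools perform can be sketched in heavily simplified, hypothetical form – this is not Exactuals’ actual algorithm, and the titles, writers, and threshold below are invented for illustration – as fuzzy matching between messy royalty line items and a canonical rights database:

```python
from difflib import SequenceMatcher

# Hypothetical canonical rights database: title -> registered writer.
rights_db = {
    "let it be": "Lennon/McCartney",
    "yesterday": "Lennon/McCartney",
    "something": "George Harrison",
}

def best_match(messy_title, threshold=0.6):
    """Match a messy title from a royalty statement to a canonical record.

    Returns (canonical_title, writer), or None if no candidate clears
    the similarity threshold.
    """
    messy = messy_title.lower().strip()
    scored = [
        (SequenceMatcher(None, messy, canon).ratio(), canon)
        for canon in rights_db
    ]
    score, canon = max(scored)
    return (canon, rights_db[canon]) if score >= threshold else None

# Royalty line items as they might arrive: typos, stray spaces, odd casing.
for item in ["Let It Be ", "YESTRDAY", "unknown track"]:
    print(item, "->", best_match(item))
```

At the scale of millions of statements, even this crude string-similarity approach hints at why machine learning is attractive here: the matching rules are too messy to enumerate by hand.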

The efforts of companies like Exactuals fall squarely in the “AI could be a friendly helper to humans” camp (as opposed to the “AI will mark the end of humanity” camp), offering a service that actually improves the lives of creative professionals by removing a task that is truly burdensome. One could see a similar offering simplifying complex financial matters in film and television, book publishing, and all the other creative industries where simply getting paid can be a full-time job in itself.

THE HUMAN PROBLEM

Ultimately, and unsurprisingly, the behavior of AI reflects the behavior of its creators. AI on social media platforms has shown racial bias because humans are racially biased. And perhaps YouTube’s AI does a poor job of fair copyright enforcement because YouTube, along with its parent company Google, has blatantly profited from pirated content since its inception.

What might happen if the tech industry used its power and wealth to build machines that were more respectful of copyright than any army of humans could ever be? That somehow honored and even elevated the human-made works they repurposed or learned from?

For that to happen, the people in Big Tech would first have to change their own collective attitude toward copyright and finally embrace it as the Founders intended – as a tool for advancing science and the arts, and not as a hindrance to the growth and advancement of their own products and profits. In other words, they would have to, finally, design their offerings with care for the people they impact. 

Now there is a novel concept – not a robot uprising but a human one.