Avoiding Copyright Literalism and the Fairness of Computer-Generated Works

The last six months or so have seen the seemingly sudden appearance of several startlingly powerful tools that create complex new textual and visual works in response to relatively simple prompts. You probably know at least a couple by name: chatGPT (for text) and Stable Diffusion (for images) are the ones that seem to have taken over my social feeds. These tools are creating a buzz in part because the works they generate are of sufficient quality that they could pass for or replace the work of humans, at least in some contexts.

This raises a laundry list of policy questions, some as old as the story of John Henry (will machines put humans out of work?), others as 21st Century as data sovereignty (how can nations govern data pertaining to their citizens when it flows seamlessly around the globe?). Lots of smart people have opined on this already, so I don’t want to go too deeply down this rabbit hole myself.

In copyright world – including in the inevitable raft of lawsuits – the question has been put more narrowly: do these computer tools violate the copyrights of the works that are used to “train” them? The technical legal answer I favor is straightforward, and the very short version is that there’s no meaningful difference between these tools and the other “non-consumptive”/computational uses that courts have already blessed as fair use many times over. These uses are fair because precedent pretty clearly says they are.

Maybe I’m being too glib about the technical legal answer, but in any case, I want to answer a different question: Why should we embrace this (IMO) fact about the law, that fair use generally protects tools like chatGPT and Stable Diffusion against copyright liability? I think the answer is rooted in copyright’s purpose, and the corresponding limits in its scope.

In a nutshell, my argument is this: The exclusive rights in copyright law are not well-tailored to the law’s public interest purpose. Applied broadly and literally (I’ll call this “copyright literalism”), the exclusive rights in the law threaten to chill uses that benefit the public and that do not result in the kind of unfair competition that copyright was meant to prevent. Fair use exists in part to shield legitimate uses from copyright literalism and contain copyright to its intended domain. The application of copyright’s exclusive rights to computer-generated works is copyright literalism par excellence: it punishes literal copying even though the final result is non-infringing and the putative harm to the copyright holder (the creation of new *non-infringing* works that are cheaper and easier to produce) is not the kind of harm that copyright exists to prevent. Even if we have legitimate concerns about the impacts of these technologies, we should recognize these are not copyright concerns and stand by fair use and the robots’ right to read.