Authored by Jeffrey Tucker via The Epoch Times,
The New York Times has filed a major lawsuit against OpenAI and Microsoft for copyright infringement. The paper claims that these companies have been scraping NY Times content to train ChatGPT and other artificial intelligence products. It cites real injury here: People are using AI tools for information rather than subscribing to The NY Times, and so The NY Times is losing subscription and advertising revenue.
My first reaction is: That explains so much!
In particular, it shows why, on any topic regarding politics, news, public health, climate change, or anything even mildly controversial, ChatGPT comes across as so stupidly conventional and ignorant of the deeper literature. It reads like The New York Times precisely because the AI engine is using The New York Times as its trainer! That truly does account for the core of the problem.
True, there are thousands of fun things you can do with AI. It can debug software. It can compose nice music and paint pretty pictures. It can slice and dice videos with nice results for TikTok. It can write an instant poem or lyrics about anything. It can instantly bang out an article on any topic. In every case, the results are delightful and very impressive.
And yet in every case, the results are obviously generated by a machine. Once you learn to recognize the telltale signs, it is unmistakable. And then the whole experience becomes boring and unimpressive.
People ask me whether, as a writer, I feel threatened by this machine learning and instant prose generation.
For me, it is quite the opposite. Good writing and good thought come from a spark that only the human mind can generate. No matter how sophisticated AI gets, it can never reproduce this. In fact, I find it hilarious just how bad this software really is.
For example, just now I asked AI to compose a 350-word essay on AI and copyright in the style of Jeffrey Tucker. It generated some of the most mind-numbing blather I’ve read in years, saying almost nothing of any significance but saying it in clean English prose that has the feel of authenticity while being barren of the real thing.
The final paragraph of the result: “Ultimately, the intersection of AI and copyright necessitates thoughtful reflection, interdisciplinary collaboration, and an adaptive legal framework. As technological progress propels us into uncharted terrain, striking the right chord between attributing human agency and embracing the transformative power of AI holds the key to a harmonious coexistence in the realm of digital creativity.”
Eye roll! If I read that anywhere, my spidey sense would immediately warn me that the author is just making stuff up. More precisely, it is not making stuff up but merely regurgitating familiar forms of conventional prose in a way that mimics thought without the slightest spark of creativity, much less any depth of meaning. In other words, AI writes like a highly precocious 5-year-old, capable of astonishing feats of imitation but utterly incapable of actual intelligence. It’s like a sophisticated parrot: it seems to speak English but does not really do so. It’s great for parties but not much else.
Consider the copyright case alone. The New York Times claims to own its words and sentences and is furious that ChatGPT takes them verbatim, allowing people to gain access to its ideas without having paid for them. If that is true, The NY Times should have a major beef with the whole of corporate media and academia too, since it long ago set out to be the standard-bearer of approved thought and conventional wisdom, and they have been echoing it ever since. AI is merely amplifying that echo.
I have no clue how the courts are going to come down on this question. Regardless, the implications of the case are rather broad. OpenAI and Microsoft admit that they have been using NY Times content for their services but say that this constitutes fair use under the law.
The truth is that the phrase “fair use” does not have a rigorously strict definition. It is what the courts say it is. It is an exception to the rules of copyright that bows to the reality that information cannot be contained the way physical property can. Without fair use, we would live in a preposterous world in which everyone would be required to forget whatever he learned by reading anything. So maybe this is fair use and maybe it is not.
A larger problem is the institution of copyright itself. Today it rests on the intuition that a creator should own his work. It did not start out that way, however. The original Statute of Anne (1709) amounted to a royal grant of monopoly privilege for publishers and authors, deployed mostly for the purpose of censoring dissident political and religious opinion. It also set off centuries of litigation in the commonwealth countries and in the United States.
The practical import of copyright today has very little to do with authors’ rights and mainly centers on the right of publishers to retain exclusive printing and distribution rights to works. Over the years, the term has been extended from 28 years to 70 years after the death of the author.
That’s how long publishers retain rights. In the old days, publishers would let books go out of print and the rights would revert to the author. No more. Now publishers keep their catalogs for the whole term, resulting in the odd situation in which the author loses all rights over his own work and only his grandchildren are in a position to reprint it.
It’s nuts, but that’s how the law works. There are hundreds of thousands of books published after 1930 that are still in copyright and have never been digitized. For all practical purposes, they are inaccessible in today’s world. They generate no royalties, and even the rights holders have forgotten about them. This is a giant tragedy.
The whole theory of copyright is wrong. It is based on the model of private property in real things. Ownership of real property is exclusive. If I have a fish, you cannot have the same fish. If I have a boat, you cannot have the same boat at the same time. That is why the social norm of property came about in the first place: to allocate rights of control over things that are scarce. It is designed to prevent conflict and bring peace.
But ideas, once created, are not scarce. You can take every idea in this article, and it takes nothing from me. Ideas are infinitely reproducible and therefore not like property at all. The attempt to make them into property requires state action and ends up creating industrial monopolies that benefit not authors but publishers. When authors get paid, it is called receiving “royalties,” as in a stream of money from a royal grant of privilege. There is nothing wrong with getting paid based on sales, but that can and does happen without copyright.
For example, you cannot copyright recipes, yet providing recipes for cooking is a highly lucrative business. You cannot copyright sports strategies and plays, yet there is a huge demand for books about them. The same is true of chess moves. And it was true of music in Germany until the 1880s: Bach, Beethoven, and Brahms composed without copyright, simply selling publishers access to their works. This did not diminish output; arguably it made the music better by ensuring a highly competitive marketplace.
In the early days, you could not copyright computer code either. That’s how spreadsheet technology became so dominant so quickly and transformed business life. Only later did copyright come along. Now any developer will tell you that the entire industry is gummed up by intellectual property claims. The same is true of many industries today. Hardly anyone is truly happy with the regime as it exists, except perhaps Disney, which has long lobbied for longer terms.
In any case, ChatGPT is doing nothing morally wrong by scraping The New York Times for content. I happen to think it is a bad business idea, because The NY Times is a known propaganda sheet and far from definitive on any topic. But that is the choice that OpenAI (wrongly named, since it too takes recourse to intellectual property) has decided to make. I hope the courts side with OpenAI, but that would be only a temporary fix for the much larger problem of the institution of copyright itself.
In conclusion, the intersection of AI and copyright necessitates thoughtful reflection, interdisciplinary collaboration, and an adaptive legal framework. Just kidding!
Tyler Durden
Sun, 12/31/2023 – 16:40