The New York Times’s lawsuit against OpenAI and Microsoft highlights an uncomfortable contradiction in how we view creativity and learning. While the Times accuses these companies of copyright infringement for training AI on their content, this ignores a fundamental truth: AI systems learn exactly as humans do, by absorbing, synthesizing and transforming existing knowledge into something new.
Consider how human creators work. No writer, artist or musician exists in a vacuum. For example, without ancient Greek mythology, we wouldn’t have DC’s pantheon of superheroes, including cinematic staples such as Superman, Wonder Woman and Aquaman. These characters draw unmistakable inspiration from the likes of Zeus, Athena and Poseidon, respectively. Without the gods of Mount Olympus as inspiration, there would be no comic book heroes today to save the world (and the summer box office).
This pattern of learning, absorbing and transforming is precisely how large language models (LLMs) operate. They don’t plagiarize or reproduce; they learn patterns and relationships from vast amounts of information, just as humans do. When a novelist reads thousands of books throughout their lifetime, those works shape their writing style, vocabulary and narrative instincts. We don’t accuse them of copyright infringement because we understand that transforming influences into original expression is the essence of creativity.
Critics will argue that AI companies profit from others’ work without compensation. This argument misses a crucial distinction between reference and reproduction. When LLMs generate text that bears stylistic similarities to works they trained on, it’s no different from a human author whose writing reflects their literary influences. The output isn’t a copy; it’s a new creation informed by patterns the system has learned.
Others might contend that the commercial nature of AI training sets it apart from human learning. However, this ignores how human creativity has always been commercialized. Publishing houses profit from authors whose styles developed by reading other published works. Hollywood studios earn billions from films that remix existing narrative traditions. The economy of human creativity has always involved building commercial works upon the foundation of cultural knowledge.
Moreover, this economic reality aligns perfectly with the Constitution’s original intent for intellectual property. Article I, Section 8 explicitly empowers Congress “to promote the Progress of Science and useful Arts” through copyright law — not simply to protect content creators, but to advance human knowledge and innovation. Allowing AI systems to learn from existing works furthers this constitutional purpose by fostering new economic activity and technological progress.
It’s also crucial to recognize that when verbatim copying occurs in AI outputs, it almost always results from specific user prompts, not the inherent nature of the AI system itself. This highlights that LLMs are tools, capable of being used responsibly or abused for copyright infringement depending entirely on how users interact with them. That LLMs can be used to violate copyrights is little different from the fact that a hammer can be used as a deadly weapon. Common sense tells us that a hammer’s potential for violent assault doesn’t justify treating it as an inherently dangerous weapon, because such misuse is the rare exception rather than the norm.
The Supreme Court gave this logic legal force in Sony Corp. v. Universal City Studios (1984), ruling that VCRs were not illegal because they had “substantial non-infringing uses,” despite their potential to be used for copyright violations. That precedent gives courts the legal framework to side with AI companies today, as LLMs clearly offer tremendous value entirely separate from any potential copyright concerns.
While there remains a good chance that OpenAI will emerge victorious in its legal battle, we should not rely on courts alone to reach the correct conclusion in these cases. Congress must act to clarify copyright law for the AI age, just as it did when photography and recorded music disrupted prior understandings of intellectual property.
When photography first emerged in the 19th century, courts struggled to determine whether photographs deserved copyright protection or were merely mechanical reproductions of reality. Congress eventually stepped in, recognizing photography as a creative medium deserving protection. Similarly, when player pianos and phonographs emerged, enabling mechanical reproduction of music, Congress created the compulsory licensing system in the 1909 Copyright Act rather than allowing copyright holders to block the technology entirely.
Today’s situation demands similar legislative vision. Rather than risk a judicial interpretation that strangles innovation, Congress should move immediately to establish a clear framework that recognizes AI training as fundamentally transformative and non-infringing.
Nicholas Creel is an associate professor of business law at Georgia College & State University. The views expressed are his own.