Artifact 01
The Great AI Heist:
How Machines Are Learning From Your Life's Work
Publication
The Atlantic · Technology · Opinion · Guest Essay
Date
March 31, 2026
Course
ENGW 3304 — Advanced Writing in Business Administration · Northeastern University
Target Audience
General educated public; Atlantic readership; anyone who uses AI tools
Sections
Introduction to Artifact
Full Text
Reflection
Introduction to Artifact
This Op-Ed was written for ENGW 3304: Advanced Writing in Business Administration at Northeastern University, formatted for publication in The Atlantic as a guest essay in the Technology and Opinion section. The assignment asked students to identify a contested public issue at the intersection of technology and society, take a clear and defensible position, and write for a broad, non-specialist audience in the voice and style of a specific publication.
The central challenge of this assignment was not simply to have an opinion, but to earn the reader's agreement through evidence, analogy, and urgency — without the scaffolding of academic citation style or the safety of hedged academic prose. Writing for The Atlantic meant writing for readers who are intelligent, skeptical, and time-constrained. Every paragraph had to justify its existence. Every claim had to be grounded in a credible source. And the argument had to move — from hook to stakes to evidence to call to action — with the momentum of good journalism.
In producing this piece, I developed and refined several skills that are directly transferable to professional writing contexts: the ability to frame a complex technical issue for a non-technical audience; the discipline of integrating sources smoothly without disrupting the flow of an argument; and the craft of writing a conclusion that feels earned rather than merely appended. I also practiced the rhetorical skill of anticipating and addressing counterarguments — in this case, the tech industry's "fair use" defense — in a way that strengthens rather than weakens the central claim.
Full Text
Tech companies are building the future on the backs of human creativity — without asking for permission or offering a dime.
Imagine you spent years perfecting a craft — painting, writing, composing — and then shared that work with the world. Now imagine a corporation quietly collected every piece you ever made, fed it into a machine, and used it to build a product worth billions of dollars. They never asked for your permission. They never told you it was happening. And when their product began to replace you in the marketplace, they called it progress.
This is not a thought experiment. It is the operating model of the modern artificial intelligence industry. According to Brito (2024), this extraction of creative work is happening right now, at a scale most people have not yet grasped.
"The 'intelligence' of these systems is built entirely on the uncompensated, uncredited labor of human beings — and we are only beginning to reckon with what that means."
To understand why this matters, it helps to ask a question most users never think to ask: What, exactly, are these systems learning from? As generative AI tools become embedded in daily life — writing emails, designing marketing materials, generating code — the answer has become harder to ignore. Every article, every digital painting, every photograph, every line of code ever posted online has been harvested and fed into these algorithms. Chayka (2023) documents how this happened without the knowledge or consent of the people who created the work in the first place.
The stakes could not be higher. Artificial intelligence is advancing faster than our legal and ethical frameworks can follow, and the window for meaningful intervention is narrowing. The decisions we make today — about consent, compensation, and intellectual property — will define the relationship between human creativity and machine intelligence for generations. If we get this wrong, we will not simply have failed a few artists. We will have dismantled the economic foundation that makes human creativity possible.
The Human Cost of Machine Learning
To see what this looks like in practice, consider the visual arts. When a user asks an AI image generator to produce a picture "in the style of" a specific living illustrator, the system draws on thousands of that artist's copyrighted works. Those works were absorbed without permission during training. The original artist receives no credit, no royalty, and no notification — often the artist does not even know the work was used. Meanwhile, the company that built the tool profits from every query.
Nor is this an isolated problem. Writers, photographers, musicians, and coders have all discovered that their life's work has been used as raw material for systems that now compete directly against them. A growing wave of lawsuits from authors, visual artists, and news organizations reflects the same underlying grievance. As Vincent (2024) reports, the AI industry has built its fortune on an act of mass, unconsented extraction — and creators are finally fighting back.
In response, the tech industry typically invokes the doctrine of "fair use." The argument goes like this: training an AI on publicly available data is no different from a student reading books in a library. But this analogy does not hold. A student reads to learn and then creates something original. A corporation, by contrast, processes billions of data points to build a commercial product that directly replicates and replaces the work it was trained on. As Heikkilä (2023) argues, that is not education. That is appropriation at industrial scale.
What Must Change
The solution is not to halt the development of AI. These tools offer genuine benefits — in medicine, education, accessibility, and scientific research. The question, then, is not whether AI should exist. It is whether it should be permitted to exist on terms that are fundamentally unfair to the people whose work made it possible.
At minimum, three things must change. First, AI companies should be required to obtain consent before using copyrighted work for commercial training purposes. Second, creators whose work is used should receive compensation — whether through licensing agreements, collective royalty pools, or other mechanisms yet to be designed. Third, AI-generated content should be clearly labeled, so that audiences can distinguish between human and machine-made work and make informed choices.
None of these demands is radical. They ask only for the same protections we extend to every other form of intellectual property. The only reason those protections do not yet apply to AI training data is that the technology moved faster than the law. The companies that benefited from that gap have spent considerable resources ensuring it stays that way.
The creative class built the internet. Their words, images, and ideas are the reason these AI systems have anything to learn from at all. They deserve a seat at the table — not as an afterthought, but as the foundation on which this entire industry rests. The question is not whether we can afford to compensate them. It is whether we can afford not to.
Reflection
Writing this Op-Ed taught me that public argumentation is a discipline, not just an opinion. The most difficult part was not forming a position — it was earning the reader's trust quickly enough to hold their attention through an argument that challenges a powerful industry. I learned to write with urgency without sacrificing precision, and to use sources not as footnotes but as building blocks of credibility. These are skills I intend to carry into every professional communication I produce.