Does AI Undermine Human Creativity?
The danger is not help. The danger is surrender.
A recent study I came across made me return to a question I have been thinking about for some time. Titled “When ChatGPT Is Gone: Creativity Reverts and Homogeneity Persists,” the paper reports that ChatGPT improved creative-task performance while students were using it, but that this boost disappeared once ChatGPT was removed. More importantly, it found that ChatGPT-assisted work became more homogeneous, and that this homogenizing effect persisted after the tool was taken away.
The study went viral not because of this quite sensible finding, but because of a much stronger claim attached to it: that anyone who has used ChatGPT for writing or brainstorming in the last six months may have suffered permanent creative damage.
That does not seem to be what the study shows. But the question itself is valid: will AI undermine human creativity? Until further research sheds more light on this question, let me share what I think.
I do not think AI will necessarily undermine human creativity. But it may. The question, then, is under what conditions.
The first condition concerns where we think creativity comes from.
Does it come from the dragging, often burdensome activity of production itself? Or does it begin as a spark, an intuition, an image, a sentence, a melody, a connection, which then has to be pursued, tested, developed, and finally given form?
I do not think this is an either-or question. A serious work, whether academic writing, a song, a poem, or a novel, is rarely the product of pure inspiration alone. But it is also not produced by mechanical labor alone. It usually comes from a combination: a living impulse at the beginning, and a long process of judgment, revision, discipline, and execution afterward.
The danger of AI is obvious if it replaces the human role at the decisive stages of this long process. If it kills the impulse or impairs the judgment. If AI supplies the idea, the structure, the language, and the final decision, then the human being has not created much. He has merely selected from available outputs.
But if the impulse remains human, and if the judgment remains human, then AI may play a different role. It may reduce some of the drag between the first impulse and the finished form.
But the process does not end when the maker finishes the work. The work enters the world and waits to be judged. That is the second condition, and it concerns originality.
Originality concerns the place of a work among other works, and the judgment passed on it by others. A person may feel that he has created something original. That feeling matters. But originality is rarely settled by the maker alone. It is tested by others: readers, listeners, viewers, critics, peers, rivals, and sometimes by time itself.
Even though we create alone, we do not create in a vacuum. Somewhere in the act, there is always an imagined audience. We do not only ask, “Does this feel new to me?” We also ask, “What will this or that person think? Will they recognize something new in it?”
Here too AI presents a danger. But again, the danger is not automatic. The danger begins when human beings allow AI not only to help produce works, but also to define the standards by which works are judged. If AI tells us what is creative, what is original, what deserves attention, and what should be ignored, then the human community has surrendered judgment itself.
This is different from using AI while still enforcing human criteria. If writers use AI to test a sentence, compare a structure, sharpen a title, or clarify an argument, human judgment has not disappeared. It remains the authority. The problem begins when writers, readers, publishers, and audiences increasingly ask AI to decide what is worth writing, reading, publishing, or admiring. Then originality is in danger here too.
A work becomes original not because a machine labels it so. Not even because its maker believes it so. It becomes original when human beings find in it something worth noticing, preserving, disputing, or loving.
So the dystopian future, if it comes, will not come simply because AI writes, composes, or paints. It will come if human beings abandon the one role they cannot abandon without ceasing to be fully human: judgment.
The judgment that says: this is good, this is false, this is empty, this is moving, this is derivative, this is alive.
* * *
Is such a dystopian future awaiting us? A world in which AI produces, AI judges, and human beings merely go along?
I am not sure.
I am more Dostoevskian on this. In Notes from Underground, Dostoevsky observes something deeply human: people do not always obey the clean system. Sometimes they resist precisely because the system is too clean, too rational, too complete.
I believe there will always be restless minds that refuse to accept a managed consensus as truth. There will always be someone who calls the performance a performance, the miracle a trick, the judgment a managed outcome.
There will always be a child, as in Andersen's tale, who says the emperor has no clothes.
If such people remain, then AI will not kill creativity.
If no such people remain, then the problem is no longer AI. The problem is that human beings have surrendered judgment.

