AI and Creativity: An Early Warning System?
Here's why the new creative tools matter in the wider conversation on AI risk.
Prominent thinkers and researchers—some of them deeply involved in building AI systems—have raised the alarm that advanced AI could pose serious, even extinction-level, threats to humanity. Others argue that's unlikely, or that the conversation is premature. The debate is ongoing and ever-evolving within the AI safety sphere.
Wherever you land on the spectrum of “P(doom)” (short for “probability of doom”), one thing is clear: more people than ever are now thinking seriously about what it means to be human in the age of machines. And in that light, some might ask:
Why talk about creativity? Why focus on AI art, ideas, music, or storytelling when the bigger questions are about survival?
I believe these questions are short-sighted. When a major societal shift begins, the earliest impacts often show up in culture before they show up in systems. We've seen this play out many times throughout history (I'll provide some examples shortly). For this reason, I think the discourse around AI in the creative industries is far more useful than it's given credit for, and has a serious part to play in the wider conversation around AI risk.
I believe AI in the creative space isn’t a 'cute' topic, but the indication of something more profound. It's one of the reasons I decided to create this platform. The idea that culture feels a shift before systems catch up is something we’ve seen before. Cultural response is often the early warning system, the dreamscape, or the mirror before society consciously processes what’s happening.
Here are a few standout historical examples that reflect that.
Some historical examples of culture moving before systems

Young "hippie" standing in front of a row of National Guard soldiers, across the street from the Hilton Hotel at Grant Park.
1. The 1960s Counterculture (Before Policy Reform)
Culture moved first:
Music, fashion, art, and protest (think Dylan, Hendrix, Woodstock, underground zines) began openly rejecting war, capitalism, and conformity. This was years before major legal or policy changes regarding civil rights, gender roles, or the Vietnam War.
System caught up later:
It took years for governments, institutions, and mainstream media to respond—and when they did, it was often reactive.

2. Science Fiction Anticipating Tech + Ethics
Culture moved first:
Books like 1984, Brave New World, and films like 2001: A Space Odyssey or Blade Runner explored themes of surveillance, AI, and corporate dystopia long before those conversations were happening in tech policy or law.
System caught up later:
Discussions around data privacy, digital ethics, and AI alignment only really took off after those fears had been culturally seeded.

To celebrate the 100th anniversary of the Dada movement, a Pseudo-Symposium on 'Data Dada' was held at Stanford in 2016.
3. Dada and Surrealism Post-WWI
Culture moved first:
Artists reacted to the trauma and absurdity of war by creating anti-art movements like Dada—irrational, rebellious, emotionally raw. Surrealism followed, exploring the subconscious.
System caught up later:
Formal psychological theory (e.g. Freud, Jung), institutional trauma care, and shifts in global politics came after the emotional cultural outpouring.

4. The Internet in the 1990s
Culture moved first:
Blogs, early websites, Napster, memes, MySpace—these gave people new forms of identity and connection, and they evolved faster than governments or schools could comprehend.
System caught up later:
Only after culture had changed did systems start to develop things like data laws, content moderation rules, or digital identity regulation.
Examples of modern friction in arts, culture and AI
It's easy to think of art and creativity as optional. But they're often the first place change becomes visible. People in the creative industries have been 'playing around' with AI for quite some time, but it's only in the last few years that AI systems and tooling have begun disrupting entire industries and the workflows within them.
I believe we're seeing AI in the 'culture moving' phase, in which creative professionals react to and experiment with new tools (and often protest against them), even as regulation, philosophy, and safety efforts are still catching up. This process is creating multiple friction points that are making headline news. Here are just a few of them.

1. Meta's AI Training on Copyrighted Works is Sparking Legal and Ethical Debates
Meta is facing multiple lawsuits for allegedly using pirated books from shadow libraries like LibGen to train its AI models, including Llama. Notably, internal documents suggest that CEO Mark Zuckerberg approved this practice despite legal concerns. Authors such as Sarah Silverman and Ta-Nehisi Coates are among the plaintiffs challenging Meta's actions.
This controversy has ignited broader discussions about the ethics of using copyrighted material without consent in AI development, highlighting the tension between technological advancement and intellectual property rights.

2. AI-Generated Art Mimicking Studio Ghibli Style Has Raised Concerns
OpenAI's image generator recently faced backlash for producing artwork that closely resembled the distinctive style of Studio Ghibli. Critics argue that this not only infringes on the creative identity of the original artists but also raises questions about the originality and ethical implications of AI-generated art. OpenAI responded by restricting prompts related to Ghibli's style, but the incident underscores the ongoing debate about artistic authenticity and the boundaries of AI creativity.
I attempt to provide a new lens on this topic in my piece The Patterned Mind: What AI Can Teach Us About Human Creativity.
3. Posthumous Music Composition Through AI and Lab-Grown Brain Cells is Freaking Everyone Out
An innovative art installation titled "Revivification" features music composed by a lab-grown mini-brain developed from the cells of the late composer Alvin Lucier. This project blurs the lines between life and technology, raising profound questions about consciousness, creativity, and the role of AI in extending human artistic expression beyond death. It exemplifies how cultural experimentation often leads the way in exploring the ethical and philosophical dimensions of emerging technologies.
Moving forward
Culture has always done this. It doesn't wait for permission to respond. It feels, intuits, expresses—often before it understands. Which is precisely what makes the creative space so important in the age of AI. Not as decoration, but as an arena where the real frictions—authorship, ownership, personhood—play out in raw form.
The tension around Meta’s use of copyrighted work, the Ghibli-style art backlash, and experimental projects like lab-grown composers aren’t just niche creative debates. They’re prototypes of future crises. They reveal how easily AI can replicate, obscure, or even overwrite human expression—and how unprepared our legal, ethical, and technical frameworks are to respond. These stories matter not because they’re scandalous, but because they are culturally legible symptoms of deeper systemic gaps. Creativity is the canary in the AI coal mine.
We are feeling deep friction between machine and meaning in the creative arts because that's where we test what it means to be original, to have agency, to be human. And it's likely where AI safety work will need to learn how to operate—not just at the level of control, but at the level of culture.
The AI safety conversation often lives in the realm of threat models and edge cases. But there is a quieter, more human version of that same question unfolding right now—in galleries, in music studios, in writing rooms. It is not dramatic, but it is no less vital. And it may be here, not in labs or protocols, where the deeper insights emerge first. We would do well to listen.