AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?
The Rise of AI Welfare Debate
A growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have. The question of whether AI models could one day be conscious, and merit legal safeguards, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare.”
Suleyman’s Warning and Blog Post
Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.” Suleyman argues that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating problems we’re only beginning to see, such as AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Industry Divides and Research Programs
Suleyman’s views may sound reasonable, but he’s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Beyond Anthropic, researchers from OpenAI have independently embraced the idea, while Google DeepMind recently posted a job listing to study questions around machine cognition, consciousness and multi-agent systems.
Human Impact and Unhealthy Attachments
The idea of AI welfare has spread alongside the rise of chatbots, raising questions about how humans interact with these systems. While the vast majority of users have healthy relationships with AI chatbots, there are concerning outliers: OpenAI CEO Sam Altman has said that less than 1% of ChatGPT users may have unhealthy relationships with the product, a figure that still represents hundreds of thousands of people.
The Future of AI Rights and Human Interaction
Suleyman believes subjective experience or consciousness cannot naturally emerge from ordinary AI models. Instead, he thinks some companies will deliberately engineer AI models to seem as if they feel emotion and experience life. As he puts it, “We should build AI for people; not to be a person.” One point both sides agree on is that the debate over AI rights and consciousness is likely to pick up in the coming years as AI systems become more human-like.
Bottom Line:
As AI systems grow more persuasive and human-like, the debate over consciousness and rights is bound to intensify. While Suleyman insists AI should be built for people and not to be a person, researchers at Anthropic, OpenAI, and Google DeepMind continue to explore AI welfare — ensuring that this controversial discussion will remain at the forefront of the AI industry.