In a move reflecting the growing complexity of artificial intelligence, Google DeepMind has brought a professional philosopher on board to examine one of the most debated frontiers in technology: machine consciousness. The appointment signals a broader effort by leading AI firms to address not just technical challenges but also the ethical and philosophical implications of advanced AI systems.
The philosopher, Henry Shevlin, announced his new role on social media platform X, stating that he will be working on issues related to artificial general intelligence (AGI) and the evolving relationship between humans and intelligent machines.
Shevlin’s role at DeepMind will centre on areas that are attracting growing attention in AI research. He confirmed that he will be “focusing on machine consciousness, human-AI relationships, and AGI readiness.”
These areas are central to the long-term vision of AI development. While current AI systems excel at specific tasks, AGI refers to machines that can think, reason, and learn across a wide range of domains, much as humans do.
In addition to technical exploration, Shevlin is expected to contribute to shaping ethical frameworks for AI systems. His role involves ensuring that AI models align with human values, helping prevent scenarios where machines could act in ways that conflict with societal interests.
The hiring of philosophers by AI firms is not entirely new. Anthropic, for example, employs the philosopher Amanda Askell, whose work includes shaping the character and moral reasoning of its AI model Claude.
This trend reflects the industry’s recognition that AI development is no longer just a technical endeavor. As systems become more advanced, questions around consciousness, decision-making, and ethical behavior become increasingly important.
DeepMind’s decision to bring in philosophical expertise highlights its proactive approach to tackling long-term risks. As AI capabilities expand, concerns have grown about machines potentially prioritising their own objectives over human welfare.
These concerns have been widely explored in popular culture, particularly in films like The Matrix, which depict scenarios where AI systems gain autonomy and challenge human control.
The growing discourse around AI risks is not limited to theory. Reports have indicated increasing public anxiety about the potential consequences of advanced AI systems.
In one recent incident, a 20-year-old man allegedly attacked the San Francisco residence of OpenAI chief executive Sam Altman, reportedly driven by fears of human extinction due to AI. While such cases are rare, they underscore the level of concern and misunderstanding surrounding emerging technologies.
By integrating philosophical perspectives into AI development, companies aim to build trust and ensure that AI systems operate within clearly defined ethical boundaries. Teaching machines concepts of right and wrong could play a crucial role in shaping their decision-making processes.
Henry Shevlin is affiliated with the University of Cambridge, where he is a researcher at the Leverhulme Centre for the Future of Intelligence. He has indicated that he will continue his academic work alongside his new responsibilities at DeepMind.
Shevlin completed his PhD at the City University of New York and holds both a BPhil and a BA from the University of Oxford. His academic work focuses on philosophy of mind, cognitive science, and the ethical implications of artificial intelligence.
The inclusion of philosophers in AI development teams represents a significant shift in how technology companies approach innovation. It acknowledges that building advanced AI systems requires not only engineering expertise but also a deep understanding of human values, consciousness, and ethics.
As AI systems become more integrated into daily life—from virtual assistants to autonomous systems—the nature of human-AI relationships is expected to evolve. Philosophers like Shevlin will play a key role in guiding how these relationships are defined and managed.
Google DeepMind’s decision to hire a philosopher marks an important step toward more responsible and human-centric AI development. By addressing questions of machine consciousness, ethics, and human-AI interaction, the company is preparing for a future where artificial intelligence plays an even more significant role in society. As AI continues to advance toward AGI, integrating philosophical insights could prove essential in ensuring that technology evolves in alignment with human values and societal well-being.