In summary
- Transhumanism was labeled a “death cult” by critics who argued that it failed to understand what it means to be human.
- Advocate Zoltan Istvan defended the movement as a humanitarian effort to end suffering, aging and death through technology.
- Philosophers and AI researchers warned that promises of digital immortality were misguided and posed unresolved ethical risks.
Transhumanism, a movement that seeks to defeat aging and death through technology, was denounced as a "death cult" during a recent debate between philosophers, scientists, and advocates of the movement, whose defenders rejected the accusation as misguided and reactionary.
The exchange took place on December 4 at the UK-based Institute of Art and Ideas’ “The World’s Most Dangerous Idea” event, where neuroscientist and philosopher Àlex Gómez-Marín argued that the movement functions as a pseudo-religion, one that aims to eliminate the human condition rather than preserve it.
“I think transhumanism is a death cult,” Gómez-Marín said. “I think transhumanism is a pseudo-religion dressed in techno-scientific language whose goal is to extinguish the human condition and tell everyone that we should cheer and applaud when this happens.”
The debate has circulated among technologists, philosophers, and ethicists for decades, but has taken on renewed urgency as research into artificial intelligence, biotechnology, and longevity advances. While its proponents argue that technology can save humanity from death, critics warn that the movement is based on fantasies of immortality.
More recently, a Galileo Commission report warned that transhumanist efforts to merge humans and machines could reduce human life to a technical system and sideline questions of meaning, identity, and agency.
The term "transhumanism" was popularized in the mid-20th century by biologist Julian Huxley and later developed by thinkers such as Max More, Natasha Vita-More, Ben Goertzel, Nick Bostrom, and Ray Kurzweil. Supporters such as biohacker Bryan Johnson and tech billionaire Peter Thiel have argued that technology could be used to transcend biological limits such as aging and disease. Critics have responded that the movement's goals would benefit only the ultra-rich and blur the line between science and religion.
"Dear humanity,
I'm building a religion.
Wait a second, I know what you're going to say. Hold that knee-jerk reaction and let me explain.
First, here's what's going to happen:
+ Don't Die becomes the fastest growing ideology in history.
+ Save the human race.
+ And gives way to…"
— Bryan Johnson (@bryan_johnson), March 7, 2025
Joining Gómez-Marín in the discussion were philosopher Susan Schneider, AI researcher Adam Goldstein, and Zoltan Istvan, a transhumanist author and political candidate currently running for governor of California. Istvan rejected Gómez-Marín's characterization and described transhumanism as an effort to reduce suffering rooted in biology.
Participants offered competing views on whether transhumanist ideas represented humanitarian progress, philosophical confusion, or an ethical misstep.
“Most transhumanists like me believe that aging is a disease, and we would like to overcome that disease so that you don’t have to die, and the loved ones you have don’t have to die,” Istvan said, linking the view to personal loss.
“I lost my father about seven years ago,” he said. “We have all accepted death as a natural way of life, but transhumanists do not accept it.”
Gómez-Marín said the greatest risk lies not in specific technologies but in the worldview that guides their development, particularly among technology leaders who, he argued, know about technology but do not know humanity.
“They know a lot about technology, but they know very little about anthropology,” he said.
For her part, Schneider told the audience that she once identified as a transhumanist and drew a distinction between using technology to improve health and endorsing more radical claims, such as uploading consciousness to the cloud.
“There is a claim that we will upload the brain,” Schneider said. “I don’t think you or I can achieve digital immortality, even if the technology is there, because you would be killing yourself and another digital copy of you would be created.”
Schneider also warned that transhumanist language is increasingly being used to divert attention from immediate policy issues, including data privacy, regulation, and access to emerging technologies.
Adam Goldstein, an AI researcher, told the audience that the debate should focus less on predictions of salvation or catastrophe and more on the decisions already being made about how the technology is designed and governed.
“I think if we want to be constructive, we need to think about which of these futures we really want to build,” he said. “Instead of assuming that the future will be like this or that, we can ask ourselves what a good future would be.”
The central question, Goldstein said, is whether humans choose to design a cooperative future with artificial intelligence or approach it from a place of fear and control, a choice that could shape humanity's future once AI systems surpass human intelligence.
“I think we have good evidence of what a good future is from the way we have navigated differences with other human beings,” he said. “We have discovered political systems, at least some of the time, that work to help us bridge differences and achieve a peaceful solution to our needs. And I see no reason why the future can’t be like that with AI as well.”