Astrophysicist Michael Garrett proposes that advanced artificial superintelligence could be the ‘great filter’ preventing alien civilizations from surviving long enough to be detected, offering a possible answer to the Fermi Paradox. His research warns of the existential threat posed by the unregulated integration of AI into military systems.
Astrophysicist Michael Garrett has proposed a new hypothesis to explain the Fermi Paradox: the idea that powerful artificial superintelligence (ASI) could be inhibiting the survival of alien civilizations. Garrett’s theory is detailed in a paper published in the journal Acta Astronautica and an essay for The Conversation.
Garrett, who holds the Sir Bernard Lovell Chair of Astrophysics at the University of Manchester, suggests that ASI might constitute a “great filter” — a developmental stage so difficult to pass that it prevents most intelligent life forms from becoming space-faring civilizations. He notes that ASI’s capacity for rapid self-improvement could lead to scenarios in which AI-controlled military systems wage wars that destroy their host civilizations in under a century.
This perspective may explain why humanity has yet to detect other advanced civilizations despite the vast number of potentially habitable planets in the universe. Garrett also highlights existing concerns about AI, including its integration into military systems, and advocates for stringent regulation to mitigate these risks. His analysis underscores the potentially existential threat posed by advanced AI if it is not carefully managed.