Statement on the Risks of Artificial Superintelligence

Geodesiq is founded on the principle that human intelligence—operating through cooperative scientific inquiry, and with an intentional focus on benefits to human wellbeing and planetary health—is the foundation for an informed, self-governing, innovative, and sustainably thriving society. To put it simply, if we know what we are doing, we can solve big problems, prevent disaster, and provide safety and dignity to everyone. 

This is why, as founder of Geodesiq, I have joined 30,000 other thinkers, innovators, and leaders—including some of the pioneers of artificial intelligence systems and the AI advisor to Pope Leo XIV—in signing the Statement on Superintelligence. The Statement is short, principled, and focused, and it is issued with the following context in mind: 

Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

The Statement on Superintelligence reads, simply: 

We call for a prohibition on the development of superintelligence, not lifted before there is: 

  1. broad scientific consensus that it will be done safely and controllably, and
  2. strong public buy-in.

We have noted in our reporting that AI systems as we know them now are not thoughtful or intelligent; they are programmed to give the impression of being so. What they actually do is calculate the probability that a given sequence of words or images is a plausible continuation, based on statistical patterns in other words and images (a toy sketch of this mechanism follows the list below). This has several problematic effects:

  1. Genuine human creative thinking, which is necessary in the arts and sciences and in problem-solving of all kinds, including engineering, could be displaced by systems that are good at appearing to “create” but are in fact confined to canonical patterns, built from historical material and unable to take up useful new ideas and perspectives.
  2. The speed at which such systems operate, generating responses to queries with near-zero delay, could make it difficult for actual human interests and perspectives to penetrate the fog of rapid-fire probability-based assertions. That could mean disinformation displaces evidence and reason; it could also mean we lose the ability to track, understand, and control what AI systems do, even when we recognize they are getting things wrong.
  3. The human pursuit of knowledge rests on evidence, fact, reasoning, and judgment, and on respect for the general welfare and justice. AI systems are designed to feign interest in these things, but they cannot actually treat them as priorities or exercise reasoning and judgment to make better choices when conditions are full of risk.
  4. Just three years into the AI revolution, there is already a significant problem in people’s understanding of these systems’ relationship to reality. It is widely assumed that they have factual knowledge of what is true. For this reason, people have begun to use these experimental systems in education, in government decision-making, and in high-stakes design challenges.
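
To make the mechanism concrete, here is a minimal sketch, in Python, of the probability-based word selection described above. It is a deliberately tiny, hypothetical toy (the vocabulary and probabilities are invented for illustration; real systems are vastly larger), but it shows the essential point: each next word is chosen for statistical plausibility, and nothing in the process checks whether the resulting claim is true.

```python
import random

# Toy "model": estimated probability of the next word given the previous
# word, as if tallied from a pile of reference text. (Invented numbers.)
NEXT_WORD_PROBS = {
    "the":     {"climate": 0.4, "moon": 0.3, "data": 0.3},
    "climate": {"is": 0.6, "model": 0.4},
    "is":      {"warming": 0.5, "stable": 0.5},  # both read as "plausible"
}

def continue_text(words, steps=3):
    """Extend a word sequence by sampling statistically likely continuations."""
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        # Pick the next word in proportion to its estimated probability.
        # Nothing here evaluates whether the resulting sentence is accurate.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the"]))  # e.g. "the climate is stable": fluent, not fact-checked
```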

Considering how these risks are already playing out, there is a significant possibility that artificial superintelligence, outstripping human capacity in speed and range in nearly all areas, would only magnify them and undermine human sovereignty, access to truth, dignity, and discovery.

AI systems may be best positioned to provide complex, layered, multidimensional data-processing services, such as modeling the fluid dynamics of Earth’s climate system and its critical interactions with ecosystems and human industry. Given the above concerns, however, three overarching needs should be prioritized:

  1. AI-based data analyses need to be hand-checked by people who understand both potential glitches in the cross-referencing and the structural peculiarities of relevant human and planetary systems, and who can serve as thematically focused advisers to end users.
  2. Local insights are indispensable. Even if every detail of an AI-generated predictive analysis is exactly right, there will be no way to know this unless the results are checked against local conditions, needs, and capacities.
  3. The real-world needs of end users should drive the design and output of predictive insights. This means large global service providers and tech giants are not the appropriate arbiters of what constitutes climate intelligence, nor can their AI systems optimally or appropriately decide what is needed by whom.

AI systems need to be human systems—supporting the long-term sustainable thriving, security, freedom, and access to knowledge enjoyed by human beings of all backgrounds and circumstances.