What are we building?

Buckminster Fuller wrote that “The human brain is Nature’s most powerful anti-entropy engine.” Intelligence is not just information, or thought, or the semblance of thought; it is the process of making enough sense of what is happening to come through it safely, to find good health and opportunity on the other side, and to support the same needs across a wider fabric of life.

Brainpower devoted to destruction is not intelligence, but something else—a deviation, a failure to comprehend. Brainpower devoted to conscious, cooperative thriving, in safety, security, and personal and collective freedom—that is intelligence. It is possible to intelligently pursue self-interest; doing so allows one to benefit from supporting the wider success of people and natural systems.

Self-interest that does not comprehend or act toward this wider project of thriving is not really in the interests of the self pursuing it. Instead, it is more of an illusion—the pretense that by claiming as much as possible for oneself, one can become independent of the intricate, vast, symbiotic and competitive landscapes of interaction that make life possible. One cannot.

So, we find ourselves in a race against time, or against our own worst instincts—against fear-based thinking that prioritizes actions degrading others’ capacity for thriving, even when those others are not on the opposing side of a violent conflict. We must overcome this paranoid approach to human industry and ingenuity.

Whether we can is one of the defining questions of our time.

Artificial intelligence systems have become a high-value future-building tool. Governments have already begun using them to sift and store information, and to make decisions that affect people’s real-world access to health, safety, opportunity, and prosperity.

This is happening before we have seen proof that these systems can operate without error, and it is happening in a time of frontier technological experimentation. We do not yet know which business models are best suited to making AI intelligent enough to support generalized sustainable thriving.

Why does this matter? We are being asked to accept an entirely new paradigm for the management of facts, evidence, reporting, design, and decision-making, without knowing what that paradigm means for the balance of power between institutional structures and fundamental human rights, including the rights to life, liberty, and pursuit of happiness.

  • What happens when an AI system mistakenly denies you access to pension funds, or health treatments?
  • What is the process for an individual suffering ongoing, devastating personal harm to fight a trillion-dollar corporation in court, when that corporation’s flagship product is conditioning mainstream understanding of its own relevance, quality, and value to society?
  • Will there be any way to compel a process of correction, reversal, and redress? Will the systems that cause such errors be designed to welcome the news that they failed, so they can learn and get better? Or will they suppress evidence of errors?
  • To protect fundamental rights, the First Amendment guarantees the right to petition for redress of grievances; what happens to that right if AI systems are committing millions of small errors per day, each differing in detail, with different outcomes, in different jurisdictions?
  • Will we require AI platform-owners to train their systems to serve the interests of humanity, first and foremost?
  • If we do, how do we define “the interests of humanity”, and could getting that wrong lead to dangerous unintended consequences?

The advent of computation promised a future of accelerated information processing. That allowed encryption and decryption, rapid advances in electronics and information storage and transmission, and created the conditions for computational models that can study carefully plotted scenarios and give us insights into the evolution of conditions unfolding around us.

At the InfoAge Science Museum, it is possible to explore the early stages of modern radio, electronics, and information technology. It is difficult to predict which new reality will be made possible by which tech breakthrough, but how we start will shape our future prospects for enhanced agency and wellbeing. Photo: Joe Robertson.

At the heart of the IT revolution is the idea of human liberation and empowerment. We can be freer and dream bigger, pursue a more diverse range of new ideas, and gain insight into otherwise complex enigmas, like the fluid dynamics and chemistry of Earth’s atmosphere, and their implications for human and planetary health.

We can also get ahead of big questions, turning data into actionable insight and practical capability, even before we fully understand the underlying phenomena. While physicists and cosmologists continue to debate the mechanism through which gravity and electromagnetism work, we are able to understand their dynamic effects and harness that information to explore the universe.

AI systems should help us gain greater insight more quickly, but to do that, they will need to stop making up answers. We cannot afford to discard the proven standard of sourced information, evidentiary analysis, and critical thinking, built up over millennia, because computers can now mimic language or “generate” images that are partly copies of real-world images and partly fabricated by algorithms.

If AI systems are going to serve humanity well, if they are going to help us behave more intelligently, expand our horizons, and maximize our shared access to safety and prosperity, they are going to have to favor and empower human creators, defer to the hard-won evidence-based insights and critical analysis of professional scientists, and “learn” to distinguish between relaying facts and writing fiction.

Four areas in particular stand out as high priority areas of deep ethical concern:

  1. Education – AI systems are starting to be used in education—both with formal support and in violation of rules against inauthentic work product. If students are not able to access quality, verifiable information and develop their own writing, investigative, and critical thinking skills, we could lose critical stabilizing elements of free, modern human societies, before we have a chance to correct course.
  2. Healthcare – Use of AI systems in healthcare carries many risks. The hope is for earlier, more frequent, and more accurate detection of health risks, to allow better prevention and treatment, but erroneous decisions, or loss of human judgment in the process of care could have devastating costs in terms of health outcomes and fiscal stability.
  3. Government – Government systems are made up of human beings who have a legal duty to act in service of other human beings. Replacing them with algorithms that make choices based solely on math, however complex and fine-tuned, risks eliminating the self-government dynamic that makes democracies work; loss of freedom could easily follow. Weaponry is a related area of deep ethical concern: What happens to legal accountability for violations of rights, when people are killed by AI-enabled weapons?
  4. Search – As with education and healthcare, there are risks in the primary method of search being turned over to chatbots. One of those risks is that the hard work that makes professional fact-finding a reality could lose funding and fall out of favor, making it less likely that AI-generated “search results” will return relevant factual information.

These four risk areas come together to signal a general risk of information poverty and reduced access to safety, justice, personal sovereignty, and generalized opportunity. That does not mean AI systems can do no good; they have great potential to make human systems more intelligent, but they have to be designed to achieve that goal—not just to maximize profits by mimicking researchers, reporters, and creators.