On Monday, January 27, 2025, AI-related stocks lost more than $1 trillion in value. NVIDIA alone lost around $589 billion in a single day. The reason was a stock-selling panic induced by the release of a new chatbot service from the Chinese firm DeepSeek.
The panic has been explained as a market reaction to key details of what DeepSeek claims to have created. The company says its generative AI model achieves workable results (intended to improve over time) using fewer, lower-quality chips that cost less to make, acquire, and run, and that consume far less energy. Several worrying questions about the quality and reliability of AI systems, and about related market incentives, stand out:
- Are DeepSeek’s claims really a factual representation of their product?
- Are AI investors not actually all that committed to the products and services they had previously embraced?
- Is this new product actually a competitor? (Is it desirable, is it functional, is it safe, can it be trusted?)
- Is the selloff an indication that markets already embraced lower standards for AI than they should have?
A new piece at The Navigator asks whether strict standards need to be adopted, not only for how AI systems are developed and deployed, but also for how they are funded. It also notes that AI systems are expected to shape decisions about health, environmental protection, and military action, so they will need investment to ensure they are high-quality and reliable in their use of information.
The Good Food Finance Blueprint for Data Systems Integration includes important insights on standards for data security, confidentiality, intellectual property, and AI safeguards. The report notes:
“Beyond the risk of fabrication or distortion is the risk of excess deference to systems that do not actually make informed judgments, but produce words that state that they have. Surrendering decisions to such systems in the early stages of development can, even without hallucinations, lead to unintended negative outcomes, which might escape detection or fail to be addressed in a timely manner.”
The report also cites a number of ongoing efforts to shape standards for the responsible development and deployment of AI systems, including the expectation that they be “human-centric, trustworthy and responsible”. Failing to set such standards could leave money, incentives, technology, and local practice all favoring destructive approaches that would further degrade natural systems and eventually collapse the food supply.
Before we surrender any more ground to artificial intelligence, we need to ensure that human values, judgments, rights, and interests are central to how AI systems judge the “wisdom” of a decision. And we need to better understand how to secure engaged, innovative, small-scale local economies against the overwhelming force that can come from uncontrolled global speculation.