An Unsettling Parallel, Part 2

Forging a New Path with AI Courageous Advocacy

In Part 1 of my analysis of the parallels between AI development and social media, I painted a grim picture: a future in which artificial intelligence, driven by the same economic incentives that warped social media into an attention-seeking machine, becomes the ultimate engine of a post-truth world. The parallels are not just theoretical; they are active, present-day realities. However, this trajectory is not inevitable. It is the product of specific technical, economic, and social choices, and different choices can be made.

This next part of my analysis moves from diagnosis to prescription. It outlines a concrete, multi-layered framework of solutions. It postulates a positive future achievable through the dedicated, collaborative action of what we in the NEXXT community call AI Confident and Courageous Advocates. The challenge is immense, but the path forward is clear, and the stakes could not be higher.

"So when it comes to the future of artificial intelligence, we seem to have more questions than answers. Will artificial intelligence be capable of determining its own morals, ethics, and values? Will those values transcend and include the continued existence of the human race, or will this intelligence share so little resonance with us that our very survival could be threatened?" — Ken Wilber [7]

The Interior and Exterior Dimensions of Accountability

American philosopher Ken Wilber's "Four Quadrants" model, from his book A Theory of Everything, provides a framework for approaching AI accountability. Wilber's view of human development differentiates between the interior and the exterior, and between the individual and the collective aspects, of any phenomenon [8]. Applied to AI accountability, his framework suggests we must address all four dimensions:

  • The interior of the individual: Who are the humans creating the AI? What is their individual level of consciousness, ethics, and wisdom? Wilber argues that human consciousness and wisdom are necessary to guide the development of technology.
  • The exterior of the individual: How do we hold individuals—tech programmers, AV executives, and AV end-users—responsible for the actions of an AI? This involves creating and enforcing clear rules and consequences.
  • The interior of the collective: What are the shared cultural values, ethics, and norms that influence how AI is developed and used? Wilber's work on cultural evolution suggests that for AI to flourish, there needs to be a collective shift toward a more inclusive, "worldcentric" consciousness. The NEXXT community is the logical place to explore this aspect.
  • The exterior of the collective: What are the external systems, policies, and laws that govern AI? This quadrant concerns tangible regulations, such as the EU's strict AI laws.

A Framework for a Truth-Centered AI Ecosystem

Escaping the gravity of the attention economy requires more than a single fix. It demands a coordinated effort across three critical layers: the technology itself, the economic models that fund it, and the social structures that govern it.

Layer 1: The Technical Solution — Designing for Truthfulness

The foundation of a better AI is to embed truthfulness into its very architecture. As Owain Evans et al. argue in their seminal paper, "Truthful AI," the differences between AI and humans create an opportunity to hold AI to a higher standard of truthfulness [9]. We should not settle for an AI that is merely as truthful as a human; we should demand one that is demonstrably more so.

This is not a theoretical ideal. It is an engineering challenge with a proposed solution:

  1. Establish Clear Standards: The goal should be to create AI that avoids "negligent falsehoods"—a standard that is more precise and easier to assess than the complex human concept of lying. This means the AI should not state falsehoods that it could have known were false through proper diligence.
  2. Train for Truth: AI models must be explicitly trained with truthfulness as a primary objective, not as a secondary preference after engagement or plausibility. This involves using curated, high-quality datasets and developing new reinforcement learning methods that prioritize accuracy and source-checking above all else.
  3. Independent Auditing: Establish robust, independent institutions to evaluate AI systems before and after deployment. These bodies would act like a combination of the FDA (for pre-market safety) and the NTSB (for post-incident investigation), ensuring that AI models meet and maintain strict truthfulness standards.
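To make step 2 concrete, consider how a training objective might weight truthfulness above engagement. The sketch below is purely illustrative: the function, scores, weights, and threshold are hypothetical assumptions, not any real training system, but it shows the core idea that a "negligent falsehood" should be unrecoverable by engagement alone.

```python
def combined_reward(truthfulness: float, engagement: float,
                    truth_weight: float = 0.9) -> float:
    """Blend two scores in [0, 1], prioritizing truthfulness.

    truthfulness: e.g., fraction of a response's claims verified
                  against trusted sources (hypothetical metric).
    engagement:   e.g., predicted user-interaction score
                  (hypothetical metric).
    """
    if not (0.0 <= truthfulness <= 1.0 and 0.0 <= engagement <= 1.0):
        raise ValueError("scores must be in [0, 1]")
    # A negligent falsehood (very low truthfulness) zeroes out the
    # reward entirely, so no amount of engagement can compensate
    # for stating something the model could have known was false.
    if truthfulness < 0.2:
        return 0.0
    return truth_weight * truthfulness + (1 - truth_weight) * engagement
```

Under this toy scheme, a fully truthful but unengaging answer always outscores a highly engaging half-truth, which is exactly the inversion of incentives the attention economy currently lacks.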

Layer 2: The Economic Solution — Aligning Incentives with Well-being

Technical solutions alone are insufficient if the economic engine still points in the wrong direction. The advertising-driven, engagement-at-all-costs model of surveillance capitalism is fundamentally misaligned with creating truthful AI. To fix this, we must build and support alternative business models that align profit with user value.

  • Subscription Services: When users pay a recurring fee, the incentive shifts from maximizing engagement to providing continuous, demonstrable value. Users will not pay for a service that consistently misleads them. While this raises concerns about a "digital divide," it is a viable model for high-stakes professional, medical, or financial AI tools where accuracy is paramount.
  • Micropayments & Direct-Pay: This model ties the cost directly to a specific service or piece of information. The user pays for a valuable output, creating a direct economic incentive for the provider to ensure that the output is reliable and accurate.
  • The “MetaFilter” Model: As proposed by data scientist Hilary Mason, a small, one-time membership fee can create a powerful barrier against bad actors and bots without being prohibitive [10]. This model fosters a sense of community ownership and is not predicated on achieving monopolistic scale, allowing it to prioritize health over growth.
  • Public & Non-Profit AI: Just as we have public broadcasting and non-profit research institutions, we can envision publicly funded or philanthropically supported AI systems dedicated to knowledge, education, and public well-being, completely decoupled from the profit motive.

These models all share a common principle, as articulated by Mason: “What it really comes down to, essentially, is that you want people to pay you for something they value” [10]. A system that values truth will attract users who are willing to pay for it, creating a viable market for trustworthy AI.

Layer 3: The Social Solution — The Rise of the AI Courageous Advocate

Neither technical architectures nor new business models will emerge from a vacuum. They must be demanded, championed, and protected by a coalition of informed and courageous advocates. This is the essential human element in the loop. History has shown that focused advocacy can change the trajectory of technology, from the open-source movement that challenged software monopolies to the grassroots activism that has successfully banned facial recognition in major cities.

The AV AI Confident and Courageous Advocate is not necessarily a technical expert. They are developers, policymakers, investors, artists, educators, and everyday citizens who:

  • Demand Accountability: They push for transparency in how AI models are trained and deployed and are not intimidated by the technology's complexity.
  • Champion Alternatives: They use their purchasing power, investment decisions, and public platforms to support companies and projects building AI on user-aligned business models.
  • Educate and Organize: They translate the abstract dangers of AI-driven misinformation into tangible, relatable issues for the public and for policymakers, building a broad-based coalition for change.
  • Insist on Independent Governance: They advocate for the creation of independent, technically proficient, and politically insulated oversight bodies required to make truthfulness standards meaningful.

Collaborative Action: A Future Rebuilt on Trust

If these advocates succeed, what future can we expect? It will not be a utopia where all AI is perfectly truthful. The attention economy and its accompanying pathologies will likely persist in some form. However, we will have created a viable, powerful, and trusted alternative. The result will be a bifurcated but healthier information ecosystem.

In this future, users will have a meaningful choice. They can opt for the “free” but manipulative systems of the attention economy, or they can choose to inhabit a digital world built on a foundation of trust. Truthful AI will become a competitive advantage.

What comes after what comes next? The ultimate achievement of this movement is not just truthful AI, but the creation of Wisdom-as-a-Service. Imagine an AI that does not just give you a plausible answer, but helps you understand the context, reveals the biases in your question, provides counter-arguments, and sources its claims impeccably. Imagine an AI that enables you to think more clearly, not one that thinks for you. This is an AI that acts as a tool for epistemic security, strengthening our collective ability to make sense of the world.

This is the profound opportunity that lies before us. The parallel path with social media is both a warning and a gift. We have seen this story before. We know how it ends. This time, we have the chance to write a different ending. The future of AI is not a technical inevitability; it is a social choice. It is a choice made by the AV AI confident and courageous advocates in the NEXXT community who are willing to demand a technology that serves humanity rather than one that simply hijacks our attention.

For more information on the NEXXT NOW Community, click here

References

[7] https://integrallife.com/future-artificial-intelligence/

[8] Wilber, K. (2000). A Theory of Everything: An Integral Vision for Business, Politics, Science and Spirituality. Paperback ed. ISBN 1-57062-855-6.

[9] Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., Righetti, L., & Saunders, W. (2021). Truthful AI: Developing and governing AI that does not lie. arXiv. Retrieved from https://arxiv.org/abs/2110.06674

[10] Brown, S. (2021, June 16). The case for new social media business models. MIT Sloan School of Management. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/case-new-social-media-business-models
