What Probably Inspired Me at ISE 2026?
In Part 1 of my analysis of the parallels between AI development and social media, I painted a grim picture: a future in which artificial intelligence, driven by the same economic incentives that warped our information landscape, becomes the ultimate engine of a post-truth world. The parallels are not just theoretical; they are active, present-day realities. However, this trajectory is not inevitable. It is the product of specific technical, economic, and social choices, and different choices can still be made.
This next part of my analysis moves from diagnosis to prescription. It outlines a concrete, multi-layered framework of solutions. It postulates a positive future achievable through the dedicated, collaborative action of what we in the NEXXT community call AI Confident and Courageous Advocates. The challenge is immense, but the path forward is clear, and the stakes could not be higher.
“So when it comes to the future of artificial intelligence, we seem to have more questions than answers. Will artificial intelligence be capable of determining its own morals, ethics, and values? Will those values transcend and include the continued existence of the human race, or will this intelligence share so little resonance with us that our very survival could be threatened?” – Ken Wilber [7]
American philosopher Ken Wilber's "Four Quadrants" model, presented in his book A Theory of Everything, provides a framework for approaching AI accountability. Wilber's view of human development differentiates between the interior and the exterior, as well as between the individual and the collective aspects of any phenomenon [8]. Applied to AI accountability, the model suggests we must address all four of these dimensions.
Escaping the gravity of the attention economy requires more than a single fix. It demands a coordinated effort across three critical layers: the technology itself, the economic models that fund it, and the social structures that govern it.
The foundation of a better AI is to embed truthfulness into its very architecture. As researchers Owain Evans et al. argue in their seminal paper "Truthful AI," the differences between AI and humans create an opportunity to hold AI to a higher standard of truthfulness [9]. We should not settle for an AI that is merely as truthful as a human; we should demand one that is demonstrably more so.
This is not a theoretical ideal. It is an engineering and governance challenge with proposed solutions.
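The Evans et al. proposal concerns standards and institutions rather than code, but its core behavioral standard, that a system should abstain rather than assert a claim it cannot support, can be sketched in a few lines. The following is a minimal illustration only; the names `Claim` and `answer_or_abstain` are hypothetical and do not come from the paper.

```python
# Hedged sketch of a "truthfulness gate": release an answer only when
# every claim in it is backed by a source; otherwise abstain.
# These names and this design are illustrative assumptions, not an API
# proposed in the Truthful AI paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    source: Optional[str]  # citation backing the claim, if any

def answer_or_abstain(claims: list) -> str:
    """Return a cited answer, or abstain if any claim is unsupported.

    Abstaining ("I don't know") is preferred over emitting a
    plausible-sounding but unverifiable statement.
    """
    unsupported = [c for c in claims if not c.source]
    if unsupported:
        return "I don't know enough to answer that reliably."
    return " ".join(f"{c.text} [{c.source}]" for c in claims)

# A supported claim passes the gate; an unsupported one triggers abstention.
print(answer_or_abstain([Claim("Water boils at 100 °C at sea level.", "NIST")]))
print(answer_or_abstain([Claim("This stock will double next year.", None)]))
```

The design choice worth noting is that the default is refusal: truthfulness is enforced as a hard constraint on output, not as a score to be traded off against engagement.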
Technical solutions alone are insufficient if the economic engine is still pointing in the wrong direction. The advertising-driven, engagement-at-all-costs model of surveillance capitalism is fundamentally misaligned with the goal of creating truthful AI. To fix this, we must build and support alternative business models that align profit with user value.
These models all share a common principle, as articulated by Mason: “What it really comes down to, essentially, is that you want people to pay you for something they value” [10]. A system that values truth will attract users who are willing to pay for it, creating a viable market for trustworthy AI.
Neither technical architectures nor new business models will emerge from a vacuum. They must be demanded, championed, and protected by a coalition of informed and courageous advocates. This is the essential human element in the loop. History has shown that focused advocacy can change the trajectory of technology, from the open-source movement that challenged software monopolies to the grassroots activism that has successfully restricted government use of facial recognition in major cities.
AV AI Confident and Courageous Advocates are not necessarily technical experts. They are developers, policymakers, investors, artists, educators, and everyday citizens.
If these advocates succeed, what future can we expect? It will not be a utopia where all AI is perfectly truthful. The attention economy and its accompanying pathologies will likely persist in some form. However, we will have created a viable, powerful, and trusted alternative. The result will be a bifurcated but healthier information ecosystem.
In this future, users will have a meaningful choice. They can opt for the “free” but manipulative systems of the attention economy, or they can choose to inhabit a digital world built on a foundation of trust. Truthful AI will become a competitive advantage.
What comes after what comes next? The ultimate achievement of this movement is not just truthful AI, but the creation of Wisdom-as-a-Service. Imagine an AI that does not just give you a plausible answer, but helps you understand the context, reveals the biases in your question, provides counter-arguments, and sources its claims impeccably. Imagine an AI that enables you to think more clearly, not one that simply thinks for you. This is an AI that acts as a tool for epistemic security, strengthening our collective ability to make sense of the world.
This is the profound opportunity that lies before us. The parallel path with social media is both a warning and a gift. We have seen this story before. We know how it ends. This time, we have the chance to write a different ending. The future of AI is not a technical inevitability; it is a social choice. It is a choice made by the AV AI Confident and Courageous Advocates in the NEXXT community who are willing to demand a technology that serves humanity, not one that simply hijacks our attention.
For more information on the NEXXT NOW Community, click here.
[7] https://integrallife.com/future-artificial-intelligence/
[8] Wilber, K. (2000). A Theory of Everything: An Integral Vision for Business, Politics, Science and Spirituality. Paperback ed. ISBN 1-57062-855-6.
[9] Evans, O., Cotton-Barratt, O., Finnveden, L., Bales, A., Balwit, A., Wills, P., Righetti, L., & Saunders, W. (2021). Truthful AI: Developing and governing AI that does not lie. arXiv. Retrieved from https://arxiv.org/abs/2110.06674
[10] Brown, S. (2021, June 16). The case for new social media business models. MIT Sloan School of Management. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/case-new-social-media-business-models
As an architect by training (BS Architecture, Cal Poly SLO) and a collaborative technologist with four decades of practice, I'm passionate about mentoring the next generation of AV professionals at the intersection of technology, strategy, and leadership. I've been active in AVIXA since 1986 and served on the national board from 1993 to 2000. I'm a Fellow of the Society for Marketing Professional Services (SMPS) and an Associate member of the American Institute of Architects.
My expertise spans audiovisual systems design, integrated building technology, strategic business development, and higher education technology planning. I bring an award-winning, B2B design thinking approach developed through leadership roles with national AEC and technology firms. I’ve led marketing and sales strategy, designed future-ready digital experience environments, and helped organizations implement AI-powered tools to scale expertise and performance.
Xchange Advocates are recognized AV/IT industry thought leaders and influencers. We invite you to connect with them and follow their activity across the community as they offer valuable insights and expertise while advocating for and building awareness of the AV industry.