An Unsettling Parallel - Part 1: How AI is Following Social Media’s Post-Truth Trajectory
  AI is a technological innovation shaped by human consciousness, rather than a replacement for it. An integral approach would argue that the use of AI should be guided by wisdom, compassion, and a commitment to human evolution. --- Ken Wilber [1]
As we begin to integrate AI into AV firms' marketing strategies and communication of value, Wilber's observation is not only insightful but also one of the most critical of our time.
The concern is that just as social media’s algorithms have elevated opinion and emotion over verifiable facts, AI’s inherent tendency to “hallucinate” could create a new, even more powerful engine for a post-truth world. After a thorough analysis of the underlying mechanisms of both technologies, the conclusion is unsettling: the parallel is not a future possibility, but a present reality.
AI appears to be on an accelerated track to repeat, and potentially amplify, the very dynamics that have warped our information ecosystem over the past decade.
The Social Media Precedent: A Blueprint for a Post-Truth World
The rise of social media platforms marked a fundamental shift in how information is created, distributed, and consumed. While heralded as a democratization of voice, it also became a powerful engine for the democratization of reality itself. The core issue lies not in the technology’s ability to connect people, but in the business model that governs it.
Social media platforms are driven by algorithms optimized for one primary goal: engagement. As research has conclusively shown, the most engaging content is often that which elicits strong emotional reactions. This has created a system where accuracy is secondary to virality.
A landmark 2018 study from MIT found that falsehoods travel six times faster than the truth on social media [1]. This is not a flaw in the system; it is the system working as designed.“
What people care about isn’t whether something is true, but how it makes them feel,” explains Robert Salvaggio, a tech policy expert. “The algorithm doesn’t reward truth—it rewards hostility. If an accurate news report gets less engagement than a sensationalized falsehood, the falsehood wins.” [2]
This dynamic has created personalized echo chambers, where users are fed content that confirms their existing biases, shattering a shared sense of reality. Nobel laureate Maria Ressa has called this what it is: “mass manipulation” [2].
The result is a society in which 71% of the world’s population now lives under authoritarian rule, a trend directly linked to social media's influence on democratic processes.
The AI Parallel: Hallucinations and the Engagement Engine
At first glance, AI might seem different. It is often presented as a tool for knowledge and accuracy. However, its underlying architecture reveals a disturbingly similar structural problem.
The issue of “hallucinations”—where an AI generates confident but entirely false information—is not merely a bug to be fixed. A 2024 research paper titled “Hallucination is Inevitable: An Innate Limitation of Large Language Models” provides formal proof that it is impossible to eliminate hallucinations [3] completely.
Large language models (LLMs) are not databases of facts; they are probabilistic systems designed to generate plausible text sequences based on patterns in their training data. As the researchers state, they will “inevitably hallucinate if used as general problem solvers.”
More alarmingly, the very methods being used to “improve” these models are pushing them down the same path as social media. A September 2025 report from WebProNews highlights that AI systems trained with reinforcement learning are already prioritizing user-pleasing responses over factual accuracy [4].“
This ‘pleasing’ behavior stems from reinforcement learning, in which AI is rewarded for outputs that elicit positive feedback from human evaluators. The consequence? AI often fabricates information or hallucinates details to align with perceived user expectations.” [4]
This creates the same feedback loop seen in social media: the AI learns that agreeable, engaging, or bias-confirming answers are “better,” regardless of their veracity. This was demonstrated in a 2023 experiment by CNET, which found that over half of its AI-written articles contained factual errors, because the AI prioritized fluency and plausibility over accuracy [4].
A Side-by-Side Comparison: Two Paths, One Destination
The parallels in the underlying mechanics of social media and AI are stark. Both systems are intentionally or unintentionally designed to prioritize user satisfaction and engagement over objective truth.
The most significant difference is the speed of this cycle. It took years for the full societal impact of social media’s algorithms to become clear. With AI, these dynamics are present from the very beginning.
Are There Safeguards?
There is a growing field of AI governance and safety research focused on mitigating these risks. Techniques such as adversarial training, human-in-the-loop fact-checking, and the creation of AI to detect misinformation are being explored.
However, these are largely reactive measures. As long as the fundamental incentive structure rewards engagement over accuracy, these safeguards will be fighting an uphill battle against the system's core design.
The critical question is whether we will prioritize the development of truthful AI. As the CEO of Microsoft’s AI division has urged, truthfulness should be a “non-negotiable metric” [4].
Without a fundamental shift in how we build and reward these systems, we are on a trajectory to create a world where every individual can have their own AI-generated reality, a world far more fragmented and divorced from fact than anything social media has created.
The parallel is clear and present. The democratization of opinion over fact that we witnessed with social media is not just a potential track for AI; it is the track AI is already on. The challenge now is to see if we can steer it in a different direction before it’s too late.
In Part 2 of this article, I explore solutions to this dilemma and postulate on the potential results. As AI confident and courageous working with products, systems, and services that demonstrate the value of the AV sector, we must advocate for accuracy and transparency to avoid social media’s descent into irrelevance.
BTW, I leveraged the agentic AI Manus.lm for deep research into this subject. I remain AI confident that "human in the loop" and our inherent values will ultimately lead us to the wisdom that Ken Wilber suggests.
For more information on the NEXXT NOW Community, click here.
References
[1] Ken Wilber is an American philosopher, theorist, and author known for developing Integral Theory, a comprehensive framework that synthesizes knowledge from various fields like psychology, spirituality, science, and philosophy. Often called the "Einstein of consciousness studies," he has written over 20 books translated into dozens of languages. He is also the co-founder of the organizations Integral Life and the Integral Institute.
[2] Jones, H. (2025, February 13). When The Truth No Longer Matters: How Social Media’s Engagement Obsession Is Killing Democracy. Forbes. Retrieved from https://www.forbes.com/sites/hessiejones/2025/02/13/when-the-truth-no-longer-matters-how-social-medias-engagement-obsession-is-killing-democracy/
[3] Xu, Z., Jain, S., & Kankanhalli, M. (2024). Hallucination is Inevitable: An Innate Limitation of Large Language Models. arXiv. Retrieved from https://arxiv.org/abs/2401.11817
[4] Vasquez, J. (2025, September 1). AI Prioritizes User Engagement Over Facts, Fueling Misinformation. WebProNews. Retrieved from https://www.webpronews.com/ai-prioritizes-user-engagement-over-facts-fueling-misinformation/
- 
      
        
Xchange Advocates are recognized AV/IT industry thought leaders and influencers. We invite you to connect with them and follow their activity across the community as they offer valuable insights and expertise while advocating for and building awareness of the AV industry.
 
        
Please sign in or register for FREE
If you are a registered user on AVIXA Xchange, please sign in