Is AI Biased?
Key insights from ISE 2024 - Xchange Live Sessions

The thing that really works against the public interest when it comes to AI is how it's marketed across verticals. A lot of folks think AI is what it says on the box: Artificial Intelligence, able to think for itself and pull answers from an endless repository of information on the internet. In reality, the "AI" people have in mind is a Large Language Model (LLM), a program built to digest information and interpret it in a specific way! Because it's programmed to interpret things in specific ways, and because it scrapes information directly from the internet (raising both legality and accuracy issues), AI in its current state is far too risky to use at mass scale.
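To make the "digests whatever it's fed" point concrete, here's a minimal sketch of the idea, not any real LLM: a toy next-word model built from word counts. The tiny `corpus` string and the skew in it are invented for illustration; real models are vastly more complex, but the principle that outputs mirror the training data is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for text scraped from the web.
# The skew here (engineers mostly described as "he") is a stand-in
# for the imbalances found in real scraped data.
corpus = (
    "the engineer said he would fix it "
    "the engineer said he was done "
    "the engineer said she was done"
).split()

# A minimal bigram model: count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Return the most common next word seen in training."""
    return follows[word].most_common(1)[0][0]

# The model simply reproduces the majority pattern in its data:
print(complete("said"))  # "he" follows "said" twice, "she" once
```

The model isn't "deciding" anything; it's echoing the statistics of its input, which is exactly why skewed or unvetted source data becomes skewed output.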
Really, even then, "intent" is something that could at best be guessed, and not reliably. There's a famous slide from a 1979 IBM presentation on the future of technology: "A computer can never be held accountable, therefore a computer must never make a management decision." The recent discussion around AI has resurfaced it, and for good reason: if we allow AI to make the decision in the first place, a failure can be written off as a "bug" rather than treated as a miscalculation that someone is accountable for.
AI has interesting, even exciting use cases where it has succeeded greatly. But without a proper understanding of its capabilities and risks, we open ourselves up to significant vulnerability!