Is AI Biased?
Key insights from ISE 2024 - Xchange Live Sessions

From these examples we realize the importance of addressing bias in AI tools and of ensuring that we have the appropriate teams in place to identify bias in our data systems and datasets. It is essential that our industry reflects the diversity of the world. We are facing significant challenges such as unfairness, bias, racism, and prejudice, all of which need urgent attention.
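One concrete way a team can start identifying bias in a dataset is simply to measure how well each group is represented before training anything. The sketch below is a minimal, hypothetical illustration: the group names and counts are invented, and a real audit would look at many more dimensions than raw counts.

```python
from collections import Counter

# Hypothetical labelled training set: each record notes which demographic
# group an image (or other sample) belongs to. The skew here is deliberate.
training_labels = (
    ["group_a"] * 800    # heavily over-represented
    + ["group_b"] * 150
    + ["group_c"] * 50   # badly under-represented
)

def representation_report(labels):
    """Return each group's share of the dataset, so skew is visible at a glance."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

report = representation_report(training_labels)
for group, share in sorted(report.items()):
    print(f"{group}: {share:.0%}")
# A model trained on this data has seen sixteen times more of group_a
# than of group_c, and its error rates will tend to reflect that.
```

A check like this is cheap, and running it before training is one of the simplest things a team "in place to identify bias" can do.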
In conclusion, the industry needs to work collectively, whether as manufacturers, technologists, or users, to mitigate and rectify these issues of bias. The panel agreed that AI technology needs to evolve in harmony with societal values and embrace inclusivity for all.
What are your thoughts on this topic - Is AI biased?
I lead AVIXA’s APAC marketing and communications initiatives, which include member communication, vertical market events, educational programmes, and AVIXA’s brand presence at the association’s trade shows in the region. I look forward to making new connections on the Xchange and to leveraging digital as a medium to offer value to members and support the association’s commitment to being a hub for AV in the region. https://xchange.avixa.org/users/avixa
The thing that really works counter to the public interest around AI is how it's marketed across verticals. A lot of folks think that AI is what it says on the box: artificial intelligence, able to think for itself and provide answers from an endless repository of info on the internet. In reality, "AI" as people think of it is a Large Language Model (LLM), a program built to digest information and interpret it in a specific way. Because it interprets information only in those programmed ways, and because it scrapes information directly from the internet (causing legality and accuracy issues), AI in its current state is far too risky to use at mass scale.
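The "digest and reproduce patterns" point above can be made concrete with a deliberately tiny sketch. Real LLMs are vastly more sophisticated, but at heart they predict likely continuations from their training text, as this toy bigram model does. The corpus here is invented; whatever patterns (or biases) it contains are exactly what the model gives back.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for "scraped" text.
corpus = "the model repeats the patterns in the data the model saw".split()

# Count which word tends to follow which (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick the most frequent follower: pattern matching, not understanding."""
    options = follows.get(word)
    if not options:
        return None
    return options.most_common(1)[0][0]

print(next_word("the"))  # → "model": the statistically likeliest continuation
```

Nothing in this program "knows" what any word means; it only echoes the statistics of its input, which is why skewed or inaccurate source text comes straight back out.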
Really, even then, "intent" is something that could at best be guessed at, and never reliably. There's a famous slide from a 1979 IBM presentation on the future of technology: "A computer can never be held accountable, therefore a computer must never make a management decision." The recent discussion around AI has resurfaced it, and for good reason: if we allow AI to make a decision in the first place, its failures can be written off as "bugs" rather than treated as miscalculations for which anyone is accountable.
AI has interesting and even exciting use cases in which it has succeeded greatly. But without a proper understanding of its capabilities and risks, we open ourselves up to a significant amount of vulnerability!