Is AI Biased?

Key insights from ISE 2024 - Xchange Live Sessions
During ISE 2024, the AVIXA Xchange live booth hosted a fireside chat with Gill Ferrell, Christopher H., Stacia Pfeiffer, and Jessica Sanders to discuss the topic: Is AI Biased?
The panel discussed how AI bias originates not only from hiring practices but also from the development process itself. One panelist cited a hybrid collaboration session, with both virtual and in-person attendees, in which the facial recognition system failed to accurately identify individuals in the room; as a result, online participants couldn't discern who was speaking. This experience, though unsettling, serves as a practical demonstration of how biases in technology development can shape the future of AI. There have also been cases of wrongful detention by law enforcement agencies due to faulty facial recognition technology.
These instances underscore the profound implications of deploying biased technology, particularly in law enforcement.
Likewise, many well-known tech giants have seen their technology exhibit bias, and this has been extensively covered by reputable outlets such as the BBC and The Guardian, underscoring an inherent problem that major organizations have faced and that has led to very public scandals over the past decade.
It's crucial to engage in a serious dialogue about the individuals developing this technology and whether they prioritize diversity, equity, inclusion, and belonging, not only with respect to racial identity but also gender identity. The challenges voice recognition technology faces in accommodating gender transitions further highlight the necessity of inclusive practices during the development phase.

These examples show the importance of addressing bias in AI tools and of ensuring that we have the appropriate teams in place to identify bias in our data systems and datasets. Our industry needs to reflect the diversity of the world. We're facing significant challenges, such as unfairness, bias, racism, and prejudice, all of which need urgent attention.
In conclusion, the industry, whether manufacturers, technologists, or users, needs to work collectively to mitigate and rectify these issues of bias. The panel agreed that AI technology needs to evolve in harmony with societal values and embrace inclusivity for all.

What are your thoughts on this topic - Is AI biased? 



The thing that really works counter to the public interest around AI is how it's marketed across verticals. A lot of folks think that AI is what it says on the box: Artificial Intelligence, and thus able to think for itself and provide answers from an endless repository of info on the internet. In reality, "AI" as people think of it is a Large Language Model (LLM) that is programmed to digest information and interpret it in a specific way! Because it's programmed to interpret information in specific ways, and because it scrapes that information directly from the internet (raising legality and accuracy issues), AI in its current state is far too risky to use at mass scale.

Really, even then, "intent" is something that could at best be guessed, and not reliably. There's a famous picture from a 1979 IBM presentation on the future of technology: "A computer can never be held accountable, therefore a computer must never make a management decision". The recent discussion around AI has resurfaced it, and for good reason: by allowing AI to make a decision in the first place, its failures can be written off as "bugs" rather than deliberate miscalculations anyone is accountable for.

AI has interesting and even exciting use cases at which it has succeeded greatly. But without a proper understanding of its capabilities and risks, we open ourselves up to significant vulnerability!