Why AV Manufacturers Need to Pay Attention to the OWASP LLM Top 10

If AV wants safer AI‑powered features, the OWASP LLM Top 10 must sit at the core of secure LLM adoption.

AV is no longer just hardware. It’s cloud + AI + identity + automation. Crestron, Q‑SYS, Zoom Rooms, Teams Rooms, Cisco, Logitech, and Barco are all embedding LLM‑powered features across devices, rooms, and cloud platforms, and the attack surface is expanding as a result.
More worrying is the fact that traditional AV architectures were never designed to handle LLMs, and this gap is now introducing real security risks.

The OWASP Top 10 for LLM Applications (2025) is the global benchmark for understanding AI‑related risks, and it’s directly relevant as AV manufacturers embed LLM‑powered features into their platforms.

From meeting summaries and voice commands to room automation, diagnostics, scheduling insights, cloud‑connected workflows, and RAG‑powered support tools, AI is now woven into the AV stack, and secure‑by‑design must become the standard.

Here’s how the OWASP LLM Top 10 can guide secure LLM adoption in AV, and the risks AV must take seriously.

  • Prompt Injection
    Malicious meeting content, signage feeds, or documents can hijack AI behaviour.
  • Sensitive Data Disclosure
    LLMs can leak private meeting data, device logs, or internal system details.
  • Insecure Output Handling
    If AI output is trusted blindly, it can trigger unsafe device actions or workflows.
  • Model Denial of Service
    Attackers overload AI features, causing outages or cloud cost spikes.
  • Supply Chain Vulnerabilities
    Models, datasets, plugins, and libraries can be compromised, and most AV vendors don’t track them.
  • Excessive Agency
    Giving AI too much control over room devices or cloud APIs without guardrails is dangerous.
  • Insecure Plugin / Tool Integration
    LLM‑connected APIs (device control, scheduling, monitoring) can be misused.
  • Data Poisoning
    Attackers corrupt RAG sources, logs, or metadata to influence AI decisions.
  • Model Theft
    Attackers extract proprietary prompts, models, or fine‑tuned behaviour.
  • Hallucinations & Overreliance
    AI confidently generates false information, and humans often trust it.
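The "Insecure Output Handling" and "Excessive Agency" risks above share one practical mitigation: never pass LLM output straight to device-control APIs. A minimal sketch (the action names, limits, and `execute_ai_action` helper are hypothetical, not any vendor's API) of validating AI-proposed room actions against an explicit allowlist before execution:

```python
# Hypothetical guardrail: an LLM may *suggest* room actions, but only
# allowlisted actions with in-policy values ever reach control APIs.

ALLOWED_ACTIONS = {
    "display_power": {"on", "off"},                    # reversible actions only
    "volume_set": {str(v) for v in range(0, 71)},      # cap at 70%, never max
}

def execute_ai_action(action: str, value: str) -> str:
    """Run an LLM-suggested action only if it passes the allowlist."""
    allowed_values = ALLOWED_ACTIONS.get(action)
    if allowed_values is None:
        return f"rejected: unknown action '{action}'"
    if value not in allowed_values:
        return f"rejected: value '{value}' out of policy for '{action}'"
    # A real integration would call the vendor's control API here,
    # ideally with per-action audit logging.
    return f"executed: {action}={value}"

print(execute_ai_action("display_power", "on"))   # passes the allowlist
print(execute_ai_action("volume_set", "100"))     # rejected: out of policy
print(execute_ai_action("unlock_door", "main"))   # rejected: unknown action
```

The same pattern, an explicit deny-by-default policy between the model and any actuator, applies whether the "actuator" is a display, a scheduling API, or a cloud workflow.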

Why does this matter for AV manufacturers?
There are currently no publicly reported AI‑related security incidents in AV, but other industries (automotive, robotics, IoT, finance) are already seeing real AI failures and attacks. It’s only a matter of time before they reach AV, especially as our industry is still in the early stages of AI adoption and rapidly expanding its attack surface.
The manufacturers who lead on AI security will define the next decade of AV.

As AV moves deeper into LLM‑powered features, the security conversation becomes unavoidable. If you’re building or modernising AI‑enabled AV platforms, let’s compare notes on AV security and AI governance; the patterns emerging across the industry deserve a wider conversation.
