US Official: AI Threats Demand New Approach to Security Designs

Artificial Intelligence (AI) is advancing rapidly, raising concerns about the threats it may pose. In response, a top US official emphasized the need to build security into systems from the start rather than add protections later.
"We've normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can't live in that world with AI," Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said.
"It is too powerful. It is moving too fast," she said in an interview after a meeting in Ottawa with Sami Khoury, head of Canada's Centre for Cyber Security.
Khoury said, "We have to look at security throughout the lifecycle of that AI capability."
The Software and Information Industry Association (SIIA), a trade association of more than 380 global tech companies, recently created guidelines for the responsible development of AI tools for education.
Now, the United States is among 18 countries to endorse new guidelines for AI cybersecurity. The guidelines, developed in Britain, emphasize secure AI design, development, deployment, and maintenance. Under the agreement, AI developers will collaborate with governments to establish a standardized process for identifying and mitigating risks.
At last week's AI Safety Summit, the US, UK, and other major powers unveiled a 20-page document with general recommendations for companies developing or deploying AI systems, including monitoring for abuse, protecting data from tampering, and vetting software suppliers. The agreement warns that security shouldn't be a "secondary consideration" in AI development and instead encourages companies to make the technology "secure by design."
These moves are not without controversy, however. A rift is emerging in the AI community over how much attention to give "existential" risks from "frontier" models that could end the world as we know it. Some argue that we should focus on mitigating AI's societal risks now, such as algorithms that deny people health coverage or home loans, rather than worrying about potential doomsday scenarios in the future.