The generative AI market is booming, and while many AI chatbots are having a positive impact, hackers have been riding the wave too. Some have hijacked legitimate large language model (LLM) tools, while others have started crafting their own malicious generative AI tools.
DarkGPT, BadGPT, EscapeGPT, WolfGPT, EvilGPT, DarkBARD, FraudGPT. These names might not ring a bell yet, but their suffixes should point you in the right direction. They are chatbots, like ChatGPT, but built by and for cybercriminals, capable of coding computer viruses, writing phishing emails, building fake websites, and more.
Daniel Kelley, who has analyzed malicious AI tools, said in an article for Infosecurity Europe, “We’re now seeing an unsettling trend among cyber-criminals on forums, evident in discussion threads offering ‘jailbreaks’ for interfaces like ChatGPT.”
AI And Social Engineering: A Threatening Duo
With any technology, there is always a risk that bad actors will find a way to use it for malicious purposes. Unfortunately, bad actors are now using AI chatbots for social engineering. They can generate highly convincing emails or texts designed to trick recipients into handing over personal or organizational data, a process known as phishing. And because a chatbot will work with whatever information it is given, bad actors can feed it samples of someone else's writing to imitate that person's voice and style, making phishing attacks seem far more realistic.
“In addition to impersonating someone you may already know, bad actors using malicious AI chatbots can also leverage information about you from your social media pages and friend groups,” said Paul Konikowski, CIS Manager for a government contractor. “With today's technology, it's not hard to combine all of your social media feeds and come up with a believable story. Then, they follow up with, ‘I think this is maybe you in this pic?’ and include a malicious link.”
4 Ways to Protect Yourself Against Phishing Attacks
Despite these AI-driven advances, there are still reliable ways to avoid being phished:
- Be cautious of urgent requests. Creating a sense of urgency or threatening dire consequences like account closure or disciplinary action is a common tactic for eliciting quick reactions.
- Avoid links and attachments. Unexpected attachments or links can carry malicious software or lead to fraudulent web pages. Hover over a link to preview its URL before clicking; misspelled or unfamiliar domains in an email's links are a dead giveaway that it's a phishing attempt (see the short sketch after this list).
- Beware of requests for sensitive information. It’s rare for organizations to request sensitive information via email. Verify requests by contacting the sender through other trusted channels.
- Report. If you receive a phishing email or suspect that an email is phishing, always report it to your security team as soon as possible.
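To make that link-checking tip concrete, here is a minimal Python sketch of the same idea: extract the domains from an email's links and flag any that look suspiciously close to, but not exactly like, a trusted domain. The TRUSTED_DOMAINS set and the 0.8 similarity threshold are illustrative assumptions for this example, not a production filter; real phishing defenses weigh many more signals.

```python
import re
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list for this sketch; a real deployment would use your
# organization's own known-good domains.
TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "avixa.org"}

URL_PATTERN = re.compile(r"""https?://[^\s<>"']+""")

def extract_domains(email_body: str) -> list[str]:
    """Pull the registered domain out of every URL found in the email text."""
    domains = []
    for url in URL_PATTERN.findall(email_body):
        host = urlparse(url).hostname or ""
        # Keep only the last two labels: 'secure.paypa1.com' -> 'paypa1.com'.
        domains.append(".".join(host.split(".")[-2:]))
    return domains

def flag_lookalikes(email_body: str, threshold: float = 0.8) -> list[tuple[str, str]]:
    """Flag domains that closely resemble, but do not match, a trusted domain."""
    flagged = []
    for domain in extract_domains(email_body):
        if domain in TRUSTED_DOMAINS:
            continue  # an exact match on a trusted domain is fine
        for trusted in TRUSTED_DOMAINS:
            # A high similarity score on a non-matching domain suggests a
            # deliberate lookalike, e.g. 'paypa1.com' imitating 'paypal.com'.
            if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
                flagged.append((domain, trusted))
    return flagged

if __name__ == "__main__":
    sample = "Your account is locked! Verify now: https://secure.paypa1.com/login"
    for suspect, target in flag_lookalikes(sample):
        print(f"Suspicious link domain '{suspect}' resembles trusted '{target}'")
```

This is essentially what you do by eye when you hover over a link: compare the domain you see against the domain you expect, and treat near-misses as red flags.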
AI might be the future, but don’t let it scare you. By following these best practices, you can safeguard your personal data and your organization’s private information.
In Closing
AI has made it easier for cybercriminals to target you, especially with phishing emails. AI not only speeds up the process of writing those emails but also helps cybercriminals make them more believable. To protect yourself from these enhanced phishing attacks, be wary of urgent requests, avoid unexpected links and attachments, be skeptical of requests for sensitive information, and always report suspicious emails to your security team.
For more tips on how to combat cyber threats, check out this episode of AVIXA’s Signal Flow Podcast!