How Will ChatGPT Impact Your Business's Email Security?
ChatGPT and other generative AI models pose serious threats to email cybersecurity. Hackers can use this technology to create more sophisticated phishing content and even polymorphic malware programs. Fortunately, businesses can also use AI to strengthen their cybersecurity through improved phishing detection and cyber awareness.
How AI Threatens Email Security
ChatGPT and other generative AI models are designed to help people, but hackers can exploit them, too. By leveraging ChatGPT's natural language processing capabilities, attackers can create malicious content and spread it through email.
More Sophisticated Phishing Risks
Research shows up to 91% of cyberattacks start with a phishing email. Social engineering has become one of the most popular tactics among hackers of all skill levels, and with ChatGPT, they can create phishing content more easily and make it more convincing.
ChatGPT has safeguards to prevent it from knowingly creating phishing content. If a prompt explicitly asks the algorithm to write a phishing email, it will reply with a message explaining that it can't encourage or aid in creating malicious content. However, nothing prevents hackers from asking ChatGPT to write an ordinary-looking email with specific content or goals.
For example, hackers can ask ChatGPT to write an email designed to get readers to open a link or request a wire transfer. They can refine their prompts with specific details, such as context about the business they want to target. Adding elements like these can make a phishing email much more convincing.
Generative AI models like ChatGPT excel at replicating natural language. In fact, ChatGPT is so good at it that it can make completely incorrect information seem authoritative and legitimate. Hackers are using this capability to create convincing phishing emails customized with specific victims’ details in any language and any quantity they want.
Easier Malware Creation
Hackers aren’t just using AI to write emails — they’re using it to write all-new malware. ChatGPT’s ability to write and edit code was originally a helpful tool for software developers and computer science students. Unfortunately, hackers can abuse this feature to automate the process of creating malicious programs.
OpenAI has taken steps to prevent bad actors from using ChatGPT for malware creation. Explicitly asking the AI to write malicious code will normally trigger a refusal and a warning about the platform's content policy. But in a 2023 project, researchers found that a little negotiating with ChatGPT can trick the AI into writing malware anyway. Once ChatGPT created the code, the researchers could repeatedly mutate it through follow-up prompts, generating dangerous polymorphic malware.
This capability means hackers without coding knowledge or experience can quickly generate malware through AI. They can combine this with AI-generated phishing emails to deliver large quantities of malware in very little time.
How AI Can Strengthen Email Security
Is there anything businesses can do to defend against heightened email cybersecurity threats from AI? Hackers may be able to create malicious content more efficiently with AI, but security strategies can evolve to meet the challenge. Businesses can use ChatGPT and other AI models to strengthen security and cyber awareness.
AI Communication Analysis
AI models like ChatGPT can help businesses recognize phishing content, whether it’s written by a human or not. By learning the communication patterns used within a specific company or by a particular person, AI can identify content that doesn’t match.
This is one of the many benefits of machine learning models in cybersecurity. As security expert Tony Bryan explains, “[E]very day you’re on social media, somebody is pulling artificial intelligence off you.” AI learns from every bit of content it’s exposed to across the internet. Businesses can leverage this against hackers to get better at spotting suspicious messages.
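To make the idea concrete, here is a minimal sketch in Python of flagging a message that doesn't match a company's usual communication style. It uses a simple TF-IDF similarity baseline rather than a full language model, and the sample corpus, threshold, and flag_if_unusual helper are illustrative assumptions, not a production detector.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Known-legitimate internal emails; a real corpus would be far larger.
known_legitimate = [
    "Hi team, the weekly status report is attached. Let me know of any blockers.",
    "Reminder: this week's all-hands meeting moves to 3 p.m. Thursday.",
    "Please review the Q3 budget draft before Friday's sync.",
]

def flag_if_unusual(message: str, corpus: list[str], threshold: float = 0.1) -> bool:
    """Return True when a message reads unlike the company's usual email style."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus + [message])
    # Compare the incoming message (last row) against every known-good email.
    similarities = cosine_similarity(matrix[-1], matrix[:-1])
    return similarities.max() < threshold

suspicious = "URGENT: your account is locked. Click the link now to verify your password."
print(flag_if_unusual(suspicious, known_legitimate))  # expected: True (flagged)
```

A real system would learn from far more signal than word overlap, such as sender history, tone, and metadata, but the principle is the same: messages that diverge from established patterns get flagged for review.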
Anti-Phishing Training
ChatGPT can be an excellent tool for creating and improving email cybersecurity training programs. Effective training is vital to any business's cyber resilience because it strengthens employee awareness. Even an AI-generated phishing email will still use tried-and-true social engineering tactics, like urgent calls to action or commands to open a link or attachment.
Businesses can use ChatGPT to build personalized cybersecurity training modules for their employees. It can create practice questions and flashcards, and it can answer employees' questions to provide context on crucial security topics. Additionally, using ChatGPT to generate the kinds of emails hackers try to use will familiarize employees with the language commonly found in new AI-generated phishing content.
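As a rough sketch of that last idea, the snippet below asks the OpenAI API to draft a simulated phishing email for an internal awareness exercise. The model name and prompt wording are assumptions for illustration; the API may decline some phrasings, and any real program should keep such simulations clearly labeled and internal.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever is available
    messages=[
        {
            "role": "system",
            "content": "You help a security team build phishing-awareness training.",
        },
        {
            "role": "user",
            "content": (
                "Write a short simulated phishing email for an internal training "
                "exercise. Use an urgent call to action and a [FAKE LINK] "
                "placeholder, then list the red flags the email contains."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Pairing each generated sample with its list of red flags turns the exercise into a teaching moment rather than just a test.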
ChatGPT: Security Threat or Security Tool?
ChatGPT and other generative AI models are impressive and great for all kinds of everyday tasks, but bad actors can also exploit them for cybercrime. Hackers can use ChatGPT to create phishing emails and malware more efficiently, posing a serious threat to business email cybersecurity. Luckily, companies can also leverage AI to improve their phishing detection methods and strengthen their employees’ cyber awareness.
Zac Amos is the Features Editor at ReHack, where he covers cybersecurity topics like email security, phishing, and ransomware. For more of his work, follow him on Twitter or LinkedIn.
The views expressed in this article are those of the author and do not necessarily reflect those of StartMail.