As technology evolves and artificial intelligence becomes increasingly sophisticated, attackers are harnessing its power to orchestrate large-scale attacks designed to circumvent traditional defense mechanisms. These tools automate attacks such as data theft and enable complex misinformation campaigns, both of which can cause widespread havoc.

At the same time, there is a growing opportunity for organizations to use AI defensively to fend off these attacks. AI-driven tools can efficiently analyze data to find patterns and quickly detect and contain potential threats.

This article explores how to use AI tools in both offensive and defensive capacities.

Using AI to Create Personalized Spear Phishing Emails

In Q3 of 2023, an interesting new cybersecurity trend emerged: after a 37.5% reduction in Q1 of 2023, the number of phishing attacks dropped back to levels last seen in 2021, according to the Anti-Phishing Working Group’s Phishing Activity Trends Report, 3rd Quarter 2023. Falling volume does not mean the threat is receding; rather, it suggests that spear phishing is evolving and that attackers are trading quantity for quality, exploiting AI to craft fewer but more convincing phishing emails.

The Evolution of Spear Phishing

For many years, attackers carried out phishing campaigns by sending out large volumes of generic emails to many recipients. While most of these attempts failed, it quickly became apparent that only a small number needed to succeed for the campaign to be worthwhile.

The development and widespread availability of AI tools have allowed spear phishing to evolve and become much more sophisticated. Attackers are using these tools to analyze vast amounts of data and craft personalized emails, specifically tailored to each recipient, that mimic genuine communications.

Technological Arms Race

As AI technology advances, even the simplest phishing attacks are becoming harder to detect in some cases. Widely available AI tools such as large language models (LLMs) can consume real-time information from websites and social media accounts at a large scale to understand social nuances and generate convincing messages in a matter of seconds.

This allows attackers to create sophisticated, personalized spear phishing emails grounded in real-time context, such as recent conversations or interactions the victim has had. Due to this accuracy, these attacks tend to have a much higher success rate than human-generated phishing attempts.

Linguistic Analysis as a Countermeasure

The same AI technology that can generate highly targeted phishing attacks can also identify potential phishing attempts by carefully analyzing sentence structure and language usage.

Using linguistic analysis to identify messages that AI wrote can help us identify potential attacks more effectively. For instance, if we receive a message from a close friend or family member that we would not expect to be written by AI, having that message flagged by a linguistic analysis tool can raise suspicions and make us more cautious.

While there is a risk that the tool has flagged the message incorrectly, the flag still provides a valuable opportunity to pause and reconsider before engaging further in a way that could be compromising.
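
To make this concrete, here is a minimal stylometric sketch in Python. The features it computes (sentence-length variance, lexical diversity, and function-word rate) are classic stylometry signals, but the thresholds and the decision rule are illustrative assumptions rather than a production detector:

    import re
    import statistics

    FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

    def stylometric_features(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
        words = re.findall(r"[a-zA-Z']+", text.lower())
        lengths = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
        return {
            # "Burstiness": humans vary sentence length more than many LLMs do.
            "sentence_length_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
            # Lexical diversity: unique words over total words.
            "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
            # Share of common function words, a classic stylometry signal.
            "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / len(words) if words else 0.0,
        }

    def looks_machine_written(text: str) -> bool:
        f = stylometric_features(text)
        # Illustrative decision rule: uniform sentences plus low diversity -> flag.
        return f["sentence_length_stdev"] < 2.0 and f["type_token_ratio"] < 0.5

    msg = "Please review the attached invoice. Please confirm receipt today."
    print(stylometric_features(msg), looks_machine_written(msg))

A real tool would combine many more features and learn its thresholds from labeled data rather than hard-coding them.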

Large Language Models (LLMs) as a Threat

Using LLMs helps attackers overcome language barriers in phishing, enabling them to launch attacks more effectively and target a broader audience.

Breaking Language Barriers

LLMs give attackers the ability to create more targeted messages. These messages may mimic the language and communication styles of the person or organization they are trying to impersonate. The vast amount of data that LLMs are trained on allows them to incorporate information like personal details, slang, or technical jargon found on someone’s social media profiles. This personalization can make the emails appear more genuine and trustworthy. LLMs could also be tailored to target different audiences according to traits such as cultural background, occupation, and age, which makes them even more difficult to detect.

Global Implications

These advanced phishing techniques mean attackers can target anyone from anywhere in the world. Many LLMs are multilingual and can be used to create phishing attempts in multiple languages quickly. This means attackers can use them to simultaneously send highly targeted attacks to multiple organizations across the globe.

Large Language Models as a Defense

While cybercriminals can employ AI to launch personalized spear phishing attacks, the same technology can also be used to detect potential spear phishing attempts.

Innovating Beyond IP Addresses

Cybersecurity solutions for phishing detection typically check IP addresses and other information, such as DomainKeys Identified Mail (DKIM), to identify and block potential threats based on reputation. They also include URL scanning capabilities, which check URLs against databases of known malicious sites to detect potential phishing attempts, and attachment scanning capabilities, which flag attachments with suspicious or malicious characteristics. Alongside these controls, we train users to look for unusual word choices, odd punctuation or capitalization, and awkward sentence structures.
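
As a simplified sketch of these reputation-based checks, the following Python snippet reads the DKIM verdict from the standard Authentication-Results header and compares extracted URL domains against a blocklist (the blocklist entries here are placeholders):

    import re
    from email import message_from_string

    # Placeholder blocklist; real systems query maintained threat feeds.
    KNOWN_BAD_DOMAINS = {"login-verify-acct.example", "secure-update.example"}

    def reputation_checks(raw_email: str) -> list[str]:
        msg = message_from_string(raw_email)
        findings = []
        # The receiving mail server records the DKIM verdict in this header.
        auth = msg.get("Authentication-Results", "")
        if "dkim=pass" not in auth.lower():
            findings.append("DKIM did not pass")
        body = msg.get_payload() if isinstance(msg.get_payload(), str) else ""
        # Compare each linked domain against the known-bad list.
        for domain in re.findall(r"https?://([^/\s]+)", body):
            if domain.lower() in KNOWN_BAD_DOMAINS:
                findings.append(f"URL on blocklist: {domain}")
        return findings

    raw = (
        "Authentication-Results: mx.example.com; dkim=fail\n"
        "Subject: Urgent: verify your account\n\n"
        "Click http://login-verify-acct.example/reset now."
    )
    print(reputation_checks(raw))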

The development of AI tools presents an opportunity to innovate beyond these traditional cybersecurity measures, which rely on identifying existing threats. Instead of just checking the sender’s IP address, mail server records, and URLs against lists of known threats, there is the potential for new tools that use AI to analyze the content of emails, along with the sender’s behavior, to identify new and unseen threats.
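
A toy version of that kind of content-based scoring might look like the following, assuming scikit-learn is available; a real system would be trained on a large labeled corpus rather than the four illustrative emails shown here:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "Your account is locked, verify your password immediately",  # phishing
        "Urgent wire transfer needed, reply with bank details",      # phishing
        "Minutes from yesterday's sprint planning are attached",     # benign
        "Lunch menu for the team offsite next Friday",               # benign
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

    # Learn which word patterns distinguish phishing from normal mail.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    # Score a new message instead of matching it against known-threat lists.
    print(model.predict_proba(["Please verify your password to unlock your account"])[0][1])

The key difference from list-based checks is that the model assigns a score to a message it has never seen before.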

It also presents an opportunity to identify video and audio hacking threats. For instance, analyzing speech patterns and audio streams can identify audio malware, unauthorized audio surveillance, and voice phishing attacks. Furthermore, organizations use AI to analyze files and find patterns in network traffic. Once the technology has established baseline behavior and patterns, it can quickly detect deviations and anomalous activities that could be indicative of security breaches.
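
The baseline-and-deviation idea can be sketched with an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest; the connection features and traffic values below are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Baseline traffic observed during normal operations (illustrative values).
    # Each row is one connection: [bytes_sent, duration_seconds, destination_port]
    baseline = np.array([
        [1200, 2.0, 443], [900, 1.5, 443], [1500, 2.2, 443],
        [1100, 1.8, 443], [1300, 2.1, 443], [1000, 1.7, 443],
    ])

    # Fit the detector on baseline behavior only.
    detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

    # A large upload to an unusual port deviates from the learned baseline.
    suspicious = np.array([[500000, 120.0, 4444]])
    print(detector.predict(suspicious))    # [-1] -> anomalous
    print(detector.predict(baseline[:1]))  # [1]  -> normal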

The Mechanics of Detection: Using AI-Powered Tools

We propose the development of a new AI-powered linguistic analysis tool that would be able to identify potential adversaries based on their language structure and grammar. For instance, if a colleague or family member sent an email that appeared to have been generated by AI, the tool would flag this as suspicious.

While this tool is not yet widely available, some existing tools offer similar functionality. For instance, AI content detection tools allow people to check whether blocks of text may have been AI-generated. Furthermore, AI-powered link analysis tools use deep learning to detect previously unseen malicious web pages. This is significantly more effective than traditional web classification approaches, which rely on lists of known threats and cannot detect or classify new malicious URLs.
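
Such link analysis tools are typically built on deep learning; as a lighter stand-in that shows the same idea, the following sketch scores a URL by the shape of its characters (character n-grams) rather than looking it up on a known-bad list. The training URLs are placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    urls = [
        "http://paypa1-secure-login.example/verify",  # malicious (lookalike)
        "http://account-update-now.example/reset",    # malicious
        "https://github.com/openai/docs",             # benign
        "https://www.wikipedia.org/wiki/Phishing",    # benign
    ]
    labels = [1, 1, 0, 0]

    # Character n-grams capture the "shape" of a URL, not its exact address.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(urls, labels)

    # An unseen URL gets a score even though it appears on no blocklist.
    print(model.predict_proba(["http://secure-verify-login.example/acct"])[0][1])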

Additionally, signals that an attacker is scraping your personal communications or public posts can themselves become a powerful piece of detection tradecraft.

Navigating False Positives: Challenges and Opportunities

Using AI to detect potential cyberattacks also introduces a new set of challenges and limitations. There is a risk that organizations or individuals may become overly reliant on the technology, which can produce both false positives and false negatives. Misclassifications might impact legitimate communications, such as AI-generated news summaries or marketing campaigns. Given that these systems may have access to sensitive personal data, organizations may also need to take additional steps to ensure compliance with data protection regulations.

False positives, in which an AI tool incorrectly identifies benign behavior as malicious, are also likely to be an issue. They can lead to wasted time and resources and a loss of confidence in the tool’s effectiveness. Organizations can implement strategies to mitigate this. Feedback mechanisms, in which security analysts correct misclassifications as they occur, can help the model learn from its mistakes and refine its performance over time. Using historical data to provide additional contextual information about typical user behavior can also help the tools make more informed judgments.
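
One way to sketch such a feedback mechanism is with an incrementally trainable model, for example scikit-learn’s SGDClassifier, whose partial_fit method can fold analyst corrections back into the model; the feature vectors here are illustrative:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    CLASSES = np.array([0, 1])  # 0 = benign, 1 = malicious
    model = SGDClassifier(loss="log_loss", random_state=0)

    # Initial fit on whatever labeled history exists (illustrative features).
    X_hist = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
    y_hist = np.array([1, 1, 0, 0])
    model.partial_fit(X_hist, y_hist, classes=CLASSES)

    def analyst_feedback(features: np.ndarray, corrected_label: int) -> None:
        # An analyst overrides a verdict; the model learns from the correction.
        model.partial_fit(features.reshape(1, -1), np.array([corrected_label]))

    # The tool flagged this event as malicious; the analyst marks it benign.
    analyst_feedback(np.array([0.85, 0.15]), corrected_label=0)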

You can expect false positives at the beginning of the process. Through continuous monitoring and performance evaluation, organizations can learn to address these false positives proactively and with increasing effectiveness to minimize this issue over time.

Explainable AI (XAI) can help users understand the purpose and decision-making process behind these tools. It does this by providing digestible explanations for complex models, which increases transparency and significantly improves human-machine collaboration.
This knowledge-sharing empowers humans to oversee operations and intervene effectively where necessary. It helps to quickly identify suspicious activity and potential misclassifications so organizations can swiftly act. That’s essential when detecting and defending against cyber threats.
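
For linear models, the simplest form of such an explanation can be read directly from the model: the per-word contributions to a verdict. Production XAI tooling often uses techniques such as SHAP or LIME for more complex models; the sketch below, with illustrative training data, shows only the simplest case:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    emails = [
        "verify your password now", "urgent bank transfer",
        "team lunch friday", "sprint notes attached",
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

    vec = TfidfVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(emails), labels)

    def explain(text: str, top_n: int = 3) -> list[tuple[str, float]]:
        # Report the words pushing this message toward "phishing", with weights.
        row = vec.transform([text])
        terms = vec.get_feature_names_out()
        contributions = row.toarray()[0] * clf.coef_[0]
        top = contributions.argsort()[::-1][:top_n]
        return [(terms[i], round(contributions[i], 3)) for i in top if contributions[i] > 0]

    print(explain("please verify your password urgently"))

Showing a user which words triggered a flag is exactly the kind of digestible explanation that lets a human agree with, or overrule, the model.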

The Growing Importance of AI in Cybersecurity

AI is automating spear phishing, making it easier for attackers to launch large-scale campaigns with personalized emails. AI models such as LLMs are helping them overcome language barriers so they can launch attacks more effectively and target a global audience.

Collaboration, continuous research, and the development of advanced AI tools that are easy to understand and use have become essential when safeguarding against evolving cyber threats. To successfully innovate and stay ahead of attackers, cybersecurity professionals must learn how to leverage these tools most effectively.

Co-authored by Mike George, CTO and Co-founder of CybrlQ Solutions.