How Attackers Are Using AI
- Nas Belfon

AI isn’t just a tool for businesses and developers. Attackers are using it too, and it’s making certain types of attacks cheaper, faster, and more effective. This isn’t theoretical; these techniques are being used in real attacks today.
AI-generated phishing emails
For years, one of the easiest ways to spot a phishing email was bad grammar and awkward phrasing. Non-native English speakers ran many phishing campaigns, and the writing gave them away. That advantage is gone.
LLMs can generate polished, grammatically correct phishing emails in any language. An attacker can prompt an AI to write a convincing email impersonating IT support, a bank, or a CEO. They can generate hundreds of variations in minutes, each slightly different, making it harder for email filters that rely on matching known templates. They can also tailor emails to specific targets by feeding the AI information about the recipient, such as their company, role, recent projects, or social media posts.
The result is spear phishing at scale. What used to require manual research and careful writing for each target can now be partially automated. The volume goes up, and the quality stays high.
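To see why template matching struggles here, consider a toy filter that blocks known lure text by hash. The two message strings below are illustrative stand-ins for AI-generated variations of one lure; the point is that an exact fingerprint misses a reworded copy even though a fuzzy comparison still shows the two are related.

```python
import hashlib
from difflib import SequenceMatcher

# Two hypothetical variations of the same phishing lure: same intent,
# different wording, as an LLM can churn out by the hundred.
variant_a = "Your mailbox is almost full. Verify your account now to avoid interruption."
variant_b = "Your inbox has nearly reached its limit. Confirm your account to keep service active."

# A filter that blocks known messages by exact hash misses the variant.
blocklist = {hashlib.sha256(variant_a.encode()).hexdigest()}
fingerprint = hashlib.sha256(variant_b.encode()).hexdigest()
print(fingerprint in blocklist)   # False: the reworded copy slips through

# A fuzzy character-level similarity still registers the kinship,
# which is why modern filters lean on content analysis, not templates.
similarity = SequenceMatcher(None, variant_a, variant_b).ratio()
print(round(similarity, 2))
```

This is only a sketch; real filters use far richer features, but the failure mode of exact matching is the same.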
Deepfake audio and video
Deepfake technology uses AI to generate realistic fake audio and video. In a security context, the most dangerous application is voice cloning. With just a few minutes of sample audio (from a YouTube video, podcast, or public earnings call), an attacker can generate a synthetic voice that sounds nearly identical to the target.
This has already been used in real attacks. In 2019, the CEO of a UK energy company was tricked into transferring $243,000 after receiving a phone call from what he believed was his boss at the parent company. The voice was AI-generated. Similar incidents have been reported since, and the technology has only gotten better and more accessible.
Video deepfakes are less common in attacks right now because they require more computing power and are easier to detect. But they’re improving rapidly. There have been cases of deepfake videos being used in job interviews and in real-time video calls to impersonate executives.
Automated vulnerability discovery
Attackers are using AI to accelerate the discovery of software vulnerabilities. LLMs can analyze source code and identify potential security flaws faster than manual review. They can also generate exploit code for known vulnerability types.
This doesn’t mean AI is finding zero-days left and right. The technology isn’t yet reliable at uncovering complex, novel vulnerabilities. But for common vulnerability patterns, such as SQL injection, cross-site scripting, and insecure deserialization, AI tools can scan codebases and flag potential issues. Attackers who previously needed significant coding skills to find and exploit vulnerabilities can now use AI as an assistant to speed up the process.
AI-assisted password cracking
Traditional password cracking relies on dictionary attacks (trying common passwords) and brute-force attacks (trying every possible combination). AI adds a layer of intelligence to this. Machine learning models trained on leaked password databases can learn patterns in how humans create passwords: common substitutions (e.g., @ for a, 3 for e), typical structures (capital letters, lowercase letters, numbers, symbols), and cultural patterns.
These models generate password guesses that are statistically more likely to match real passwords. Instead of trying random combinations, the system tries passwords that look like ones humans would actually create. Research has shown that AI-assisted cracking can guess passwords significantly faster than traditional methods, especially for passwords that aren’t fully random.
Malware that adapts
AI is being used to create malware that can modify its own code to evade detection. Traditional antivirus software relies heavily on signatures: known patterns of malicious code. Polymorphic and metamorphic malware have existed for years, but AI makes it easier to generate variants that look different to security scanners while still performing the same malicious actions.
There’s also research into AI-powered malware that can observe its environment and change its behavior based on what it detects. If it notices it’s running in a sandbox (an isolated analysis environment), it can lie dormant. If it detects specific security tools, it can attempt to disable or evade them. This cat-and-mouse between attackers and defenders is getting more sophisticated on both sides.
Social engineering at scale
Beyond phishing emails, AI enables more sophisticated social engineering across multiple channels. Attackers can use AI to maintain convincing conversations via chat, text, or social media. AI chatbots can impersonate customer support agents, recruiters, or colleagues in real-time conversations, responding naturally and adapting to what the target says.
Combine this with data scraped from LinkedIn, company websites, and social media, and you get highly personalized manipulation at a scale that wasn’t previously possible. One attacker can run dozens of simultaneous social engineering conversations, each tailored to the specific target.
What this means for defenders
The bar for entry-level cybercrime is dropping. Attacks that previously required significant technical skill can now be partially automated with AI tools. This means more attacks, from more attackers, at higher quality.
For defenders, this means traditional detection methods need to evolve. You can’t rely on spotting phishing emails by their grammar anymore. You can’t assume that a familiar voice on the phone is who it claims to be. Signature-based detection alone isn’t enough for AI-generated malware variants.
The defense side is using AI too, for anomaly detection, behavioral analysis, and automated response. But the race between offensive and defensive AI use is ongoing, and staying informed about how attackers use these tools is part of being effective in any cybersecurity role.
Bottom line
AI is making attacks more accessible, more scalable, and harder to detect. The same technology that powers helpful chatbots and productivity tools is being used to write better phishing emails, clone voices, crack passwords faster, and generate evasive malware. Understanding these offensive applications is important whether you’re entering cybersecurity or just trying to protect yourself and your business.
