![Generative AI Boosts the Efficiency of Cyberattacks in China and Iran](/media/News/2025/01/31/73be88612c854e6ba77f2273fca6afba.png)
Generative AI Boosts the Efficiency of Cyberattacks from China and Iran
Google's report outlines how generative AI, through tools like the Gemini chatbot, has increased the efficiency of cyberattacks from China, Iran, and North Korea against the U.S. The report finds no revolutionary change in tactics but notes greater speed and volume of attacks. As AI evolves, experts anticipate roughly balanced utility for both defense and offense in cybersecurity.
Generative AI Fosters Efficiency in Cyberattacks by Chinese and Iranian Hackers
A recent report by Google highlights how artificial intelligence, particularly through the Gemini chatbot, is enhancing the efficiency of cyberattacks launched by nations such as China, Iran, and North Korea against U.S. targets. Publicly available large language models (LLMs) have amplified the speed and volume of these attacks without fundamentally altering their nature.
Efficiency Without Novelty
LLMs, valued for their fluency with complex language and their ability to generate working code, are being exploited by attackers to increase operational efficiency. While these tools accelerate the pace at which threat actors, both low- and high-skilled, can work, they have not meaningfully changed the types of attacks being executed. Generative AI currently offers attackers incremental gains rather than revolutionary breakthroughs.
Past studies by other technology companies, including OpenAI and Microsoft, echo Google's findings: current AI tools have yet to produce entirely new offensive tactics for cyber operations.
Impact on Cybersecurity
Technology experts remain cautious about AI's transformative potential in cybersecurity. Adam Segal of the Council on Foreign Relations noted that AI has not yet been a game-changer for malicious actors: it improves some tasks, such as phishing and code discovery, but no dramatic shift has materialized. As the technology matures, it remains unclear whether it will offer greater advantages to defenders or attackers.
Caleb Withers from the Center for a New American Security predicts an evolving arms race between offensive and defensive AI applications, but suggests that the balance of utility should remain relatively even for both sides.
Types of Threat Actors
Google’s report classifies hackers using the Gemini chatbot into two main threat categories:
- Advanced Persistent Threats (APT): Government-supported hackers focusing on espionage and destructive attacks.
- Information Operations (IO): Coordinated campaigns to influence public opinion through deceptive online activity.
Iranian hackers were the heaviest users of Gemini in both categories, applying it to tasks ranging from intelligence gathering to drafting propaganda content. Chinese APT actors relied on it mainly for reconnaissance and for deepening network access after intrusion, while North Korean actors used it to help place covert IT workers inside Western firms in support of intellectual property theft.
On the IO side, Iranian groups again accounted for most activity, crafting content designed to sway public opinion. Russian actors, barely visible in APT contexts, featured prominently in IO-related use.
Urgency for Collaborative Action
Kent Walker, Google's President of Global Affairs, stresses the urgent need for cooperation between the private sector and the U.S. government to counter AI-enhanced cyber threats. He warns that although the United States currently leads in AI, that advantage could erode without decisive action, and he calls for streamlining technology adoption in government agencies and strengthening public-private partnerships to bolster cyber defenses.