November 7, 2024

Microsoft reports that North Korea and Iran are using AI for hacking

The US tech giant says it has identified threats from foreign nations that have used or tried to exploit generative AI developed by the company

Microsoft revealed on Wednesday that US adversaries, chiefly Iran and North Korea and, to a lesser extent, Russia and China, are beginning to use generative artificial intelligence to organize or carry out offensive cyber operations.

The tech giant stated that it, in collaboration with business partner OpenAI, detected and disrupted numerous threats that attempted to use or exploit AI technology they had jointly developed.

In a blog post, Microsoft said that while these techniques were “early-stage” and not particularly novel or unique, it was important to expose them publicly because US rivals are leveraging large language models to expand their ability to breach networks and conduct influence operations.

While cybersecurity firms have long used machine learning for defense, chiefly to detect unusual behavior in networks, criminals and offensive hackers have adopted the same techniques. The emergence of large language models, led by OpenAI’s ChatGPT, has raised the stakes in this cat-and-mouse game.
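For readers unfamiliar with what “machine learning for defense” looks like in practice, the sketch below is a minimal, hypothetical illustration and is not drawn from Microsoft’s report: it uses scikit-learn’s IsolationForest to flag network connections whose traffic features deviate from a learned baseline. The feature set and numbers are invented for the example; real defensive pipelines are far more involved.

```python
# Illustrative only: anomaly detection over hypothetical network flow features
# using scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes sent, bytes received,
# duration in seconds, number of distinct destination ports.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30, 3],
    scale=[1_000, 5_000, 10, 1],
    size=(500, 4),
)

# Learn what "normal" looks like from baseline traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new connections: -1 flags likely-anomalous behavior, 1 looks normal.
new_connections = np.array([
    [5_200, 21_000, 28, 3],    # resembles the baseline
    [900_000, 1_000, 2, 150],  # large upload touching many ports
])
print(model.predict(new_connections))
```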

Microsoft’s announcement on Wednesday coincided with its release of a report; the company has invested billions of dollars in OpenAI. The report says generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning. That is a threat to democracy in a year when more than 50 countries are scheduled to hold elections, amplifying disinformation that is already a prevalent problem.

Microsoft said it had disabled all of the groups’ generative AI accounts and assets, and cited several examples:

  • The North Korean cyber-espionage group Kimsuky used the models to research foreign think tanks that study the country and to generate content likely used in spear-phishing campaigns.
  • Iran’s Revolutionary Guard used large language models to assist with social engineering, to troubleshoot software errors, and to study how intruders might evade detection in a compromised network. This included generating phishing emails, one impersonating an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism, with the AI accelerating and improving the production of the emails.
  • The Russian GRU military intelligence unit known as Fancy Bear used the models to research satellite and radar technologies that may relate to the war in Ukraine.
  • The Chinese cyber-espionage group Aquatic Panda, which targets a broad range of industries, higher education, and governments from France to Malaysia, interacted with the models in ways suggesting limited exploration of how LLMs can augment its technical operations.
  • The Chinese group Maverick Panda, which has targeted US defense contractors among other sectors for more than a decade, had interactions with large language models suggesting it was evaluating their effectiveness as a source of information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs.

In a separate blog post released on Wednesday, OpenAI stated that its current GPT-4 model chatbot provides “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.”

Cybersecurity researchers expect that to change.

Last April, Jen Easterly, the director of the US Cybersecurity and Infrastructure Security Agency, informed Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.”

Easterly emphasized the importance of ensuring that AI is developed with security as a primary consideration.

Critics argue that the public release of ChatGPT in November 2022, and subsequent releases by competitors such as Google and Meta, were rushed and irresponsible because security was largely an afterthought in their development.

Amit Yoran, CEO of the cybersecurity firm Tenable, remarked, “Of course, bad actors are using large language models—that decision was made when Pandora’s box was opened.”

Some cybersecurity professionals criticize Microsoft for creating and selling tools to address vulnerabilities in large language models instead of making the models more secure from the outset.

“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” questioned Gary McGraw, a computer security expert and co-founder of the Berryville Institute of Machine Learning.

Edward Amoroso, an NYU professor and former AT&T chief security officer, noted that while the immediate threat posed by the use of AI and large language models may not be obvious, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”
