Google’s Threat Intelligence Group (GTIG) is sounding the alarm once again on the risks of AI, publishing its latest report on how artificial intelligence is being used by dangerous state-sponsored hackers.
The team has identified an increase in model extraction attempts, a form of intellectual property theft in which an attacker repeatedly queries an AI model to infer its internal logic and replicate it in a new model.
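To make the model-extraction idea concrete, here is a minimal, self-contained sketch of the underlying pattern. It is not GTIG's methodology and is not drawn from the report: a toy "victim" classifier trained locally stands in for a proprietary model, and a surrogate is fitted purely on the victim's query responses. The names (`victim`, `surrogate`), the synthetic data, and the scikit-learn library choice are all illustrative assumptions.

```python
# Toy illustration of model extraction: an "attacker" with only
# query access to a model trains a surrogate copy on its outputs.
# Everything here is local and synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in for a proprietary model the attacker can only query.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

# The attacker sends many queries and records the predictions...
queries = np.random.RandomState(1).uniform(X.min(), X.max(), size=(5000, 10))
stolen_labels = victim.predict(queries)

# ...then fits their own surrogate model on the query/response pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# Agreement on held-out inputs shows how closely the copy mimics the original.
holdout = X[1000:]
agreement = accuracy_score(victim.predict(holdout), surrogate.predict(holdout))
print(f"Surrogate agrees with victim on {agreement:.0%} of held-out inputs")
```

Measuring how often the surrogate agrees with the original model is one common way to gauge how much of its behavior has been recovered; the concern in the report is this same pattern carried out at much larger scale against commercial AI services.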
Our new Google Threat Intelligence Group (GTIG) report breaks down how threat actors are using AI for everything from advanced reconnaissance to phishing to automated malware development.
More on that and how we’re countering the threats ↓ https://t.co/NWUvNeBkn2
— Google Public Policy (@googlepubpolicy) February 12, 2026
While this is worrying, it isn’t the main risk that Google is voicing concern over. The report goes on to warn of government-backed threat actors using large language models (LLMs) for technical research, targeting, and the rapid generation of nuanced phishing lures.
The report highlights concerns over the Democratic People’s Republic of Korea, Iran, the People’s Republic of China and Russia.
Gemini and phishing attacks
These actors are reportedly using AI tools, including Google’s own Gemini, for reconnaissance and target profiling, gathering open-source intelligence at scale and creating hyper-personalized phishing scams.
“This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling,” the report from Google states.
“Targets have long relied on indicators such as poor grammar, awkward syntax, or lack of cultural context to help identify phishing attempts. Increasingly, threat actors now leverage LLMs to generate hyper-personalized lures that can mirror the professional tone of a target organization.”
For example, if Gemini were given the biography of a target, it could generate a convincing persona and craft a scenario likely to grab their attention. By using AI, these threat actors can also translate in and out of local languages far more effectively.
As AI’s ability to generate code has grown, so have the opportunities for malicious use, with these actors leaning on AI’s vibe coding functionality to generate and troubleshoot malicious tooling.
The report goes on to warn about a growing interest in experimenting with agentic AI, a form of artificial intelligence that can act with a degree of autonomy, supporting tasks such as malware development and its automation.
Google notes its efforts to combat this problem through a variety of measures. Along with publishing Threat Intelligence reports multiple times a year, the firm has a team constantly searching for threats, and it is implementing safeguards so Gemini cannot be used for malicious purposes.
Through Google DeepMind, the team attempts to identify these threats before they materialize; in effect, Google looks to identify potentially malicious capabilities and remove them before they can pose a risk.
While it is clear from the report that use of AI in the threat landscape has increased, Google notes that there are no breakthrough capabilities as of yet. Instead, there is simply broader use of existing tools, and a corresponding rise in risk.