Google says attackers are using AI for zero-day research, malware development, reconnaissance, and access to premium AI tools ...
Hundreds of packages across npm and PyPI have been compromised in a new Shai-Hulud supply-chain campaign delivering ...
Google identified the first malicious AI use for a zero-day 2FA bypass in an open-source admin tool, accelerating threat ...
Cyber adversaries have long used AI, but now attackers are using large language models to develop exploits and orchestrate ...
A fake "OpenAI Privacy Filter" project hit #1 on Hugging Face with 244,000 downloads, spreading infostealer malware to Windows users.
The repository reached the #1 trending position on Hugging Face within 18 hours, highlighting how public AI repositories are ...
Google's GTIG identified the first zero-day exploit developed with AI and stopped a mass exploitation event. The report documents state actors using AI for vulnerability research and autonomous ...
While previous assessments categorized AI-assisted cyberattacks as experimental, current data suggests generative AI is now a mature, industrialized component of offensive operations.
The attacks compromise aerospace and drone firms' systems to exfiltrate GIS files, terrain models, and GPS data, giving the attackers a clear picture of the targets' intelligence holdings.
Google says hackers have used AI to discover and exploit a previously unknown software vulnerability for the first time.
On May 11, the same day Google's Threat Intelligence Group disclosed the first confirmed case of attackers using AI to build ...
A malicious repository on Hugging Face impersonated OpenAI’s “Privacy Filter” project and briefly reached the platform’s top trending position before removal ...