At least three foreign ministers, a U.S. senator, and a state governor were reportedly targeted through texts, Signal messages, and voicemails in the fraudulent campaign. The identities of the recipients were not disclosed.
“This incident is being actively monitored and addressed,” said State Department spokeswoman Tammy Bruce. “We continue to strengthen our cybersecurity to prevent future incidents.” She declined further comment, citing security concerns and an ongoing investigation.
While the attempts were described by one official as “not very sophisticated” and ultimately unsuccessful, the other official noted that it was still “prudent” to alert U.S. personnel and international partners given the growing threat posed by foreign actors using AI to compromise information security.
The cable emphasized there was no direct cyber threat to the department but warned that any information shared with compromised individuals could be exposed.
This incident mirrors a case in May involving President Donald Trump’s chief of staff Susie Wiles, in which AI-generated messages and calls, possibly crafted using data from her personal contacts, were sent to public officials and business figures. Some of those calls reportedly featured a voice that sounded like Wiles, though they didn’t come from her number.
Rubio himself was also targeted by a deepfake earlier this year, when a manipulated video falsely claimed he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service — a claim the Ukrainian government later dismissed.
Experts have warned that AI-generated deepfakes are becoming harder to detect as technology advances. “The level of realism and quality is increasing,” said Siwei Lyu, a computer science professor at the University at Buffalo. “It’s an arms race, and right now the generators are getting the upper hand.”
The FBI had already issued warnings earlier this year about AI-driven impersonation schemes targeting senior U.S. officials, stressing the potential risks to associates and institutions.
To counter such threats, proposals have ranged from stronger laws and penalties to media literacy campaigns and AI-powered tools to detect deepfakes — though those detection systems are now struggling to keep up with ever-improving fake content, reports UNB.