Shall I Show You Photos, Too? How AI Chatbots Are Offering Sensitive Leaked Data
During nearly two years of war, Iranian-affiliated hackers have accessed large troves of sensitive data belonging to Israeli ministries, officials, and citizens. Artificial intelligence chatbots such as ChatGPT, Grok, and Gemini reproduce this sensitive data, despite court-issued gag orders in Israel. One of the bots even provided original posts and the password needed to open the files. A Shomrim exposé


Illustration: Reuters

Milan Czerny
October 16, 2025
In the last two years, Israel has dealt with numerous cyber breaches and leaks of sensitive information from government ministries, private companies, hospitals, and accounts belonging to senior officials. In response, the state took drastic measures to halt the leaks, including the widespread use of gag orders and blocking access to certain websites for Israeli users, as well as contacting social media giants like Meta, X, and Telegram to ask them to remove specific content.
Shomrim has covered this issue extensively and can now reveal another cybersecurity breach through which sensitive information is being leaked almost unchecked. By entering relatively simple prompts, users are able to get AI chatbots to generate large volumes of hacked information, including direct access to leaked documents.
Beyond concerns over national security and intelligence-gathering by Israel’s adversaries, the broader risk lies in the ease with which sensitive personal data can be accessed through these chatbots. The information exposed includes Israeli citizens’ ID numbers, medical histories, detailed police files, personal email addresses, and photographs, raising serious privacy and safety concerns for those affected.
One of the most popular chatbots, OpenAI’s ChatGPT, provides users with leaked photographs, including footage stolen from internal police databases, images of weapons, internal ministerial documents, and personal information about Israeli citizens, as well as links to websites that allow users to download the entire set of leaked data. In test queries conducted by Shomrim, ChatGPT also recreated posts taken from a forum popular among cybercriminals, providing full details about the breached databases and even offering a password to open the files.
While the chatbot responded to the most straightforward queries by saying it is “unable to provide or help sharing data that has been leaked or hacked,” slight tweaks to the prompt led the AI tool to provide access to a seemingly limitless trove of leaked data.
At a later stage, the bot even offered to expand its own search for other recently leaked sensitive information and revealed leaks the user may not have known about.
For example, when asked to provide general information about recent leaks, the chatbot focused on specific documents. “If you like, I can try to see if there is an image that clearly shows the license,” it said.
‘Telegram links to 25,000 email addresses’
Most of the leaked data presented by ChatGPT was originally published online by a hacking group known as Handala, which is believed to have ties to Iran's Islamic Revolutionary Guard Corps (IRGC). The group intensified its cyberattacks against Israel in the aftermath of this summer’s war between the Islamic Republic and Israel.
The AI bot also provided a screenshot obtained by Malek Team, another hacking group with links to Iran, which was responsible for the hack of a medical center in Safed, northern Israel. In that cyberattack, personal medical details of both patients and soldiers were leaked. When asked specifically about the incident, ChatGPT linked directly to the hackers’ website: “For more information, you can visit the official page of the Malek group.”
ChatGPT is not alone, however, in providing leaked data. Grok, the chatbot operated by Elon Musk’s X, provided details of posts included in leaks of senior Israeli officials’ personal accounts, including private correspondence and photographs. Grok also offered direct links to screenshots of intimate pictures belonging to senior Israeli figures and documents hacked from Israeli companies believed to be linked to the defense establishment. “This follow-up post contains additional emails hacked from the accounts of senior IDF officers … It provides telegram links to 25,000 email addresses, focusing on military strategy in the aftermath of October 7, 2023,” Grok responded to a short prompt.

Other popular chatbots, such as Perplexity AI, also provided direct links to certain leaks, as well as partially redacted screenshots from former senior Israeli officials. The most secure chatbot tested was Claude, which is operated by the American AI startup Anthropic.
Of all the chatbots tested, Claude was the least likely to share or recreate leaked data, and it proved far more difficult to rewrite prompts in ways that bypassed the bot’s internal safeguards and extracted sensitive information.

‘Massive amounts of Russian propaganda’
Canadian authorities have also recently noticed a spike in the amount of leaked data, originally hacked by Handala, that is now resurfacing via various chatbots, including Google’s Gemini, Microsoft’s Copilot, DeepSeek, Grok, and Claude.
“ChatGPT, Gemini, Copilot, Claude, Grok, and DeepSeek. These platforms all outlined detailed information about the ‘hack and leak’ operation, providing names of the affected individuals, the nature of the leaked information, and links to the released images,” according to a report by the Canadian Rapid Response Mechanism (RRM), the body responsible in that country and in other G7 nations for identifying and responding to foreign threats to democracies, including attempts to interfere in elections by countries such as Russia and Iran.
In addition to concerns about sensitive leaks, analysts have found that chatbots, because they train on vast datasets scraped from the web, may reproduce content originating from malicious influence campaigns, including Russian disinformation websites. By flooding search results with false information, including some targeting Israel, these campaigns undermine the credibility of the information provided by chatbots. As a result, thousands of fake articles can end up being incorporated into the responses of artificial intelligence systems.
“Massive amounts of Russian propaganda, 3.6 million articles in 2024, is now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda,” according to researchers from NewsGuard, an American internet trust service that rates the credibility and transparency of news and information websites.
As the battle over the truth moves into the realm of AI, it appears that this is just the beginning of a new and very different stage in terms of scope and intensity.
Israel’s National Cyber Directorate did not respond to a request for comment.