
AI-Generated Screenshot Falsely Linked to Bondi Beach Attack
The Viral Claim and Its Narrative
In the aftermath of the December 14, 2025 Bondi Beach attack in Sydney[1], a screenshot began circulating rapidly across social media platforms including X[2] and Facebook. The image purported to show a Facebook profile allegedly belonging to one of the attackers. According to the viral captions, the profile belonged to a man named “David Cohen,” described as Jewish and based in Tel Aviv, Israel. Posts using the screenshot went further, claiming that the attack had been deliberately misreported and that the “real identity” of the perpetrator had been concealed for political reasons. The framing of these claims was clearly designed to inject a religious and ethnic angle into a deeply traumatic event, fueling suspicion and communal hostility.
The screenshot was widely shared as supposed “leaked evidence,” with users claiming the profile had been deleted shortly after the attack. Given the emotional intensity surrounding the incident, the image spread rapidly, often without scrutiny, and was amplified by accounts known for pushing sensational or divisive narratives.
Verification and Digital Forensic Findings
CyberPoe conducted a detailed forensic analysis of the viral screenshot to determine its authenticity. The first major red flag was the absence of any corroborating evidence that such a Facebook profile ever existed. No archived versions, cached links, or historical records of the alleged account could be found through open-source intelligence tools.
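The archive check described above can be reproduced with public endpoints. The sketch below queries the Internet Archive's Wayback Machine availability API for snapshots of a claimed profile URL; the URL shown is a hypothetical placeholder, and this is only one of several lookups an analyst might run, not a record of CyberPoe's exact workflow.

```python
# Minimal sketch: ask the Wayback Machine whether any snapshot of a
# claimed profile URL exists. The profile URL below is a hypothetical
# placeholder, not a real account referenced in this case.
import json
import urllib.parse
import urllib.request

def latest_snapshot(url: str) -> dict | None:
    """Return the closest archived snapshot of `url`, or None if none exists."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=15) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

if __name__ == "__main__":
    claimed_profile = "https://www.facebook.com/example.profile"  # placeholder
    snap = latest_snapshot(claimed_profile)
    if snap:
        print("Archived copy found:", snap["url"], "captured", snap["timestamp"])
    else:
        print("No archived copy found for", claimed_profile)
```

An empty result does not by itself prove a profile never existed, which is why the archive check was combined with cache searches and the watermark analysis described next.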
More critically, the image was analyzed with Google's SynthID detection tooling. SynthID, developed by Google DeepMind, embeds imperceptible digital watermarks into content produced or edited by Google's generative AI models, and its detector identifies those watermarks in suspect media. The analysis confirmed that the screenshot carries a SynthID watermark, which establishes that the image was generated, or substantially modified, by a Google AI model. That finding alone is sufficient to conclude that the screenshot is synthetic media rather than a genuine capture of a real Facebook profile.
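For readers who want to run a comparable check, Google exposes SynthID image-watermark verification through the Vertex AI SDK (a public SynthID Detector portal also exists). The sketch below is an assumption about tooling rather than a record of the exact pipeline used in this investigation; the project ID and file path are placeholders.

```python
# Minimal sketch of a SynthID watermark check via the Vertex AI Python SDK
# (google-cloud-aiplatform). Assumes a Google Cloud project with Vertex AI
# enabled; "my-project" and the image path are placeholders.
import vertexai
from vertexai.preview.vision_models import Image, WatermarkVerificationModel

vertexai.init(project="my-project", location="us-central1")

model = WatermarkVerificationModel.from_pretrained("imageverification@001")
image = Image.load_from_file("viral_screenshot.png")

# The service returns ACCEPT when a SynthID watermark is detected
# in the supplied image, and REJECT otherwise.
response = model.verify_image(image)
print("Watermark verification result:", response.watermark_verification_result)
```

Note that this check only detects watermarks embedded by Google's own models; the absence of a watermark would not prove an image is authentic, whereas its presence, as in this case, is strong evidence of synthetic origin.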
Visual inconsistencies further supported this conclusion. The typography, layout proportions, and profile photo alignment showed subtle irregularities that are common in AI-generated screenshots but uncommon in native platform captures. Taken together, the forensic evidence confirms that the image is not authentic.
What Official Records Actually Show
Contrary to the claims made online, Australian authorities have provided clear, verifiable information about the Bondi Beach attack. According to the New South Wales Police, the incident was carried out by a father and son. The father, Sajid Akram, aged 50, was shot dead at the scene by police.[1] His son, Naveed Akram, aged 24, survived and has since been charged with 59 criminal offences, including 15 counts of murder and one terrorism-related offence.
Police investigations established that the attack was ideologically motivated, with links to Islamic State-inspired extremism. Authorities also confirmed that Sajid Akram migrated to Australia from India in 1998, while Naveed Akram was born in Australia. Verified immigration and travel records show that the pair had traveled to the southern Philippines prior to the attack. At no point have investigators identified any individual named David Cohen in connection with the case, nor is there any evidence linking the attackers to Israel or Judaism.
The Broader Impact of Synthetic Disinformation
This case illustrates a growing and dangerous trend in digital misinformation: the use of AI-generated content to manufacture false identities and narratives around real-world tragedies. By introducing fabricated ethnic or religious elements, such content exploits public grief and confusion to inflame tensions and redirect blame. Screenshots, in particular, are increasingly weaponized because they carry an illusion of authenticity and are difficult for casual users to verify.
The Bondi Beach case demonstrates how synthetic media can be deployed within hours of an incident, filling information vacuums before official facts are widely understood. Once such content gains traction, corrections often struggle to achieve the same reach as the original falsehood.
Conclusion
The viral screenshot claiming to show a Facebook profile belonging to a Bondi Beach attacker named “David Cohen” is entirely fabricated. It is AI-generated synthetic media, confirmed through digital watermark detection and the absence of any real-world corroboration. Official police records conclusively identify the perpetrators and leave no room for the narrative being pushed online. This incident serves as a stark reminder that in the age of advanced AI tools, visual content must be treated with caution, especially when it is used to assign blame or provoke communal division.
CyberPoe | The Anti-Propaganda Frontline 🌍