Sunday 12/April/2026 – 03:18 AM

Experts have warned of a dangerous escalation in the use of artificial intelligence to produce what is known as “slopaganda”: cheap, mass-produced AI propaganda. Countries such as the United States and Iran are exploiting these technologies to flood the Internet with fake content aimed at manipulating beliefs and emotions. Although some of this content seems ridiculous, it has an enormous capacity to destroy public trust and sow doubt in societies.

The slopaganda storm: How AI is rewriting war narratives
One of the AI-generated videos promoted by Iran to mock Trump

The emergence of slopaganda in arenas of direct conflict

In the week following the exchange of military strikes between the United States and Iran, a new and disturbing form of political propaganda emerged. The White House published a video clip that combined real footage of American raids with clips excerpted from popular movies, television series, and video games. In return, Iran and its sympathizers flooded social media platforms with old video clips from previous wars, presenting them as current footage of the conflict. This was accompanied by extensive AI-generated content depicting fake attacks on Tel Aviv and on American military bases in the Gulf.

More recently, video clips produced by an Iranian team have spread showing prominent figures such as Donald Trump and Benjamin Netanyahu as plastic toy figures. Academics have given this distorted new genre the name “slopaganda,” a term for the low-quality content produced by artificial intelligence to serve purely political goals.

Objectives of slopaganda and its influence on public sentiment

The use of this new weapon has not been limited to times of direct war; it has extended into the domestic political arena. According to the New York Times, in October 2025 US President Donald Trump published an AI-generated video clip showing him flying an F-16 fighter jet and dumping garbage on demonstrators. Later, he published another clip imagining his presidential library as a gaudy skyscraper equipped with a golden elevator. This type of propaganda does not aim to convince people that Donald Trump is really flying a fighter jet. Rather, it aims to send expressive messages that arouse negative emotions and create particular mental associations in the mind of the viewer. These misleading clips and images penetrate our usual mental defenses through repeated, continuous exposure across media and social networks, especially when the audience is distracted and rapidly switching between digital applications.

Contaminating the truth and destroying trust in times of crisis

Slopaganda is a highly effective way to pollute the cognitive environment and blur the lines between truth and fiction. Artificial intelligence tools function as content machines that care nothing for facts, focusing only on capturing attention and directing anger. Experts confirm that this content becomes doubly dangerous in times of crisis and war, when people eagerly search for accurate information amid an absence or scarcity of reliable sources. Once misleading information or a false mental association enters a person’s mind, it becomes very difficult to remove or correct later. With audiences numbering in the millions, even a small misleading influence can lead to serious consequences, including swaying election results, steering protest movements, or shifting public sentiment toward military conflicts.

Epistemological nihilism and the collapse of certainty in modern societies

The massive spread of this fake content makes people doubt everything they read or see on screens. Worse, as individuals improve their ability to detect misleading content, the likelihood that they will also reject real content and authentic documents increases. This ultimately leads to a general and dangerous decline in public trust in government institutions and serious media, pushing society into a state of epistemological nihilism and a loss of all certainty. When it becomes impossible to reliably identify trustworthy sources, people choose to believe whatever comforts their feelings or provokes their anger, deepening polarization in communities already struggling with overlapping and severe economic, political, and environmental crises.

Proposed solutions to curb rampant propaganda

Researchers Mark Alfano of Macquarie University and Michal Klincewicz of Tilburg University propose solutions on three main levels to confront this growing threat. Individuals must first strengthen their digital literacy: learning to spot the telltale signs of AI involvement in texts and images, getting used to checking original sources instead of reading only deceptive headlines, and blocking sources that regularly spread propaganda. Governments and regulatory bodies must intervene to impose strict technological requirements, such as placing clear watermarks on any content generated by artificial intelligence programs, along with removing misleading and dangerous content from news platforms. Finally, major technology companies such as Google, OpenAI, and Meta Platforms must be held accountable.
