WASHINGTON (AP) — Among the images of bombed-out homes and devastated streets in Gaza, some stood out for their utter horror: bloody, abandoned infants.

These images, viewed millions of times online since the start of the war, are deepfakes created using artificial intelligence. If you look closely, you can see clues: fingers that curve strangely or eyes that shimmer in an unnatural light – all telltale signs of a digital deception.

However, the outrage the images were intended to provoke is all too real.

Images from the Israel-Hamas war have vividly and painfully illustrated the potential of AI as a propaganda tool, capable of creating lifelike images of bloodshed. Since the war began last month, digitally altered images shared on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.

While most of the false claims about the war circulating online did not require artificial intelligence and came from more conventional sources, technological advances are arriving with increasing frequency and little oversight. That has made the potential for AI to become another form of weapon starkly apparent, and offered a glimpse of what lies ahead in future conflicts, elections and other major events.

“It’s going to get worse — much worse — before it gets better,” said Jean-Claude Goldenstein, CEO of CREOpoint, a technology company based in San Francisco and Paris that uses AI to assess the validity of online claims. The company has created a database of the most viral deepfakes from Gaza. “Images, video and audio: with generative AI there will be an escalation like you’ve never seen before.”

In some cases, photographs from other conflicts or disasters were repurposed and passed off as new. In other cases, generative AI programs were used to create images from scratch, such as one of a crying baby amid bomb wreckage that went viral in the early days of the conflict.

Other examples of AI-generated images include videos showing alleged Israeli rocket attacks, or tanks rolling through destroyed neighborhoods, or families searching for survivors in the rubble.

In many cases, the fakes appear designed to elicit a strong emotional response by including the bodies of babies, children or families. In the bloody first days of the war, supporters of both Israel and Hamas claimed that the other side had victimized children and babies; fake images of crying infants provided photographic “evidence” that was quickly cited as proof.

The propagandists who produce such images are adept at targeting people’s deepest impulses and fears, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked wartime disinformation. Whether it’s a deepfake baby or an actual image of an infant from another conflict, the emotional impact on the viewer is the same.

The more abhorrent the image, the more likely a user is to remember it and share it, unintentionally spreading the disinformation further.

“People are being told right now, look at this picture of a baby,” Ahmed said. “The disinformation is designed to make you engage with it.”

Similar misleading AI-generated content spread after Russia invaded Ukraine in 2022. An altered video appeared to show Ukrainian President Volodymyr Zelensky ordering Ukrainians to surrender. Such claims were circulating as recently as last week and show how persistent even easily debunked misinformation can be.

Each new conflict or election season presents disinformation traffickers with new opportunities to demonstrate the latest AI advances. Many AI experts and political scientists are therefore warning about the risks next year, when major elections will take place in several countries, including the USA, India, Pakistan, Ukraine, Taiwan, Indonesia and Mexico.

The threat that AI and social media could be used to spread lies to US voters has alarmed lawmakers of both parties in Washington. At a recent hearing on the dangers of deepfake technology, U.S. Rep. Gerry Connolly, Democrat of Virginia, said the U.S. needs to invest in funding the development of AI tools to counter other AI.

“We as a nation need to get this right,” Connolly said.

Around the world, numerous startup technology companies are working on new programs that can detect deepfakes, watermark images to prove their origin, or scan text to flag spurious claims that may have been inserted by AI.

“The next wave of AI will be: How can we verify the content that is out there? How can you detect misinformation, how can you analyze text to determine whether it is trustworthy?” said Maria Amelie, co-founder of Factiverse, a Norwegian company that has developed an AI program that can scan content for inaccuracies or biases created by other AI programs.

Such programs would be of immediate interest to educators, journalists, financial analysts, and others interested in detecting falsehoods, plagiarism, or fraud. Similar programs are being developed to detect manipulated photos or videos.

While this technology is promising, those who use AI to lie are often one step ahead, according to David Doermann, a computer scientist who led an initiative at the Defense Advanced Research Projects Agency to respond to the national security threats posed by AI-manipulated images.

Doermann, now a professor at the University at Buffalo, said that responding effectively to the political and social challenges posed by AI disinformation will require better technology, better regulations, voluntary industry standards and major investments in digital literacy programs to help internet users find ways to tell truth from fiction.

“Every time we release a tool that detects this, our adversaries can use AI to cover up these tracks,” Doermann said. “Tracking and trying to take this stuff down is no longer the solution. We need a much bigger solution.”

Source: apnews.com
