Don’t Be Fooled: How to Distinguish Truth from Fiction Online

For the past 30 years or so, kids have been told not to believe everything they see online, but now that advice may need to be extended to adults.

The world is currently witnessing a boom in the phenomenon of ‘deepfakes’, in which AI technology is used to process video and audio clips so that they simulate real life with high accuracy, according to the UK’s ‘Daily Mail’.

To illustrate with a more transparent example, the world’s first cryptographically signed deepfake video was released using the AI studio software Revel.ai. It appears to show Nina Schick, a professional AI consultant, warning about “the boundaries between reality and fantasy”.

Of course, it wasn’t really Nina Schick in the video, and the clip was cryptographically signed by digital authentication firm Truepic, declaring that it contained fake content created using artificial intelligence software.

Slowly and clearly, the fake video says: “Some say the truth is a reflection of our reality. We are so used to defining it with our senses. But what if our reality changes? What if we can no longer rely on our senses to determine the authenticity of what we see and hear? We are in the early days of artificial intelligence, and already the lines between reality and fantasy are blurring.”

The passage adds that in “a world where shadows blur with reality, it is sometimes necessary to radically change one’s perspective to see things as they really are.”

Cryptographic signature

The high-resolution video ends with the message that the clip was faked using Revel.ai with the consent of Nina Schick herself, and that it has been cryptographically signed by Truepic.

Deepfake technologies are a form of artificial intelligence that uses “deep learning” to process audio, images or video and create very realistic multimedia content that is, in reality, fabricated.

President of Ukraine

One of the best-known uses of deepfake technology was a crude impersonation of Ukrainian President Volodymyr Zelensky appearing to surrender to Russia in a video that circulated widely on Russian social media last year.

The clip shows the Ukrainian president speaking from his podium, calling on his forces to lay down their arms and surrender to Russian forces. But savvy netizens were quick to notice the mismatch between the color of Zelensky’s neck and face, his strange accent, and the inconsistency between the background and the shadows around his head.

Recreational purposes

Despite deepfakes’ entertainment value, some experts have warned of the dangers they can pose: concerns have been raised in the past about their use to create child abuse videos, fake “revenge” pornography, and political hoaxes.

Draft legislation

In November 2022, an amendment to the UK Government’s Online Safety Bill made it illegal to use deepfake technology to create pornographic images and videos of people without their consent.

Deepfake technology has the potential to undermine democratic institutions and national security, said Dr. Tim Stevens, director of the Cybersecurity Research Group at King’s College London.

A weapon in war

He said the widespread availability of these tools could be exploited by warring states to deceive and manipulate target populations in an effort to achieve foreign policy goals and “undermine” other countries’ national security.

Threat to national security

Dr. Stevens added: “The possibility exists for artificial intelligence and deepfake systems to affect national security. It is not just a matter of high-level defense and interstate warfare, but of generally undermining trust in democratic institutions and in the media. Authoritarian regimes can exploit deepfake technologies to falsify videos that would lower the level of trust in the institutions and official organizations of the countries they are at war with.”

Widespread proliferation

With the advent of freely available AI tools for converting text to image and text to video, such as OpenAI’s DALL-E and Meta’s Make-A-Video, manipulated media will become more prevalent.

In fact, it has been predicted that by 2025, 90% of online content will be created using AI. For example, social media users recently had to work out the truth behind a fake, AI-generated image of a cat with reptile-like markings on its body that had been passed off as a newly discovered species.

Credibility standards

Cybersecurity and AI experts hope that AI platforms and companies will be required to attach a signature to content generated by their software, establishing an open standard for content credibility.

Experts predict that artificial intelligence will become an essential part of the production of almost all digital information, so if there is no way to verify whether a given piece of content was generated by artificial intelligence, it will be very difficult to maintain trust and credibility in the digital information ecosystem.

A source of information

Experts said that while users may not yet realize they have the right to understand the source of the information they receive or see, they hope this campaign demonstrates that this is possible, and that it is a right users should claim.

The cryptographic digital signature technology complies with a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an industry body whose members include Adobe, Microsoft and the BBC, and which works to combat the spread of disinformation online.
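In principle, a content-credential signature works like any other digital signature: the producer signs a fingerprint (hash) of the media file with a private key, and anyone holding the matching public key can confirm the file has not been altered since it was signed. The sketch below is a minimal illustration in Python using the widely available cryptography library; it is not the C2PA specification or Truepic’s actual implementation, and the keys and file names are hypothetical placeholders.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The content producer (e.g. the studio that rendered the video) holds a private key;
# the matching public key is published so anyone can verify the content.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()


def sign_file(path: str) -> bytes:
    """Hash the media file and sign the digest with the producer's private key."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return private_key.sign(digest, ec.ECDSA(hashes.SHA256()))


def verify_file(path: str, signature: bytes) -> bool:
    """Check, using the public key, that the file is unchanged since it was signed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, digest, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# Hypothetical usage (file names are placeholders):
# sig = sign_file("declared_deepfake.mp4")
# verify_file("declared_deepfake.mp4", sig)  # True: content matches what was signed
# verify_file("tampered_copy.mp4", sig)      # False: any edit breaks the signature
```

Real content-credential standards go further, embedding the signature and a record of how the content was made inside the file’s metadata, but the basic guarantee is the same: any modification after signing breaks verification.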

Eliminate confusion for greater security

Ms. Schick and the companies Truepic and Revel.ai say their video shows that a digital signature can increase transparency around AI-generated content, and they hope it will remove confusion about the source of such videos, helping to make the internet a safer place.

An ethical world with credibility and transparency

“When an AI tool is used properly, it can be an amazing medium for storytelling and creative freedom in the entertainment industry,” said Bob de Jong, creative director of Revel.ai, adding that the pace at which the technology is evolving “is something the world has never seen before.”

“It is up to everyone, including content creators, to design an ethical world with credibility and transparency for content creation, so that AI can continue to be used and society can embrace it, enjoy it and not be harmed by it,” de Jong noted.
