Wednesday, February 12, 2025

BusinessDay

AI deepfakes: The double-edged sword of artificial intelligence

Artificial Intelligence (AI) has revolutionised many aspects of our lives, from automation to healthcare, but one of its most controversial applications is deepfake technology.

Deepfake AI refers to artificially manipulated audio, video, and images that make people appear to say or do things they never did. This technology, which utilises deep learning and neural networks, can create hyper-realistic digital forgeries that are nearly indistinguishable from authentic media.

Understanding how deepfakes are made is therefore imperative if individuals and corporate bodies are to identify them. Deepfake AI leverages deep learning techniques, particularly Generative Adversarial Networks (GANs), to create synthetic content. A GAN consists of two neural networks, a generator and a discriminator, that work against each other: the generator produces fake media, the discriminator tries to tell real from fake, and each round of this contest makes the forgeries more convincing.
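To make the generator-versus-discriminator idea concrete, here is a minimal, illustrative sketch in plain Python. It is not a real deepfake system (those use deep convolutional networks on images); instead, a one-line "generator" learns to produce numbers that mimic a made-up "real" distribution, while a logistic "discriminator" tries to tell them apart. All the parameters and numbers are toy assumptions chosen only to show the adversarial training loop.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Authentic media" stand-in: samples from a Gaussian centred on 4.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real():
    return random.gauss(REAL_MEAN, REAL_STD)

# Generator g(z) = w*z + b turns random noise z into a fake sample.
# Discriminator d(x) = sigmoid(a*x + c) scores how "real" a sample looks.
w, b = 1.0, 0.0   # generator parameters (starts far from the real data)
a, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(3000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = sample_real()
    z = random.gauss(0, 1)
    x_fake = w * z + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    # Hand-derived gradients of the binary cross-entropy loss
    grad_a = -(1 - d_real) * x_real + d_fake * x_fake
    grad_c = -(1 - d_real) + d_fake
    a -= lr * grad_a
    c -= lr * grad_c

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = random.gauss(0, 1)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    # Non-saturating generator loss -log d(fake), chained through g(z)
    grad_x = -(1 - d_fake) * a
    w -= lr * grad_x * z
    b -= lr * grad_x

print(f"generator now centres its fakes near b = {b:.2f} (real mean is {REAL_MEAN})")
```

After training, the generator's output drifts toward the real distribution, which is exactly why mature deepfakes become so hard to distinguish from authentic footage: the forger is trained directly against a detector.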

The impact of deepfakes on our ever-evolving world is far-reaching, and it is spreading like wildfire. The first major ramification is political and social manipulation: the technology can be weaponised to spread false information and sway public opinion. Some of the videos that circulated during the last election, for instance, may not have been real or legitimate.

Another key impact of deepfake technology is its ability to fuel cybercrime and fraud. Fraudsters use deepfake AI to impersonate individuals, enabling identity theft, financial fraud, and scams.

AI-enabled cybercrime is on the rise. Romance scammers, for example, have used images of white military men to deceive unsuspecting partners, while others use AI-generated voices and scripts to pressure listeners into taking specific actions. Such schemes are more rampant than ever, and many victims have lost money and property to them.

While technology is expected to make life easier, the wrongful use of deepfakes can cost people their privacy through non-consensual content. One of the most disturbing applications of deepfake technology is the creation of non-consensual explicit material: there has been a rise in sexually explicit content that is machine-generated to resemble known or familiar people. We should always think before believing or acting, as some of the videos making the rounds on social media may simply be deepfakes.

Read also: Artificial Intelligence (AI) –What impact will it have on education?

This AI technology also poses challenges for journalism and media integrity. Mainstream outlets have at times broadcast material without verifying its source, and there have been instances where the social media handles of media houses spread AI-generated content without knowing it was fake. This is a call to action for all media outlets to check content thoroughly and avoid falling into the deepfake trap.

But how can we identify deepfakes? While the technology continues to evolve, several telltale signs can help expose manipulated content: pay close attention to unnatural facial movements, glitches and artefacts, audio mismatches, and background inconsistencies; run a reverse image search on suspicious stills; and, where possible, deploy dedicated AI detection tools.
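One of those checks, reverse image search, works by comparing compact "fingerprints" of images rather than raw pixels, so a re-uploaded or re-compressed copy still matches the original. The sketch below illustrates the idea with a toy average hash on tiny 4x4 grayscale grids; the pixel values are made-up examples, and real search services apply the same principle at far larger scale.

```python
def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# A lightly re-encoded copy: same picture, slightly shifted pixel values.
recompressed = [[190, 205, 15, 5],
                [198, 210, 12, 8],
                [12, 8, 195, 205],
                [5, 15, 190, 210]]
# An unrelated image with a different pattern.
unrelated = [[10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10]]

print(hamming(average_hash(original), average_hash(recompressed)))  # → 0
print(hamming(average_hash(original), average_hash(unrelated)))     # → 8
```

The re-compressed copy hashes identically to the original despite its altered pixel values, while the unrelated image is far away in Hamming distance. This is also why a deepfake built from a stolen photo can often be traced back to its source picture.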

For those curious about how deepfake technology works, several platforms allow users to experiment with creating deepfake media, such as DeepFaceLab (https://github.com/iperov/DeepFaceLab), FaceSwap (https://faceswap.dev/), Reface (https://reface.ai/), and MyHeritage Deep Nostalgia (https://www.myheritage.com/deep-nostalgia), among many others.

Some websites can also help detect deepfake content, and they are indispensable when you doubt the authenticity of a piece of media. To help combat misinformation, several platforms offer deepfake detection tools that analyse videos and images for signs of AI manipulation: Deepware Scanner (https://deepware.ai/), Sensity AI (https://sensity.ai/), and WeVerify (https://weverify.eu/), among others.

Fighting deepfake threats requires all hands on deck. One of the most important steps is to strengthen legislation and regulation: governments and regulatory bodies must establish stringent laws to curb the malicious use of deepfake technology, given its far-reaching consequences for society and its threat to peaceful coexistence.

Deepfake detection technology must also advance, and tech companies and researchers are developing advanced AI-based detection models to counter these threats.

Raising public awareness, through advocacy and increased digital literacy, is more important than ever in combating the menace of deepfakes. Ethical AI development and responsible deployment are also needed to mitigate its impact.
AI deepfake technology represents both an incredible innovation and a significant threat. While its applications in entertainment and creativity showcase its potential, its misuse in misinformation, fraud, and privacy violations demands immediate attention. To navigate the deepfake era responsibly, society must invest in AI ethics, regulation, and digital literacy, as this will guide individuals to better understand its risks and underpinnings.

Yemi Enitan is a Cybersecurity, Infrastructure & M365 Engineer based in the UK with almost a decade of practical experience. You can reach him at [email protected].
