
BusinessDay

AI voice cloning: the good and the frightening prospects

The introduction of artificial intelligence (AI) technology was hailed for its seemingly limitless applications across all human fields: science, education, environment, agriculture, healthcare, religion, fashion, arts, music, politics, governance, business, and even relationships. Its dark side, its use for crime and manipulation, however, remains a frightening prospect. The development of AI voice cloning, for instance, was momentous; it offers once-unimaginable possibilities in everyday life, for good or for evil.

The two sides of deepfake

Two AI voice cloning incidents in the US in the past week demonstrate the bright and the dark sides of AI. The incidents also highlight the increasingly blurry line between what is real and what is fake, and the fact that regulators are still struggling to proffer adequate rules to protect people. To be sure, deepfakes are not new. In 2017, for instance, a Reddit user posted manipulated videos of celebrities in compromising poses. But lately, thanks to improvements in AI that can produce remarkably realistic video and audio, easy access to the tools, and the absence of robust legislation, deepfakes are on a frightening rise.

On Thursday, 25 July, Virginia Congresswoman Jennifer Wexton addressed the US House of Representatives using an AI-generated version of her own voice. Wexton suffers from speech difficulties caused by progressive supranuclear palsy, but AI voice cloning software developed by ElevenLabs gave the congresswoman “…the ability to speak as [she] once did.” ElevenLabs said it hopes to use the technology to give everyone a voice. A day later, on Friday, 26 July, Elon Musk reposted a deepfake video of Vice President Kamala Harris on his X handle without disclosing, as X’s rules require, that it was AI-generated.

AI voice cloning software was used to mimic Kamala Harris’ voice in an advert in which she supposedly ridiculed her principal, President Joe Biden. It was not until two days later, on Sunday, 28 July, that Musk clarified that it was a deepfake. In that period, between Friday and Sunday, millions of people had seen the video, and it is conceivable that many believed it was real. Musk is a known supporter of Donald Trump, so posting such a video without the appropriate caveat was manipulative and likely intended to rile Americans who support Biden. The posting had a political undertone, coming with the American general election, in which Harris will likely spar with Trump for the presidency, about 90 days away.

Increasing application of AI voice cloning for crime

AI voice cloning is increasingly being used by criminals to scam unsuspecting victims. Adeola Fayehun, a popular US-based Nigerian journalist and content creator, has narrated how she was nearly scammed twice this year through AI voice cloning. First, somebody used a deepfake of her uncle to try to fleece her; another time, scammers used her own cloned voice to try to defraud a family member.

Early this year, fraudsters manipulated video footage of Aliko Dangote’s appearance on a Channels TV programme and used it to promote a fraudulent investment scheme. An unsuspecting viewer who sees Dangote supposedly endorsing the scheme on a credible platform like Channels TV will readily believe it. Fortunately, Channels TV caught on to the deepfake early and released the original footage.

Fraudsters are becoming emboldened and creative in using deepfakes and AI voice cloning for investment scams, cyberbullying, phone scams, and smear campaigns in politics and against prominent figures. The 2023 general elections in Nigeria, for instance, were defined by the widespread use of deepfakes by politicians and their supporters to gain an advantage over their opponents.

Reason for the rise

Accessibility is one reason for the rise in the use of AI voice cloning for criminal enterprise. High-quality AI voice cloning tools are free and easily available online, and they are so easy to use that anyone with basic computer skills can generate a convincing deepfake. The manipulative technique commonly used is the generative adversarial network (GAN), and it takes only a few minutes for AI to clone a voice once a sample of that voice is fed in.
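To make that idea concrete, the sketch below shows the adversarial training loop at the heart of a GAN, using random vectors as stand-ins for voice features such as spectrogram frames. It is a minimal, illustrative toy under stated assumptions, not a real voice cloning system; the network sizes, data, and names are all invented for illustration, and production systems are far more elaborate.

```python
# Minimal, illustrative GAN training loop (PyTorch). Toy data only:
# random vectors stand in for real voice features such as spectrogram frames.
import torch
import torch.nn as nn

FEATURE_DIM = 80  # assumption: stand-in for, e.g., 80 mel-spectrogram bins
NOISE_DIM = 16

# Generator: turns random noise into a fake "voice feature" frame.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM),
)

# Discriminator: scores how "real" a feature frame looks.
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    # Toy "real" samples; a real cloner would use frames of the target voice.
    real = torch.randn(32, FEATURE_DIM) + 2.0
    fake = generator(torch.randn(32, NOISE_DIM))

    # 1) Train the discriminator to separate real frames from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key design point is the contest the name describes: the discriminator learns to flag fakes, while the generator learns to produce samples the discriminator can no longer tell apart from the real voice.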

We are all at risk

Everybody is at risk of these deepfakes, no matter their societal status. An angry or envious colleague at work, for instance, can use AI voice cloning to make you appear to say unprintable things about your bosses and so discredit you. Or imagine a jealous spouse cloning their partner’s voice to say derogatory things about somebody the partner holds in high esteem. Someone could also clone your parent’s, your child’s, or your sibling’s voice to defraud you. The scariest part, and what sometimes makes these manipulated videos and audio clips believable, is that the scammers are usually very thorough: they investigate their target’s unique situation and tap into it.

Imagine your son is away at school and you get a deepfake call, in what is unmistakably his voice, saying he has just run into some trouble. You will probably go straight into anxiety mode and drop whatever you are doing. In Adeola Fayehun’s case, the fraudsters knew her uncle ran a WhatsApp group of family and friends where he regularly posted links to important information and virtual meetings. They rode on that, using her uncle’s cloned voice to get her to click on a malicious link.

Companies are both victims and perpetrators of unethical deepfakes, notably through deceptive deepfake advertising. Predictably, deepfakes are also deployed in war: Russia recently cloned Ukrainian TV broadcasts and used them to propagate misinformation.

What law protects you against deepfake?

In Nigeria, there is no robust law yet against the unethical use of deepfakes, so an aggrieved party has little legal recourse. However, existing laws such as the Cybercrimes Act of 2015 and the Nigeria Data Protection Act of 2023 can be piggybacked on. The lack of adequate legal protection against deepfake attacks is not peculiar to Nigeria, though. The US, the UK, and other developed economies are still struggling to develop comprehensive policies against deepfakes, especially their use in politics.

Legislation that would mandate disclosing the sources of digital content, watermarking, and/or disclaimers is being considered. Social media platforms have also tried to protect users by establishing safeguards against manipulative deepfakes. The policy on X, for instance, is that “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” However, Musk’s recent posting of the Kamala Harris video clearly contradicted that policy, highlighting how easily such safeguards can be bypassed.
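For a sense of what watermarking synthetic audio could mean in practice, here is a deliberately naive sketch that hides a provenance tag in the least significant bits of 16-bit audio samples. Everything in it, function names included, is a hypothetical illustration rather than any proposed standard; real provenance schemes rely on cryptographic signing and robust perceptual watermarks that survive compression and re-recording, which this toy does not.

```python
# Toy illustration of audio watermarking: hide a provenance tag in the
# least significant bits (LSBs) of 16-bit PCM samples. Illustrative only.
import numpy as np

def embed_tag(samples: np.ndarray, tag: bytes) -> np.ndarray:
    """Write `tag` into the LSBs of the first len(tag) * 8 samples."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    out = samples.copy()
    # Clear each sample's lowest bit, then set it to the next tag bit.
    out[: bits.size] = (out[: bits.size] & ~1) | bits.astype(np.int16)
    return out

def read_tag(samples: np.ndarray, n_bytes: int) -> bytes:
    """Recover an n_bytes tag from the LSBs."""
    bits = (samples[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Fake one-second clip of 16 kHz audio, then tag and verify it.
audio = (np.random.randn(16_000) * 1000).astype(np.int16)
tagged = embed_tag(audio, b"AI-GENERATED")
assert read_tag(tagged, len(b"AI-GENERATED")) == b"AI-GENERATED"
```

The point is the principle, not the method: a machine-readable “this is synthetic” marker that travels with the media, which platforms and browsers could then surface to users automatically.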

How to protect yourself against deepfake

There is very little one can do to prevent deepfakes entirely, but a number of steps can make you less vulnerable to attacks. Be very circumspect about what you post online about yourself or your family. Be wary of accepting friend requests from strangers or people you hardly know. Avoid suspicious-looking sites and do not submit your personal details on little-known platforms. Agree on a unique identifier with members of your family: a secret code word or phrase, known only to you, to use when you communicate. And when in doubt, call the person directly for confirmation.

AI voice cloning will continue to advance, and unscrupulous people will continue to exploit it to scam or manipulate unsuspecting targets. Legislation, unfortunately, may offer victims little comfort. As consumers of digital content, we are the first gatekeepers; we have a duty to scrutinize and probe the source and authenticity of content before we accept it. We also need to act responsibly in the digital universe.