As much as I love a deepfake Anakin Skywalker, it might be wise to get more AI-focused firewalls in place prior to the next federal election.
Many have raised red flags around AI (artificial intelligence), with numerous tech types having signed a statement referring to it as one of our greatest existential threats.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the statement, shared by the Center for AI Safety.
I hadn鈥檛 thought of deepfakes as being part of that threat, at least not I recently started seeing ads that didn鈥檛 sit right.
Deepfakes use a type of AI to create imagery and audio. If you're a Star Wars fan like myself, you got a taste of deepfake AI in Gareth Edwards' movie Rogue One, where it was used to recreate the likenesses of the late Carrie Fisher and Peter Cushing. Continuing on that theme, I love comedian/impersonator Charlie Hopkinson's YouTube videos in which he critiques Star Wars films while deepfaked to look like disheveled characters from the movies (his Anakin Skywalker and Obi-Wan Kenobi slay me).
Curiously, while watching these Hopkinson gems and other Internet content, I recently started seeing scam ads featuring likenesses of other real-life celebrities. One had Elon Musk rabbiting on about how you need to invest in, er, whatever. (Apparently, artificial Elons have been shilling get-rich schemes online for a while now.) Another scam circulating on social media features a deepfake of YouTuber Jimmy Donaldson, aka Mr. Beast.
"Lots of people are getting this deepfake scam ad of me… are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem," Donaldson commented on X (formerly Twitter).
Regardless of what you think of Mr. Beast, he is right; it is a serious problem, and not just in the threat it poses to celebrities. Deeper, darker deepfake concerns have been raised around exploitation and intimidation.
And then there鈥檚 politics.
This week, the Canadian government's Communications Security Establishment (CSE) issued an alert stating "cyber threat activity is more likely to happen in Canada's next federal election than in the past."
"Cyber threat actors are increasingly using generative artificial intelligence (AI) to enhance online disinformation. It's very likely that foreign adversaries or hacktivists will use generative AI to influence voters ahead of Canada's next federal election."
Though the alert doesn't speak specifically to deepfakes, deepfake AI is an increasingly accessible tool that I wouldn't be surprised to see used to manipulate perceptions and sway voters one way or another.
And in a country where, according to a recent Leger poll, 11 per cent of us believe the earth is flat, those hacktivists don't exactly have their work cut out for them.
While CSE assures it's doing what it can to protect Canada's democratic process, the Canadian Bar Association argues criminal law doesn't go far enough to protect the public from the harm posed by targeted deepfakes.
Unfortunately, technology advances quickly, and unless a concerted effort is made to put necessary protections in place, we're soon going to find the line between reality and deepfake much more difficult to discern, let alone agree upon.