
Deepfake ads featuring artificial intelligence-generated images of Liberal finance minister Freeland spotted on YouTube

Bob Mackin 

It is too good to be true. 

Canada’s Liberal Deputy Prime Minister and Minister of Finance, flogging a scheme to earn at least $20,000 a month, with a minimum investment of $350.

A company whose website is registered to a service provider in Chicago is behind ads on YouTube and related websites that portray Chrystia Freeland as an endorser. The videos are packaged as reports from CTV News Channel and CBC News Network, complete with anchor commentary.

Liberal Finance Minister in deepfake videos seen on YouTube (YouTube)

One of three clips this reporter spotted by chance on YouTube on May 31 shows Freeland at an April 7 news conference in Toronto, where she announced $2.4 billion in grants to boost Canada’s artificial intelligence sector.

But the words in the video were not those she spoke at the pre-budget photo op. Nor were the words actually spoken by the TV anchors.

The videos are actually artificial intelligence-generated deepfakes. What, if anything, are Freeland’s staff and the Liberal government doing to combat the misuse of the rapidly evolving technology?

“The videos and websites you reference are fake and present false and misleading information,” said Shanna Taller, media relations advisor for the Department of Finance.
“Cases of unauthorized image use are handled by law enforcement agencies, such as the Royal Canadian Mounted Police and the Canadian Centre for Cyber Security. For further questions on whole-of-government action, please contact the Canadian Anti-Fraud Centre.”

The Canadian Anti-Fraud Centre (CAFC) media phone line’s voicemail was full and emails bounced back as blocked. YouTube did not respond to a request for comment.

In March, the CAFC published a new technology bulletin on its website. It warned that deepfake technology uses “machine-learning algorithms to create realistic-looking fake videos or audio recordings. This is most commonly seen in investment and merchandise frauds where fake celebrity endorsements and fake news are used to promote the fraudulent offers.”

“If you see a celebrity or trustworthy figure promoting merchandise or crypto investments, remember that the video can be a deepfake, created with AI technology. Do your research before you buy anything,” the CAFC bulletin said. 

An expert in spotting disinformation said this may be more than a fraudulent investment scheme; it could also be the work of someone seeking to advance their political interests.

Marcus Kolga (MLI)

“There may be not that many Canadians who are falling for this specifically right now. But, it’s a very worrying sign of what may yet be to come,” said Marcus Kolga of DisinfoWatch.org. 

Kolga said a close look at the videos revealed the audio was not seamlessly synchronized with the movement of the lips. But that flaw may go unnoticed by casual viewers who do not follow Canadian politics on a daily basis. He is concerned that the technology is advancing so rapidly that deepfake videos could become undetectable.

That would open the door to financial and geopolitical manipulation and disruption on a mass scale. Governments need to enforce existing laws and enact new ones to prevent chaos, he said. In the case of the Freeland videos, Kolga said YouTube and others have a major role to play and should not be earning advertising revenue from carrying deepfake ads. 

“Today, it may be Chrystia Freeland, but tomorrow it could be a Jagmeet Singh, the next day, it could be Pierre Poilievre,” Kolga said. “You just never know, especially when we’re talking about foreign regimes. We know that China was pretty intensely using deepfakes during the Taiwanese presidential election. They weren’t great, if you knew what to look out for, you could tell all the telltale signs of a deepfake were there, again, with the synchronization issues and such. But this technology is only improving and it’s improving not yearly, it’s improving every month.”

In May, the U.S. Federal Communications Commission proposed a $6 million fine for political consultant Steve Kramer, who was behind robocalls sent two days before the first-in-the-nation presidential primary in New Hampshire. The robocalls featured deepfake audio of President Joe Biden’s voice encouraging citizens to abstain from the primary and save their vote for the November presidential election.

Kramer was also arrested in New Hampshire on bribery, intimidation and voter suppression charges. 
