A Disinformation Attack Took Place During Prime Minister Trudeau’s Resignation Speech

On January 6th, 2025, Prime Minister Justin Trudeau addressed the country to announce that he would step down as Prime Minister once the Liberal Party of Canada chose its next leader. The speech was carried live and without interruption over radio and television, and it could also be streamed over digital platforms like YouTube, which is where I was watching it.

Just before Prime Minister Trudeau’s speech was set to begin, a targeted disinformation attack was underway through YouTube’s advertising services. More specifically, bad actors were targeting Minister Chrystia Freeland with AI-generated images of her in handcuffs, stamped with ridiculous text and finished with real-looking logos at the bottom, including the CBC News logo. Clicking on an image would then lead to a spoofed, identical-looking copy of the Toronto Star’s website. These attack ads persisted for more than three weeks, expanding to target other Canadian political party leaders.

Looking into the YouTube attack ads, it only took a quick toggle to figure out how easy digital advertising was to exploit and weaponize. Tracking the disinformation attacks happening over X (formerly Twitter) told a similar story: X was taking down AI-generated disinformation posts made by verified accounts, but bad actors had identified a grey-area loophole. Countless bot accounts were posting screenshots of those same removed posts, effectively republishing the images.
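Platforms could, in principle, close that screenshot loophole with perceptual hashing: screenshots of a removed image look nearly identical to the original, so their hashes land within a few bits of each other even after re-encoding. The sketch below is a minimal illustration of the idea using a toy average-hash over tiny grayscale pixel grids; the function names, threshold, and sample data are my own assumptions for illustration, and production systems use far more robust hashes (e.g. pHash) over real image data.

```python
def average_hash(pixels):
    """Simple average-hash: one bit per pixel, set when the
    pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def is_likely_repost(candidate, removed_hashes, max_distance=2):
    """Flag a newly posted image whose hash falls within
    max_distance bits of any previously removed image."""
    h = average_hash(candidate)
    return any(hamming(h, r) <= max_distance for r in removed_hashes)

# Toy 4x4 grayscale "images": a removed original and a slightly
# re-encoded screenshot of it (a few pixel values shifted).
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
screenshot = [[12, 198, 10, 200],
              [200, 10, 200, 10],
              [10, 200, 10, 205],
              [200, 10, 200, 10]]

removed = [average_hash(original)]
print(is_likely_repost(screenshot, removed))  # True: the hashes match closely
```

The key design point is that the hash captures coarse structure rather than exact bytes, so taking a screenshot, cropping a border, or re-compressing the file does not change the fingerprint enough to evade the match.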

The attacks taking place over both YouTube and X were disinformation attacks, but how many Canadians would recognize these go-to strategies and see the attacks for the garbage they were? That question and others like it made it worth connecting with Google (owner of YouTube), the Communications Security Establishment (CSE), and Global Affairs Canada (GAC), and even checking in with those targeted by the disinformation attack ads.

The Story Behind the Disinformation Ads

As for the disinformation attacks targeting Minister Chrystia Freeland, there were at least five different photos portraying her being taken away in handcuffs (in court and in public settings), with at least six different advertising accounts manipulated to appear responsible for running these ads: “TravelTrekkify”, “LufaTech”, “BerinaTravel”, “GossipFad”, “TravelJetSets”, and even “CBC News”, complete with its logo.

All the photos carried messages reading, “Chrystia Freeland – Is this the end of her career? A scandal that shocked the entire Canadian nation.”, “The real reason behind Freeland’s resignation.”, “She didn’t notice that the camera was filming.”, “She thought it was safe to say what she thought.”, “The nation of Canada was surprised.”, and “Good news for Canadians.”. The fake “CBC News” ad was different: only a photo of Minister Freeland with no text in the image, but beneath it sat the CBC News logo and two lines reading, “The nation of Canada was surprised. The real reason behind Freeland’s resignation.”, with “Sponsored · CBC News” shown in the sponsored section underneath. Clicking the image again led to the spoofed, identical-looking Toronto Star website – it was scary how good it was.

One of the great things about Google ads, though, is that each ad includes a transparency feature that lets users review the advertising history of advertisers whose ads appear across Google’s platforms. Opening the “My Ad Center” option on those AI-generated images, it only took two seconds to figure out the “how” behind the disinformation attack.

In the “My Ad Center” section, there was a disclaimer about how Google had “verified” the identity of the advertiser, “AirCruise B.V.”, located in the “Netherlands”. Every single disinformation attack ad targeting Minister Freeland that I clicked on traced back to this company in the Netherlands. The only variation between them was the “topic” category for the ad: “travel booking services”, “legal”, “clothing”, and “gardening”. Clicking “See more ads this advertiser has shown using Google” revealed a bunch of fake-looking ads about random things related to business, cartoons, fashion, travel, and nutrition, used to build up the advertiser’s “trust score”, with an occasional “surprise” every few hundred fake ads.
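That gap between an ad’s declared topic (“travel booking services”, “gardening”) and its actual content (a political scandal) is itself a strong moderation signal. The sketch below shows one simple way such a mismatch could be scored; the topic names echo the ones seen in the ads, but the keyword vocabularies and the scoring function are my own assumptions for illustration, not Google’s actual pipeline, which would rely on learned models rather than keyword lists.

```python
def topic_mismatch_score(declared_topic, detected_keywords, topic_keywords):
    """Fraction of keywords extracted from the ad's content that
    fall outside the declared topic's expected vocabulary.
    1.0 means nothing about the ad matches what it claims to be."""
    expected = topic_keywords.get(declared_topic, set())
    if not detected_keywords:
        return 0.0
    outside = [k for k in detected_keywords if k not in expected]
    return len(outside) / len(detected_keywords)

# Hypothetical vocabularies for two declared ad topics.
TOPIC_KEYWORDS = {
    "travel booking services": {"flight", "hotel", "cruise", "booking"},
    "legal": {"lawyer", "court", "contract"},
}

# Keywords a text/image model might plausibly extract from the
# Freeland attack ads described above.
detected = ["scandal", "resignation", "handcuffs", "minister"]
score = topic_mismatch_score("travel booking services", detected, TOPIC_KEYWORDS)
print(score)  # 1.0 -- none of the ad's content matches its declared topic
```

A high mismatch score alone would not prove bad intent, but combined with a burst of filler ads used to inflate a trust score, it is exactly the kind of pattern an automated review system could escalate.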

What came as the biggest surprise was the discovery of additional disinformation attack ads targeting non-elected public figures like Wayne Gretzky, and even Brazilian public figures, with those ads written in Portuguese. The ads targeting the likes of Wayne Gretzky, along with the ones run in less advanced countries, seemed to be beta tests to gauge how YouTube would respond, in preparation for the disinformation attack ads that would eventually run during Canadian elections.

Google Leading the Fight Against Deepfake Technology

Google’s press office responded to my query about the disinformation attack ads targeting Minister Freeland by stating that those and other similar ads had been removed and that “AirCruise B.V.” had been suspended. Additionally, Google was looking deeper into this specific ad vulnerability, and it should be safe to assume that Canadians will not have to worry about disinformation attack ads through Google’s ad services for the upcoming election.

Google referenced their “misrepresentation policy”, which prohibits the use of manipulated media to deceive, defraud, or mislead users around politics, social issues, or matters of public concern; advertisers that use a public figure’s likeness to deceive people are permanently suspended. Advertisers are also prohibited from concealing or misstating information about their business, product, or service, and from implying affiliation with or endorsement by another individual, organization, product, or service without their consent.

Google detailed how they have been investing heavily in detection and enforcement programs to counter deepfake ads, through image-recognition technology and models that detect depictions of public figures in videos. Google suggested they literally had a team of thousands working around the clock to create and enforce their policies at scale. The driving force behind nearly all the enforcement successes was Gemini (Google’s AI), which is how they were able to block billions of ads, many before a person ever sees them. Google also cited their recent “Ads Safety Report”, highlighting how Google has removed over 5.5 billion ads (slightly up from last year), restricted over 6.9 billion ads, and suspended over 12.7 million advertiser accounts (double from last year).

In 2023, Google blocked or removed 206.5 million ads that violated their misrepresentation policy, 273.4 million ads for violating their financial services policy, and over 1 billion ads for violating their policy against abusing the ad network, including some that promoted malware. As it relates to elections, Google had verified more than 5,000 new election advertisers and removed more than 7.3 million election ads from advertisers who did not complete verification. Additionally, Google was the first tech company to launch a disclosure requirement for election ads containing synthetic content, so that advertisers leveraging the power of AI had that content labelled and transparent. So, Google was all-in on ensuring that its services were not misused, especially for election-interference purposes.

The CSE, Global Affairs Canada, and Those Targeted

Although the CSE has been involved in nation-wide advertising campaigns to raise awareness of multi-layered disinformation attacks and other scams, it is not responsible for monitoring Canadian social media for online disinformation, as that is not part of its mandate. Rather, the CSE Act prohibits the CSE from directing its activities at Canadians or at anyone in Canada, and those responding reported that it was focused on monitoring global trends and drawing attention to emerging cyber threats. However, I believe those prohibitions are likely to change, seeing how the Foreign Interference Commission has suggested Canada needs to actively monitor all sources of open-source intelligence (OSINT) for disinformation, which is occurring year-round.

As for the “screen capture” loophole over X, that style of disinformation attack was shared with Global Affairs Canada (GAC), and it seemed to be one that was not on anyone’s radar. Additionally, GAC confirmed that they would be interested in providing more insight into their Rapid Response Mechanism (RRM), which works to counter foreign interference – for an upcoming issue.

The Toronto Star responded by stating that it knew bad actors were spoofing its website, that this had been going on for the past year or so, and that other mainstream outlets like the CBC and the Globe were also affected, but that these media outlets were powerless to do anything about it. The CBC responded by stating it was aware of an increase in spoofs targeting the CBC and CBC employees over social media platforms and websites, and that it is working to curb this alarming trend and remove the disinformation ads.

Beta-Testing Disinformation

What if the first round of AI-generated “get rich quick” video ads over YouTube, featuring AI-generated versions of Canadian media personalities, were meant to serve as beta tests for bad actors to conduct disinformation attacks during elections, aka foreign interference? What if similar beta tests are being conducted at this moment in less advanced countries, out of sight of GAC’s RRM and unlikely to be on anyone’s radar?

The end goal of all these disinformation YouTube ads may simply be to get a vulnerable person to click on the ad and share their personal information, but the technique is certain to become a go-to strategy to undermine the upcoming provincial and federal elections. And maybe the dis/misinformation attacks that should scare Canadians the most are the ones occurring over X: the resharing of AI-generated photos from posts that X takes down, in the form of screenshots.

If AI Can Create Problems, then AI Can Solve Problems

A techie friend of mine who works for Shopify once explained how all major tech companies have adopted anti-crime and anti-fraud methodologies that make it practically impossible for criminals to exploit digital services and platforms. None of the methodologies came as a surprise, but what shocked me was a stat he referenced about how nearly every cyber exploitation could be traced back to someone on the “inside”.

For example, Shopify assigns specific code teams to work on specific “links” in the “chain” and no team is aware of the “full chain”, to reduce the likelihood of exploitations. So, when Shopify experienced a “breach” many years ago, the combination of their AI-powered systems and their forensic teams saw everything fixed before a person could blink. Within seconds, the “breach” was patched. Within minutes, the “red alert” reached those who needed to know. Within the hour, there was a list of coders that had worked on the “link”. Within the day, key information was shared with police about the digital footprints of everything.

All the big tech giants have long been involved in the war on cybercrime, leveraging AI-powered tools and cyber forensic teams to prevent the exploitation and misuse of their digital services and platforms, and all will cooperate with policing stakeholders on criminal matters. Although most Canadians are unlikely to be aware of those successes, or how they are made possible, these efforts have proven extremely successful at helping police stay ahead of criminals who attempt to exploit technology for criminal purposes.

One example of AI-powered benevolence can be seen in how all the major gaming consoles leverage AI to monitor the conversations (text and voice) that take place between “gamers” in online gaming settings. That AI-powered monitoring helped identify a terror attack in the making, and police were able to get ahead of the threat to public safety and eliminate it.

Another example of AI-powered benevolence can be seen in how all the major cloud storage services leverage AI to scan every photo and video uploaded to the cloud, identifying files associated with crimes like human trafficking, drug and weapon trafficking, and major violence. That AI-powered monitoring also identified accounts uploading photos and videos of child pornography, and police were able to bust a large pedophile ring.

What should matter most about today’s AI is that it is the dumbest version of AI that will ever exist, yet already more capable than humans at this kind of detection work. Eventually, we will reach a point where AI is so sophisticated that it will be impossible to misuse digital services and platforms, making for a much safer world. In the meantime, for upcoming elections across Canada, and the world for that matter, the good guys have two fewer disinformation attack styles to worry about. As for hostile state actors, The Voice Magazine might just be behind the “bat symbol” in the sky.