The Foreign Interference Commission—Connecting with Canada’s Economic and Security Partners

Foreign Interference Around the World, Part I

In 2025, Canada is set to reassume the G7 presidency. But first, the Foreign Interference Commission is due to submit its findings to members of Parliament on December 31st, 2024, findings that will likely influence Canada’s G7 priorities. Foreign interference is a problem that troubles democracies around the world, yet Canada happens to be the only country ever to set up a public inquiry into the matter. Canada is also responsible for establishing the first-ever framework for countering foreign interference, the Rapid Response Mechanism (RRM), during its G7 presidency in 2018. So, understanding the status quo, and then figuring out who is doing what and what is working where, becomes much more important.

For the first time since Canada held the G7 presidency six years ago, with Italy now at the helm, G7 priorities are explicitly building upon Canada’s RRM and attempting to address the threats posed by artificial intelligence. Outside of the G7, the United States has taken steps to ensure the rest of the world is up to speed on the threats posed by advancing technologies, but also to counter the threats posed by domestic interference and complex crimes, which are what make foreign interference possible. Both Italy and the U.S. are beneficiaries of legislative thinking that has yet to be tabled in Canada, but which might eventually be embedded in Canada’s laws.

From a G7 Italian Presidency point of view.

Italy’s approach places an emphasis on international collaboration, ethical guidelines, and regulatory standards to mitigate the risks associated with generative AI, adding to the principles of preparedness and collective action that are central to Canada’s RRM. These efforts have included working to strengthen coordination among G7 members to monitor, analyze, and respond to Foreign Information Manipulation and Interference (FIMI).

The G7 Foreign Ministers’ Meeting Communiqué, a guiding framework for collective action among G7 countries adopted in Capri in April 2024, contains a chapter dedicated to “Countering hybrid threats, including foreign information manipulation and interference (FIMI).” It identifies foreign interference as “a growing challenge to democratic societies around the world.” Additionally, a Memorandum of Understanding on Foreign Information Manipulation was signed between Italy and the U.S. at that meeting, an initiative driven by the U.S. rather than Italy, aimed at developing a common understanding of threats and coordinating responses to foreign information manipulation.

Italy’s approach to countering FIMI has thus far focused on three areas:

1) Building societies resilient to disinformation, with actions aimed at digital literacy, fact-checking, and the development of critical thinking.

2) Co-regulation of the digital space through the involvement of public and private stakeholders, as well as independent regulators (the path taken by the EU with the adoption of the Digital Services Act and the Code of Conduct on Countering Disinformation).

3) Only as a last resort, attaching political costs, applied on a case-by-case basis, to information manipulation actions.

What makes the above possible is that Italy participates in monitoring and information-sharing activities at all multilateral levels: in the European Union, through the Rapid Alert System coordinated by the European External Action Service (EEAS), which provides exchanges on disinformation narratives and activities; in NATO; within the G7, through the RRM; in the OECD dis- and mis-information Resource Hub; and through U.S.-initiated like-minded formats, such as the Global Engagement Center (GEC).

The EU “AI Act” (approved by the EU Council in May 2024) includes provisions on risk levels in the use of AI systems and an obligation to label AI-generated content. The legislation has implications for very large online platforms and very large online search engines: it requires that they identify and mitigate systemic risks that may arise from the dissemination of artificially generated or manipulated content, particularly the risk of actual or foreseeable negative effects on democratic processes, civic debate, and electoral processes.

Italy’s focus on AI, however, goes beyond the misuse of AI for foreign interference and disinformation, and accounts for outcomes like people leveraging AI to spread terrorist propaganda. In April 2024, the Italian Council of Ministers approved new rules on AI which include, among other measures, imprisonment of one to five years for those who disseminate, without consent, AI-altered videos or images, thereby causing unjust harm.

Where Italy differs from other countries may be in how it monitors and counters disinformation. There is no specific agency dedicated to monitoring and countering disinformation; rather, it is a whole-of-government approach led by the Prime Minister’s Office. The office leads national efforts on disinformation and FIMI, coordinating with all the relevant Public Administrations (the Ministry of Foreign Affairs and International Cooperation, the Ministry of Interior, the Ministry of Defence, the Communications Authority, the National Cybersecurity Agency, etc.). As a result, Italy’s largest Public Administrations have assigned independent advisors to aid in countering “fake news.”

In summary, Italy’s approach might best be understood as a hands-off one: its Public Administrations are empowered to set their own priorities, and are also responsible for reviewing their own efforts and identifying any corrections that need to be made.

From a U.S. Department of State point of view.

The U.S. has been the single most influential nation in bringing forth global policy efforts aimed at building international consensus and creating a safer world, through diplomatic efforts led by the U.S. Department of State (DOS). On the topic of AI, DOS has led efforts to ensure that the world is prepared for rapid advancements in technology through its Global Engagement Center, the Office of the Special Envoy for Critical and Emerging Technology (S/TECH), and the Bureau of Cyberspace and Digital Policy (CDP).

DOS has been responsible for many of the advancements related to legislation for emerging technologies, and for organizing cooperation through international partnerships like the G7 and the Organization for Economic Cooperation and Development (OECD). Those policy efforts have focused on developing governance approaches for the safe, secure, and trustworthy development, deployment, and use of AI. They have also resulted in the OECD revising its landmark 2019 Recommendation on AI, which formed the basis of several key AI governance initiatives worldwide and now accounts for updates on human rights, bias, generative AI, mis- and disinformation, and intellectual property.

In January 2024, the Global Engagement Center released a tool to address FIMI called “The Framework to Counter Foreign State Information Manipulation.” The Framework serves as a tool for diplomatic engagement by establishing a common operating picture based on five key action areas: 1) national strategies and policies; 2) governance structures and institutions; 3) human and technical capacity; 4) civil society, independent media, and academia; and 5) multilateral engagement.

In March 2024, there was another first at the UN General Assembly (UNGA), which adopted a U.S.-led resolution on AI, the first-ever standalone resolution negotiated at the UNGA to establish a global consensus approach to AI governance. All member states agreed to adopt the resolution, focused on safe, secure, and trustworthy AI, without the need for a vote. The resolution builds upon the multiple ongoing international initiatives related to AI governance that share the same goal of ensuring safe, secure, and trustworthy AI systems.

DOS has also been involved in efforts to develop an International Cyberspace and Digital Policy Strategy (2024): an approach to building digital and cyber capacity so that partners are able to build on a resilient digital ecosystem, respond quickly when incidents happen, and hold criminal and malign actors accountable. Its guiding principles focus on an affirmative vision for a secure and inclusive cyberspace grounded in international law, including international human rights law; the integration of cybersecurity, sustainable development, and technological innovation; and a comprehensive policy approach that uses the appropriate tools of diplomacy and international statecraft across the entire digital ecosystem.

This strategy’s blueprint for action has four key components: first, to promote, build, and maintain an open, inclusive, secure, and resilient digital ecosystem; second, to align rights-respecting approaches to digital and data governance with international partners; third, to advance responsible state behavior in cyberspace and counter threats to cyberspace and critical infrastructure by building coalitions and engaging partners; and, finally, to strengthen and build international partners’ digital and cyber capacity, including the capacity to combat cybercrime. These core essentials serve as a starting point for building consensus around current technologies as well as those that may emerge, and so provide a more favorable footing for countering the misuse of those technologies.

The CDP also uses cyber capacity building programs, including training on cyber attribution and the framework of responsible state behavior, to strengthen international partnerships, promote rights-respecting best practices, and defend the stability of cyberspace. These are foreign assistance programs that promote global adherence to the framework for responsible state behavior in cyberspace, a key component of DOS’s efforts to promote an open, interoperable, secure, and reliable internet. They include raising awareness of the applicability of international law in cyberspace, promoting the adoption of norms of responsible state behavior in cyberspace, and developing and implementing practical confidence-building measures.

Nathaniel Fick, the Ambassador at Large for CDP, made comments in February that might best describe how consequential it is to have the necessary legislation around current and emerging technologies. Essentially, technology has become, or soon will be, interwoven with every aspect of society, and it is reshaping industries at a record pace, including health care, transportation, education, finance, agriculture, and energy. But it has also allowed bad actors to operate remotely, anywhere with an internet connection, far away from a targeted country’s policing and public safety stakeholders. For those reasons, a global framework for tech governance, built on collaboration against threats and digital solidarity, is only possible through international cooperation in which everyone is protected.

The U.S.’ Unique Approach

One of the biggest advantages that the U.S. has over all other countries when it comes to safeguarding institutions is its policies, a mix that blends whistleblower protections with awards. That whistleblower award approach, which set a new precedent by increasing payouts to as much as 30% of recovered funds for the voluntary reporting of information related to wrongdoing, was introduced by the IRS in 2006. Since then, the “30%” approach has been adopted by many other U.S. agencies and departments related to taxes, government contracting, motor vehicle safety, health insurance, environmental standards, and more.

Take, for example, the whistleblower award program at the Securities and Exchange Commission (SEC). Since amending its policies on whistleblower awards, the SEC has recovered approximately $3.5 billion in sanctions and paid out over $1 billion. The SEC has also been responsible for issuing some of the largest-ever payouts, including a record-breaking payout of $279 million. As effective as the U.S.’ approach to rooting out corruption and keeping its institutions clean has been, such an approach would be incompatible with today’s Canada.

Perhaps the best way to understand the current landscape is by revisiting some of the things that have been said about Canada by other countries’ think tanks and industry experts. These have included descriptions of Canada’s criminal laws as insufficient for prosecuting complex crimes and of our current whistleblower protections as inadequate. Meanwhile, stories about institutional powers being misappropriated and weaponized by various elements within government continue to come out, including coworkers being allowed to go after coworkers, even at the highest levels of the public service.

Next week, this short series reviewing Canada’s current whistleblower landscape looks at the steps being taken by Germany, the U.K., New Zealand, and Australia to strengthen their institutions and democratic processes.

A special thank you to the U.S. Department of State and Italy’s Ministry for Foreign Affairs and International Cooperation for their contributions and for making this article possible.