Two years have passed since Rapid Response Mechanism (RRM) Canada, the body tasked with detecting foreign interference and disinformation, identified the first Spamouflage campaign targeting Canada. First detected in 2023, Spamouflage is a covert disinformation operation that relies on networks of newly created or hijacked social media accounts to amplify narratives aligned with the interests of the People’s Republic of China (PRC), undermining Canadian institutions and critics of the Chinese government.
Initially observed targeting prominent Canadian figures, namely Prime Minister Justin Trudeau and Conservative Leader Pierre Poilievre, the campaign has since expanded its scope and methods. It has increasingly targeted Chinese-speaking individuals in Canada, many of them members of the Chinese diaspora and active participants in public political discourse.
The campaign operated across multiple platforms, including Facebook, X (formerly Twitter), Instagram, Reddit, Medium, YouTube, TikTok, and LinkedIn. True to its name, a portmanteau of “spam” and “camouflage,” Spamouflage relied on bot accounts that repeatedly posted doctored videos, harassment, fake testimonials, and deepfakes in the comment sections of Canadian government entities, political staff, and news media outlets, with the aim of deflecting criticism away from the Chinese government.
Of the sixty-five Government of Canada (GoC) accounts reportedly targeted, the most affected included Global Affairs Canada, the Canadian Armed Forces, and the Canadian Security Intelligence Service (CSIS). Major Canadian media accounts that came under attack included the Toronto Star, the Ottawa Citizen, and CBC/Radio-Canada. At the campaign’s peak, hundreds of posts were uploaded daily, reflecting a high degree of coordination and automation.
Despite this widespread activity, the Spamouflage campaign appeared to reach only a limited audience on social media. Minimal likes, shares, and comments on its posts suggested that the campaign had little measurable impact on viewers’ opinions or on electoral outcomes. While this may suggest the operation failed, contemporary disinformation campaigns like Spamouflage do not need widespread reactions to matter when their objective is intimidation. The absence of virality does not mean the absence of harm. Their significance lies in how their methods evolve to shape political conditions and discourse.
This distinction became more apparent in 2024, when RRM Canada detected a notable resurgence of the Spamouflage campaign, now focused on ten Mandarin-speaking individuals in Canada, including commentators, community leaders, and political figures. The campaign spread deepfakes of an online commentator who had previously posted content critical of the PRC. That public criticism made the commentator a target of harassment, exploitation, and disinformation designed to intimidate them, and others, into silence about the PRC regime. The deepfakes were then posted in the comment sections of Canadian government entities and media outlets and used to accuse Canadian officials of ethical violations and sexual misconduct, in an effort to undermine public trust in Canadian democratic institutions.
The campaign also targeted nine other Chinese-speaking commentators in Canada by doxxing them, publishing highly personal and sensitive information such as phone numbers and home addresses on a public website without their consent. One individual was subjected to bot-like X accounts posting sexually explicit deepfake images of them alongside their personal information.
This is the first known instance of the Spamouflage campaign using AI-generated deepfakes to sexually exploit an individual in Canada. This new approach raises concerns over misogynistic abuse online and the safety of women and girls participating in political discourse. Even though the campaign still generated little online engagement from viewers, the victims faced continued harassment, reputational damage, and risks to their personal safety. This raises the perceived cost of political participation, not only for those directly affected but also for observers and bystanders of this online threat.
Political participation relies on a degree of perceived safety; when people witness harassment, they tend to self-censor or fall completely silent. This is known as the chilling effect, and it explains why virality alone is an insufficient measure of the severity of a foreign online campaign. This case is significant not only because the campaign broadened its list of targets, but also because it evolved its tactics into more coercive and invasive methods that do not merely shape public narratives but inflict traumatic personal harm on individual social media users. This evolved strategy demonstrates an attempt to instill public fear of participating in political discussion online, especially of expressing opinions critical of the PRC regime. Perhaps online commentators, not the general public, were the campaign’s intended audience all along. Its impact, however, extends to anyone who may speak out against the PRC regime in the future.
In Canada, free speech does not merely grant permission to speak; it protects the ability to express oneself without fear. The Spamouflage campaign attacks precisely that sense of safety, leading people to withdraw from political discussion even though they retain the right to speak up. This is how intimidation by strategic disinformation campaigns like Spamouflage weakens freedom of expression in an open democracy like Canada. Spamouflage is one of many foreign interference operations that deter Canadian citizens from exercising their right to free speech.
For Canada, the Spamouflage case shows that democratic resilience cannot be measured by how much public engagement a campaign attracts online. Even when false accusations and impersonations fail to persuade or to trigger reactions on social media, citizens can still be discouraged from exercising their right to free speech. Safeguarding democracy therefore requires protecting the conditions under which people feel safe to engage in political discussion. Efforts to address foreign interference must go beyond removing disinformation to prioritizing the safety of those who are repeatedly attacked and at greater risk. Improving how government social media accounts are managed, and supporting women and diaspora communities in particular, is necessary to reduce the perceived costs of participating in politics online.
NATO has increasingly recognized disinformation and foreign interference as security threats precisely because they erode trust and political participation without requiring mass persuasion. Campaigns like Spamouflage exploit open democracies by targeting politically active individuals and narrowing who feels able to speak. Intimidation challenges democratic resilience by discouraging civic engagement, a cornerstone of democracy. Safeguarding informal democratic norms, such as mutual toleration and institutional forbearance, is therefore essential to keeping societies resilient against disinformation. The Spamouflage campaign reflects a broader truth about democracy in the digital age: participating in democracy is not merely about having the legal right to free speech; it is also about feeling safe enough to speak amid a sea of false narratives and intimidation. Canada’s democracy, and those of its NATO allies, depend on keeping online spaces safe and open so that everyone can participate without fear.
Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.
Photo retrieved from the Government of Canada.