
Canada’s Spy Watchdog Investigates AI Use in National Security

Editorial


Canada’s National Security and Intelligence Review Agency is launching a comprehensive examination into the use and governance of artificial intelligence (AI) within the country’s national security framework. This initiative aims to assess how Canadian security agencies define, utilize, and oversee AI technologies in their operations.

Scope of the Review

According to a letter from Marie Deschamps, chair of the review agency, notice of the study has been sent to key federal ministers and to organizations involved in national security. The review is intended to shed light on how new and emerging technologies are being deployed, guide future assessments, and flag risks that warrant attention.

Canadian security agencies have increasingly incorporated AI into tasks such as document translation and malware detection. The review agency has the authority to access all information held by the relevant departments, including classified material, with the sole exception of cabinet confidences. To gather the data it needs, the agency may request documents, written explanations, briefings, interviews, surveys, and access to systems.

Deschamps emphasized that the review may also include independent inspections of specific technical systems.

Engagement with Key Officials

The letter concerning the review has been disseminated to multiple cabinet members, including Prime Minister Mark Carney, Minister Evan Solomon for Artificial Intelligence and Digital Innovation, Minister Gary Anandasangaree for Public Safety, and Minister David McGuinty for Defence. Additionally, heads of significant security agencies, such as the Canadian Security Intelligence Service (CSIS), the Royal Canadian Mounted Police (RCMP), and the Communications Security Establishment (CSE), received the correspondence. Agencies less typically associated with security, like the Canadian Food Inspection Agency and the Public Health Agency of Canada, were also included.

In response to inquiries regarding the review, the RCMP expressed its commitment to transparent examination of national security activities. A media statement from the RCMP stated, “Establishing transparent and accountable external review processes is critical to maintaining public confidence and trust.”

Previous Recommendations and Future Directions

In 2024, a report from the National Security Transparency Advisory Group urged Canada’s security agencies to disclose detailed descriptions of their current and planned uses of AI systems. The report highlighted the anticipated growth in reliance on AI for analyzing extensive volumes of text and images, recognizing patterns, and interpreting behaviours. While both CSIS and CSE acknowledged the need for transparency, they indicated there are constraints on publicly disclosing certain information due to their security mandates.

The federal government has established principles for AI usage, which stress the importance of transparency about how and why AI is employed. These principles also emphasize assessing and managing risks to legal rights and democratic norms early in the process. Additionally, training for public officials involved in AI development is recommended to ensure they understand associated legal, ethical, and operational issues, including privacy and security concerns.

In its latest annual report, CSIS confirmed it is implementing pilot AI programs in alignment with the federal government’s guiding principles. The RCMP has outlined various factors crucial for the legal, ethical, and responsible use of AI. These include careful system design to prevent bias, respect for privacy in information analysis, and accountability measures for AI system functionality.

Strategic Commitment from the CSE

The CSE has articulated its dedication to advancing capabilities to address critical challenges through innovative AI and machine learning applications. Its strategy promotes responsible AI use and aims to mitigate threats posed by AI-enabled adversaries. CSE chief Caroline Xavier stated, “We will always be thoughtful and rule-bound in our adoption of AI, keeping responsibility and accountability at the core of how we will achieve our goals.”

Xavier further noted the intention to implement AI safely and effectively, enhancing data analysis capabilities. The strategy advocates for rigorous testing and evaluation, ensuring that expert human oversight remains integral to AI deployment.

This initiative by Canada’s spy watchdog highlights the growing significance of AI in national security and the necessity for oversight to ensure responsible usage. The outcomes of this review will likely influence how security agencies adapt to technological advancements while maintaining public trust and accountability.

This report was initially published by The Canadian Press on January 1, 2026.

