
Techcouver.com

 
 
Why B.C. Companies Need to Start Protecting Their AI Models

March 26, 2026 by Jeremy Samuelson

In late February, Anthropic disclosed that coordinated, industrial-scale campaigns had attempted to distill its Claude models. The attackers did not need a classic breach or a stolen database to try to reproduce those capabilities; they used access, scale and repeated interaction. This is a warning every company deploying AI should take seriously: the threat is no longer only that data can be stolen. The model itself can be copied, probed and made to leak.

For years, the security conversation around AI has focused on the data that goes in: how it is collected, stored, governed and encrypted. Those questions still matter. But once data is used to train or fine-tune a model, the model becomes part intellectual property, part decision engine and part attack surface. In effect, it becomes a crown-jewel asset. For companies building with AI, that changes the conversation entirely.

This matters in Vancouver and across B.C., as the number of companies focused on AI innovation has doubled since 2023. Companies in AI, fintech, healthtech and enterprise SaaS are turning models into products, workflows and competitive advantage. As adoption continues to accelerate, leaders need to ask harder questions. Can a deployed model be copied through repeated interaction? Could sensitive training information be inferred from outputs? Are collaborative training or fine-tuning workflows leaking more than teams realize? And are organizations building security programs for a world in which both current adversarial attacks and long-horizon cryptographic risks have to be addressed together?

This is especially true in sectors such as finance, healthcare and enterprise software, where models are increasingly being used for underwriting, fraud detection, triage, forecasting, personalization and core operational workflows. When those systems are compromised, the damage is not solely technical. Rather, it can include exposure of sensitive information, loss of proprietary capability, regulatory consequences and a direct hit to customer trust.

The Expanding Threat Landscape Around AI Models

The threat extends beyond the familiar image of a hacker breaking into a system. Some attacks aim to steal the model outright: by querying it at scale and training a copy on its outputs, an attacker can distill its capabilities without ever touching the underlying infrastructure. Other attacks aim to make the model leak; model inversion and reconstruction attacks can expose patterns or examples a system has absorbed from its training data. Then there are attacks designed not to steal or reveal, but to manipulate: poisoning training data, degrading outputs or steering a model toward unsafe or unreliable behaviour.

These threats are no longer theoretical. What makes them especially urgent is that several of them do not begin with a traditional breach; they begin with ordinary, authorized access used at scale.

That is why it is no longer enough to think of AI security as a perimeter problem. Protecting the cloud environment is necessary. Protecting the database is necessary. But once AI becomes part of the product, new risks appear inside the system itself. Companies need to consider security systems that cover the full model lifecycle: what data enters the system, how models are trained and fine-tuned, how they are exposed, how access is monitored, how abuse is detected, and how privacy leakage is tested before systems reach production.

Companies don’t need to slow innovation, but they do need to treat model security as a priority. Practical questions for leadership teams to ask include: what controls, such as access controls, query monitoring, abuse detection and privacy testing, can be deployed now to reduce exposure? What systems do we need to minimize the risk of misuse?
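To make one of those controls concrete, here is a minimal, illustrative Python sketch of query monitoring: it flags any client whose request volume inside a sliding time window looks like distillation-scale extraction. The `QueryMonitor` class, its thresholds and the client IDs are hypothetical examples for this article, not part of any product it names; a production system would combine volume signals with coverage and similarity analysis.

```python
from collections import defaultdict, deque
import time

class QueryMonitor:
    """Flags clients whose query volume over a sliding window suggests
    distillation-style model extraction. Thresholds are illustrative."""

    def __init__(self, window_seconds=3600, max_queries=500):
        self.window = window_seconds
        self.max_queries = max_queries
        self.history = defaultdict(deque)  # client_id -> recent timestamps

    def record(self, client_id, now=None):
        """Log one query; return True if the client should be flagged."""
        now = time.time() if now is None else now
        q = self.history[client_id]
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_queries

monitor = QueryMonitor(window_seconds=3600, max_queries=500)
flagged = False
for i in range(501):
    flagged = monitor.record("client-a", now=1000.0 + i)
print(flagged)  # 501 queries within one hour -> True
```

The same per-client bookkeeping extends naturally to richer signals, such as how much of the input space a client's queries cover, which is a stronger indicator of extraction than raw volume alone.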

Model security is also tied to the coming quantum era, and this needs to be considered in strategic planning. Many of the encryption methods used to protect AI systems today may become vulnerable as quantum computing advances. One emerging threat, known as “harvest now, decrypt later”, sees attackers collect encrypted data today with the intention of decrypting it in the future, once more powerful technologies, such as quantum computers, can break the encryption. To protect against this, companies need to recognize that quantum is not science fiction and start building systems strong enough to withstand those attacks.

The next phase of AI competition will not be won by performance alone. It will be won by trust, resilience and defensibility. The companies that stand out will not simply be the ones that build the most capable models. They will be the ones that prove those models can be deployed, protected and governed responsibly. In AI, the next competitive advantage will not just be intelligence. It will be defensibility.

Jeremy Samuelson, Executive Vice President of AI and Innovation at Integrated Quantum Technologies, is the inventor of the AIQu™ platform and the VEIL™ product. By trade he is a data scientist and mathematician. Samuelson formerly worked at Equifax, where he served as Principal Data and AI Scientist for Digital Identity Engineering. He has also held senior AI leadership roles at Mastercard and VICI Capital Partners, and led large-scale optimization at a Coca-Cola subsidiary. He currently teaches graduate-level executive programs in AI and management at leading institutions, including Johns Hopkins and the University of Texas.

Filed Under: News, Thought Leaders Tagged With: Integrated Quantum Technologies

 
 
