All AI Security & AI Safety Posts
AI Security
Marin’s Statement on AI Risks
The rapid development of AI brings both extraordinary potential and unprecedented risks. AI systems are increasingly demonstrating emergent behaviors, and in some cases, are even capable of self-improvement. This advancement, while remarkable, raises critical questions about our ability to control and understand these systems fully.
Read More »
Post-Quantum
Cybersecurity Negligence and Personal Liability: What CISOs and Board Members Need to Know
“Could I personally be sued or fined if our company gets breached?” This uneasy question is crossing the minds of many CISOs and board members lately. High-profile cyber incidents and evolving regulations have made it clear that cybersecurity is not just an IT problem - it’s a corporate governance and legal liability issue. Defining “Reasonable” Cybersecurity: From Learned Hand to Global Standards What does it…
Read More »
AI Security
AI Security 101
Artificial Intelligence (AI) is no longer just a buzzword; it’s an integral part of our daily lives, powering everything from our search for a perfect meme to critical infrastructure. But as Spider-Man’s Uncle Ben wisely said, “With great power comes great responsibility.” The power of AI is undeniable, but if not secured properly, it could end up making every meme a Chuck Norris meme. Imagine…
Read More »
Leadership
Why We Seriously Need a Chief AI Security Officer (CAISO)
With AI’s breakneck expansion, the distinctions between ‘cybersecurity’ and ‘AI security’ are becoming increasingly pronounced. While both disciplines aim to safeguard digital assets, their focus and the challenges they address diverge in significant ways. Traditional cybersecurity is primarily about defending digital infrastructures from external threats, breaches, and unauthorized access. On the other hand, AI security has to address unique challenges posed by artificial intelligence systems,…
Read More »
AI Security
How to Defend Neural Networks from Trojan Attacks
Neural networks learn from data. They are trained on large datasets to recognize patterns or make decisions. A Trojan attack in a neural network typically involves injecting malicious data into this training dataset. This 'poisoned' data is crafted in such a way that the neural network begins to associate it with a certain output, creating a hidden vulnerability. When activated, this vulnerability can cause the…
Read More »
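The teaser above describes the core of a Trojan (backdoor) attack: poisoned samples carry a fixed trigger pattern and a flipped label, so the trained model silently associates the trigger with the attacker's chosen output. A minimal sketch of that poisoning step, using a hypothetical `poison_dataset` helper and a single-feature trigger (both illustrative assumptions, not from the post):

```python
import numpy as np

def poison_dataset(X, y, trigger_idx, trigger_value, target_label, rate=0.1, seed=0):
    """Inject a backdoor trigger into a fraction of training samples.

    Samples chosen for poisoning get a fixed 'trigger' value stamped into
    one feature and their label changed to the attacker's target class.
    """
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * rate)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx, trigger_idx] = trigger_value  # stamp the trigger pattern
    yp[idx] = target_label                # bind the trigger to the target class
    return Xp, yp, idx

# Toy data: 100 samples, 5 features, all labeled class 0
X = np.zeros((100, 5))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison_dataset(X, y, trigger_idx=0, trigger_value=9.0,
                             target_label=1, rate=0.1)
```

A model trained on `(Xp, yp)` behaves normally on clean inputs but outputs class 1 whenever feature 0 carries the trigger value, which is the hidden vulnerability the post refers to.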
AI Security
Model Fragmentation and What it Means for Security
Model fragmentation is the phenomenon where a single machine-learning model is not used uniformly across all instances, platforms, or applications. Instead, different versions, configurations, or subsets of the model are deployed based on specific needs, constraints, or local optimizations. This can result in multiple fragmented instances of the original model operating in parallel, each potentially having different performance characteristics, data sensitivities, and security vulnerabilities.
Read More »
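One practical consequence of the fragmentation described above is that a fleet of deployments can silently drift apart in version or configuration. A small sketch of detecting this, assuming a hypothetical registry of per-deployment config dicts (the helper names are illustrative, not from the post):

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Stable short hash of a model deployment's configuration."""
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def find_fragmentation(deployments: dict) -> dict:
    """Group deployment names by config fingerprint.

    More than one group means the 'same' model is running as
    divergent fragments with potentially different security properties.
    """
    groups: dict = {}
    for name, cfg in deployments.items():
        groups.setdefault(config_fingerprint(cfg), []).append(name)
    return groups

deployments = {
    "edge-a": {"version": "1.2", "quantized": True},
    "edge-b": {"version": "1.2", "quantized": True},
    "cloud":  {"version": "1.3", "quantized": False},
}
groups = find_fragmentation(deployments)  # two groups: edge fleet vs cloud
```

Each fragment group can then be audited separately, since patches and mitigations applied to one version may not cover the others.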
AI Security
Outsmarting AI with Model Evasion
Model Evasion in the context of machine learning for cybersecurity refers to the tactical manipulation of input data, algorithmic processes, or outputs to mislead or subvert the intended operations of a machine learning model. In mathematical terms, evasion can be considered an optimization problem, where the objective is to minimize or maximize a certain loss function without altering the essential characteristics of the input data.…
Read More »
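The excerpt frames evasion as an optimization problem over a loss function. For a linear scorer that framing has a closed form: the perturbation that most reduces the score per unit of L-infinity budget moves each feature against the sign of its weight (the linear-model case of the fast gradient sign method). A hedged sketch with made-up weights and input:

```python
import numpy as np

def evade_linear(x, w, eps=0.5):
    """Closed-form L-infinity evasion step against score(x) = w.x + b.

    Shifting each feature by -eps * sign(w) lowers the score as much
    as any perturbation bounded by eps in the L-infinity norm.
    """
    return x - eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])   # model weights (illustrative)
b = 0.0
x = np.array([0.4, -0.2, 0.1])   # input classified as positive

score = w @ x + b                # clean score: 0.85
x_adv = evade_linear(x, w, eps=0.5)
adv_score = w @ x_adv + b        # perturbed score: -0.90, decision flipped
```

The same idea generalizes to neural networks by replacing `sign(w)` with the sign of the loss gradient with respect to the input, which is why gradient access (or a good surrogate model) matters so much to an attacker.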
AI Security
AI and Canada: Pioneering Innovation, Searching for Homegrown Success
It’s easy to forget, amid the hype around Silicon Valley’s AI giants, that many of the foundational breakthroughs of modern AI were born in Canada. In fact, two of the three “godfathers of AI” - Yoshua Bengio and Geoffrey Hinton - built their careers at Canadian universities (Université de Montréal and University of Toronto, respectively). The third, Yann LeCun, did seminal work at Bell Labs…
Read More »