AI Security
PostQuantum.com – Industry news and blog on Quantum Computing, Quantum Security, PQC, Post-Quantum, AI Security, AI, ML
Model Fragmentation and What it Means for Security
Model fragmentation is the phenomenon where a single machine-learning model is not used uniformly across all instances, platforms, or applications. Instead, different versions, configurations, or subsets of the model are deployed based on specific needs, constraints, or local optimizations. This…
Read More »
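As a rough illustration of how fragmentation arises and why defenders should track it, the sketch below (a hypothetical example using numpy) derives an int8 "edge" variant from a full-precision weight tensor and records a fingerprint of each variant, so security teams can at least tell which version of the model is running where.

```python
import hashlib
import numpy as np

def quantize_int8(weights: np.ndarray) -> np.ndarray:
    """Produce a lower-precision variant of a weight tensor (a common source of fragmentation)."""
    scale = np.abs(weights).max() / 127.0
    return np.round(weights / scale).astype(np.int8)

def fingerprint(weights: np.ndarray) -> str:
    """Hash the raw bytes so each deployed variant can be identified and verified later."""
    return hashlib.sha256(weights.tobytes()).hexdigest()

rng = np.random.default_rng(0)
full_precision = rng.normal(size=(256, 128)).astype(np.float32)  # "canonical" model weights
edge_variant = quantize_int8(full_precision)                     # fragment deployed to edge devices

registry = {
    "cloud/fp32": fingerprint(full_precision),
    "edge/int8": fingerprint(edge_variant),
}
print(registry)  # distinct hashes: the two deployments are no longer the same model
```

In practice the registry would live in a model-management system, but the point stands: once variants diverge, their integrity checks and attack surfaces diverge with them.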
Outsmarting AI with Model Evasion
Model Evasion in the context of machine learning for cybersecurity refers to the tactical manipulation of input data, algorithmic processes, or outputs to mislead or subvert the intended operations of a machine learning model. In mathematical terms, evasion can be…
Read More »
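As a minimal sketch of the idea (hypothetical data, assuming scikit-learn), the snippet below mounts a classic "good word" evasion: the attacker appends benign-looking tokens to a spam message until the classifier's decision flips, without ever touching the model itself.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus (hypothetical data, for demonstration only).
ham = ["meeting agenda attached", "lunch tomorrow?", "quarterly report draft", "see notes from standup"]
spam = ["win free money now", "free prize claim now", "win money fast", "claim your free prize"]
texts, labels = ham + spam, [0] * len(ham) + [1] * len(spam)

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def evade(message: str, max_edits: int = 10) -> str:
    """'Good word' evasion: append benign-looking tokens until the classifier flips to ham."""
    feature_names = vec.get_feature_names_out()
    benign_word = feature_names[np.argmin(clf.coef_[0])]  # most ham-indicative token
    words = message.split()
    for _ in range(max_edits):
        if clf.predict(vec.transform([" ".join(words)]))[0] == 0:
            break
        words.append(benign_word)
    return " ".join(words)

original = "win free money now"
crafted = evade(original)
print(clf.predict(vec.transform([original]))[0], "->", clf.predict(vec.transform([crafted]))[0])
print(crafted)
```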
AI and Canada: Pioneering Innovation, Searching for Homegrown Success
It’s easy to forget, amid the hype around Silicon Valley’s AI giants, that many of the foundational breakthroughs of modern AI were born in Canada. In fact, two of the three “godfathers of AI” - Yoshua Bengio and Geoffrey Hinton…
Read More »
Securing Machine Learning Workflows through Homomorphic Encryption
Homomorphic Encryption has transitioned from being a mathematical curiosity to a linchpin in fortifying machine learning workflows against data vulnerabilities. Its complex nature notwithstanding, the unparalleled privacy and security benefits it offers are compelling enough to warrant its growing ubiquity.…
Read More »
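A small sketch of the concept, assuming the python-paillier package (`phe`): the client encrypts its feature vector, the server evaluates a plaintext linear model directly on the ciphertexts, and only the key holder can read the resulting score.

```python
from phe import paillier  # python-paillier: an additively homomorphic (Paillier) implementation

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Client side: encrypt the feature vector before it leaves the client.
features = [0.7, 1.3, -0.2]
encrypted_features = [public_key.encrypt(x) for x in features]

# Server side: score a (plaintext) linear model on encrypted inputs.
# Paillier supports ciphertext + ciphertext and ciphertext * plaintext,
# which is exactly what a dot product plus bias needs.
weights, bias = [0.5, -1.1, 2.0], 0.3
encrypted_score = public_key.encrypt(bias)
for w, x in zip(weights, encrypted_features):
    encrypted_score = encrypted_score + x * w

# Client side: only the private-key holder can see the result.
print(private_key.decrypt(encrypted_score))  # ~ -1.18, matching the plaintext computation
```

Paillier is additively homomorphic, which is enough for linear scoring; fully homomorphic schemes such as CKKS also support ciphertext-by-ciphertext multiplication, at a higher computational cost.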
Understanding Data Poisoning: How It Compromises Machine Learning Models
Data poisoning is a targeted form of attack wherein an adversary deliberately manipulates the training data to compromise the efficacy of machine learning models. The training phase of a machine learning model is particularly vulnerable to this type of attack…
Read More »
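The bluntest form of the attack is label flipping. The toy example below (scikit-learn, synthetic data) trains the same model once on clean labels and once after an attacker has flipped a fraction of the training labels, then compares test accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Attacker flips the labels of 30% of the training points (label-flipping poisoning).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Targeted variants, flipping points near the decision boundary or planting backdoor triggers, do far more damage per poisoned sample than random flips.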
Semantic Adversarial Attacks: When Meaning Gets Twisted
Semantic adversarial attacks represent a specialized form of adversarial manipulation where the attacker focuses not on random or arbitrary alterations to the data but specifically on twisting the semantic meaning or context behind it. Unlike traditional adversarial attacks that often…
Read More »
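A toy illustration (scikit-learn, hypothetical reviews): a meaning-preserving synonym swap, "awful" to "atrocious", leaves the human reading of the sentence unchanged but can flip a bag-of-words sentiment model that has never seen the synonym.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny sentiment classifier on a hypothetical review corpus.
positive = ["great movie", "excellent acting", "great plot and excellent pacing", "wonderful film"]
negative = ["terrible movie", "awful acting", "terrible plot and awful pacing", "dreadful film"]
texts, labels = positive + negative, [1] * len(positive) + [0] * len(negative)

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def predict(text: str) -> str:
    return "positive" if clf.predict(vec.transform([text]))[0] == 1 else "negative"

original = "great cast, awful plot, awful pacing"
# Meaning-preserving synonym swap: 'awful' -> 'atrocious' (a word the model has never seen).
rewritten = original.replace("awful", "atrocious")

print(predict(original))   # negative: the model keys on the word 'awful'
print(predict(rewritten))  # typically flips to positive, though the meaning is unchanged for a human
```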
The AI Alignment Problem
The AI alignment problem sits at the core of nearly every prediction about AI's future safety. It describes the complex challenge of ensuring AI systems act in ways that are beneficial and not harmful to humans, aligning AI goals and decision-making…
Read More »
A (Very) Brief History of AI
As early as the mid-19th century, Charles Babbage designed the Analytical Engine, a mechanical general-purpose computer, and Ada Lovelace wrote influential notes on it. Lovelace is often credited with the idea of a machine that could manipulate symbols in accordance with rules and that it…
Read More »
Understanding and Addressing Biases in Machine Learning
While ML offers extensive benefits, it also presents significant challenges; one of the most prominent is bias in ML models. Bias in ML refers to systematic errors or influences in a model's predictions that lead to unequal…
Read More »
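One concrete way to surface such bias is to compare a model's selection rates across protected groups. The sketch below (hypothetical numbers, numpy only) computes the demographic parity gap and the disparate impact ratio, which the "four-fifths rule" flags when it falls below 0.8.

```python
import numpy as np

# Hypothetical model decisions (1 = approved) and a protected attribute for each applicant.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group       = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()

print(f"selection rate A: {rate_a:.2f}")                 # 0.67
print(f"selection rate B: {rate_b:.2f}")                 # 0.17
print(f"demographic parity gap: {rate_a - rate_b:.2f}")
print(f"disparate impact ratio: {rate_b / rate_a:.2f}")  # well below 0.8, a common warning threshold
```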
Adversarial Attacks: The Hidden Risk in AI Security
Adversarial attacks specifically target the vulnerabilities in AI and ML systems. At a high level, these attacks involve inputting carefully crafted data into an AI system to trick it into making an incorrect decision or classification. For instance, an adversarial…
Read More »
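One canonical example is the Fast Gradient Sign Method (FGSM). The sketch below (scikit-learn, synthetic data) applies it to a plain logistic regression, where the input gradient has a closed form; real attacks use automatic differentiation on deep networks, but the mechanics are the same.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm(x, label, eps):
    """Fast Gradient Sign Method: x' = x + eps * sign(dLoss/dx)."""
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    grad = (p - label) * clf.coef_[0]   # gradient of the log-loss w.r.t. the input, for a linear model
    return x + eps * np.sign(grad)

x, label = X_test[0], y_test[0]
print("clean prediction:", clf.predict(x.reshape(1, -1))[0], "true label:", label)
for eps in (0.05, 0.1, 0.2, 0.5, 1.0):
    x_adv = fgsm(x, label, eps)
    print(f"eps={eps}: prediction {clf.predict(x_adv.reshape(1, -1))[0]}")
```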
Gradient-Based Attacks: A Dive into Optimization Exploits
Gradient-based attacks refer to a suite of methods employed by adversaries to exploit the vulnerabilities inherent in ML models, focusing particularly on the optimization processes these models utilize to learn and make predictions. These attacks are called “gradient-based” because they…
Read More »
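A compact sketch of one such optimisation exploit, projected gradient descent (PGD), against a hand-wired logistic model in numpy: take repeated small steps along the loss gradient, then project the input back into a small L-infinity ball so the perturbation stays bounded.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, already-trained linear classifier (weights chosen by hand for illustration).
w = np.array([1.5, -2.0, 0.8, 0.5])
b = -0.2

def pgd_attack(x, y, eps=0.5, step=0.05, iters=40):
    """Projected gradient descent: ascend the loss in small steps, projecting back
    into an L-infinity ball of radius eps around the original input."""
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                        # d(log-loss)/dx for a logistic model
        x_adv = x_adv + step * np.sign(grad)      # step that increases the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection keeps the perturbation small
    return x_adv

x = np.array([0.9, -0.4, 0.3, 0.1])  # original input, confidently classified as class 1
y = 1
x_adv = pgd_attack(x, y)
print("clean score:", sigmoid(w @ x + b))            # ~0.90, above 0.5
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.46, pushed below the decision threshold
print("max perturbation:", np.max(np.abs(x_adv - x)))  # bounded by eps
```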
Introduction to AI-Enabled Disinformation
In recent years, the rise of artificial intelligence (AI) has brought significant advancements across many sectors. However, one area where AI has proven a double-edged sword is information operations, specifically the propagation of disinformation.…
Read More »
The Unseen Dangers of GAN Poisoning in AI
GAN Poisoning is a unique form of adversarial attack aimed at manipulating Generative Adversarial Networks (GANs) during their training phase; unlike traditional cybersecurity threats like data poisoning or adversarial input attacks, which either corrupt training data or trick already-trained models,…
Read More »
“Magical” Emergent Behaviours in AI: A Security Perspective
Emergent behaviours in AI have left both researchers and practitioners scratching their heads. These are the unexpected quirks and functionalities that pop up in complex AI systems, not because they were explicitly trained to exhibit them, but due to the…
Read More »
How Dynamic Data Masking Reinforces Machine Learning Security
Data masking, also known as data obfuscation or data anonymization, serves as a crucial technique for ensuring data confidentiality and integrity, particularly in non-production environments like development, testing, and analytics. It operates by replacing actual sensitive data with a sanitized…
Read More »
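As a minimal sketch of the idea in plain Python (field names and the masking key are hypothetical): records are masked on the fly according to the caller's role, so ML and analytics consumers never see raw identifiers while the stored data itself stays intact.

```python
import hashlib
import hmac

MASKING_KEY = b"rotate-me-outside-source-control"  # hypothetical secret; never hard-code in practice

def pseudonymize(value: str) -> str:
    """Keyed, deterministic masking: joins still work, but the raw value
    is not recoverable without the key."""
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def read_record(record: dict, role: str) -> dict:
    """Dynamic masking: the stored data never changes; what the caller sees depends on their role."""
    if role == "ml-engineer":  # non-privileged consumers get masked fields on the fly
        return {
            "customer_id": pseudonymize(record["customer_id"]),
            "email": "***@" + record["email"].split("@")[-1],  # keep only the domain
            "age_band": f"{record['age'] // 10 * 10}-{record['age'] // 10 * 10 + 9}",
            "purchase_total": record["purchase_total"],        # non-sensitive, passes through
        }
    return dict(record)  # privileged roles (e.g. fraud investigation) see the original record

raw = {"customer_id": "C-1029", "email": "jane.doe@example.com", "age": 43, "purchase_total": 129.5}
print(read_record(raw, role="ml-engineer"))
print(read_record(raw, role="dpo"))
```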