Adversarial Robustness

AI models are vulnerable to adversarial attacks. Advertorch, our well-established adversarial robustness research toolkit, implements a range of attack and defense strategies that can be used to evaluate models and protect against these risks.
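As an illustration, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known attack of the kind such toolkits implement. The logistic-regression model and data below are toy placeholders, not Advertorch code.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Moves x by eps in the direction that increases the loss, i.e. the
    sign of the cross-entropy gradient with respect to the input.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy example: a clean point confidently classified as class 1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
```

After the perturbation, the model's score for the true class drops, even though `x_adv` is close to `x` — the core observation behind adversarial attacks.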


Fairness

Bias has long existed in our society – and so it exists in our data. We focus on how to detect and mitigate bias and ensure a fair and ethical approach to AI.
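One simple way bias can be detected is by comparing a model's behavior across groups. The sketch below computes the demographic-parity gap, one common fairness metric among many; the data is invented for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    A gap near zero is one (incomplete) signal that a model treats
    the groups similarly; it is not a complete fairness audit.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy predictions: group 0 approved 3 of 4, group 1 approved 1 of 4.
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(y_pred, group)  # 0.5
```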

Model Governance

How do you ensure the algorithms you build are reliable and effective? Our blog posts on model validation explain the steps needed to test and validate your AI models.
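A basic validation step is to measure a model on data it never saw during fitting. The sketch below shows hold-out validation with a toy least-squares task; the helper names and data are illustrative, not taken from our blog posts.

```python
import numpy as np

def holdout_validate(X, y, fit, score, test_frac=0.25, seed=0):
    """Hold-out validation: fit on one split, score on unseen data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    model = fit(X[train], y[train])
    return score(model, X[test], y[test])

# Toy task: least-squares fit of y = 2x, scored by mean squared error.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 2.0 * X[:, 0]
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
score = lambda w, X, y: float(np.mean((X @ w - y) ** 2))
mse = holdout_validate(X, y, fit, score)
```

Because the relationship is exactly linear, the held-out error here is near zero; on real data the held-out score is what reveals overfitting.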


Explainability

Understanding which inputs influence a model's decisions is a critical step in the adoption of machine learning and AI. Our research provides deeper insight.
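One widely used, model-agnostic way to estimate input influence is permutation importance: shuffle one input column and measure how much accuracy drops. The model and data below are toy placeholders for illustration.

```python
import numpy as np

def permutation_importance(predict, X, y, col, seed=0):
    """Accuracy drop when one input column is shuffled.

    Larger drops indicate inputs that influence decisions more.
    """
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return base - np.mean(predict(Xp) == y)

# Toy model that only looks at column 0; column 1 is ignored.
predict = lambda X: (X[:, 0] > 0).astype(int)
X = np.array([[-1.0, 5.0], [1.0, -3.0], [2.0, 0.0], [-2.0, 1.0]] * 5)
y = predict(X)
drop0 = permutation_importance(predict, X, y, col=0)
drop1 = permutation_importance(predict, X, y, col=1)
```

Shuffling the ignored column changes nothing (`drop1` is zero), while shuffling the column the model depends on degrades accuracy — exposing which inputs actually drive the decision.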

Data Privacy

Data privacy is paramount in building responsible AI. Our toolkit for synthetic data generation allows developers to gain insight from realistic data without compromising the privacy of the individuals behind the original records.
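The idea can be sketched with the simplest possible synthesizer: fit a distribution to the real data and sample fresh rows from it. This toy Gaussian version is an assumption for illustration only — production synthesizers (GANs, copulas, differentially private mechanisms) are far more sophisticated.

```python
import numpy as np

def gaussian_synthesizer(real, n_samples, seed=0):
    """Fit a multivariate Gaussian to real data and sample from it.

    Synthetic rows preserve the means and covariances of the real
    data without reproducing any individual record verbatim.
    """
    rng = np.random.default_rng(seed)
    mu = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=n_samples)

# Toy "real" dataset: 1000 records with two numeric attributes.
rng = np.random.default_rng(1)
real = rng.normal([10.0, -5.0], [2.0, 1.0], size=(1000, 2))
synth = gaussian_synthesizer(real, n_samples=1000)
```

Analysts can then explore `synth` — its aggregate statistics track the real data — while the original records stay private.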