As AI systems become more central to products and services, technical teams are under pressure to ensure they are not only functional but also fair, robust, and transparent. In this one-hour session, we'll walk through how you can assess AI systems using open-source tools designed for practical, code-based risk evaluation.
We'll explore how dynamic assessments work, run a sample use case on a classification or language model, and show how to interpret the outputs to support product reviews, audits, or internal documentation. This session is ideal for product leads, system architects, technical auditors, and data science managers who are beginning to formalise AI risk checks in their development process.
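To give a flavour of what a code-based risk check can look like, here is a minimal sketch of one common fairness assessment: comparing a classifier's positive-prediction rate across groups (a demographic parity check). The predictions, group labels, and function names below are hypothetical illustrations, not the specific tools covered in the session.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions (1s) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def parity_difference(rates):
    """Largest gap in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical model predictions (1 = positive outcome) and group labels.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(parity_difference(rates))  # 0.5
```

A large gap between groups does not by itself prove unfairness, but it is the kind of quantitative signal that can be logged in audit reports or internal documentation and flagged for deeper review.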