Module 1 — The AI Trust Gap
- Why high-performing AI can still fail in practice
- Differences between research performance and real-world reliability
- Implications for scientific and strategic decision-making
Module 2 — The AI Reliability Stack
A structured framework for understanding trustworthy AI:
- Model and algorithm considerations
- Data quality and representativeness
- Validation and external testing
- Security and adversarial risks
- Human interpretation and bias
- Organizational governance
Module 3 — Common AI Failure Modes in Pharma
Introduction to the Pharma AI Failure Atlas, including:
- hallucinated or fabricated evidence
- shortcut learning and spurious correlations
- lack of external validation
- data leakage and bias
- biologically implausible findings
- emerging risks in agentic AI systems
Module 4 — Human Factors in AI Use
- automation bias and over-reliance
- confirmation bias in AI interpretation
- risks of cognitive offloading
Module 5 — Case Study: AI as an Attack Surface
Analysis of a recent real-world incident involving AI agents:
- how AI systems can be manipulated
- implications for security, data integrity, and trust
- lessons for pharma and regulated environments
Interactive Component
Participants will:
- map AI risks in their own workflows
- identify potential failure points
Reviews
There are no reviews yet.