Good AI Practice in Drug Development (cont.)

In my previous post, I argued that the medical technology sector offers valuable lessons for building reliable AI systems—particularly through Good Machine Learning Practice (GMLP). With the “Guiding Principles of Good AI Practice in Drug Development”, jointly published by the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) in January 2026, this perspective now extends beyond medical devices.

These principles span the entire drug‑development lifecycle: preclinical research, clinical trials, and manufacturing. This expansion is not only regulatory in nature—it also has important implications for how organisations learn, govern knowledge, and stabilise expertise around AI.

Human‑centric by design: making responsibility explicit
The emphasis on human‑centric and ethical design reframes AI as a socio‑technical system rather than a purely technical artefact. Human oversight is no longer assumed; it is a design requirement.
From a KM perspective, this matters because it forces organisations to make responsibility, judgement, and decision authority explicit—and therefore learnable. Tacit assumptions about “what the system does” or “who decides in the end” no longer suffice. AI design becomes a vehicle for clarifying roles, expectations, and accountability structures that are central to organisational knowledge.

AI systems as part of the GxP (“good practice” guidelines in pharma) knowledge landscape
The explicit requirement for GxP compliance positions AI systems firmly within the organisation’s regulated knowledge infrastructure.
For AI‑enabled computerised systems, such as analytical decision‑support systems, this implies:

  • structured data governance as a shared organisational memory (not just a technical safeguard)
  • quality management across the entire AI lifecycle, turning development, deployment, monitoring, and change into learning loops rather than isolated events

In KM terms, AI is treated as institutionalised knowledge—codified, governed, audited, and continuously maintained.

Proportionate validation as organisational sense‑making
The call for risk‑based, proportionate validation once again supports a move away from schematic compliance towards contextual judgement.
This aligns closely with organisational learning: validation becomes an ongoing process of sense‑making about risk, impact, and uncertainty, rather than a checklist exercise. Different AI systems require different depths of scrutiny—not because standards are weakened, but because learning is situated.

Performance beyond metrics: learning in use
By extending performance evaluation beyond isolated model metrics to include human–AI interaction, the principles acknowledge an old KM insight: systems only reveal their quality in practice.
Interpretability and explainability are not technical luxuries; they are prerequisites for:

  • shared understanding
  • justified trust
  • reflective use

An AI system that cannot be meaningfully explained cannot become part of an organisation’s collective knowledge—no matter how accurate it is.

Plain language and monitoring: sustaining knowledge over time
Two further aspects strengthen the learning dimension.
First, the requirement for plain‑language communication treats understanding as a quality attribute. Knowledge about AI functionality, limits, and changes must be accessible—not only to experts, but to users and, where relevant, patients.
Second, the focus on continuous monitoring and data drift reinforces the idea that AI systems are never “finished”. They evolve with their context. Managing them therefore means learning over time, detecting change, and deliberately updating both models and organisational understanding.
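To make this a little more tangible: one very simple form of such a monitoring step is a periodic statistical comparison between the data a model sees in production and the reference data it was validated on. The sketch below is purely illustrative and not taken from the EMA–FDA principles; the function name, the threshold, and the choice of a two-sample Kolmogorov–Smirnov test are my own assumptions.

```python
# Minimal sketch (illustrative only): flag possible data drift by comparing
# a recent batch of production inputs against a reference sample using a
# two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference: np.ndarray, recent: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Return True if the recent distribution differs significantly
    from the reference distribution (a possible sign of data drift)."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# Hypothetical usage: trigger human review when a monitored feature drifts.
reference_sample = np.random.normal(0.0, 1.0, size=5_000)  # data from validation
recent_sample = np.random.normal(0.3, 1.2, size=1_000)     # data from production
if check_feature_drift(reference_sample, recent_sample):
    print("Possible data drift detected: trigger review and re-validation.")
```

The point is not the specific test, but that the check runs continuously, produces a signal a human can act on, and feeds back into both the model and the organisation's understanding of it.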

My conclusion:
Seen through a KM lens, the EMA–FDA principles do more than regulate AI. They provide a framework for embedding AI into organisational learning structures – through transparency, lifecycle thinking, and explicit responsibility. Reliable AI, in this sense, is not primarily a technical achievement. It is the outcome of organisations that are able to learn about their systems, their data, and their own practices—continuously and collectively.
