Vigintake contributed to the CIOMS AI in PV framework report
CIOMS (Council for International Organizations of Medical Sciences) has published the final document of Working Group XIV: Artificial Intelligence in Pharmacovigilance. It is one of the most important global reference documents in recent years for the industry, and at Vigintake we had the opportunity to contribute to it during the public consultation phase.
The Central Argument of the Document
The central argument of CIOMS WG XIV is this: the integration of AI into pharmacovigilance is both inevitable and necessary to manage the growing complexity and volume of safety data, but it can only be beneficial if deployed under a shared set of guiding principles, framed not as technical prescriptions but as lasting ethical and quality safeguards.
In other words: AI is not the question. How we govern, validate, and oversee it is.
The Five Key Sub-Arguments
1. Data volume has outpaced manual human capacity
Global adverse event databases (WHO's VigiBase, FDA's FAERS) have grown exponentially, particularly following the COVID-19 pandemic. Manual processing of ICSRs (Individual Case Safety Reports), including intake, triage, data entry, review, and medical evaluation, is no longer sustainable at scale. The evidence is empirical and quantitative: real longitudinal growth data from global safety databases.
2. A risk-based approach is required for responsible AI deployment
Not every AI application in PV carries the same impact on patient safety. The document proposes calibrating the level of human oversight based on the consequences of the decision and the degree of system autonomy. The evidence is multi-stakeholder expert consensus (regulators, industry, academia), strong for coherence, but not experimental.
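The calibration the document describes can be pictured as a simple decision matrix crossing decision impact against system autonomy. The sketch below is illustrative only: the tier names, scoring, and thresholds are our own assumptions, not definitions from the CIOMS report.

```python
from enum import Enum

class Impact(Enum):
    LOW = 1      # e.g. internal workflow routing
    MEDIUM = 2   # e.g. suggested data-entry field values
    HIGH = 3     # e.g. seriousness or causality assessment

class Autonomy(Enum):
    ASSISTIVE = 1   # human acts, AI only suggests
    SUPERVISED = 2  # AI acts, human reviews each output
    AUTONOMOUS = 3  # AI acts, human audits samples

def required_oversight(impact: Impact, autonomy: Autonomy) -> str:
    """Map a PV use case to an oversight tier from decision
    consequence and system autonomy (hypothetical scoring)."""
    score = impact.value * autonomy.value
    if score >= 6:
        return "human-in-the-loop"   # a human confirms every decision
    if score >= 3:
        return "human-on-the-loop"   # a human monitors and can intervene
    return "periodic-audit"          # sampled quality checks suffice
```

A high-impact, supervised use case such as AI-drafted causality assessments would land in the human-in-the-loop tier, while a low-impact assistive feature would only need periodic audits.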
3. Human oversight is not optional; it is structural
Whether operating in "human-in-the-loop" mode (decisions result from human-machine interaction) or "human-on-the-loop" mode (the machine decides and a human reviews), human oversight is essential to guarantee quality, auditability, and trust. The evidence is normative and regulatory reasoning, well-argued, but primarily deductive.
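The operational difference between the two modes can be made concrete in a few lines. This is a minimal sketch, assuming generic `suggest`, `confirm`, and `decide` callables; it is not an implementation from the report.

```python
def human_in_the_loop(case, suggest, confirm):
    """HITL: the model only suggests; nothing is recorded
    until a human confirms or corrects the suggestion."""
    suggestion = suggest(case)
    return confirm(case, suggestion)

def human_on_the_loop(case, decide, audit_log):
    """HOTL: the model decides immediately; a human monitors
    the audit trail afterwards and can intervene."""
    decision = decide(case)
    audit_log.append((case, decision))  # every decision stays traceable
    return decision
```

Either way, the audit trail is what makes quality and trust verifiable after the fact.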
4. Transparency and explainability as the foundation of trust
AI systems in PV must disclose how they work, what data they use, and what their limitations are. The document introduces the distinction between opaque "black-box" models and explainable AI (xAI) techniques that offer plausible hypotheses about how a result was reached. The evidence is mixed: conceptually solid, but the limitations of current xAI techniques are explicitly acknowledged.
5. Governance and equity as ethical and regulatory imperatives
Biases in training data can amplify inequalities in safety signal detection, disproportionately affecting underrepresented subpopulations. Clear governance frameworks with defined roles and responsibilities are required. The evidence is primarily normative and policy-driven, referencing legislation (GDPR, EU AI Act, HIPAA) but with limited empirical case studies specific to PV.
Our Role: AI-Assisted Triage and Data Entry Mapping
At Vigintake, we contributed to the public consultation phase by bringing a practical perspective on two of the most critical and time-intensive steps in ICSR processing: automated triage and AI-assisted data entry mapping.
These tasks, seemingly administrative, account for a disproportionate share of PV team workload and are precisely where the risk of human error is highest due to their repetitive nature. We proposed that the document explicitly recognize that automating these steps, when done with adequate oversight and well-defined quality criteria, does not compromise safety. It strengthens it, freeing professionals to focus on tasks that genuinely require expert judgment.
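The oversight principle we argued for can be sketched as a confidence-gated triage step: the system auto-routes a case only when its confidence clears a validated threshold, and everything below that goes to a human. The keyword list and threshold here are hypothetical placeholders, not our production logic.

```python
# Hypothetical seriousness cues; real systems use validated models,
# not keyword lists. Shown only to illustrate the routing pattern.
SERIOUSNESS_KEYWORDS = {"death", "hospitalization", "life-threatening", "disability"}

def triage_icsr(narrative: str, model_confidence: float, threshold: float = 0.9) -> dict:
    """Route an incoming case: auto-classify only above a confidence
    threshold; everything else is escalated to human review."""
    text = narrative.lower()
    serious = any(kw in text for kw in SERIOUSNESS_KEYWORDS)
    if model_confidence < threshold:
        return {"queue": "human-review", "serious_flag": serious}
    return {"queue": "expedited" if serious else "routine", "serious_flag": serious}
```

The point of the gate is exactly the one we raised in the consultation: automation handles the repetitive bulk, while anything uncertain still reaches an expert.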
Where We Believe PV Can Improve and How
The CIOMS document is a foundational step. But from the perspective of companies like Vigintake, working daily with the real inefficiencies of the sector, we believe AI's potential in PV is still vastly underutilized.
Literature triage, automated adverse event classification, duplicate detection, narrative generation, and MedDRA term mapping are all areas where AI already demonstrates performance equal to or better than human experts, with the added advantage of doing so consistently and at scale. What holds back adoption is not the technology: it is the lack of clear validation frameworks and regulatory uncertainty.
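Duplicate detection is a good example of a task where even a simple automated check scales better than manual side-by-side comparison. The sketch below uses basic string similarity from the standard library; production systems rely on richer probabilistic record linkage, and the field names and threshold are assumptions for illustration.

```python
from difflib import SequenceMatcher

def is_probable_duplicate(case_a: dict, case_b: dict, threshold: float = 0.85) -> bool:
    """Flag likely duplicate ICSRs by comparing key fields
    (illustrative; not a substitute for validated record linkage)."""
    # Hard blocker: different patients are never duplicates here.
    if case_a.get("patient_dob") != case_b.get("patient_dob"):
        return False
    drug_sim = SequenceMatcher(None, case_a["drug"].lower(), case_b["drug"].lower()).ratio()
    event_sim = SequenceMatcher(None, case_a["event"].lower(), case_b["event"].lower()).ratio()
    return (drug_sim + event_sim) / 2 >= threshold
```

Unlike a human reviewer, a check like this applies the same criteria to the ten-thousandth case pair as to the first, which is precisely the consistency-at-scale advantage noted above.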
That is precisely why the CIOMS WG XIV document is so valuable: it gives pharmaceutical companies and MAHs the vocabulary and conceptual framework to say "yes" to AI responsibly. At Vigintake, we build on exactly these principles, ensuring that every automation we offer comes with configurable human oversight, full traceability, and performance transparency. Because AI in PV is not about replacing experts: it is about enabling experts to focus on what truly matters, which is patient safety.
To learn more about how Vigintake applies these principles in practice, visit our Intake Management platform page.