Pharmacogenomics makes sense — so why isn’t pharma actually using it?
Pharmacogenomics (PGx) has reached an unusual paradox. The scientific evidence is mature, the clinical (and economic) upside is undeniable, and yet adoption remains slow, fragmented, and inequitable.
Particularly in clinical trials, pharmacogenomic profiling is barely ever used. A ClinicalTrials.gov scan identified only 619 PGx‑related interventional trials out of 350,728 total (~0.18%), and fewer than half clearly specified which genes were being studied, even though PGx in trials can do several very practical things:
Cleaner efficacy signal: less variability -> clearer responder/non‑responder story
Fewer safety events: identify high‑risk genotypes early -> fewer avoidable ADRs
Lower trial friction: fewer discontinuations, fewer rescue meds, fewer “fire drills”
Better dose strategy in early phase: PGx helps explain PK/PD outliers before they become dose‑limiting toxicity surprises
Stronger story at the finish line: prospectively defined subgroups -> more defensible label strategy for payers
For most development teams, PGx still feels like something that adds complexity without clearly reducing risk: more assays, more coordination, more regulatory questions, more things that can go wrong. When timelines are tight and failure is expensive, the instinct is to simplify, not to introduce another moving part, even if that part is medically relevant. That’s exactly why PGx has struggled to move from “nice idea” to “default infrastructure” in clinical trials.
Clinician experience reports also note that commercial PGx panels may miss key actionable genes defined by CPIC/FDA/DPWG guidance while including low‑evidence variants, making it harder to know which results are actually useful in practice. Even when the relevant gene is included, panels may not consistently capture all clinically actionable alleles (such as copy number variants or hybrid structures), which can lead to metabolizer misclassification across participants.
Even when PGx testing is used, it has to fit into tight trial timelines: programs such as PREPARE required results to be returned within ~7 days to stay clinically relevant. In real‑world settings, deploying PGx involves everything from gene selection and phenotype translation to reporting, CDS logic, and EHR integration across multiple teams. In practice, turning sequencing data into guideline‑aligned phenotypes often depends on specialized bioinformatics pipelines and local infrastructure, which introduces delays, interpretation challenges, and variability across sites; clinicians consistently cite time constraints and complex result interpretation as major barriers.
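To make that "phenotype translation" step concrete, here is a minimal sketch of CPIC‑style CYP2D6 activity‑score phenotyping. The allele activity values and score cutoffs follow published CPIC assignments, but the truncated lookup table and function names are our own illustration, not any specific vendor's pipeline:

```python
# Illustrative sketch of CPIC-style CYP2D6 diplotype-to-phenotype translation.
# Activity values follow CPIC assignments; the table is truncated for brevity
# and omits real-world complications like copy number variants and hybrids.

ACTIVITY = {          # star allele -> activity value
    "*1": 1.0,        # normal function
    "*2": 1.0,        # normal function
    "*4": 0.0,        # no function
    "*10": 0.25,      # decreased function
    "*41": 0.5,       # decreased function
}

def activity_score(diplotype: str) -> float:
    """Sum the activity values of both alleles, e.g. '*1/*4' -> 1.0."""
    a1, a2 = diplotype.split("/")
    return ACTIVITY[a1] + ACTIVITY[a2]

def phenotype(score: float) -> str:
    """Map an activity score to a CPIC metabolizer category."""
    if score == 0:
        return "Poor Metabolizer"
    if score <= 1.0:
        return "Intermediate Metabolizer"
    if score <= 2.25:
        return "Normal Metabolizer"
    return "Ultrarapid Metabolizer"

print(phenotype(activity_score("*1/*4")))  # Intermediate Metabolizer
```

Even this toy version hints at why misclassification matters: a missed no‑function allele or an undetected gene duplication shifts the score, and with it the dosing recommendation.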
The irony is that the strongest argument for pharmacogenomics has already been made. The PREPARE study showed that pre‑emptive PGx reduces clinically relevant adverse drug reactions by about 30%. That’s not marginal. That’s the kind of effect size pharma usually celebrates.
But PREPARE also quietly showed why PGx still hasn’t scaled: centralized genotyping, multi‑day turnaround times, heavy coordination, data processing overhead, and panels that were never designed for global, fast‑moving trials.
In other words, the biology worked. The logistics didn’t.
This is where DNA ME comes in.
At DNA ME, we’re building pharmacogenomics around nanopore sequencing paired with an efficient, simplified software solution, because that combination finally makes PGx compatible with how trials actually operate.
Nanopore sequencing allows you to generate genetic data close to the trial site instead of shipping samples to a centralized lab. More importantly, long‑read sequencing resolves the pharmacogenes that matter most (like CYP2D6) without the guesswork and misclassification that plague traditional short‑read panels. But sequencing is only half the story. The real unlock is what happens after the data is generated.
DNA ME turns raw reads into standardized pharmacogenomic outputs that are machine‑readable and trial‑ready, with no bioinformatics expertise required to interpret the sequencing results. The data can flow directly into safety monitoring, dose‑escalation rules, or adaptive trial logic. Analyses can also run locally on a basic laptop, with no GPUs or expensive computational equipment, and without uploading or transmitting sensitive participant genetic data.
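What a "machine‑readable, trial‑ready" output might look like can be sketched as a simple structured record. The field names and values below are purely illustrative assumptions, not DNA ME's actual schema:

```python
import json

# Hypothetical trial-ready PGx record; all field names are illustrative
# assumptions, chosen so downstream safety or dosing logic can consume
# the result without free-text interpretation.
record = {
    "participant_id": "P-0042",              # pseudonymized trial ID
    "gene": "CYP2D6",
    "diplotype": "*1/*4",
    "phenotype": "Intermediate Metabolizer",  # guideline-aligned category
    "guideline": "CPIC",
    "actionable": True,                       # triggers dosing/safety rules
}

print(json.dumps(record, indent=2))
```

A record like this is what lets PGx plug into adaptive trial logic the same way a lab value does: a rule engine can branch on `phenotype` or `actionable` without a human reading a PDF report.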
DNA ME’s nanopore‑based workflow can also detect CpG methylation and allele‑specific methylation directly from the same sequencing run, adding a functional layer to pharmacogenomic profiling without additional assays or downstream processing. This enables identification of participants whose real‑world metabolism may differ from their predicted genotype due to epigenetic regulation of pharmacogenes, helping reduce exposure outliers and improve metabolizer classification within the same streamlined pipeline.
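One simple way to use that methylation layer, sketched under loud assumptions (the 0.8 threshold and function name are invented for illustration and are not validated clinical cutoffs), is a rule that flags participants whose genotype‑predicted activity may be epigenetically silenced:

```python
# Hypothetical outlier flag: heavy promoter methylation can suppress an
# allele that the star-allele call predicts to be active. The 0.8 cutoff
# is an illustrative assumption, not a validated threshold.

def flag_epigenetic_outlier(predicted_phenotype: str,
                            promoter_methylation: float,
                            threshold: float = 0.8) -> bool:
    """Return True when an active predicted phenotype coincides with
    heavy promoter methylation, i.e. real-world metabolism may be
    slower than the genotype alone suggests."""
    active = predicted_phenotype in ("Normal Metabolizer",
                                     "Ultrarapid Metabolizer")
    return active and promoter_methylation >= threshold

print(flag_epigenetic_outlier("Normal Metabolizer", 0.92))  # True
```

In a trial context, a flag like this would simply route the participant for closer PK monitoring rather than change dosing on its own.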
The moment PGx becomes fast, affordable, and operationally invisible (embedded the same way PK sampling or safety labs are embedded), pharma stops asking whether it’s worth doing. The question becomes why they would accept the risk of not doing it.
If you’ve tried to integrate PGx into a trial, what was the biggest blocker: cost, turnaround time, operations, or internal buy‑in?
We are curious what teams are seeing in the wild.
(And if you want a trial‑ready panel + plug‑and‑play nanopore workflow tailored to your asset, message us at DNA ME and we’ll build it with you.)