In a rapidly changing landscape of biomedical research, artificial intelligence, genetics, and cellular engineering are converging to redefine how drugs are discovered, tested, and delivered. This article gives an unfiltered look at how these technologies are reshaping the foundations of pharmaceutical science and where the boundaries still hold firm.
The discussion behind it, spanning everything from stem-cell-derived human models to AI-designed viral vectors, emphasised both the extraordinary promise and the persistent complexity of bringing computational power into the deeply biological realm of human health.
Key Takeaways
· The future of drug discovery lies in combining AI, human-cell models, and genetic data to improve accuracy and reduce reliance on animal testing.
· AI is a powerful filter, not a replacement — it narrows options and accelerates discovery but still depends on biological validation.
· Data quality and access remain the main bottlenecks; failure data are as valuable as success data for improving machine learning.
· Human iPSC models offer unprecedented insight into patient variability, though they cannot yet capture environmental and epigenetic factors.
Beyond the Mouse: Human Models and AI in Preclinical Testing
A recurring theme across biopharma is the effort to reduce dependence on animal testing. Regulatory agencies such as the U.S. FDA and NIH are increasingly encouraging the use of New Approach Methodologies (NAMs): in vitro and computational systems that simulate human biology more directly than animal models can.
The move is both ethical and scientific. Traditional animal testing, long regarded as the gold standard for toxicity and safety screening, often fails to predict human responses accurately due to differences in physiology. One expert described the striking inefficiency of this paradigm: in many therapeutic areas, only one in fifty patients responds well to an approved drug. That inefficiency is not just a human cost but a systemic flaw in the way we model disease.
In response, human induced pluripotent stem cell (iPSC) technologies are now being used to create tissue models that capture patient-to-patient variability. By testing potential therapies across thousands of genetically distinct cell lines, researchers can map how a drug might behave across an entire population before it ever reaches a clinical trial, saving both time and money.
AI is serving as the connective tissue in this new ecosystem. Machine learning models can analyse large-scale genetic and cellular datasets, triage chemical libraries, and identify promising drug targets far earlier than manual screening could. Still, experts cautioned that AI’s current capabilities are often overstated. As one representative from Greenstone Biosciences put it, “The computer is not there yet.” Models can rank likely candidates, but biological validation remains indispensable.
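To make that triage step concrete, here is a minimal sketch, assuming a fingerprint-style representation and a random-forest ranker; both are common but illustrative choices, not a description of any specific company's pipeline. The model scores a large virtual library and passes only a small shortlist on to biological validation.

```python
# Hypothetical sketch of AI-assisted triage: rank a compound library by
# predicted activity so only a small fraction moves to wet-lab testing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in data: 500 previously assayed compounds for training and a
# 10,000-compound virtual library, each as a 128-bit fingerprint-style
# vector. Real pipelines would use measured assay labels and richer
# molecular representations.
train_X = rng.integers(0, 2, size=(500, 128))
train_y = rng.integers(0, 2, size=500)           # 1 = active in the assay
library = rng.integers(0, 2, size=(10_000, 128))

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_X, train_y)

# Rank the library by predicted probability of activity and keep the
# top 1% for wet-lab follow-up: the model filters, it does not decide.
scores = model.predict_proba(library)[:, 1]
shortlist = np.argsort(scores)[::-1][:100]
print(f"Best in-silico score: {scores[shortlist[0]]:.2f}")
```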
A Layered Approach to Drug Discovery
Industry leaders have described a three-tiered framework emerging across the field. First comes the computational screening of millions of potential compounds — a purely digital narrowing of the field. The second tier, using NAMs such as iPSC-derived cells and organoids, tests a smaller subset across many genetic backgrounds to capture real-world variability. Finally, a handful of the most promising candidates move to animal studies for safety and pharmacology testing.
This hybrid approach is neither an outright rejection of animal work nor a blind embrace of AI. Instead, it reflects a pragmatic recognition that no single model can capture the full complexity of human disease. By intelligently integrating these layers, researchers can both reduce animal use and improve predictive accuracy.
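As an illustration of that layered logic, the toy pipeline below encodes the three tiers as successive filters. Every name, threshold, and pass rate is hypothetical; the point is the shape of the funnel, with each stage far more expensive per compound than the last.

```python
# Hypothetical three-tier funnel: each tier is a filter, and the
# pipeline's job is to spend wet-lab and animal resources only on
# survivors of the cheaper upstream tiers.
import random

random.seed(0)
library = [{"id": i,
            "pred_activity": random.random(),   # in silico model score
            "true_efficacy": random.random()}   # unknown ground truth
           for i in range(100_000)]

# Tier 1: purely computational screen keeps only top-scoring compounds.
tier1 = [c for c in library if c["pred_activity"] > 0.99]

def fraction_responding(compound, n_lines=1000):
    """Tier-2 stand-in: simulate a NAM assay in which each of 1,000
    genetically distinct iPSC lines responds with a probability set by
    the compound's (unknown) true efficacy."""
    return sum(random.random() < compound["true_efficacy"]
               for _ in range(n_lines)) / n_lines

tier2 = [c for c in tier1 if fraction_responding(c) > 0.5]

# Tier 3: only a handful advance to animal safety and pharmacology work.
tier3 = tier2[:5]
print(len(library), "->", len(tier1), "->", len(tier2), "->", len(tier3))
```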
Designing Genes with Machines
Nowhere is this synthesis of biology and computation more evident than in the fast-moving field of gene therapy. Gene therapies rely on viral vectors to deliver corrective genetic material into specific tissues. Traditionally, these viral capsids were modified through slow, trial-and-error experimentation. Today, AI models are redesigning them with unprecedented precision.
One researcher explained how machine learning systems are trained on vast datasets of in vivo experiments, identifying how minute changes in viral structure affect tissue targeting, immune response, and toxicity. By generating and testing millions of sequence variants, AI can evolve vectors that reach their intended destinations, such as the brain, muscle, or eye, while avoiding off-target effects.
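That generate-and-test loop can be sketched as a simple model-guided evolution routine. The scoring function below is a toy stand-in (similarity to a hypothetical motif); in a real programme it would be a surrogate model trained on in vivo tropism, immunogenicity, and toxicity data.

```python
# Hedged sketch of model-guided capsid peptide evolution: propose
# variants, score them with a surrogate, keep the best, and mutate.
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET_MOTIF = "DGTLAVPFK"  # hypothetical high-performing insertion peptide

def surrogate_score(seq):
    """Toy stand-in for a trained model: fraction of positions matching
    a known good motif. A real surrogate would predict tissue targeting,
    immune response, and toxicity from prior in vivo data."""
    return sum(a == b for a, b in zip(seq, TARGET_MOTIF)) / len(TARGET_MOTIF)

def mutate(seq, rate=0.2):
    """Point-mutate each position with the given probability."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else c
                   for c in seq)

# Start from a random peptide library, then run select-and-mutate rounds:
# score everything, keep the top 10%, refill the pool with mutants.
population = ["".join(random.choices(AMINO_ACIDS, k=len(TARGET_MOTIF)))
              for _ in range(500)]
for _ in range(10):
    population.sort(key=surrogate_score, reverse=True)
    parents = population[:50]
    population = parents + [mutate(random.choice(parents)) for _ in range(450)]

best = max(population, key=surrogate_score)
print("Best variant:", best, "score:", round(surrogate_score(best), 2))
```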
Yet even here, the message was one of measured optimism. The models are powerful, but their reliability depends on the quality and breadth of the underlying data. Importantly, failure data are just as valuable as successes when it comes to grounding AI models. Some companies now treat negative results as critical training material, teaching AI systems what not to design. This approach is already reducing the number of primate experiments needed to develop each new therapy.
The Data Dilemma
The discussion inevitably turned to data: its abundance, its messiness, and its ownership. Many agreed that the bottleneck in AI-driven drug discovery is not the algorithms themselves but the data that feed them.
Large pharmaceutical companies have accumulated vast repositories of clinical and preclinical results, including countless failed compounds. These negative datasets could be invaluable for training machine learning models to recognise early warning signs of inefficacy or toxicity. Yet most of that information remains locked behind corporate walls, constrained by privacy agreements and intellectual property rules.
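A small sketch shows why those negative datasets matter: with failed compounds labelled explicitly, even a simple classifier can surface an early toxicity warning before expensive downstream work. The data and features here are synthetic placeholders, not a real pharmacovigilance model.

```python
# Sketch of learning from failure data: train a classifier on historical
# compounds where the label records a toxicity-driven programme failure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# 2,000 historical compounds: synthetic features plus a label marking
# whether the programme failed on toxicity grounds (1) or progressed (0).
X = rng.normal(size=(2000, 32))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Score a new candidate: a high predicted failure risk is a cheap early
# warning before any animal or clinical spend.
risk = clf.predict_proba(X_te[:1])[0, 1]
print(f"Predicted toxicity risk: {risk:.2f}, "
      f"test accuracy: {clf.score(X_te, y_te):.2f}")
```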
There is growing support for a cultural and regulatory shift toward more open data sharing. Even anonymised or aggregated failure data, if made accessible, could accelerate the entire industry’s progress. Still, practical and ethical hurdles remain: patient consent forms, proprietary trial designs, and the sheer heterogeneity of biological data all complicate integration.
Companies like Lilly are giving selected biotech and pharma partners access to some of their platforms and LLMs, in the hope that wider use will both improve the underlying models and foster collaboration. Whether other pharma and biotech companies follow suit remains to be seen.
From Genes to Patients: Tackling Translational Challenges
Another recurring theme was the complexity of interpreting human genetics. Large-scale sequencing efforts have linked millions of genetic variants to diseases, but causality is difficult to prove. Many associations are confounded by lifestyle, socioeconomic, and environmental factors. While human data are invaluable, they are also messy.
One proposed solution is to bring genetics back into the dish. By creating iPSC-derived cells from individuals with known genetic variants, scientists can observe how those mutations affect disease pathways in a controlled environment. This “genetics in a dish” approach strips away the noise of real-world confounders, offering a purer window into how genes shape cellular behaviour.
Still, even the cleanest genetic data tell only part of the story. Epigenetic modifications (chemical marks that switch genes on or off) are erased when iPSCs are created, meaning that models built from them capture heritable DNA but not life experience or other environmental factors. That tradeoff, as one clinician noted, is both the power and the limitation of reductionist human models.
Promise and Paradox
By the end of the discussion, a clear picture had emerged: AI and human-based models are transforming the front end of drug discovery, enabling faster, cheaper, and potentially more ethical innovation. But they have not yet rewritten the basic rules of biology.
The algorithms are accelerating discovery, not replacing it. They are helping researchers make better first guesses, not perfect predictions. And while these tools can shrink the odds of failure, they cannot yet eliminate uncertainty from medicine.
In that sense, the conclusion is not a declaration of victory but a recognition of balance. AI, cell biology, and genetics are complementary instruments in the same orchestra. The future of drug discovery will depend not on which tool dominates, but on how harmoniously they are played together.