Lykke Pedersen, a distinguished bioinformatician from Copenhagen, delivered a compelling presentation on Explainable AI. With a PhD in Biophysics from the University of Copenhagen and extensive experience at Roche, where she led drug design projects, Lykke brought a wealth of knowledge to the discussion. She currently serves as the Chief Pharma Officer at Abzu, focusing on RNA therapeutics and the discovery of medicines for diseases like Alzheimer's and various cancers.
Lykke began by emphasising the importance of understanding the decisions made by AI models. She used the analogy of teaching children about the dangers of fire to illustrate the need for explainability in AI. Just as children need to understand why they should not touch fire, scientists need to understand the underlying reasons behind AI predictions.
She explained that explainable AI builds trust in models by providing insight into the factors driving their predictions. For instance, knowing why a drug is toxic to the liver, or why Alzheimer's develops, can help mitigate risks and guide the search for treatments. Lykke highlighted that traditional machine learning methods often operate as black boxes, making it difficult to discern the relationships within the data. In contrast, Abzu's algorithm identifies and explains the features most relevant to its predictions.
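The contrast can be illustrated with a minimal sketch (this is not Abzu's actual algorithm, and the feature names are invented): an interpretable model exposes a per-feature weight that a scientist can inspect directly, including discovering that a feature is irrelevant.

```python
import numpy as np

# Synthetic data: 200 samples, 3 hypothetical molecular features.
# The third feature has no real influence on the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_weights = np.array([2.0, -1.0, 0.0])
y = X @ true_weights + rng.normal(scale=0.1, size=200)

# Ordinary least squares yields one transparent coefficient per feature,
# so the model itself explains which features drive the prediction.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, coef in zip(["feat_a", "feat_b", "feat_c"], w):
    print(f"{name}: {coef:+.2f}")
```

Here the fitted coefficients recover the true influences, flagging `feat_c` as near zero, whereas a black-box model of the same data would give no such direct readout.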
Using examples from RNA therapeutics, Lykke demonstrated how their algorithm can handle various data types and predict outcomes with high accuracy. She discussed the application of their model to siRNA data, showing how it could classify and predict the activity of siRNAs from features such as duplex energy and target binding energy. Validating the model's predictions against external data further underscored its robustness.
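The workflow she described, fitting a transparent rule on training data and checking it against an external set, might be sketched as follows. Everything here is illustrative: the synthetic data, the two-feature linear rule, and the threshold-fitting step are assumptions, not Abzu's published model.

```python
import random

random.seed(1)

def simulate_sirna(n):
    """Generate synthetic siRNAs with hypothetical duplex and target
    binding energies (kcal/mol) and an activity label driven by a
    hidden linear rule."""
    data = []
    for _ in range(n):
        duplex = random.uniform(-40, -20)
        binding = random.uniform(-30, -10)
        active = (0.6 * duplex + 0.4 * binding) < -27  # hidden ground truth
        data.append((duplex, binding, active))
    return data

train, external = simulate_sirna(300), simulate_sirna(100)

# Fit a transparent threshold: place the cut between the two classes'
# scores on the training set. Every term of the rule stays inspectable.
score = lambda d, b: 0.6 * d + 0.4 * b
active_scores = [score(d, b) for d, b, a in train if a]
inactive_scores = [score(d, b) for d, b, a in train if not a]
cut = (max(active_scores) + min(inactive_scores)) / 2

def predict(duplex, binding):
    return score(duplex, binding) < cut

# Validate against held-out "external" data, mirroring the validation
# step described in the talk.
accuracy = sum(predict(d, b) == a for d, b, a in external) / len(external)
print(f"external-set accuracy: {accuracy:.2f}")
```

The point of the sketch is that both the learned rule and its validation are fully open to inspection, which is what makes such a prediction actionable for a scientist.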
Lykke concluded by discussing real-world applications, including collaborations with companies to design compounds and predict patient responses in clinical trials. She emphasised the iterative nature of their work, continuously learning and optimising models to improve predictions. Her presentation underscored the transformative potential of explainable AI in advancing scientific understanding and improving healthcare outcomes.