GSK AI/ML - Synthetic Lethality in Oncology
At the close of 2021, my team and I moved on from the world of Pharma Tech at GSK into the new realm of AI/ML for R&D. More specifically, apart from managing a small UX team, I’ve been crafting experiences for synthetic lethality scientists, surfacing various datasets and ML models that intersect with biology and enabling our scientists to do things they haven’t been able to do before, and to work more efficiently than ever.
Framing the Problem.
Problem Statement: High-quality targets (genes) are needed to feed the pipeline, so robust cases need to be built for specific targets based on experiments, literature, database mining and model prediction. Furthermore, proper vetting of such scientific hypotheses for biological plausibility, tractability and commercial suitability does not currently happen as an efficient part of exploring predictions about synthetic interactions between genes. Overall, this underscores a major need in progressing precision oncology medicines that leverage synthetic lethality.
Constraints
1) Front- and back-end engineers are at a premium here. We won’t have access to them for a full quarter, until things are more readily prepped in terms of research and design. So collaboration here is initially low.
2) We don’t have an exhaustive panel of SL RU scientists, so we need to be cautious and respectful of their time for research and testing.
Who
Internal GSK users: SL RU Scientists (primary), Computational Biologists, AI Engineers.
Success Metrics
True metrics are difficult here. Our lagging metric is the number of targets progressed through CT2V and onwards through the pipeline, and the time saved in doing so.
Leading metrics in terms of product usage:
Unique users, return visits, SL pairs (genes) added to the report builder, report downloads & NPS.
Discovery
During Q2 of 2022, my user researcher, product manager and I crafted an approach for discovery and interviewed several SL RU scientists, in addition to interviewing our stakeholders to understand what success meant and what our constraints were.
The main purpose was to understand the current methodology and workflow that SL RU scientists use to uncover new SL predictor-target pairs.
We discovered various themes, attributed each to the relevant persona and counted how many times it occurred. We did this via an affinity mapping session in Miro.
What worked well: It was a great crash course and broad overview of what they did and how they did it to achieve their goals.
What could be improved: We lacked the necessary depth and had to go back to our SL RU scientists for follow-ups to further validate our assumptions, such as the detail of their workflow, something highly complex that needed validating.
“Having shared the findings with our stakeholders and team, we knew we needed further artifacts to articulately explain certain aspects of the research, such as detailed workflows and customer journey maps…”
Customer Journey Mapping
This wasn’t a traditional cross-functional team setup. We had AI engineers deep in their models and unfamiliar with our design process, and scientists even more so. We needed to articulate our findings in a way that was easy to follow, could serve as a living document and, in addition, made sense of the complexity we had surfaced.
After many collaborative sessions between my PM and me (our researcher was on maternity leave at this point), we crafted a customer journey map and workflow that we further validated with our scientists. This was very well received, and the research units even highlighted it as valuable, as they hadn’t visualised their workflow before.
What this highlighted for us, as a product and design team, was where all our pain points and opportunities arose…
Alignment Workshop
At this point, having not engaged enough with my fellow AI engineers, I wanted to bring them in for some collaboration. After all, I was to craft an experience that brought their models front and centre for SL RU scientists to use. This brought some challenges: these AI engineers had different agendas, were very unfamiliar with the design process and, not to mention, were based in Tel Aviv. So a remote workshop it was!
I ran an initial four-hour introductory, level-setting and ideation workshop. We tackled the why behind this particular strategy, covering the business problem statement, our personas, and the research to date, in the form of the newly validated scientist journey map and workflow. I engaged the AI engineers for feedback and then opened the floor to their own pain points in the current service setup. A couple of pain points surfaced:
“Unclear on how our results enter their decision-making process.”
This was a pertinent call-out, as inefficiencies were already being detected at the senior level, hence new solutions being explored alongside our other strategies.
“How to communicate clearly and persuasively about how much and in what way a user should trust a model”.
Trust was another huge call-out for us and became a key part of the challenges we had to solve.
“I had hoped to sketch and ideate with the AI engineers, but after certain constraints were surfaced, I decided to challenge that part of my process. Sometimes democratising design works, sometimes it doesn’t; you just have to know when…”
Sketching, flowing & prototyping
The design process was highly collaborative between my PM, our SMEs and me. It required daily check-ins, not only to get feedback from the team but for my PM and me to double-check our biology. This is something I found incredibly tough: generally I’ve been able to visualise a few different possibilities when presented with a problem, but due to the complexity of what we were crafting, I had to rely heavily on my PM to guide me. This required trust on both our parts.
Every day was a day of revelation: in one discussion you’ve worked it out, and in the next you’ve gone down a rabbit hole of questions. It required a lot of discovery and investigation, continuously challenging what we thought we knew, but also being able to drag ourselves back out and reflect on our constraints.
To Conclude.
At the time of writing, we’re drafting a user testing script and method to understand how valuable and usable our new product is for our SL RU scientists. Ultimately, we have a set of hypotheses (listed below) that we need to prove out:
Hypothesis 1:
We believe that more SL targets of higher quality will be advanced if SL RU scientists can accurately predict and identify good candidates using Saffron, which surfaces various predictor-target SL relationships.
Hypothesis 2:
We believe that by combining our model’s predictor-target SL relationships with the various available datasets out there, such as TCGA & DepMap, we can enable SL RU scientists to reach target qualification more easily, without reliance on the Ketoret engineers and/or a computational biologist.
Sub-hypothesis 3:
We believe that crafting an end-to-end experience that mimics their current workflow, combining multiple tools into one platform, will save SL RU scientists time and improve efficiency.