AI: driving drug development from effective to remarkable – transcript

Bioworld Insider

[music]

VO: The BioWorld Insider Podcast.

Lee Landenberger: This is the BioWorld Insider Podcast. I’m Lee Landenberger, a writer at BioWorld and your host for today. Each year, a group of trailblazers, disruptors, and forward-thinking executives converge at the BioFuture conference to evaluate and mold the future of healthcare. This year, the BioFuture conference is in New York City. It will be held Oct. 4th through the 6th. If you attend, you’ll have the chance to hear panels and join workshops and fireside chats with key opinion leaders.

It’s also a great opportunity to meet with investors and potential partners during networking receptions and one-to-one meetings. If you’re interested, there are more details at biofuture.com. You can register on the site. AI will no doubt be a part of the discussion at this year’s BioFuture conference. Artificial intelligence has the potential to radically transform this industry and even reshape how it gets funded.

It could help reduce failure rates, and risk, perhaps even change the financing model, and it could also go beyond drug development into general healthcare. Our guest today to talk about all this is Scott Penberthy. He’s a member of Google Cloud’s Chief Technology Officer team. Before coming to Google, Scott worked at IBM at the research department in the chairman’s office. He moved on to several web startups, then became managing director in the CEO’s office at PwC. He’s here to talk about not only the future but what’s happening before our very eyes day-to-day. Scott, thanks for joining us.

Scott Penberthy: Hey, great to be here, Lee. Thanks for having me among such an august crowd of your fellow podcasters.

Lee Landenberger: Thanks. It’s a pleasure to have you here. I’ve been looking forward to talking to you about this for a couple of weeks now.

Scott Penberthy: Oh, same.

Lee Landenberger: Much of AI is used in the less flashy areas of healthcare, like helping doctors do their day-to-day jobs. I wanted to talk today about the future, maybe a span of five to 30 years ahead, looking at the opportunities and challenges that AI presents in drug development. My first question is about reducing risk for patients and investors. What changes do you think we’ll see in the next five years or so that will change the business of drug development?

Scott Penberthy: The business of drug development. I think AI is now really becoming a tool of science for the first time. Also, right now, it’s becoming a very useful tool for the drudgery and pajama time. [chuckles] What I mean by that is, think of the business of producing a drug. We document clinical trials very carefully. Different regulations require us to document how the drug develops, the process, the protocol, and whatnot. That takes an awful lot of time away from the actual science or manufacturing of these drugs. It’s very important.

Now, AI can come in today and we’re already seeing it today. I’m on board with some oncologists and they’re using it to help summarize patient reports. They’re looking at, “Well, can I use this for clinical trials too? How do I gather data and report and summarize?” I think that’s where we’re going to see it. We’re seeing it today and I think regulators and others are rushing to the forefront, figuring out, “How do we make this effective?” Once the doctors use this, it’s remarkable.

They’re like, “I don’t want to go back, because I can produce the documents I’m required to write to do my job well 10 times faster.” It’s like when word processors first came out, or Word came out. Can you imagine writing, or doing a podcast today, or writing anything by typewriter? No. I think in the next five years, it’s very practical, which means the language quality, the sophistication, and the ability to write these documents crisply, that’s going to get tightened up across the industry and take a lot of that cost and time significantly down.

That’s going to help us in terms of ensuring we can communicate well. Let’s say English is not your primary language, and a lot of researchers come from other parts of the world. Now, you can write as well as a Harvard-educated scholar. That’s going to really help us communicate well among all the collaborators on these drugs and really help in just the day-to-day business of drug development.

That’s, I think, the first place it’s going to hit. We’ll see this mainstream easily within the next five years, if not the next two. We can talk later about the more sophisticated science-y part of AI, which comes from a simple observation: if I can do this in a computer and explore things in silico, I can then take what the computer suggests and use that to test. If I do that, could it actually reduce the search space of finding some new compound? The answer is yes. We did that with AlphaGo a few years ago. That’s becoming a big push. That’ll probably lag the administrative part, though. That’s going to hit first, because that’s just such a pain. It’s just going to let us spend more time with patients, more time with science, and less time on the things we need to do to document and be safe.

Lee Landenberger: There’s a huge upside in saving time. I wonder though, when you’re talking to doctors and researchers, do you come across a distrust at all?

Scott Penberthy: Say more about distrust.

Lee Landenberger: They’re using technology that is new, and they’re not used to it. They’re used to doing something the way they’ve been taught and the way they’ve been doing it for years. This is brand new. To get past the fear of the unknown, do you encounter people that are like that and what do you say to them?

Scott Penberthy: Before I was working in medicine, I spent some time with folks at NASA and this team called the NASA Frontier Development Lab. We wanted to bring AI into NASA, and that was 2016. The answer was no. Just no, Scott. “What do you mean, no?” I said. “We’re a very conservative agency.” Gosh, we did a few dozen projects with them that they sponsored, and over time they started to warm up to it. The way they warmed up to it, Lee, is they started to- and this is what the CTO of one of their centers said- “I get it now.” I said, “What do you mean?” He goes, “AI is like an idiot savant.” I said, “What?” He said, “No, Scott. I get it now.”

He said, we use closed-form solutions, we don’t use AI to control our ships or rockets, but AI can help suggest things, and we can then use that suggestion and test it the way we would test anyone else’s idea. I’m like, “Well, that’s cool.” Now what they’re doing is using AI in pockets where it suggests an answer that they then take as if someone across the lunch table had said, “Let’s go try this.” They use that and don’t worry about where it came from. They take it and run it through all the systems they’ve used for years, just like any idea anybody has.

I’m starting to see that now in doctors and patients and insurers and researchers starting to do the same thing, which is a doctor will take a record and summarize it and go, “Thanks, but let me read it.” They’re finding that being the editor and the final arbiter is much more efficient than being the initial creator. That leverage is what they love. It’s not like they take the AI output and just hand it to a patient. There’s not that trust, but there is enough trust to take the AI output and use it as an idiot-savant suggestion that they then use to reduce their administrative time, their search time, that sort of thing. I think what we’re going to see much more is that AI’s not going to replace the task, but someone using AI will replace those who don’t use it. That’s pretty clear right now.

Lee Landenberger: I want to ask you about the 90% failure rate that drug developers face. I wonder if you see a reduction in that coming that we could attribute to AI?

Scott Penberthy: You mean a reduction in the failure rate of new compounds, where only 10% succeed? Is that what you’re looking for?

Lee Landenberger: Yes.

Scott Penberthy: Where I’m seeing it is AI being used by a few firms to suggest compounds, starting with small molecules, because for a lot of these AI models, the computation cost goes up with the fourth power of the size of the input. Small peptides and small molecules are already being used. There’s some interesting work that came out of MIT and now McMaster University where they use these neural networks basically to infer the chemical properties of a molecule, and they use that to help identify new antibacterials from existing compounds. I think that dramatically reduces the number of molecules you have to try before you go to trial. What we’re going to see is the ability to explore the space going up manyfold, a couple orders of magnitude.

Now we have a human endeavor that takes those molecules and says, “I’ve found this molecule that’s much better according to my tools. Let’s now go through the drug trial process.” The human body is one of the most complex organisms or machines we’ve ever encountered and probably ever will. There are two parts to your question. One is, is the human ability to find a molecule that will actually thrive in the human ecosystem going to improve in the next year with AI? Maybe. I think the ability to find the molecules we think are going to work is definitely going to improve dramatically. The 90% is an interesting question, because will the science improve so that we understand the human body better, so that when we go into trials with a molecule, that 90% starts coming down? Theoretically, yes. What we’ll see first, though, is the speed and efficacy of finding the molecule we think is going to work. That will rapidly change.

We’re already seeing a few companies do this, going from seven years to, some claim, seven months or even seven weeks. For the antibiotic work that paper talks about, the candidates were produced in 48 hours. That’s amazing. You’ve got to be careful with the 90%, though. Is our understanding of the human system going to improve dramatically in the next few years and take that 90% down? Theoretically, I think yes, because now we’re analyzing it from first principles, but the improvement really is going to be in finding the first molecule and getting it into trials. That’s really where we’re going to see it first.

Lee Landenberger: What about eliminating the need for humans in clinical trials? Is that a possibility?

Scott Penberthy: No. Well, it depends how you frame that. What we can do now is create these things called digital twins. They’ve been around for a while. Digital twins, until recently, were all about probabilistic models based on a number of measures. That sophistication is going to go way up, because now we can use these AI models, which are basically piecewise, non-linear, probabilistic models. Very cool. They can reason with multiple modalities of different kinds of data. Much more sophisticated. If you’re exploring a molecule or some new chemical compound, what you’ll be able to do in the computer is take the molecule and suggest that it might work.

Then run some probabilistic models to say, how might humans respond if I’ve got a set of digital twins? What that’s going to do is help you reduce your search space. I definitely see digital clinical trials as part of the molecule generation process, but then, like at NASA, they’ll step back and say, “Interesting idea, AI. Let’s now take the molecule through the processes we understand, do it in vitro, and get enough confidence that this is actually helpful for humans.” See what I’m saying? You still need the humans. For brand-new molecules, I just don’t foresee us using AI for that in the near term.

We are seeing very early things today with mRNA-based drugs, where we can do personalized cancer vaccines. HER2, like when you’re identifying an antigen, because that is basically a copy and paste of, like, 1,253 nucleotides into a harness that’s an mRNA-based drug. As long as the protocol’s the same, you could approve the platform and you could run a simulation on that. They still use animals in the process to verify it, but that makes sense, because now you’ve got a molecule that’s well studied and well understood. Basically, what you’re changing is the payload. You can do the same thing for monoclonal antibodies, or mAbs. The tool tip itself may be different, but the basic structure and process is the same. That’s where, I think, it may be more useful.

It’s going to be a number of years until we’re comfortable, because with living organisms, we can only just now see their source code. If you talk to Nobel laureates, we don’t really understand it that well. We’re starting to see correlations, and when we find interesting correlations, those are like Nobel prizes. It’s an exciting time, because now we can see your source code, and we now have tools to help us understand it. It’s going to be a really interesting few decades ahead.

Lee Landenberger: Will AI be able to better help identify the best participants and how they’re managed during trials?

Scott Penberthy: I think ideally what you want for a drug is just the right number of participants to verify it, with the minimal number of complications. The tools we have today are largely conversations between a doctor, a patient, a medical record, and a number of metabolites or other measures. It’s the doctor’s intuition matching these, trying to find something similar, but it really comes down to a discussion among the patient, the doctor, and the scientist: this is the right candidate.

I think what we’re going to see now, since we have more sophisticated modeling of patients, is that if they release their data, you’ve got essentially what Lee Hood calls the phenome, which is a combination of your genome and epigenome. Those models give much more intuitive, much more precise models of humans to help reduce the search space for finding candidates. In that sense, you have to talk to fewer candidates to find the right ones. That’s where I see it going.

Lee Landenberger: How about target identification? Meaning finding targets in the biology and the chemistry of drug development. Does AI have a voice in how that may change in the coming years?

Scott Penberthy: Absolutely. Think of it like a microscope, or an STM, or an MRI. It’s now increasingly a tool of science, and part of that is, could you start to identify– I think we may see it first with antigens and structures. Are there unique structures that our immune system could recognize for immunotherapies? That’s already happening. In the sense of immunotherapy targets and finding antigens, that is definitely an area. We have things such as AlphaFold and others that can start to imagine, quite accurately, the shape of these things.

I definitely see it in that region. In other areas of biology, I’m still learning from the experts to see where else it might be helpful: molecular processes, metabolic processes. People are now looking at AI techniques to predict the outcomes of those processes. I think it’s going to be a great tool for that, to assist scientists.

Lee Landenberger: I want to get your thoughts about cost. Does it cost a lot to use this in drug development? Do you use AI and do you see the cost dropping in the future?

Scott Penberthy: That’s a great question. I think a couple of things are going on. One, right now, I heard somewhere, and I think it’s right, there are about a hundred papers on machine learning every day on arXiv, where they publish these papers. A lot of them are peer-reviewed. That’s insane. That’s a lot of research. People are chasing this thing. A number of people are thinking, as we start to figure out how to model the human language function, which is what this is all about, the first ones that can really get it right may be- I think John Carmack said this- a thousand bucks an hour. Now, in our space, that may be for very, very severe conditions where you’ve tried everything else and need a personal vaccine, or maybe there’s a pandemic, something that requires that level of sophistication.

In parallel, as they’re figuring out how to do this language function, people are figuring out new computing substrates. There’s a rash of startups looking at how you get the watts per inference to drop, much like the cost of the human genome dropped, by factors of 10 to the 7th, which causes you to radically rethink things. Quantum’s another one. The idea is that the first truly amazing AI may be that expensive, but at the same time, the computing cost is coming down. I’m aligned with those who expect that if you look out to 2050, a couple of bucks an hour? Oh, yes, I see it going there, but not until we get it right first. Get the math right, get the model right.

It’s an approximation of our language function. Once we have that, we can go, “Oh, this is the rocket equation. I get it.” Then, how do you reduce that over time? I think that’s where a lot of innovation will happen. We’re going to start to see new types of computing substrates, not only electronic, but electronic and biological. We’ve already seen some of that today.

Lee Landenberger: All these technological advances produce probably a lot of challenges that I’m curious, I want to get your take about ethical issues. Do you encounter ethical issues that people pose to you or that you come up with on your own in using AI in clinical trials?

Scott Penberthy: We’ve been at this for years at Google, starting with Search, and Google Photos, and whatnot, and we take it very seriously. I think the community at large is also thinking about that. Now, you can go touch and play with these tools that we’ve had in research for a couple of years, and the things before that. That’s really important. We drafted initially a set of AI principles, and we use them from the start. For example, someone has a research idea. Before product, before math, before science, it’s like, “I think I can use this. I want to study this area of AI.”

We check that against the principles: “Is this ethical? Are you preserving privacy? Are you respecting the individual? How’s the data being treated? Where is it coming from?” Only then does it get approved and can you actually do the research. Then before we publish the research, we’re like, “Hang on.” “I want to get it out. I want to be one of the hundred papers.” Not so fast. Let’s look at the research and say, “Is it still appropriate to publish this?” Because our attitudes as humans change over time and location.

There are cases in the past where we’ve seen research and said, we’re not going to publish this. It’s not the right time. Then we do the same thing for products. My hope is that more and more adopt that careful consideration in their use of AI, because it’s a very powerful tool. Governments and others are starting to lean in, saying, “Well, how do we do this in a methodical way?” Much like any new technology, it can definitely be used for great benefit, but used improperly, it can have unintended consequences. That’s where we need us all to lean in and figure out how we do that methodically over time as a community.

Lee Landenberger: Last question for you, though. I could go on for hours. This is fascinating. The last question for you is, the changes that you’re seeing day-to-day just in drug development, do they spill over into the larger healthcare model? Could you expand on what it is you’re seeing that is a spillover effect into generalized healthcare?

Scott Penberthy: Yes. There are a couple of areas that fascinate me. At least in the US, there is a lot of documentation and process. The business of healthcare itself has a significant amount of overhead for most of the participants. That’s where AI can help right off the bat. For example, learning about your care and benefits from an insurer by going to their website and having a chat. That’s an obvious area to go after. There are areas where you can start to understand your benefits and your partner’s benefits and how they come together, or say you need to have an operation- in the US, knee operations are very common.

Have you done physical therapy beforehand? Have you tried everything else before you actually go with the surgery? That requires a justification, and AI can help with those as well. On the flip side, those who receive the documents have humans reading them, and they’re saying, “Hey, can you help me read these, AI, and show me the ones you believe are green-lit versus the ones I should pay attention to? Help me focus my time so I’m more effective and can help move patients through the system faster.” That’s happening. Now, there are other parts of the world that are really pushing the envelope, where there’s not as much of a chain of liability as we have in the Western world.

For example, in India, they’re looking at using these tools as diagnostic aids. Their approach is, these people aren’t getting care. I don’t have enough doctors to reach them. How can I give them something? They’re starting to look at AI as a tool that can, online, help you understand what might be a common ailment, and then the equivalent of, like, DoorDash-ing you the antibiotic afterwards. That’s a lot harder to do in the Western world. In Asian states, the attitude is, we need to get them some kind of coverage, and I think that’s going to lead the world in terms of how you use these in a diagnostic area.

I think in the near term, we’ll see healthcare get a lot less onerous for most of the participants. Over time, when that happens, my hope is that starts to take some of the cost down as that burden lessens across all the participants in the healthcare system.

Lee Landenberger: Fascinating. Was there anything that we didn’t touch on that you want to mention while we’re here?

Scott Penberthy: You mentioned ethics and privacy before, and that is paramount in all of this. We need to get it right. We spend a lot of time– If you’ve been in AI for a while, there’s an old adage that 80% of AI is really thinking hard about the data: what governs the data, the privacy of the data, how the data is transported, where it goes. Then the other 20% is complaining about data. The 1% is the actual AI. What we’re seeing now, though, is that as doctors, life scientists, and researchers get into AI, the conversation very quickly turns to, how do you do this and preserve privacy?

Do these systems work within the privacy controls I have, within my secure envelope that’s already been through my CSO? My Chief Security Officer. Those are the pragmatic issues most of the conversations very quickly shift to. One, there’s the fascination. It’s infinitely cool, very powerful, but I think now we’re thinking very carefully, as you said earlier. I’m glad you asked about that, because I think that’s where we can really, as a community, figure this out. That can only be done with the technology companies in partnership with the bio companies, figuring it out together.

As a technology provider, you all know the science. You’ll forget more in an hour than I could ever learn about this space, but likewise we can bring the technology to bear, and then together we’ll figure out how you do this in an ethical, privacy-preserving way so we all benefit. That to me is the real hard work in a lot of this.

Lee Landenberger: Well, speaking of cool and powerful, this has been a cool and powerful discussion. Thanks a lot for your insight and your time, Scott.

Scott Penberthy: Well, Lee, thank you so much for the opportunity, and I’m really looking forward to the conference coming up here.

Lee Landenberger: It’s our pleasure. That is our show for today, and as always, BioWorld helps keep you informed of all the most important scientific, clinical, and business updates. We are a daily news service that covers drug development. If you need to track the development of drugs, turn to Bioworld.com. You can also follow us on Twitter or X, and you can email us newsdesk@bioworld.com. Also, we look forward to meeting you at the BioFuture conference in midtown Manhattan that will be held Oct. 4th through the 6th.

There are details at biofuture.com. We’ll see you there, I hope. Also, if you’re enjoying this podcast, don’t forget to subscribe, and thanks everyone for joining us today.

[music]

VO: BioWorld, published by Clarivate, is a subscription-based news service that delivers actionable intelligence on the most innovative therapeutics and medical technologies in development.