AI and data are fueling innovation in clinical trials and beyond

Laurel: Speaking of the pandemic, it has shown how important, and how difficult, the race to deliver new treatments and vaccines to patients is. Can you explain what evidence generation is and how it fits into drug development?
Arnaub: Certainly. Evidence generation in drug development as a concept is not new. It’s the art of integrating data and analytics to successfully demonstrate the safety, efficacy, and value of your product to various stakeholders: regulators, payers, providers, and, most importantly, patients. Today, evidence comes not just from trial readouts; there are different types of research conducted by pharmaceutical or medical device companies, such as literature reviews, observational data studies, or analyses that characterize disease burden or treatment patterns. If you look at how most companies are structured, clinical development teams focus on designing the protocol and executing the trial, and they’re responsible for a successful study readout; most of this work happens within clinical development. But as drugs move closer to commercialization, health economics, outcomes research, and epidemiology teams help determine what the value of the drug is and how we can better understand the disease.
So I think we’re at a very interesting inflection point in the industry right now. Producing evidence is a multi-year activity, both during and, in many cases, after the trial. We’ve seen this especially with vaccine trials, but also in oncology and other therapeutic areas. During Covid, the vaccine companies put together their evidence packages in record time, and it was an incredible effort. Now, I think it’s a difficult balance for the FDA to navigate: they want to promote the innovation we’re talking about, the advancement of new therapies to patients, and they have designed vehicles to speed up treatments, such as accelerated approvals. But to really understand the evidence, the safety and efficacy of these drugs, we need confirmatory trials or long-term follow-up. And that’s why the concept we’re talking about today is so important: how can we do this faster?
Laurel: That’s certainly important when you’re talking about life-saving innovation, but as you mentioned earlier, the rapid pace of technological innovation, combined with the amount of data being generated and reviewed, is why we’re at a particular inflection point. So how has the generation of data and evidence evolved over the last couple of years, and how feasible would it have been five or 10 years ago to develop this vaccine and all of those evidence packages?
Arnaub: It’s important to distinguish here between clinical trial data and so-called real-world data. The randomized controlled trial is, and remains, the gold standard for generating and presenting evidence. In a clinical trial, we have a really tightly controlled set of parameters and a focus on a small group of patients, and there’s a lot of detail and granularity in what’s being captured. There is a fixed interval for evaluation, but we know the trial environment does not necessarily represent how patients will perform in the real world. And that term, “real world,” covers a bit of a wild west of different things. It’s claims data or payment records from insurance companies. It’s increasingly newer forms of data coming from providers, hospital systems, and laboratories: electronic medical records, device data, and even patient-reported data. RWD, or real-world data, is a large and diverse set of sources that can capture how patients perform as they move in and out of different healthcare systems and settings.
Ten years ago, when I first started working in this space, the term “real-world data” didn’t even exist; it was almost a dirty word, and it’s only in recent years that it has been embraced by the pharmaceutical industry and regulators. I think the other important dimension we’re seeing now is that regulatory agencies have moved quickly to advance how we can use real-world data through very important pieces of legislation like the 21st Century Cures Act, which was introduced to expand our understanding of treatments and diseases. So there’s a lot of momentum here. Real-world data is used in 85% to 90% of FDA-approved new drug applications. So this is the world we have to navigate.
How do we maintain the rigor of a clinical trial and tell the whole story, and then bring in real-world data to complete that picture? This is a problem we’ve been focused on for the past two years, and during Covid we built a solution to it called Medidata Link, which links patient-level clinical trial data with all the non-trial data that exists in the world for that individual patient. As you can imagine, it made a lot of sense during Covid, and we started this with a Covid vaccine manufacturer so we could study long-term outcomes: we can pool trial data with what we see after the trial ends. Does the vaccine hold up in the long run? Is it safe? Is it effective? It’s something that I think is emerging, and it’s been a big part of our evolution over the last couple of years in terms of how we collect data.
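Conceptually, this kind of patient-level linkage amounts to joining trial records and post-trial real-world records on a shared, privacy-preserving identifier. The sketch below is a minimal, hypothetical illustration; the column names (patient_token, trial_endpoint, and so on) are assumptions made for the example, not Medidata Link’s actual schema.

```python
# Illustrative sketch of patient-level linkage between trial data and
# real-world data. All column names and values are hypothetical.
import pandas as pd

# Patient-level results captured during the trial
trial = pd.DataFrame({
    "patient_token": ["a1", "b2", "c3"],          # privacy-preserving ID
    "arm": ["vaccine", "vaccine", "placebo"],
    "trial_endpoint": [1, 0, 0],                   # e.g., seroconversion
})

# Post-trial, real-world records (claims, EHR) for the same individuals
real_world = pd.DataFrame({
    "patient_token": ["a1", "c3"],
    "event": ["breakthrough_infection", "hospitalization"],
    "event_date": ["2021-11-02", "2021-12-15"],
})

# Link the two sources on the shared token to follow patients
# beyond the fixed trial window
linked = trial.merge(real_world, on="patient_token", how="left")
print(linked)
```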
Laurel: Collecting this data can certainly be part of the challenge of producing high-quality evidence. What other gaps have you seen in the industry?
Arnaub: I think the elephant in the room in pharmaceutical development is that despite all the data and all the advances in analytics, the likelihood of technical success, or regulatory success, for a drug is still very low. The overall probability of approval from Phase I is consistently below 10% across a number of different therapeutic areas: it’s less than 5% in cardiovascular and just over 5% in oncology and neurology. I think a lack of data to demonstrate efficacy is at the heart of many of these failures. Many companies submit what regulators consider flawed study designs, inappropriate statistical endpoints, or, in many cases, underpowered trials, meaning the sample size was too small to reject the null hypothesis. So if you look just at the trial itself and at some of the gaps where data should be more involved and more influential in decision making, you’re dealing with a number of key decisions.
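As a rough illustration of what “too small to reject the null hypothesis” means in practice, the sketch below uses the statsmodels power calculator with arbitrary numbers (an assumed effect size of 0.3 and 50 patients per arm, not figures from any specific trial) to show how quickly power collapses when enrollment falls short.

```python
# Rough illustration of an underpowered trial: the sample size needed to
# detect a modest effect is often larger than intuition suggests.
# Effect size and enrollment numbers here are arbitrary examples.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Patients needed per arm to detect a small-to-moderate effect
# (Cohen's d = 0.3) with 80% power at a two-sided alpha of 0.05
n_required = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required per arm: {n_required:.0f}")     # roughly 175 per arm

# Power actually achieved if only 50 patients per arm are enrolled
achieved = analysis.power(effect_size=0.3, nobs1=50, alpha=0.05)
print(f"Power with 50 per arm: {achieved:.2f}")  # well below the 0.8 target
```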
So, when you’re building a trial, you’re thinking, “What are my primary and secondary endpoints? What inclusion or exclusion criteria do I choose? What is my comparator? What biomarkers do I use? And then how do I interpret the results? How do I understand the mechanism of action?” It’s a cascade of many different choices and decisions that must be made in parallel, and a lot of the data and information to inform them comes from the real world; we talked about how valuable an electronic health record can be. But here’s the gap, the problem: how is that data collected? How do you verify where it came from? Can it be trusted?
So, while the volume of data is there, these gaps really do matter, and there is significant room for bias to creep in across different areas. There’s selection bias, meaning differences in the types of patients chosen for treatment. There are issues around performance bias, detection bias, and the quality of the data itself. So what we’re trying to navigate here is: how do we integrate these data sets reliably while addressing some of the key drivers of drug failure that I referred to earlier? Our proprietary approach leverages the historical clinical trial data sets hosted on our platform and uses them to contextualize what we’re seeing in the real world and to better understand how patients are responding to therapy. In theory, and in what we’ve seen in our work, this should help clinical development teams adopt a new way of using data to design a trial protocol or improve some of the statistical analysis work they do.
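The general idea of contextualizing real-world outcomes against historical trial data can be sketched as a comparison between a real-world cohort and a pooled historical control arm. The example below is a toy, unadjusted comparison with made-up numbers; it is not Medidata’s proprietary method, which, as described above, would also have to address selection and other biases, typically by balancing the cohorts on baseline covariates.

```python
# Toy illustration of contextualizing real-world outcomes against pooled
# historical trial control data. All data here are fabricated examples.
import numpy as np
from scipy import stats

# Response indicators (1 = responded) from pooled historical trial control arms
historical_control = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0] * 20)

# Response indicators observed in a real-world cohort on the new therapy
real_world_treated = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0] * 20)

# Naive unadjusted comparison of response rates; in practice the cohorts
# would first be balanced on baseline covariates (age, stage, comorbidities)
table = [
    [real_world_treated.sum(), (1 - real_world_treated).sum()],
    [historical_control.sum(), (1 - historical_control).sum()],
]
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.4f}")
```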