Session V Transcripts
September 22nd (AM)
Session V: Predictive Chemical Biology Panel Discussion
Stuart Friedrich, Tom Raub, Geoffrey Ginsburg, and Richard Kim

Question: Especially with regard to Professor Kim, there has been some suggestion (for example, by William Pardridge) that to get compounds into the brain we might take advantage of uptake transporters. So I'm wondering if you have an opinion on the viability of that approach, first of all in general and, second, perhaps from some other members of the panel, whether it would be a productive or efficient strategy in drug discovery to try to utilize these transporters to get compounds in that we normally can't get in by passive diffusion.

Richard Kim Response: I think that's an excellent question. In reality, I think drug companies have been doing this without understanding transporter biology. If you look at some of the most successful drugs ever marketed, which include the statins, the reason the statins are so useful is that they're able to attain very high intrahepatic levels, and the liver is where their target, HMG-CoA reductase, resides. The companies must have iterated their drugs against this intrahepatic concentration, which could only be obtained because of the presence of these transporters. So that shows that these kinds of approaches have worked even without our really knowing about them, and now we have a mechanism for them. I think that in terms of CNS drug delivery you can certainly take advantage of some of the nutrient transporters that some people have talked about, but we're also starting to see that some of the transporters we think of as xenobiotic transporters with broad substrate specificity are also highly expressed at the level of the blood-brain barrier. So, for example, in humans, OATPs are expressed at the blood-brain barrier; these are transporters that can recognize anionic, neutral, and cationic compounds, and one wonders whether you could do a proof-of-principle study: optimize the drug against that transporter to see if you get a better response in vivo. The problem will be that there are marked species differences among the OATPs. Therefore, you may not really know until you actually give the drug to humans, but it's a good reminder that you may be able to rescue what looks like a very polar, non-CNS-penetrant drug so that it actually has access to the CNS.

Tom Raub Response: That scenario actually scares me, because when we're working through SAR and trying to eliminate an efflux transporter like P-gp, I oftentimes see compounds I wouldn't expect to get in as rapidly as they do. And, not having the tools in hand to sort through that, I don't know what to do with the information. So the species differential is a potential problem.

Question: I'd like to follow up on that question and ask Richard: patients who have this variant, OATP-C*15/*15, are going to have unusually high blood levels of these statins, right? Could that contribute to those patients ending up getting the side reactions that you normally get with statins?

Richard Kim Response: Well, that's the implication, so this is where, again, understanding transporter biology is useful. Given that OATP-C is highly polymorphic, you start to wonder whether you should take advantage of more than one liver transporter—is OATP-C the only pathway? Also, if OATP-C turns out to be the only pathway for some of these statins then, yes, that is highly likely, and it could contribute to this problem of rhabdomyolysis.
So, again, it's very important to understand not only the type of transporter but also the extent of involvement of the different transporters. And, yes, there are a lot of questions regarding whether mutant transporters will give you a) lack of efficacy, because you're not getting the drug into the liver, and b) additional toxicity, because now you have high systemic exposure. No one has actually shown that polymorphisms cause that, but a number of groups are trying to look at whether people who are more prone to rhabdomyolysis are more likely to carry these variants.

Question: I have two questions for Stuart about PK/PD modeling. When you take these sorts of probabilistic approaches, add the uncertainty into each of the variables, and do a stochastic simulation of what your outcomes might be, you tend to get these log-normal distributions in your probabilities—you know, you've got these tails that go way out. So there's a fair amount of uncertainty in these models. Now, I understand the second example you gave, where you said the projected dose is 12 g and we shouldn't even bother thinking about that. But in the first example it looked like the projected dose was around 150 or 200 mg, with maybe a fair probability of being a lot less than that. When you've got that kind of information, how do you use it to make a decision to go forward? It's easy to see when you kill it, but how do you use it to go forward? The second question, which is related, is: when you've got these models that you've developed, at what point is it worthwhile to actually take something into the clinic to validate them? That's a large expense, but what's the thought process you would have to go through to say it's worthwhile doing this?

Stuart Friedrich Response: You're right, in the first example that 200 mg is really not an unreasonable dose. It was really a combination of that dose estimate, which was higher than what we expected, and the fact that every molecule has a cost of production and a price that we think we're going to get in the marketplace. That, of course, really influences what a marketable dose is. So 200 mg itself is not a high dose, but the overall assessment for the molecule, including the cost of production and the price for the molecule, influenced the decision. It also wasn't just that but a combination of the results of the PK/PD modeling and the toxicology, where we had difficulty establishing dose-limiting toxicity, ADME properties that were not optimal, and so on. These all added together. It was really a combination of output from the different functional areas on the product team that helped contribute to the decision to terminate that molecule. And regarding how the uncertainty is used, this is still related to the first question: when you come up with a range of plausible doses, you hopefully are able to take that uncertainty into account when you're designing your early clinical studies, so that you fully study not only the expected dose, looking at the PK and toxicity very close to the expected dose, but also the full range of doses. So, if possible, go right from your lowest dose to your highest dose such that it encompasses that total range of uncertainty. As to the second question, it depends on how knowledge-rich you are in the therapeutic area or even in the target itself. If you, for instance, are developing a compound where there are two or three other compounds in the marketplace, then you can actually internally validate this method of empirical scaling. You take those two or three competitor products, do the experiments in your preclinical pharmacology model, and see how well the model predicts the response from one molecule to the next, both being in the clinical marketplace. Then you can say with fair certainty how well that will work for your own clinical candidate. That's one method and, of course, as you have less knowledge you have more uncertainty, and that has to be taken into account. If you, for example, have a compound where you have no clinical information for a competitor compound, then you tend to fall back on more mechanistic modeling approaches, where more assumptions are being made and there is more uncertainty. But, in the end, you want to ask the question specifically as to, for example, dose—what's the probability that the dose will be higher than this specific dose that is unmarketable—and go from there.
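To make the kind of stochastic dose projection discussed above concrete, here is a minimal Monte Carlo sketch; it is not the model Stuart described. Clearance and potency are drawn from log-normal distributions to represent parameter uncertainty, a dose is computed for each draw from a simple steady-state relationship, and the output is the probability that the projected dose exceeds some threshold judged unmarketable. All parameter values, the dose formula, and the 500 mg cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo draws

# Assumed log-normal uncertainties in clearance (L/h) and target EC50 (ng/mL).
cl = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)
ec50 = rng.lognormal(mean=np.log(50.0), sigma=0.6, size=n)
mult = 3.0   # assumed required coverage: average concentration = 3 x EC50
tau = 24.0   # dosing interval (h), once daily

# Steady state: dose per interval = CL * Cavg,ss * tau.
cavg_target = mult * ec50                  # ng/mL
dose_mg = cl * cavg_target * tau / 1000.0  # L/h * ng/mL = ug/h; /1000 converts ug to mg

print(f"median projected dose: {np.median(dose_mg):.0f} mg")
print(f"90% interval: {np.percentile(dose_mg, 5):.0f} to {np.percentile(dose_mg, 95):.0f} mg")
print(f"P(dose > 500 mg): {np.mean(dose_mg > 500):.2f}")  # 500 mg = assumed unmarketable cutoff
```

The long right-hand tail of the resulting distribution is exactly the log-normal behavior the questioner describes, and the tail probability, rather than the median alone, is what feeds the go/no-go discussion.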
Question: I'd like to follow up with Stuart on this question. At this meeting we have an interesting mix of big pharma and smaller biotech-type companies. So my question to Stuart is, when Lilly looks to in-license compounds, do they actually do this kind of analysis in determining whether they see this as a viable candidate? I ask this because so many times I see small companies thinking they have a drug candidate when, in reality, the industry looks at it as a lead.

Stuart Friedrich Response: I've been on due diligence teams at Lilly for in-licensing, and part of my job is to understand what factors may influence the overall outcome for the product. If there are missing pieces of information, you have to understand the possibility of their impacting the overall outcome. A good example is the tornado plot I showed. If the missing piece of information does not contribute a lot to the uncertainty in your overall predicted outcome, then it's not as critical to have as part of the in-licensing package. But if it's a parameter that you thought, based on the modeling, would heavily influence the outcome, then that would help make the decision on whether it's a good idea to in-license the compound and how many resources would be required to fill those uncertainties in the compound; those all come into the in-licensing assessment.
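As an illustration of the tornado-plot idea Stuart mentions (ranking which uncertain inputs actually move the predicted outcome), here is a small one-at-a-time sensitivity sketch with purely assumed parameters and a toy dose formula; it is not the model used at Lilly. Each input is swung between low and high plausible values while the others stay at the base case, and the resulting swings in projected dose are sorted, which is what the bars of a tornado plot show.

```python
def projected_dose(cl=20.0, ec50=50.0, fu=0.1, tau=24.0, mult=3.0):
    """Toy steady-state dose (mg/day) from clearance (L/h), unbound EC50 (ng/mL),
    fraction unbound, dosing interval (h), and target coverage multiple."""
    cavg_total = mult * ec50 / fu          # total target concentration, ng/mL
    return cl * cavg_total * tau / 1000.0  # ug to mg

base = projected_dose()

# Assumed low/high plausible ranges for each uncertain input.
ranges = {"cl": (10.0, 40.0), "ec50": (20.0, 120.0), "fu": (0.05, 0.2)}

swings = []
for name, (lo, hi) in ranges.items():
    swing = abs(projected_dose(**{name: hi}) - projected_dose(**{name: lo}))
    swings.append((name, swing))

# Largest swing first: these are the parameters worth insisting on in a data package.
for name, swing in sorted(swings, key=lambda x: x[1], reverse=True):
    print(f"{name:5s} swing in projected dose: {swing:6.0f} mg (base case {base:.0f} mg)")
```

An input with a small swing is the kind of missing piece of information that matters less in due diligence; a large swing flags a gap worth filling before a decision.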
Question: Tom, an esoteric question. In your autorads it really looked like there were other structures in the head that were labeled, which correlated with uptake into the brain. Was one of those the salivary gland?

Tom Raub Response: Yes, the Harderian gland frequently labels. But even more importantly, in the one brain image I showed there were actually two such regions, which I didn't discuss, that you have to be cognizant of. One could involve transporters actively accumulating compounds into the choroid plexus epithelium. It's ten times more perfused than the brain endothelial vasculature, and these compounds tend to get in there and stick, so you may have a background brain level if you don't eliminate that in your analytical process. Thus you have to account for that, or you may have a drug that's sequestered all in one place and possibly not in the area of activity.

Question: The answer to that esoteric question allows me to ask one that may be more interesting in this context. Since the salivary gland is such an active secretory organ and likely contains a lot of these transporters, is this an opportunity for a surrogate, a means to look at secretion of drug into the saliva as a way to understand how transporter function might be working in drug elimination? It would certainly be a very accessible fluid.

Tom Raub Response: It's an interesting question. Tears have also been suggested as a surrogate for free fraction, with all the caveats, but I don't know of anyone looking at that.

Question: This question is for Stuart. With your knowledge and experience in the kind of case studies you talked about in PK/PD relationships, could you comment on the experiment that's often done early in drug discovery where, especially if you have an expensive or time-consuming efficacy animal model, you dose that model, take a couple of samples, say at 1 hour and 4 hours, and try to correlate that exposure level with the pharmacodynamic response you're getting? What are the pitfalls in that kind of experiment? First of all, is that a good experiment to do and, second, what should we watch out for when interpreting that data in terms of selecting or optimizing leads?

Stuart Friedrich Response: I think that any information you get on exposure in your pharmacological model is important, whether it be a single time point or multiple time points. In the example I gave, the sampling that was possible in that pharmacology animal model was sparse, and that required me to analyze the data using a population PK/PD modeling approach, which makes the assumption that all the animals behave as a population of individuals centered on a population value, with variation around it. So, even with very few samples, I think it's still valuable to collect that information and use the modeling tools that are available to estimate the exposure in each animal and correlate that with your pharmacodynamic response. The other thing I didn't actually get into is that there are different measures of exposure; in these cases we were relating the average steady-state levels to the pharmacodynamic response, whereas in other cases it could be that Cmax is more related to your pharmacodynamic response. Those are different questions that have to be answered with different experiments but, getting back to the initial question, I think it's always useful to collect exposure information in the animal species, or in the pharmacology model, where you're actually collecting your dynamic data, whenever possible.

Question: Is there a certain number of time points that would be advisable to take?

Stuart Friedrich Response: That all depends on the variability between animals and also on the time course and so on, but I've seen instances where even a single time point will give you a reasonable estimate of the exposure in that animal. Say you want to calculate an AUC for that animal over a particular time course: a single time point can sometimes do that. A single time point across, say, 20 animals in the experiment will sometimes do that, because that single time point is combined with all the other data from the other animals in the population analysis approach to give you an estimate of the population clearance and also the individual animal clearances.

Question: Would there be a minimum number of animals?

Stuart Friedrich Response: I can't really give a number because it's a case-by-case basis. What you can do is run simulations and then analyze the simulated data: OK, let's do a simulation in which we collect only one time point per animal; how accurately are we able, from that one time point, to estimate the exposure in that animal? Then you can keep adding time points until you get the accuracy that you require in your estimate.
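A minimal sketch of the simulate-then-analyze approach Stuart describes, under assumed one-compartment IV bolus kinetics with a known, common volume of distribution, and using a naive pooled log-linear fit in place of a true mixed-effects population analysis: synthetic animals are generated with between-animal variability in clearance, each animal contributes a single sample from a staggered design, and the accuracy of the recovered per-animal exposure is tallied. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate_design(sample_times, n_animals=20, dose=1.0, v=1.0,
                    cl_pop=0.5, bsv=0.3, cv_resid=0.15):
    """Simulate a sparse design (one sample per animal, staggered across
    sample_times) and return the median relative error in per-animal AUC."""
    # True per-animal clearances with log-normal between-animal variability.
    cl_true = cl_pop * np.exp(rng.normal(0.0, bsv, n_animals))
    auc_true = dose / cl_true                             # IV bolus: AUC = dose / CL
    # Assign each animal a single sampling time, cycling through the design.
    t = np.array([sample_times[i % len(sample_times)] for i in range(n_animals)])
    ke_true = cl_true / v
    conc = (dose / v) * np.exp(-ke_true * t)              # one-compartment IV bolus
    conc *= np.exp(rng.normal(0.0, cv_resid, n_animals))  # residual (assay) error
    # "Population" step: pooled log-linear fit across animals gives a shared ke.
    ke_pop = -np.polyfit(t, np.log(conc), 1)[0]
    # Individual step: back-extrapolate C0 from each animal's single point.
    auc_est = conc * np.exp(ke_pop * t) / ke_pop
    return np.median(np.abs(auc_est - auc_true) / auc_true)

for design in ([1.0, 4.0], [0.5, 2.0, 8.0], [0.5, 1.0, 2.0, 4.0, 8.0]):
    err = np.mean([evaluate_design(design) for _ in range(200)])
    print(f"sampling times {design}: typical AUC error ~ {err:.0%}")
```

Repeating this for progressively richer designs, as Stuart suggests, shows how quickly the error in individual exposure estimates levels off, which is the basis for deciding how many samples and animals are enough.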
Question: A follow-up question for Stuart. You see this less and less frequently, but it still occurs. You go into some smaller companies where there's not a lot of DM/PK support, and they will actually have a pharmacological model, they will have dosed animals and know what the maximal response is, and then they try to use that information to say that the bioavailability of the compound is 50%. They have an IV dose-response curve and an oral dose-response curve, and then they'll estimate and say it's 25% bioavailable or 50% bioavailable without measuring anything. Is that a good idea or a bad idea?

Stuart Friedrich Response: So they're basing it on the pharmacological response. That's a reasonable first cut. I guess you'd have to understand where you are on the dose-response curve when you're making that assessment to understand how accurate it is; if you're on the steep part of the dose-response curve there's probably more uncertainty in your estimate than if you're on the flat part of the dose-response curve.

Ron Borchardt Response: I was hoping you'd say it was a bad idea, because I often see people getting into trouble doing that.

Stuart Friedrich Response: It's obviously not as good; the best is a true crossover PK study to estimate your bioavailability. But any knowledge will help you. I mean, if you didn't even have that information available, you'd have total uncertainty about the bioavailability, whereas that experiment decreases your uncertainty in what the bioavailability could be and, therefore, decreases your uncertainty in your outcomes.

Question: A question for Richard Kim. What is the real impact of transporters? You have this great example with rifampin, with OATP-C and induction of PXR and PEP to eliminate it. I work with intestinal tissue, so P-gp and CYP3A are proteins of interest. Should we be looking at CYP3A and P-gp expressed alone in cells, or should we look at these proteins expressed together in the same cell line?

Richard Kim Response: It's a difficult question. At least in the transporter field a lot of people have tried to make cell lines expressing multiple transporters, and the layer of complexity becomes fairly severe with even a couple of systems. The problem with trying to predict is not just having the key players; you have to understand the relative expression of the key players and how they vary in people. And, of course, in the intestine there are regional differences in expression; I think most people find P-gp is higher in the ileum compared to the duodenum, and vice versa for CYP3A. We know very little about uptake transporter expression, although we know some of the OATPs are expressed. So the model system would have to incorporate physiologically relevant levels of these transporters, you would have to validate them, and then you would have to use fairly robust and complex models to simulate various drug concentrations outside the cell and inside the cell, in the presence or absence of a drug-metabolizing enzyme or transporter. Because the question that's still a daunting task is, when you have more than one transporter or P450 present, is the clearance really 1 + 1, or is it something different depending on the extent of metabolite formation? These are still important but very difficult questions; they're going to keep us in business for a long time. As a first step you have to use systems that have the major players, and multiple key systems together are likely to give you better predictions than studying individual systems alone.
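One way to see why clearances need not simply add is the extended clearance concept, sketched numerically below with purely illustrative numbers (this is a textbook-style approximation, not a model Dr. Kim presented): when hepatic uptake, sinusoidal efflux, and metabolism all act on the same drug, the overall intrinsic hepatic clearance is a nonlinear combination of the pieces, so doubling one process does not double the total.

```python
def hepatic_intrinsic_cl(ps_influx, ps_efflux, cl_met):
    """Extended clearance concept (sinusoidal uptake/efflux plus metabolism):
    CLint,h = PS_influx * CL_met / (PS_efflux + CL_met).
    All terms are intrinsic clearances in the same units (e.g., mL/min/kg)."""
    return ps_influx * cl_met / (ps_efflux + cl_met)

base = hepatic_intrinsic_cl(ps_influx=10.0, ps_efflux=5.0, cl_met=5.0)
double_met = hepatic_intrinsic_cl(ps_influx=10.0, ps_efflux=5.0, cl_met=10.0)
double_uptake = hepatic_intrinsic_cl(ps_influx=20.0, ps_efflux=5.0, cl_met=5.0)

print(f"base CLint,h       : {base:.1f}")           # 10 * 5 / 10  = 5.0
print(f"metabolism doubled : {double_met:.1f}")     # 10 * 10 / 15 = 6.7, not 10
print(f"uptake doubled     : {double_uptake:.1f}")  # 20 * 5 / 10  = 10.0, uptake-limited
```

When uptake is rate-limiting, changing the enzyme has little effect on the overall number; that is the sense in which the answer is usually not 1 + 1.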
Question: This question is for Dr. Ginsburg. In the biomarker program you developed at Millennium, I wonder if you considered the lack of appropriate efficacy models for human responses and formulated a strategy very early to take a compound into the clinic based on a biomarker, the ability to hit the target in a preclinical efficacy model, as opposed to a biological endpoint? Because we know we often lose efficacy late because our preclinical models are just very poor predictors of human outcome. And, if you've done that, what has been the response of the agency, and how did you shepherd that strategy through, going forward toward those goals?

Geoffrey Ginsburg Response: The first question centered on taking compounds forward but having something that was more of a surrogate?

Jim Stevens Response: Yes. Say, for example, in oncology, you know that the xenograft growth response is sometimes a very poor predictor of outcome in the clinic. So suppose you use just the ability to hit the target in the xenograft, and say we have very good data that this molecule is very selective for the target and we have a good biomarker strategy. Have you tried to take a molecule forward into the clinic based on the biomarker data as a surrogate for efficacy, arguing that the preclinical model is not a good predictor of outcome in humans?

Geoffrey Ginsburg Response: We have not taken that strategy, but I think in the area of oncology you're on better ground to try to do that, simply because the chances of achieving efficacy, and the consequences of not achieving it, drive the decisions to move compounds forward into development. We think that, particularly for a number of the molecules that are moving into clinical development, the biology is so unvalidated, or there's not sufficient data to even convince ourselves, that we wouldn't go to the agency unless we had the preclinical data to support the biomarker work as well as the biomarker strategy to support the clinical work. So, given that, we haven't really tried to move things into clinical development in concert with the agency that didn't have those kinds of data to support the program.

Question: So oncology seems to be the example where there's some movement in that direction. Since you have such a well-developed program in indications such as RA, did you see flexibility? Because even in RA, and particularly in neurosciences, we have a very difficult time saying that the preclinical efficacy model is likely to predict outcome in humans. Yet, if we have good data on receptor occupancy or inhibition of a kinase, say phosphorylation measured with a specific antibody, do you see movement in that direction in other indications outside of oncology, in your experience?

Geoffrey Ginsburg Response: I haven't seen enough of what the agency has seen from other places to give you an answer on what their thinking is. They're staking out the ground on innovation versus stagnation.
The white paper the agency put out earlier this year indicates to me that their expectation is that it's more in the realm of human studies where there might be some more flexibility, if there were appropriate tools to make these disease measures. I believe that, because there have been so few successes and the number of NCEs approved has gone down so dramatically, there will be, at least in early-stage development, more opportunities to do those types of studies, not necessarily supported by a neat preclinical package.

Jim Stevens Response: It will be an interesting ongoing discussion to see whether the proposals in that stagnation paper are followed up by a regulatory path that allows companies to move forward with different strategies.

Question: A philosophical question for all of the panel, and anyone else who wants to pipe in. We tend to design drugs toward the mean values, you know, the mean pharmacokinetic parameters, the mean response to a drug, the mean pharmacodynamic parameters, but what also ends up killing a drug is the variability. Dr. Kim centered on SNPs as one source of variability; there were also questions brought up about the composite effects of variability from different elements. I'm wondering, when we advance a drug, whether it's really a manageable thing to anticipate what the variability is going to be when we get into a large population. We tend to go ahead with a compound into safety studies based on what we anticipate its properties to be. But what can kill a drug or, even more important, do damage to people is what's going to happen in the one-in-a-thousand to one-in-ten-thousand individuals out there in the population, and variability is basically an aspect of the population; everybody's different.

Geoffrey Ginsburg Response: There are well-identified variants that are at low frequency in the population. Some companies, maybe in the room, as well as Millennium, are developing essentially cohorts of individuals who harbor these variants, with a very directed strategy of recruiting them into clinical trials so that we understand their response to exposure. But even that may not get at the kind of gene frequencies you alluded to, where idiosyncratic reactions may occur or where the real frequency of some of these variants may be too low to detect, at least in the kinds of studies that we would entertain. The NIH, Francis Collins specifically, is contemplating establishing a cohort of 500,000 people for a variety of issues, including this one, which is to get at some of the very low-frequency events and variants that could play a role in extremes of toxicity and to establish that as a resource for studies such as this. So I think you raised an important issue; I think the FDA will be looking for companies to address what's known about PK variability through genetics. If they're not already, they will be in the not-too-distant future.

Question: Richard, I'm wondering if the polymorphisms that you're observing in the OATs have translated into clinical changes in the catalyzing enzymes. If you induce with rifampin, do you see a difference in 3A levels across the polymorphisms? And, sort of related to that, I'm wondering if you've had the opportunity to correlate the SNPs or haplotypes that you're seeing in the OATs with haplotypes in P-gp, given the large problems with functional observations on the P-gp side.

Richard Kim Response: The data relating to induction and to transporters regulating some of these things are really quite modest, if there are any.
Even with rifampin, for example, there's really no effort to monitor rifampin levels, so we really don't know. People are given 600 mg once a day and it's assumed that at that dose everything will be maximally induced, and it may be. We're looking at some of our own data, and we are also designing some studies with lower-dose rifampin, to see if we can pick up kinetic differences as well as subtle differences in 3A levels.

Question: But now that you have the polymorphisms identified, you could do a prospective study with people who have been selected.

Richard Kim Response: Right, and that's exactly what we're hoping to do. Rifampin is a bit of a bugger in that it is very light-sensitive and labile—the assays have not been well validated—and we think the 600 mg/day dose may be very high because, typically, whatever has been published seems to reach concentrations in the low micromolar range even at trough levels, so you may have to lower the dose to get the proof of principle. I think we may be seeing a lot of what we think are variations in induction that may actually be due to variations in intracellular drug level, because of either polymorphisms or expression differences that dictate that. Some people have shown data in vitro where, if you put in, for example, OATP-C, you can get totally different gene expression patterns in the cells with and without this transporter, because a lot of the ligands are also hormones, or hormone conjugates, that are ligands for other nuclear receptors as well. So you may have almost a master regulator of gene expression, because it will affect multiple nuclear receptors, but that hasn't been proven in people.

Question: Any correlation or linkage disequilibrium between the haplotypes?

Richard Kim Response: We've not carefully looked at OATP-C versus MDR1. We have a lot of samples, some of them retrospective, but the frequencies are quite different, so I don't know.

Question: Richard, has the field of transporters come to a point where investigators like yourself can recommend to industry that they routinely screen certain transporters, the way they screen CYP enzymes such as CYP3A4, 2D6, and 2C9? Has it reached a point where you would recommend that people in DMPK groups focus on certain transporters?

Richard Kim Response: Most companies, in fact, do this one way or another, whether they're using a direct system or cell lines or knockouts or whatever; I think it has become kind of a de facto transporter screen. There are other efflux transporters and uptake transporters, and the way I view it, it's inevitable. The key thing is that people in industry will have to understand the biology of transporters, pick the key players, and bring the work in-house, because it is clear from the field over the past 5–10 years that some transporters are becoming much more important. And you do have to remember that, unlike P450s, with transporters there is tissue-dependent expression of different types of transporters: a subset in the kidney, a different set in the brain, a different set in the liver. And if you're pursuing different types of therapeutic targets, one would think about transporters both for delivery of the compound and for making the drug more favorable in terms of its pharmacokinetic profile; one would start designing for transporters. We have done studies with companies that have iteratively designed away metabolism as a way of achieving an enhanced or more favorable PK profile.
In fact, for some of those compounds the clearance becomes greater, because then the transporters come into play. So as you design away from 2D6, 3A4, and other P450s, I think you'll see more transporter problems.

Question: This question is directed to Tom, but I'm sure several other people would know about this. With regard to P-gp efflux at the blood-brain barrier, have you, in working with discovery teams, seen situations where, once you identified that a compound was a substrate for P-gp and that this was a major problem, the teams were able to structurally modify a series in order to take that efflux component away? And, if so, what were some of the key structural changes that allowed that reduction in P-gp efflux? Then a corollary question: a number of P-gp inhibitors have been developed by pharmaceutical companies, and they're very potent. So why haven't we seen the introduction of therapies where a P-gp inhibitor is coadministered with a drug like taxol, or something else that we want to get into the brain, in order to swamp out the P-gp transporters and get the drug where we want it to be?

Tom Raub Response: With regard to the SAR, and Jerome spoke to this too so he might want to add to it, it's more difficult to move away from P-gp when the interaction involves the part of the scaffold responsible for the activity. However, it's doable. I don't believe you necessarily need to engineer away the P-gp interaction, since you can also get around it by improving passive diffusion. Here you're affecting pump efficiency, not necessarily the molecular interaction between pump and compound. So success is easier if the interaction involves a non-critical component of the scaffold. It's like the example I gave where you have a hydroxyl group that's not critical to the activity, but the substitution markedly changes the physicochemical behavior. I've also been in cases where we couldn't move around it because the complexity of the molecule, like a peptidomimetic, is such that the global properties require you to move too far away. Having said that, I have another issue to throw out for consideration. You can have P-gp effects and still have an effective CNS agent, and that may, in my opinion, actually improve its activity. I'll suggest this as a potentially controversial issue, particularly with regard to PET ligands. If P-gp is not rate-limiting for a compound to get into the brain, but rather is limiting in removing it from the brain, then reducing its distribution volume via P-gp-mediated efflux could be an advantage: efflux could decrease nonspecific background without impacting specific binding. I'm confused by the lack of success using inhibitors to improve delivery, I'll admit. I think it's been a combination of poor clinical design, inability to attain efficacious exposure, and concerns about liabilities with respect to letting other, potentially harmful things in that has contributed to the underutilization of such an approach.
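Tom's point that improving passive diffusion changes pump efficiency without changing the pump-compound interaction can be illustrated with a toy steady-state calculation using purely assumed numbers, not data from the talk. If the unbound brain-to-plasma ratio is approximated as passive permeability divided by passive permeability plus active efflux clearance, then raising passive permeability shrinks the apparent efflux effect even though the efflux clearance itself is untouched.

```python
def kp_uu(ps_passive, cl_efflux):
    """Unbound brain-to-plasma ratio at steady state when uptake is passive
    and efflux is passive plus active: Kp,uu = PS / (PS + CL_efflux)."""
    return ps_passive / (ps_passive + cl_efflux)

cl_efflux = 50.0                # arbitrary units; the pump is unchanged throughout
for ps in (10.0, 50.0, 200.0):  # increasing passive permeability
    ratio = kp_uu(ps, cl_efflux)
    print(f"PS_passive = {ps:5.0f}  ->  Kp,uu = {ratio:.2f}  (apparent efflux ratio ~ {1/ratio:.1f})")
```

The same pump goes from roughly a six-fold exclusion to about a 25% effect purely because passive diffusion comes to dominate, which is why an efflux liability can sometimes be designed around without touching the P-gp recognition element at all.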
Question: One last quick question for Tom. What you've done in terms of developing this model for studying blood-brain barrier permeability is excellent. But I wonder if you're at all concerned about the possibility that you're going to optimize the delivery of drug candidates to mouse brain.

Tom Raub Response: I obviously don't have a lot of experience beyond mouse, aside from some correlation to rat and maybe even the dog, but I've not yet seen a species difference, excluding any kind of active transport. That's certainly where you have to be careful, I think, in the implementation of such in vivo or in vitro assays at the lead optimization phase. I remember an example from years ago where somebody had developed a CNS drug using Caco-2 cells to optimize brain exposure, and they ended up selecting a lead series that used an active transporter present in Caco-2 cells that doesn't exist at the blood-brain barrier in vivo. So you definitely have to be careful.

Ron Borchardt Response: I asked that question because we had a recent experience where we looked at P-gp substrates, and in rat they show very little brain exposure, while in guinea pig they show significant brain exposure.

Tom Raub Response: Assuming that species-dependent serum protein binding differences are not at play here, I'm unsure why this difference would exist with regard to passive diffusion. There certainly could be species differences with regard to active transport, but less so for P-gp, I would say. I've also seen big differences for certain scaffolds with respect to the dose route—IV versus PO or IP—so one also needs to be careful there when interpreting data.