Brock Reeve is Executive Director of the Harvard Stem Cell Institute, whose mission is to use stem cells, both as tools and as therapies, to understand and treat the root causes of leading degenerative diseases.
Brock Reeve: It started about 10 years ago to take advantage of the new technology of stem cells and explore how to use them as curative tools for disease. At that time, federal policy was limiting funding amounts for research in embryonic stem cells. So Harvard said: Look, this technology is at an interesting stage. We have a huge research capability here between the schools of Harvard and the Harvard-affiliated hospitals. We also have a unique footprint within the Boston life sciences ecosystem, with a critical mass of both clinicians and researchers. Let’s organize around that opportunity.
Rather than building new labs, we used existing labs and formed the Institute as a virtual research organization. We raised money from private philanthropy to fund work across the network, and our goal was not just to do basic research and publish papers in scientific journals, but ultimately to get beyond the lab and focus on finding cures. As a result, we’ve been able to reach beyond the purview of single departments and disciplines. These aren’t just developmental biology questions. They’re not just clinical care questions. We bring together biologists, chemists, clinicians, bioengineers, etc., in order to tackle these inherently multidisciplinary problems together.
Reeve: A couple of weeks ago, there was all that publicity about youthful blood reversing aging in mice. Two of the people who were mentioned in that are involved with Harvard. One was Lee Rubin, and the other was Amy Wagers. Lee did the neuroregeneration piece, and Amy did the muscle regeneration piece. Actually, a year before that, Richard Lee also published on heart regeneration. And all three of them were working together on a common project to understand aging processes in different organ systems.
That project was an example of several things. One is the value of collaboration, because Rich is at the Brigham and Harvard, Amy’s at the Joslin and Harvard, and Lee’s at Harvard. Amy’s a developmental biologist working in muscle, Rich leads our cardiac program, and Lee is a neuroscientist with a deep knowledge of chemistry.
The project originally came out of work Amy did years ago at Stanford using a parabiotic mouse model where you’re joining a young mouse and an old mouse together to share circulatory systems. She had also done some work looking at skeletal muscle repair, and Rich said, “Let’s look for commonalities. Let’s think about how this plays out in the heart.” It’s an example of taking a model that had been used in one disease and asking if it can be applied to another disease, and whether this will reveal any underlying factors in these different systems.
Now, as we start to go down the path toward therapeutic applications, we’re working with one of the local venture firms to make it happen. It’s at the project stage right now, but if it’s successful, it could turn into a company spinning out of this work.
Reeve: Not in this case, because the PIs (principal investigators) all knew, liked, and respected each other, and they had the right attitude toward sharing. It’s really driven by that. The IP will reside with three different organizations: Brigham has some IP out of it, Joslin has some IP out of it, and Harvard has some IP out of it, which can get complicated, but the tech transfer officers agree if the PIs agree on how the scientific contribution should be divvied up. It only gets problematic when people start saying, “Wait a minute. My contribution was 80%, and yours was 20%, right?”
Reeve: Exactly. You have to establish that the default assumption is that it’s all equal, unless we agree otherwise. Eight years ago, some of our junior faculty wanted to work on a joint project together, and one of them asked me: “Am I better off building my lab the old-fashioned way?” In other words, should I make it all about me instead of being part of a team?
Eventually, he and the others realized that as part of a team, they could share data earlier and publish earlier, and they all did better work as a result. When other junior faculty saw them, they wanted to do the same thing. So we’ve organized a whole set of junior faculty projects that way, doing team-based science. And it’s working because we didn’t force it from the top down. It bubbled up from the ground. In truth, we’re not only working a science experiment here. We’re working an organizational experiment, too.
Reeve: I guess what surprised me initially is how many different organizational affiliations people have here. Sometimes the same person will have four or five affiliations: they might be a Howard Hughes investigator, a hospital employee who belongs to a certain department, and also be a member of, say, stem cell programs at Boston Children’s Hospital or Mass General, all in addition to being part of HSCI. Because of that, getting people to feel that they are a part of a larger whole is sometimes difficult. So we’ve had to do a lot to help reinforce a sense of community.
It’s not all about the money funding these projects, because you’ll never have enough money to fund everybody. But you can get enough to grease the wheels, and you can lower the barriers to people sharing ideas across the network. We hold events like Chalk Talks and Think Tanks, different ways for people to learn science from one another and do better work as a result, in addition to being part of a larger community. And one of the lessons for me, particularly within an institution that has historically been known for being very siloed, is that we’ve been able to change some of that. But it’s an ongoing effort. The virtues of this kind of collaboration aren’t always as self-evident as you might think.
Reeve: Well, we’re the first organization at Harvard that spans all of the Harvard-affiliated institutions. Harvard had never done that before. And you could argue that the jury is still out on whether that speeds up the research process as a result. But I think we’re getting there. Ultimately, what we’re saying is that we can do better science and faster science than we did before. Those are the two big benefits.
We’re in the third year of a project right now, for example, with four different labs working on Parkinson’s together. The first year, our funders said to us, “Hmmm. We’re not sure how this is going.” But we got together again last week, and they said, “We never thought it would move this far this fast.”
MF: That’s fantastic. We hear figures all the time like, “It takes 15 years to turn a scientific discovery into a new medical solution,” or “It takes a billion dollars to bring a new therapy to market,” or “only one out of every 10,000 discoveries makes it to market.” Do you think what you’re doing could impact those numbers?
Reeve: A lot of pharmaceutical discovery is based on using either hamster cells or cancer cell lines. But we set up a stem cell-based screening center seven years ago at Harvard, and now other groups have done that as well. When you combine this with reprogramming and other technologies, what you can ultimately do is put human cells of a particular type in a dish. We’ve now done high-throughput screening on human motor neurons, for example, from both healthy people and those with ALS. You couldn’t do that five years ago. Now, you can do it in 384-well plates in automated fashion, using different chemical libraries, so you can identify which drugs keep motor neurons alive longer from ALS patients with a particular genetic background. And in theory, you can now identify drugs that work on those patient populations. If you have an existing drug that you want to test, you can now do clinical trials only on patients with the right characteristics.
Last year, one of our scientists published a paper in which he studied two drugs that had been pulled off the market in phase 3 because they were found to be ineffective when they went out to broader patient populations. By that point, they had become expensive experiments, approaching the billion-dollar mark you mentioned. And in the paper, he demonstrated that with this in-vitro model, he could have shown up front that both of them were going to be ineffective.
So yes, we could have saved hundreds of millions of dollars for someone that way. You’d never go into clinical trials with drugs like this in the first place. At the same time, we’re about to do a trial—it also happens to be in ALS—where we found an existing drug that could be repurposed. It was approved for a different neurological disease, but we realized it would actually work on this electrophysiological response and would keep motor neurons alive longer. So we’re going to run an in-vitro trial in parallel with the actual clinical trial. In other words, we’re going to make iPS cells from patients in that trial, and then compare the results from the actual trial with the in-vitro trial.
That’s never been done before. The virtue of doing it is not only to better understand this particular drug, and identify for whom it may be effective or not, but to better understand the enormous potential down the road. You’ll never get rid of live human trials, but if you can dramatically shorten the time or narrow the net that you’re casting, you should be able to speed up the whole process, or significantly sharpen its focus, or both. It’s still an open question, of course, but this kind of thing has the potential to hugely change the economics of the whole drug R&D pipeline.