Q&A with Dr. Neville Sanjana on his career (so far) in CRISPR – PART 1
Dr. Neville Sanjana, one of Twist Bioscience’s early customers and current collaborator, was recently interviewed for the publication Front Line Genomics. Neville is currently a Core Faculty Member at the New York Genome Center and an Assistant Professor at New York University. Before moving to New York, he was a postdoctoral researcher in the laboratory of Dr. Feng Zhang, where he worked on developing new approaches for gene editing, including advanced CRISPR technologies, and applying them to neuroscience. Click here for more information on his research.
Read on for a full recap of Neville’s interview with Front Line Genomics.
Image credit: Neville Sanjana
Front Line Genomics: You went to Stanford University for your undergraduate degree in Symbolic Systems and English Literature. It’s been described to me as an interdisciplinary program concentrating on ‘The Science Of The Mind’. As intellectually attractive as it sounds, it seems like it’s the kind of program you need to seek out quite specifically. The inner workings of the brain are still something you work on today. Where did that curiosity first come from?
Neville Sanjana: Growing up in San Diego, I had a school friend whose father was a cognitive scientist at the University of California, San Diego. In fact, his father was the founding chair of the first cognitive science department in the United States. So through being in that house, and hearing what kinds of problems a cognitive scientist thinks about, it stayed with me and gave me that early exposure.
Of course, many schools have a cognitive science major, but at Stanford they really created their own version of it with the Symbolic Systems Program. It included a lot of traditional cognitive science like cognitive psychology and neuroscience but also had this whole different side to it with logic, linguistics, and computational linguistics. It was very computationally focused and the underlying idea was that the mind is a computer. One of the major questions was: how do we design experiments to understand the computations that the mind performs?
FLG: Where did the English Literature side of things fit in?
NS: I think in science it always helps to write. One day I stumbled into a class on Charles Dickens, and really just enjoyed the enthusiasm of the professor, Christine Alfano, who was an expert on Victorian-era literature. It was a wonderful department with master writer-scholars like short story author Tobias Wolff and James Joyce scholar Brett Bourbon. An amazing faculty; in fact, I think they were ranked as the best PhD program in English Literature in the United States during that time. So, it’s something I just stumbled into through traditional English literature, and it developed into a serious interest. It was fun for me but also very useful, as science involves so much writing and critical reasoning.
FLG: Your next stop was your PhD in Cellular and Computational Neuroscience. How did that come about?
NS: I was very lucky that the Symbolic Systems lab in which I did my undergraduate thesis, Josh Tenenbaum’s lab, also moved from Stanford to MIT that same year. It is no exaggeration to say that, in large part, I came to MIT by following Josh, who was working at the cutting edge of reverse-engineering the algorithms that the human mind uses to learn. With Josh, I was working on probabilistic models of how humans make decisions and how humans learn new concepts. Once at MIT, I joined Sebastian Seung’s lab, a non-traditional yet very innovative neuroscience group in the Department of Brain and Cognitive Sciences. It was a perfect place for somebody like me who wanted to hedge their bets between studying cognitive science and neuroscience.
Over time, I veered more into the neuroscience side of things and got really into molecular and cellular neuroscience. Through being in his lab I became interested in developmental neuroscience, which actually carries forward to what I do in my own lab today. For my PhD thesis, I built a time-lapse microscope to image fluorescently labeled axons from rodent brains over very long time periods. What I was able to do was track the trajectory of these growing axons as they go out, make synapses, and find other neurons to connect to. And a lot of the work that I’d done in probabilistic modelling really came into play here, because we ended up using the same kind of models to understand the trajectories of these nascent axons wiring up the brain.
FLG: How did you find swapping California for the East Coast, with MIT and the Broad?
NS: I grew up in San Diego, so the East Coast took a few years to get used to. But, in the end, I really enjoyed my time in Cambridge. It’s a fantastic environment with a real density of science. The Broad Institute, where I did my postdoc, was a fantastic place to do genomics and genetics. It felt like being at the center of the genomics universe.
FLG: CRISPR is one of the sexiest buzzwords out there at the moment. How have you seen the field develop over the past six years?
NS: Well, I feel incredibly lucky to have been a postdoctoral fellow with Feng Zhang, my mentor at the Broad Institute. A lot of things are about timing. Gene editing has a long history with previous technologies like zinc finger nucleases and TALENs. CRISPR itself has a long history within microbiology, dating back to the late 1980s.
When I started working in Feng’s lab, we were using TALE proteins, which are programmable nucleases like CRISPR. In the case of TALEs, you had to clone a new protein every time you wanted to target a new sequence. Similar to zinc fingers, TALEs are not terribly easy to use because they are very repetitive protein domains that need to be arranged in a specific order. In contrast, CRISPR is both more efficient and much easier to program by virtue of its RNA-based targeting. It’s a quantum leap forward.
FLG: The development of relatively fast and accessible sequencing tools has been pretty exciting. But there are plenty of people out there who see genome editing as the thing that’s really going to be flicking on the power for genomics. We’re going from sequencing everything, to going into the genome and testing what all of this stuff actually does. What’s your take?
NS: I agree — there is a huge potential for these technologies and much of it has not been explored yet. If you make the analogy with computers, it seems obvious that writing data is equally important as reading data. Really, a lot of the power of personal computers and other devices in our life is that they can both read and write data very, very easily and that’s something that we don’t currently have in genomics. It’s very limited. Previously you had the ability to manipulate the DNA of traditional genetic model organisms like yeast, fruit flies, or mice, to some degree. Now with the advances of CRISPR, the idea of a genetic model organism is itself outdated. Now, almost anything, including human cells and human genomes, can be considered genetic model organisms.
FLG: There’s another story that you’re involved in that made the headlines in a big way a couple of months back. HGP-Write. How did that all come about?
NS: Well, my involvement in that is probably smaller than many of the other authors, but for me that came about through a meeting organized by Jef Boeke, George Church, Nancy Kelley and a few others. And it just was a really interesting, thought-provoking meeting. Significant progress has been made in synthesizing entire chromosomes for other organisms like yeast, so we’re thinking about what would be the potential applications and what technology is needed to synthesize an entire human chromosome?
You could think of this as a parallel technology to genome editing. Genome editing takes an existing chromosome, or genome, and puts in particular changes. But it’s a different capability to really be able to just create an entire chromosome, or string of DNA. They’re both technologies that are going to enable new kinds of science. As a scientist, you need to be able to dream about the future if you want to eventually get there.
FLG: What do you think the project’s lasting legacy might be? Is it more about the problems and opportunities you guys end up identifying along the way?
NS: When the human genome project was completed, a lot of people thought a lot of answers would be evident. Like what causes cancer or the mechanisms of neurodevelopmental disorders. We only started to get that about ten years later. So a lot of people have been questioning what we actually learned. But for geneticists and genome scientists, the discoveries have been continuous and are still coming. Sequencing the human genome is like landing a man on the moon: it’s a tremendous shining achievement for humanity but it is in many ways a first step towards greater explorations.
So while sequencing one genome didn’t give us all the answers, the technologies that came from sequencing that one genome have enabled a lot of science in the following years. If we compare HGP-Write to the timeline of the Read project, I think we’re at that planning stage now, like in the mid-to-late 1980s when people were making genome sequencing a priority. At that time, sequencing an entire human genome still looked like a ridiculous task. It’s a multi-generational effort where this work might benefit people 20, 30, 40, or even 50 years from now. As we look ahead into the distance and start dreaming big, it’s these first steps that are pivotal in encouraging us to reach those aspirational scientific goals.
Click here for the conclusion of the interview to read about Neville’s current research in his own lab and his interaction with Twist Bioscience regarding genome-wide CRISPR screens.
Interview reproduced with permission from Front Line Genomics.