Dr. Na Ji is a group leader at the Howard Hughes Medical Institute’s Janelia Research Campus, a pioneering research center focused on neuroscience and imaging. A neurobiologist with a background in chemical physics, she develops adaptive optical methods to improve in vivo imaging and applies these methods to the structural and functional imaging of neural circuits. She will move her lab to the University of California, Berkeley in fall 2017.

Your research at the Ji Lab is currently focused on understanding the input-output relationships in neural circuits. Can you talk more about what this means?

Understanding input-output relationships in neural circuits is the biological focus of our research. To understand the brain, we want to know everything about its circuits, from individual neurons all the way to complete circuits. Neurons are individual computational units. On average a neuron receives 10,000 inputs, and based on the inputs, decides whether or not to fire action potentials. Our goal is to understand what kind of inputs it receives, what kind of computation the neuron does with these inputs, and how it generates an output signal.

This input-output question can also be asked of a population of neurons forming a circuit. Particular regions of the brain, each a neural circuit made up of many neurons, are often devoted to particular types of information processing. Within these circuits, what kind of information is received, how does the circuit transform the information, and what are its outputs? We want to find answers to these questions as well.

The success of this research depends largely on the ability to correct optical aberrations and improve resolution for in vivo imaging. Can you talk about the imaging methods you have developed to address these challenges?

The structures that neurons use to communicate with each other are called synapses, and synapses are typically 1 micron or less in size. If you want to observe any kind of physiological event happening on such a small length scale, you need very high spatial resolution, which requires microscopy. The most popular optical microscopy method used in neurobiology is two-photon fluorescence microscopy. Instead of using visible light to excite the molecules so that they generate fluorescence, you use longer-wavelength light, usually near infrared. When two photons are absorbed by a fluorescent molecule at the same time, their combined energy is sufficient to promote the molecule to its excited electronic state, which then returns to the ground state and emits a fluorescence photon.
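As a back-of-the-envelope check of that energy argument, two near-infrared photons at twice the wavelength carry the same combined energy as one visible photon, since photon energy is E = hc/λ (the 920 nm / 460 nm wavelengths below are illustrative choices, not figures from the interview):

```python
# Photon energy: E = h * c / wavelength.
# Two NIR photons at 920 nm (an illustrative two-photon excitation
# wavelength, not quoted in the interview) together carry the same
# energy as a single visible photon at 460 nm.
H = 6.62607015e-34  # Planck constant, J*s
C = 299792458.0     # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return H * C / wavelength_m

two_nir = 2 * photon_energy(920e-9)  # two near-infrared photons
one_vis = photon_energy(460e-9)      # one visible (blue) photon
assert abs(two_nir - one_vis) / one_vis < 1e-12
```

This is why a molecule that normally fluoresces under blue light can instead be excited by simultaneously absorbing two near-infrared photons.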

Because the probability for two-photon excitation is small, you need to use very bright light. Practically, that means the two-photon fluorescence generation happens at the laser focus formed through a microscope objective. When you take a picture using a camera, you are capturing all sample positions at the same time. But with two-photon, people use the point-scanning method. We scan the laser focus through the sample in 3D and record the brightness at each position, and that generates the image. This approach is particularly useful for the many types of brain tissue that are opaque. Since you cannot see through them, you can’t use a camera to take pictures of things behind a scattering material. For example, if you want to take a picture of the moon but there’s a heavy cloud in front of it, you won’t be able to take a sharp image. But with point-scanning two-photon microscopy, we don’t actually take pictures. Instead, we record the brightness coming from behind this opaque tissue. This allows people to image much deeper in the brain.
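The point-scanning idea can be sketched in a few lines: treat the sample as a hidden brightness map, visit each position with the focus, and record one detector value per position (this is a toy model, not the lab's acquisition code):

```python
import numpy as np

# Minimal sketch of point-scanning image formation. The "sample" is
# a hidden 2D brightness map we cannot photograph directly; we visit
# each position with the laser focus and record one value at a time.
rng = np.random.default_rng(0)
sample = rng.random((16, 16))  # unknown fluorophore distribution

def record_brightness(sample, y, x):
    # Stand-in for the photodetector reading at one focus position.
    return sample[y, x]

image = np.zeros_like(sample)
for y in range(sample.shape[0]):      # slow scan axis
    for x in range(sample.shape[1]):  # fast scan axis
        image[y, x] = record_brightness(sample, y, x)

# The image is rebuilt point by point, never imaged all at once.
assert np.allclose(image, sample)
```

Because each pixel comes from a single detector reading rather than an image relayed through the tissue, scattering between the focus and the detector blurs nothing: any collected photon is assigned to the current scan position.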

Unfortunately, conventional two-photon microscopes do not achieve their optimal performance when imaging biological tissue such as the brain. When the two-photon excitation laser propagates in the brain, it accumulates optical aberration, which distorts the focus. I’ve been developing adaptive optical technologies to pre-shape the excitation light so that the brain-induced aberration gets cancelled out and a sharp, bright focus can be obtained inside the brain.

Speaking of the moon, you and fellow scientist Eric Betzig developed a unique imaging technique inspired by astronomy. How did that idea come about? Your article in Nature Communications offers interesting insight.

In astronomy, when you use a telescope on Earth to look at a star, the starlight has to go through the atmosphere. In the atmosphere we have weather such as wind or rain, generated by air masses with different temperatures and humidity, which changes the atmosphere’s optical properties and how it interacts with the light. As a result, the wavefront of the starlight gets distorted and the image of the star taken with a ground-based telescope gets aberrated. With the naked eye, you can see the stars twinkle, which is caused by the temporally changing distortion of the star image on our retina by the atmosphere.

Astronomers discovered that they could cancel out that aberration using a mirror whose surface shape they could control. They distort the mirror in a way opposite to how the atmosphere distorts the wavefront, so that a perfect image of the star can be recovered. That is what is called adaptive optics. In the Nature Communications paper, we used methods very similar to those employed in astronomy to measure and correct brain-induced aberrations, and obtained sharp images of neurons at more than 700 microns below the surface of the brain.
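The core of that correction is simple to state: measure the wavefront's phase error, then command the deformable mirror to the opposite shape so the two cancel. A toy illustration of the idea (not the measurement method from the paper):

```python
import numpy as np

# Toy illustration of adaptive optics: a measured phase aberration
# is cancelled by commanding the mirror to its negative.
rng = np.random.default_rng(1)
# Phase error across a 32x32 pupil grid, in radians (illustrative).
aberration = rng.normal(0.0, 0.5, size=(32, 32))

# Shape the deformable mirror opposite to the measured aberration.
mirror_correction = -aberration

# Residual wavefront error after reflecting off the mirror.
residual = aberration + mirror_correction
assert np.allclose(residual, 0.0)

# Focus quality via a Strehl-like metric |mean(exp(i*phase))|^2:
# 1.0 means a diffraction-limited focus.
strehl_before = abs(np.exp(1j * aberration).mean()) ** 2
strehl_after = abs(np.exp(1j * residual).mean()) ** 2
assert strehl_after > strehl_before
```

In practice the hard part is measuring the aberration through scattering tissue in the first place; once it is known, applying the conjugate shape is the easy step.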

How have Semrock products helped you in your work?

We use dichroic filters to separate the fluorescence signal from the excitation light in our two-photon fluorescence microscopes. Because the fluorescence signal is very weak and the excitation light is very bright, we need filters that are very high throughput and very selective, meaning that they should pass as many fluorescence photons as possible and reject the excitation light very effectively. Semrock makes filters with very high performance, and improvements in filter technology have generally been really helpful for microscopy.

Overall, how has the field of neuroscience advanced since you got started?

I started to work on the brain about 10 years ago, and in the past 10 years I think neurobiology has made great strides in terms of its toolboxes. The wide adoption of optical imaging methods like two-photon fluorescence microscopy has helped neurobiology labs worldwide study neural circuits at unprecedented spatial resolution. Genetically encoded activity indicators have enabled these studies to be carried out in neurons of known cell types. The many molecular tools of optogenetics also make it possible to optically activate or silence the activity of neurons. I think imaging methods and optogenetic methods combined are really pushing neurobiology forward.

What do you consider your biggest achievement to date?

I’ve been working on adaptive optics for the past 10 years. The methods that we have developed allow us to resolve single synapses at very large depths in the brain. But ultimately, if you want to make an impact in the field, if you want people to actually start to use your technology, you need to show them that by using adaptive optics, you can discover new biology. A lot of techniques have been invented, but oftentimes they stay in the development stage. Biologists are busy. To stop what they have been doing, give up what they are familiar with, and start using a new method requires a leap of faith, which is much more easily achieved if we can demonstrate the utility of the method on real-life neurobiology questions. We are committed to making that happen. In my lab, 50% of our effort goes to answering neurobiology questions. What I am most proud of so far is a recent neurobiology experiment where we showed that adaptive optics allowed us to discover new biology.

What is the next neuroscience challenge you hope to tackle?

There are many challenges. To address the problems caused by tissue scattering, we’re working on microendoscopy, which uses very small optical probes that can be embedded in the brain of a mouse to circumvent the scattering problem. We’ve been working on ways to improve the resolution of those microendoscopy methods with adaptive optics.

We’ve also started doing three-photon fluorescence microscopy, where the molecules have to absorb three photons at a time. This method was pioneered by Chris Xu at Cornell University, based on the fundamental physics principle that the longer the wavelength, the less scattering there will be. We’re combining adaptive optics with three-photon so that we can image very deep, deeper than what two-photon allows, while maintaining single-synapse resolution.

We also have been working on improving the imaging speed of two-photon fluorescence microscopy, in addition to improving resolution and depth. We had a paper published this year in Nature Neuroscience, where we improved the imaging speed by wavefront shaping. We are now helping other groups to adopt this method.