Sunday, November 20, 2011

Granule cells in awake animals - Strange results

SfN already seems like the distant past. Two days in Boston seem to have eradicated all memory of Washington, and now, after one day in New York, even Boston begins to fade. There is, however, still so much to tell about SfN. The last afternoon poster session in particular was very interesting for cerebellar physiologists: there were two sessions at the same time dedicated to the cerebellar cortex and nuclei. Unfortunately, I was presenting my own poster on the opposite side of the hall, which resulted in me sprinting across the whole thing twice to catch a glimpse of the cerebellar research. Nice....

Mossy fibers seem to be much more active in awake animals than in anesthetized animals. This of course has two effects: granule cells receive both stronger excitatory input and stronger feedforward inhibitory input via Golgi cells. On average this makes granule cells more active, especially during movement, when they could spike at sustained rates as high as ~5 Hz! But, in my opinion, there are some serious concerns with the study as it stands. Clapping resulted in a strong excitatory response in all cells, which the authors claimed was a direct auditory response. That seems strange, since it would mean all granule cells receive auditory information, and with only ~4 mossy fiber inputs per granule cell that is hard to believe. The auditory stimulus could of course evoke a startle response, which would activate granule cells via movement.
The firing rates during movement also seem very high for granule cells. Such rates were never seen in extracellular recordings with, for example, vestibular stimulation during rotation, where the average maximum firing rate was only ~0.7 Hz. There is a scarier interpretation of the results: it would be very interesting to know for how long after the startle responses or after movement the cells could still be recorded, and at what quality. Maybe the movement is simply too much for such a little cell, and it starts to leak during movement.

A long time ago I wrote a blog post about patching in awake animals and getting your article into Science. I think this study needs a great deal of improvement before it can be published. Still, it is very exciting that some people can patch granule cells in awake animals on a daily basis. Much respect...

Wednesday, November 16, 2011

Bias of the ages

Yesterday I had an interesting exchange at the optogenetics social (which, by the way, was the best social I've attended, thank you Ed Boyden!). Someone there was upset with how neuroscience works at the moment (or rather, doesn't work), and I guess he has a point.

Jerry Simpson says this to me all the time as well, although in a different way. Yesterday I ran into him at the posters (as I do five times a day). When he said goodbye he said: "I'm going to have a look at what's being presented that was already done in the '60s!"

The current focus on optogenetics brings back memories, doesn't it? In the early 2000s the Human Genome Project was officially finished. The predictions for the results and benefits of the project were huge. Francis Collins, the director of the Human Genome Project, actually said that within ten years there would be genetic tests for many common conditions. Some more enthusiastic people even stated that cancer would be eradicated within the next ten years or so. How wrong history has proven them to be.

Now, in the optogenetics era, we run the risk of falling for the same false ideas. As I wrote in a previous post, science is not about techniques; it should always be about questions.
There is probably a lot of redundant research. People compete for the hottest, newest results in the highest-impact journals. People who arrive at the same conclusions just a month later have great trouble publishing their data, even though their study might actually be better! People also tend to repeat older studies with new techniques without any clear reason to do so. The newest study is then marked as new results, and the older material is lost to the ages. Google and PubMed also play a role in this novelty bias: which results do these search engines show first? Indeed, the newest ones.

If anyone has a good idea how to avoid redundancy in research, how to convince people to take all those older studies into account, or how to convince journals that 'new' does not mean 'good', please tell me. Together we should be able to work out a solution.

Cell-specific markers in the Vestibular Nuclei!

Yesterday I met my heroine in cerebellar electrophysiology. Sacha du Lac runs a lab at the Salk Institute that has done basically all of the identification of cell types in the vestibular nuclei. This work started back in the early 2000s and has continued since. They used the GIN, YFP-16 and GlyT2 mouse lines to label three neuronal classes in the vestibular nuclei. These mouse lines have been used before and provide labeling of (putative) GABAergic, glutamatergic and glycinergic neurons. However, from early morphological work in the 1960s by Chan-Palay it is clear that there are more than three classes of neurons, probably five or six. How to resolve this discrepancy? Electrophysiology is not the way to go, since the classes show a continuum of electrophysiological parameters. In other words, there are no clear electrophysiological markers for these cells.

The du Lac lab used a different approach: single-cell RT-PCR to construct single-cell cDNA libraries of ~100 genes for ~150 individual cells. These libraries were then used to cluster cells and genes. From this analysis six clear subtypes emerged, and each of the six can be described by the exclusive expression of one or two genes. Of course this is awesome news, since it seems possible to make cell-type-specific transgenic lines (did I hear anyone scream optogenetics again?). Unfortunately, since those mouse lines are not available yet, they had no electrophysiological or morphological data on the subclasses, let alone a wiring diagram of the nuclei.
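
To get an idea of what such an analysis can look like in practice, here is a minimal sketch of one way to cluster a cells × genes expression matrix hierarchically. Everything in it (the fake Poisson counts, Ward linkage, the choice of asking for six clusters) is my own illustrative assumption, not the actual pipeline used at the Salk.

```python
# Minimal sketch (my assumptions, not the poster's actual pipeline):
# cluster ~150 cells by their single-cell expression of ~100 genes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.poisson(lam=2.0, size=(150, 100))  # cells x genes, fake counts

# log-transform and z-score each gene so highly expressed genes don't dominate
logged = np.log1p(expression)
zscored = (logged - logged.mean(axis=0)) / (logged.std(axis=0) + 1e-9)

# hierarchical clustering of cells (Ward linkage on Euclidean distance)
tree = linkage(zscored, method="ward")
labels = fcluster(tree, t=6, criterion="maxclust")  # ask for six clusters

for k in range(1, 7):
    print(f"cluster {k}: {np.sum(labels == k)} cells")
```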

It goes to show that things are often more complicated than we thought. Or, just as complicated as people suspected decades ago.

Tuesday, November 15, 2011

There's no such thing as a free pen!

I've got a pen. I don't know which company gave it to me, but I got a pen. It's already broken and I didn't need a pen to start with, but that's probably not that important. What is important is that I got something for free. I've also tried to win an iPad a few times. I didn't win it; I got another pen.

As you've probably already guessed, I've been visiting the exhibit hall today. I wanted to check out a few companies and see some products. The danger lies in the people with the badge scanners. They will hunt you down, and in return for a pen they scan your badge so they can spam you and make money off of you. Or as someone once cleverly put it: "If you're not paying for it, you're not the customer. You're the product being sold."

Monday, November 14, 2011

Undergrads are like a supercomputer

Yesterday I saw Winfried Denk's talk. Why is Winfried Denk awesome? He practically invented the two-photon microscope, and now he has developed a technique to image large brain volumes and trace all the neurites in them. How he does it is as interesting as the results. Computer tracing did not provide the results he wanted: there were a lot of misses, partially traced neurites, wrong combinations, etc. Manual tracing performs much better but is very time-consuming, since all tracing at the ultrastructural level is done by outlining the neurites in every slice. This is slow, as Winfried showed us by playing a movie of someone outlining neurites section by section. Or as Winfried put it: "This is so slow, I can't look at it!". His new tracing method only involves clicking on the neurite you're tracing every few slices, which results in a 'skeleton' trace of the neuron. With a small army of undergrads, every neuron is traced multiple times, after which the computer evaluates the differences between tracings and selects the correct 'skeleton' by a 'democratic' process. The really cool part is where the automatic segmentation is overlaid on each skeleton, so that every neuron ends up with a volumetric model. For a small piece of retina it cost ~30,000 hours to trace ~1 m of neurite from a few hundred cells.
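
To make the 'democratic' part concrete, here is a toy sketch of how redundant skeleton tracings could be merged by majority agreement. The node format, the tolerance and the voting rule are my own assumptions; the actual algorithm from the talk may well differ.

```python
# Toy sketch of a 'democratic' consensus over redundant skeleton tracings.
# The node format, tolerance, and voting rule are my own assumptions.
import numpy as np

def consensus_skeleton(tracings, tolerance=50.0):
    """tracings: array of shape (n_tracers, n_slices, 2) with one (x, y) node
    per tracer per slice. Returns a consensus (n_slices, 2) skeleton."""
    tracings = np.asarray(tracings, dtype=float)
    median = np.median(tracings, axis=0)               # per-slice median node
    error = np.linalg.norm(tracings - median, axis=2)  # distance of each vote
    votes = error <= tolerance                         # keep tracers that agree
    consensus = np.empty_like(median)
    for s in range(tracings.shape[1]):
        agreeing = tracings[votes[:, s], s]
        # fall back to the plain median if nobody is within tolerance
        consensus[s] = agreeing.mean(axis=0) if len(agreeing) else median[s]
    return consensus

# three undergrads tracing the same neurite across 5 sections (fake numbers)
tracers = [
    [[0, 0], [10, 5], [20, 9], [30, 14], [40, 20]],
    [[1, 1], [11, 4], [19, 10], [31, 15], [39, 21]],
    [[0, 2], [ 9, 6], [500, 500], [29, 13], [41, 19]],  # one slice clicked wrong
]
print(consensus_skeleton(tracers))
```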

So, by combining a sloppy computer segmentation with the awesomeness of the paid undergrad brain, you can reconstruct sections or even a whole brain at the ultrastructural level. He is some sort of scientific Chuck Norris!

Why cheap techniques are not always good

Yesterday there was a whole alley of posters about optogenetics here at the SfN meeting. Of course this technique is very hot, and easy and cheap to use, so it was to be expected that a lot of labs would jump on the bandwagon. Unfortunately, most of the research was not very interesting. Or rather, very not interesting: "We used optogenetics to look at attention", "We made a new virus that performs slightly worse than what is available", "We introduced optogenetics into the common hedgehog, and guess what; it does what it's supposed to do!", "We used optogenetics to...". You get what I mean....

Fortunately, among all this nonsense there were a few gems. One poster finally settled convincingly the debate over how many cerebellar molecular layer interneurons provide input to a single Purkinje cell. This question is more complicated than it seems, since molecular layer interneurons are electrotonically coupled: when coupling is strong, one cell can have indirect effects via other interneurons. The lab of G.J. Augustine at Duke University used ChR2 expression in interneurons to map the spatial extent of the inputs from molecular layer interneurons to one patched Purkinje cell. First they mapped the spatial extent of a single interneuron by patching it and scanning the slice looking for direct activation; the spatial extent of one neuron turned out to be ~5,500 µm². Then, by mapping the input to one Purkinje cell, they estimated that five to six interneurons provide inhibitory input to a single Purkinje cell. After blocking gap junctions this number dropped to ~2 interneurons. Interestingly, the effect completely disappeared when coronal slices were used, which confirms that interneurons are coupled in the sagittal plane and can influence distant Purkinje cells in the same zone.
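
For intuition about where a 'five to six' estimate could come from, here is a back-of-the-envelope sketch. The ~5,500 µm² footprint is from the poster, but the total input-map area per Purkinje cell is my own assumed number, used only to illustrate the division; the poster's actual calculation may have been done differently.

```python
# Back-of-the-envelope sketch: if each interneuron covers ~5,500 um^2 and the
# whole inhibitory input map onto one Purkinje cell covered ~30,000 um^2
# (an ASSUMED number, not reported here), the ratio gives the cell count.
single_cell_footprint_um2 = 5_500.0
assumed_input_map_um2 = 30_000.0
print(assumed_input_map_um2 / single_cell_footprint_um2)  # ~5.5 interneurons
```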

Another poster was about the difference between somatostatin-positive (SOM) and parvalbumin-positive (PV) interneurons in the visual cortex. M. Sur's lab at MIT used cell-type-specific, virus-driven expression of ChR2 to probe the functional impact of interneurons in vivo. Tuning curves of pyramidal cells were determined by presenting moving gratings and measuring the pyramidal cells' responses with calcium imaging. When PV neurons were activated, the pyramidal cells' tuning curves were scaled down; in contrast, activating SOM interneurons produced a subtractive operation. Clearly the two neuron classes have different functional roles. Since PV interneurons project mainly to the soma and SOM interneurons mainly to the dendrites, it would be interesting to see whether this effect holds in other brain areas where inhibitory input is also differentially targeted to somata and dendrites.
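
To visualize the difference between the two operations, here is a small sketch of a divisive (scaling) versus a subtractive change to an orientation tuning curve. The gain factor, offset, peak rate and tuning width are made-up illustration numbers, not values from the poster.

```python
# Illustration of the two operations on an orientation tuning curve
# (gain factor, offset, and tuning width are made-up numbers, not the poster's).
import numpy as np

theta = np.linspace(-90, 90, 181)                    # grating orientation (deg)
baseline = 10.0 * np.exp(-(theta / 25.0) ** 2)       # control tuning curve (Hz)

pv_activated = 0.5 * baseline                        # divisive: scales the curve,
                                                     # tuning width is preserved
som_activated = np.clip(baseline - 3.0, 0.0, None)   # subtractive: shifts it down,
                                                     # sharpening the tuning

print(baseline.max(), pv_activated.max(), som_activated.max())
```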

Clearly, optogenetics is a powerful tool when used correctly. It's always been the same: techniques should never lead research; questions should.

Sunday, November 13, 2011

The Cerebellar-Cerebro connection

It's been too long. Still, here is another attempt to revive the blog. I am at SfN at the moment, and I have to say that the environment is very inspiring: inspiring for my research and inspiring for blogging.

There are a lot of gaps in our current understanding of the nervous system. One particular issue concerning the cerebellum is how it fits into the rest of the nervous system. Just ask any cerebellar scientist what the cerebellum does and how it does it, and you'll get very diverse answers. Nobody knows exactly how the cerebellum codes signals, or how the output code from the cerebellar nuclei is composed. What is the influence of the cerebellar nuclei on the thalamus, on the red nucleus, on the motor nuclei in the brainstem? Are there projections from the cerebellar nuclei directly to the cerebral cortex, or to the hippocampus? All these questions were only lightly touched upon in the sixties and seventies of the twentieth century, using tracing techniques and some basic electrophysiology. Only recently have a few papers come out of David McCormick's lab and Dieter Jaeger's lab on how cerebellar neurons relate to EEG signals from the cerebral cortex. Many scientists I have spoken to are eager to start discovering how the cerebellum ties in with the rest of the nervous system.

Today I saw three posters from Detlef Heck's lab that shed some light on how the dentate nucleus influences the thalamic and reticulotegmental nucleus (RTN) pathways to the prefrontal cortex. Apparently, when Purkinje cell input to the dentate nucleus stops (for example due to Purkinje cell loss in Lurcher mice), the balance between the RTN and thalamic pathways shifts toward more input from the thalamus to the prefrontal cortex. This shift is probably only visible on longer timescales, so acute pharmacological interventions won't show it. The posters were written from the standpoint of an autism model, which I find a bit of a long shot; I had never considered Lurcher mice an autism model. However, the same effects were seen in Fragile X mice, which makes the claim somewhat more believable. Still, all the mutants used were global mutants, so there is no cell specificity, which makes the results quite hard to interpret.

It seems that the cerebellar cortex has a direct effect on the cerebral cortex, but how it works and what this influence is remain completely unknown. We know the pathways, and we know a bit about coding in the cerebellum and the frontal neocortex. Let's hope we can find answers to these questions in the coming years. The first steps have been taken; now we need to find the details.