Dr. Alon Korngreen
At the most abstract level, a neuron can be viewed as a simple calculating device. This device constantly receives a sequence of numbers. It takes the numbers received within a certain time frame and sums them. If the sum exceeds a certain threshold, the output of the neuron is one; if it does not, the output is zero. Viewed from the outside, the output of the neuron resembles a sequence of zeros and ones. This sequence is then fed to the next neuron (or to several neurons) in the chain, together with sequences arriving from other neurons, and the summation process begins again. In the human brain there are roughly 10^12 neurons, and each neuron can receive sequences of ones and zeros from several thousand other neurons. Thus, the prevailing view of neural computation is that each neuron is a simple computing device, while all the rest (thought, feeling, memory, etc.) is the outcome of the interaction between vast numbers of neurons. This view lies at the foundation of many investigations of the brain's activities. It is also the rationale behind the highly successful research on artificial neural networks that revolutionized the field of computer science.
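The summation-and-threshold device described above can be sketched in a few lines of code. This is only a minimal illustration of the abstraction (the function names and the example spike trains are invented for the sketch), not a model of any real neuron:

```python
def threshold_neuron(inputs, threshold):
    """Sum the numbers arriving within one time window; output 1 if the
    sum exceeds the threshold, 0 otherwise."""
    return 1 if sum(inputs) > threshold else 0

def run_chain(spike_trains, threshold):
    """Feed time-aligned binary sequences from several upstream neurons
    into one downstream neuron, producing its output sequence of 0s and 1s."""
    return [threshold_neuron(step, threshold) for step in zip(*spike_trains)]

# Three upstream neurons firing over five time steps.
trains = [
    [1, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
]
print(run_chain(trains, threshold=1))  # -> [1, 0, 1, 1, 0]
```

The downstream neuron fires only at the time steps where more than one input arrives, and its output train could itself be fed onward, exactly as in the chain described above.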
There are chinks in the armor of this theory. A growing body of experimental evidence, especially from the last decade, indicates that at least some neurons in the brain perform complex computations, far more complex than simple summation. Now imagine that each unitary computational device in the brain can compute complex functions. Instead of 10^12 simple summation devices, we now have the same number of not-so-simple calculators. In the blink of an eye, the complexity of the brain increases severalfold. Only a thorough understanding of the brain's building blocks will allow us to build valid models of higher brain functions.
My research touches on one of the basic yet still unresolved questions in neuroscience: how do neurons process information? What is the neuronal code at the cellular level? During the last four years, we have been developing new techniques that will allow us to investigate these questions. One technique was developed in collaboration with my former post-doc supervisor, Prof. Bert Sakmann, and members of his lab (Schaefer et al. 2003). This method allows the extraction of the true kinetics and conductance densities of voltage-gated K+ channels from dendritic whole-cell voltage-clamp recordings. We applied this novel technique to the apical dendrite of L5 pyramidal neurons (Schaefer et al. 2007) and were able to show that the density of voltage-gated K+ conductances decreases along the apical dendrite. We are now also extending this technique to voltage-gated Ca2+ channels.
Another technique under development in the lab is the use of genetic algorithms for constraining compartmental models of neurons with non-homogeneous distributions of ion channels (Keren et al. 2005) and models of voltage-gated channels (Gurkiewicz and Korngreen 2007). In parallel, we have concluded an investigation of a subtype of cortical interneurons that displays an interesting coding of synaptic input by the width of the action potential (Korngreen et al. 2005). Another avenue of research in the lab focuses on the activity modes of cortical neurons in excised slice preparations (Bar-Yehuda and Korngreen 2007).
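The genetic-algorithm approach to constraining models can be illustrated with a deliberately simple sketch: a population of candidate parameter values is evaluated against target data, the best candidates survive, and mutated copies replace the rest. The toy "model" below (a single hypothetical conductance parameter g) stands in for a real compartmental model and is invented purely for illustration; it is not the method or model of the cited papers:

```python
import random

random.seed(0)

def model_response(g):
    """Toy stand-in for a compartmental model: the response depends on a
    single 'conductance' parameter g (hypothetical, for illustration)."""
    return [g * x - 0.1 * g ** 2 for x in (1.0, 2.0, 3.0)]

TARGET = model_response(4.0)  # pretend these values are recorded data

def fitness(g):
    """Negative summed squared error between model output and target."""
    return -sum((m - t) ** 2 for m, t in zip(model_response(g), TARGET))

def evolve(pop_size=40, generations=60):
    """Minimal genetic algorithm: rank by fitness, keep the top half
    (selection), and fill the population with mutated copies (mutation)."""
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [g + random.gauss(0, 0.2) for g in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges close to the target value g = 4.0
```

Real applications replace the toy response with a full compartmental simulation and evolve many parameters at once, but the selection-and-mutation loop is the same idea.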