This week in Nature, the structure of a very important neurotransmitter receptor was revealed. The receptor, GABA-A, allows the functioning nervous system to avoid the “brain super storms” that constitute epileptic seizures. When the neurotransmitter GABA binds to GABA-A at synapses, it acts to inhibit neural activity. That inhibition is critical to brain function because it lets the brain compute in a metastable state, rather like a marble balanced on the edge of a saddle. That is one of the key design features of our brains that might someday be reverse-engineered for more powerful, more energy-efficient high-performance computing.
It’s not surprising, then, that the GABA-A receptor is also the target of some key drugs, such as Valium and alcohol. Both act to inhibit brain activity. When either is taken together with opioids, the effects are synergistic and can be deadly.
The paper by Zhu et al. used a technique called cryo-electron microscopy to reveal the detailed structure of GABA-A “frozen in the moment” of binding to a Valium analog. This is very important because it may offer design hints for a future Valium-like drug that relieves anxiety without sedation.
A very interesting (non-firewalled) paper by Chamberland et al. in PNAS reveals a new kind of transfer logic for brain cells. The neural circuit is the first synapse in the so-called hippocampal tri-synaptic loop, an area of the brain that I’ve been very interested in from the standpoint of my own research. The usual suspects for informational transfer between neurons at the synapse are either frequency or timing encoding of action potentials. Here the authors demonstrate a new type of encoding, in which the post-synaptic neuron, a CA3 pyramidal cell, actually counts the number of incoming action potentials to determine (decide) whether it, in turn, will fire an action potential.
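To make the idea concrete, here is a minimal sketch of what such count-based gating might look like: the cell fires only when enough presynaptic spikes arrive within an integration window, regardless of their exact rate or timing. The threshold and window values below are illustrative assumptions of mine, not numbers from the paper.

```python
COUNT_THRESHOLD = 3    # hypothetical: spikes needed to trigger firing
WINDOW_MS = 50.0       # hypothetical integration window, in milliseconds

def fires(spike_times_ms, threshold=COUNT_THRESHOLD, window=WINDOW_MS):
    """Return True if any sliding window of `window` ms contains
    at least `threshold` presynaptic spikes."""
    times = sorted(spike_times_ms)
    # Slide over each run of `threshold` consecutive spikes and
    # check whether they all fit inside one window.
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            return True
    return False

# Two spikes, however closely timed, are not enough...
print(fires([0.0, 1.0]))           # False
# ...but three inside the window trigger a spike.
print(fires([0.0, 20.0, 40.0]))    # True
# Three spikes spread over too long a stretch do not.
print(fires([0.0, 100.0, 200.0]))  # False
```

The point of the toy is that the decision depends on a discrete count crossing a threshold, not on the average firing rate or on precise spike timing.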
Why is this important? How information is gated in brain circuits is crucial to how those circuits compute on information (just as it is for non-biological digital computers). If we want to understand (from a reverse-engineering standpoint) how human brains do the cool things they do, then we have to be on the lookout for phenomena like the one described here, because therein lies a clue to brain computation.
I’ll just add that this neural circuit, the hippocampus, turns out to be crucial to learning and memory, particularly the kind called episodic, or what I describe to my students as “the movie of your life”.
Today’s FT has an interesting article (behind paywall) about AI being deployed into the video game space after its success at Chess and Go. What interests me here is that such video games are more open-ended and ‘noisy’. They typically don’t have compact rule sets, and they strike me as capturing more of the flavor that smart machines are going to encounter in the real world (say, when they are autonomously driving on the Washington DC beltway). Of course, the typical algorithm right now involves reinforcement learning, with the AI playing against itself. That’s perfectly sensible in a gaming environment, but not really applicable to an autonomous robot roaming out in the wild.
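The self-play idea itself is simple enough to sketch. Below is a minimal, hypothetical example (not from the article): tabular Q-learning on a toy take-1-or-2 Nim game, where both players share one value table and every state is scored from the perspective of the player about to move (a “negamax” convention). The game, pile size, and hyperparameters are all my own illustrative choices.

```python
import random

ACTIONS = (1, 2)          # each turn, take 1 or 2 stones; last stone wins
START = 7                 # toy starting pile size
ALPHA, GAMMA, EPS = 0.5, 1.0, 0.2

# Q[pile][action]: value of taking `action` stones, always from the
# perspective of the player about to move (shared by both players).
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, START + 1)}

def best(s):
    """Greedy action in state s."""
    return max(Q[s], key=Q[s].get)

random.seed(0)
for episode in range(5000):
    s = START
    while s > 0:
        # Epsilon-greedy exploration during self-play.
        a = random.choice(list(Q[s])) if random.random() < EPS else best(s)
        s2 = s - a
        if s2 == 0:
            target = 1.0                       # took the last stone: win
        else:
            # The opponent moves next, so their best value counts
            # against us (the negamax sign flip).
            target = -GAMMA * Q[s2][best(s2)]
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
```

After a few thousand self-play episodes, the table recovers the classic strategy for this game: always leave your opponent a pile that is a multiple of three (from 7 take 1, from 5 take 2, and so on). Both "players" improve with no human data at all, which is exactly why the approach thrives in games and struggles outside them.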
There’s a different approach out there, and it’s based on reverse-engineering the brain processes that subserve human child language acquisition. The key idea is that human children acquire language with great ease and not a lot of reinforcement. We know quite a bit about the neurobiology of mnemonic function, both at the molecular level and at the neuro-algorithmic level. That this existence proof manifests so saliently suggests to me that this is where the next paradigm is going to be revealed.
This is a huge result, making NPR and published in Nature, here. Since the discovery of adult neurogenesis in rodent models, it has been assumed by many (but not all) that we humans did the same thing. The assumption was that we grow new neurons every day throughout our lives.
Aside: actually that assumption was contrary to what many of us were taught. Before the discovery of rodent adult neurogenesis, it was thought humans stopped producing new nerve cells with the onset of adulthood.
The latest findings indicate that in humans, the production of new neurons slows down by age 7 and is gone by age 13. That’s shocking. What was the selection pressure for losing a phenotype that rats and mice retain?
Popular neuroscience myths are now considered a risk to K-12 education in the UK, story here. Money quote:
“Teachers have a very enthusiastic attitude towards the brain, but there’s no neuroscience in teacher training at the moment and that makes teachers a little bit vulnerable to the very skilled approaches of entrepreneurs in selling products that are supposedly brain-based but actually are not very scientific in their basis and have not been properly evaluated in the classroom,” warned Dr Paul Howard-Jones, a leading expert on the role of neuroscience in educational practice and policy at the University of Bristol.
John Markoff’s story in this morning’s NYT is here. A terrific plan, and whatever happens with sequestration, it sets a policy agenda for the next ten years. Neuroscience is at the center of that agenda, and it becomes “big science” in the same way that polar science, particle physics, and astronomy are “big science”.
In The New Yorker, the king of the moderate political pundits, the New York Times’s David Brooks, explains much of what’s meaningful in human life using popular neuroscience here. It’s a decent piece, but I’m more comfortable with David sticking to the News Hour.