Episode 138      30 min 42 sec
Grey matter, virtually: Computational neurobiology's insights into the brain

Professor Terry Sejnowski discusses recent developments at the nexus of brain science and computer modeling, enabling new understandings of psychiatric conditions like schizophrenia. With Science host Dr Shane Huntington.

"There's a dynamic range for every process in the brain and if it gets too high or too low, you could disrupt the function of that channel - the information channel." -- Professor Terrence Sejnowski




           



Professor Terrence Sejnowski
Professor Terrence Sejnowski

Professor Terrence Sejnowski is a pioneer in computational neuroscience and his goal is to understand the principles that link brain to behavior.  His laboratory uses both experimental and modeling techniques to study the biophysical properties of synapses and neurons and the population dynamics of large networks of neurons. New computational models and new analytical tools have been developed to understand how the brain represents the world and how new representations are formed through learning algorithms for changing the synaptic strengths of connections between neurons.  He has published over 300 scientific papers and 12 books, including The Computational Brain, with Patricia Churchland.

He received his PhD in physics from Princeton University and was a postdoctoral fellow at Harvard Medical School.  He was on the faculty at the Johns Hopkins University and he now holds the Francis Crick Chair at The Salk Institute for Biological Studies and is also a Professor of Biology at the University of California, San Diego, where he is co-director of the  Institute for Neural Computation and co-director of the NSF Temporal Dynamics of Learning Center.  He is the President of the Neural Information Processing Systems (NIPS) Foundation, which organizes an annual conference attended by over 1000 researchers in machine learning and neural computation and is the founding editor-in-chief of Neural Computation published by the MIT Press.

An investigator with the Howard Hughes Medical Institute, he is also a fellow of the American Association for the Advancement of Science and the Institute of Electrical and Electronics Engineers. He has received many honors, including the Wright Prize for interdisciplinary research from Harvey Mudd College, the Neural Network Pioneer Award from the Institute of Electrical and Electronics Engineers and the Hebb Prize from the International Neural Network Society.  He was elected to the Institute of Medicine in 2008 and to the National Academy of Sciences and the National Academy of Engineering in 2010.

Credits

Host: Dr Shane Huntington
Producers: Kelvin Param, Eric van Bemmel
Audio Engineer: Gavin Nebauer
Episode Research: Dr Dyani Lewis
Voiceover: Dr Nerissa Hannink
Series Creators: Eric van Bemmel and Kelvin Param

VOICEOVER
Welcome to Up Close, the research, opinion and analysis podcast from the University of Melbourne, Australia.

SHANE HUNTINGTON
I’m Shane Huntington.   Thanks for joining us.  The human brain is the most complex organic computer ever seen in nature.  Despite extraordinary advances in medicine and biochemistry over the last century, there remain many conditions of the brain for which we have minimal understanding.
Additionally, concepts we take for granted on a daily basis, such as consciousness, dreams, imaginative ability and memory, remain elusive.
To explore the brain's workings, computational neurobiology combines computer modelling with experimental biology to study its complexity.
To tell us more about computational neurobiology we are joined by Professor Terry Sejnowski, head of the Computational Neurobiology Laboratory and Francis Crick Professor at the Salk Institute for Biological Studies, La Jolla, San Diego, California.  Professor Sejnowski is visiting the University of Melbourne as a guest of the ICT for Life Sciences Forum 2011.
Terry also delivered the 2011 Graeme Clark Oration while in Melbourne.
Welcome to Up Close Terry.

TERRY SEJNOWSKI
It's a pleasure to be here.

SHANE HUNTINGTON
Let's start with just a bit of a discussion on how computer models actually help us understand the human brain.

TERRY SEJNOWSKI
Well neuroscientists who study the brain are very good at taking it apart into pieces.  We have almost a complete parts list of the different molecules that neurons use to communicate with each other: for example at synapses.  But in order to understand the function of a complex system like the neural networks that interact with each other to produce perception and our understanding of the world, we need to take those parts and put them back together again, and this synthetic enterprise is best done inside a computer which will allow us to explore all those interactions in a way that will give us some deeper understanding.
And we do that in two different ways.  We do that first by simulating the actual processing going on and comparing it with experiments, so we can make predictions for experiments.  But interestingly, what emerges from that - and this is something that's been very successful in physics - are general principles for the function of a neural circuit, and those general principles can be instantiated outside of the brain and help us make devices that the public can use.

SHANE HUNTINGTON
When you consider the processes that are involved in the brain, what sort of level are we looking at with this computer modelling?  Is it the brain as a whole or the individual molecular processes - where exactly are you focusing the attention?

TERRY SEJNOWSKI
When you look inside the brain, you find structure at every spatial scale, ranging from the molecular all the way up to the entire central nervous system - that covers eight orders of magnitude of spatial scale.  Each of those scales represents a different architecture.
If you look at, for example, the interactions between neurons in a small piece of the cortex - the cerebral cortex, on the outside of the brain, which provides us with a very large memory store and the ability to think and to reason - those neurons are interacting with each other in a very, very tight network with recurrent connections, which we know are very important for storing information.
However, if you look at two distant parts of the cortex, the connections between them are very sparse.
So, on a global scale, the connections are very expensive and therefore only a few neurons can be connected over a long distance.  Then as you get closer - less than a millimetre - the connection strengths start increasing and it becomes a very tight interacting network.  That has a very interesting structure, because it means that you have local computation and long-distance communication, and ultimately, in order for the whole brain to work together in an integrated way, there has to be some way for different parts of the brain to synchronise with each other.
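
To make that wiring principle concrete, here is a minimal sketch in Python.  Everything in it - the number of neurons, the distances, the exponential decay rule and its length constant - is an illustrative assumption, not a figure from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only: 500 neurons placed along a 10 mm strip of cortex.
n = 500
positions = rng.uniform(0.0, 10.0, n)          # mm

# Assumed wiring rule: connection probability decays exponentially with
# distance, so nearby pairs are densely coupled while long-range links
# (which are metabolically expensive) stay rare.
def connection_prob(d_mm, p_max=0.8, length_scale=0.3):
    return p_max * np.exp(-d_mm / length_scale)

d = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
connected = rng.random((n, n)) < connection_prob(d)
np.fill_diagonal(connected, False)

local = connected & (d < 1.0)
distant = connected & (d >= 1.0)
print(f"local connections (<1 mm): {local.sum()}")
print(f"long-range connections:    {distant.sum()}")
```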

SHANE HUNTINGTON
Presumably these different sorts of scales and types of functions that you have in the brain are modelled in different ways.  Is this a significant challenge for the computational modelling aspect of the project where one model may work for one scale but not for another?

TERRY SEJNOWSKI
Exactly - it's a multi-scale problem and you need different types of models at different levels.  So for example, at the level of molecules, we are trying to understand the different parts of, say, a synapse - the place where two neurons interact with each other.  We're trying to understand the conditions under which one neuron will release a neurotransmitter, which then diffuses over to the postsynaptic side - the receiving side - and interacts there with the receptors.
Now that process is governed by the equations responsible for the diffusion of molecules.  This goes back to Brownian motion and Einstein's work establishing the molecular nature of matter, and it's interesting that those insights are now driving our understanding of brain communication - diffusion is an important part of chemical signalling.
But we can simulate that using Monte Carlo techniques from physics, and that way we can match the communication that neuroscientists observe when recording from those two neurons with the chemical interactions that are occurring between the molecules.

SHANE HUNTINGTON
Monte Carlo techniques are used for modelling many-body problems?

TERRY SEJNOWSKI
Oh yes - this was a technique developed during World War II by Stan Ulam and others who were trying to calculate the diffusion of neutrons, which was important for calculating the critical mass of uranium-235 needed to create a nuclear fission chain reaction.
They realised that sometimes the equations cannot be solved analytically, but what you can do is track every single neutron in a distributional way.  You don't actually have to track it all the way as it's wiggling around - you jump from one place to the next by the distance it would typically cover in that time, and it gives you the same average result.  It's an approximation, but a very good one.  Interestingly, it's very important in the brain because relatively few molecules are used for signalling - typically on the order of a few thousand neurotransmitter molecules.
The receptors on the postsynaptic side, for example, number only 10 to 100.  So we're not talking here about macroscopic numbers.  We're talking about relatively small numbers of molecules, and that illustrates that nature has reached the point where it's computing with small numbers of molecules.
It's very, very energy efficient, and it's extremely powerful in terms of the number of computations that can be done - far exceeding the combined computational power of all the computers on the planet.
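
Monte Carlo simulators such as MCell, co-developed in Sejnowski's lab, track transmitter molecules through realistic 3D synapse geometry.  The sketch below is only a cartoon of the idea - a one-dimensional random walk of a few thousand molecules across a synaptic cleft to an absorbing receptor surface, with every parameter value assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers: a few thousand transmitter molecules released into
# a ~20 nm synaptic cleft; all parameter values here are assumptions.
n_molecules = 3000
cleft_width = 20e-9          # m, pre- to postsynaptic distance
D = 3e-10                    # m^2/s, assumed free diffusion coefficient
dt = 1e-9                    # s, time step
sigma = np.sqrt(2 * D * dt)  # per-step displacement of the random walk

# Everyone starts at the presynaptic membrane (z = 0); the postsynaptic
# membrane is an absorbing plane at z = cleft_width.
z = np.zeros(n_molecules)
alive = np.ones(n_molecules, dtype=bool)
bound = 0

for step in range(20000):
    # The Monte Carlo jump: instead of tracking every wiggle, take one
    # Gaussian step with the same mean-square displacement.
    z[alive] += sigma * rng.standard_normal(alive.sum())
    z = np.maximum(z, 0.0)                  # reflect at the presynaptic side
    hits = alive & (z >= cleft_width)       # reached the receptor side
    bound += hits.sum()
    alive &= ~hits
    if not alive.any():
        break

print(f"molecules reaching the postsynaptic membrane: {bound}")
print(f"elapsed simulated time: {(step + 1) * dt * 1e6:.2f} microseconds")
```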

SHANE HUNTINGTON
Terry, in previous episodes of Up Close, we've often focused on the biological elements of the brain and neural conditions.  In episode 105 we spoke about multiple sclerosis, and in episode 118 we spoke about myelination.  How does the computational work that you're doing interact with that of the biologists?

TERRY SEJNOWSKI
Well, it's interesting that you bring up multiple sclerosis and myelination, because I was recently elected to the National Academy of Sciences, and the Proceedings of the National Academy of Sciences allows new members to contribute an inaugural article.  The one that my lab contributed was a computer model of demyelination.
Multiple sclerosis is an extremely debilitating disease - it's progressive and it involves the unravelling of the myelin sheath that neurons use to help the action potential propagate down the axon.  If too much myelin is lost, then conduction can be blocked; in a sensory area that will produce an inability to feel touch, and in the optic nerve it will produce blindness.
But paradoxically, in some cases it also produces acute pain - that is to say, activity where there shouldn't be activity, with action potentials being generated spontaneously.  In developing models of axonal transmission, we went back to Hodgkin and Huxley, the two physiologists who studied the squid axon, and what we did was add myelin to their model.
What myelin does is separate the regions where the ion channels are located, so the signal jumps from one node to the next.  What we showed was that if you demyelinate the axon, you can reproduce all of the phenomenological observations, and by varying the parameters in the model we discovered one very important parameter: the ratio of the sodium current - responsible for the very fast influx of sodium at the peak of the action potential - to another ion current that had been ignored.
It had been ignored because it's in the background of all neurons - it isn't voltage-sensitive.  It's called the potassium leak conductance, and it sets the input resistance.
So it turns out that the ratio of the sodium conductance to the potassium leak conductance is kept within a narrow range in normal axons.  If it's too high - if there's too much excitability - you get spontaneous activity; if it's too low, you get conduction block.  So what we are able to do now is make a very strong prediction: the leak potassium conductance comes from a family of genetically encoded ion channels - the KCNK family - and if you could target those channels and either block them or enhance them, it should be possible to provide relief for MS patients who are suffering these symptoms.
Of course, this is something the drug companies - who are very good at targeting drugs to specific receptors and ion channels - could now work with.  It's an idea that no-one had thought of before, and I think it's a very promising avenue - again illustrating how computer models can help you understand a basic phenomenon and then come up with new suggestions that you can take back to the lab and test.
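
The published work models a full myelinated axon; as a rough stand-in, the sketch below scans the sodium-to-leak conductance ratio in a plain textbook Hodgkin-Huxley compartment and counts spikes.  The parameter values are the standard textbook ones, not the paper's, and exactly which ratios fire or fall silent depends on the drive - this only shows what "varying the ratio in the model" looks like:

```python
import numpy as np

# Textbook single-compartment Hodgkin-Huxley neuron (not the myelinated-axon
# model from the PNAS paper): scan the sodium-to-leak ratio, count spikes.
C = 1.0                               # membrane capacitance, uF/cm^2
g_na, g_k = 120.0, 36.0               # max conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.4   # reversal potentials, mV

def rates(v):
    # Standard Hodgkin-Huxley gating rate functions (v in mV).
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def count_spikes(g_leak, i_ext=5.0, t_max=200.0, dt=0.01):
    # Forward-Euler integration under a constant driving current.
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_leak * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / C
        if v > 0.0 and not above:        # upward crossing of 0 mV = spike
            spikes, above = spikes + 1, True
        elif v < -40.0:
            above = False
    return spikes

# Vary the leak conductance around its textbook value (0.3 mS/cm^2)
# to change the sodium-to-leak ratio.
for g_leak in [0.1, 0.3, 1.0, 3.0]:
    print(f"g_Na/g_leak = {g_na / g_leak:6.1f} -> "
          f"{count_spikes(g_leak)} spikes in 200 ms")
```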

SHANE HUNTINGTON
Now Terry, just to clarify, what you're achieving there with drugs is similar to making sure the impedance of the speakers in a stereo is matched to what the amplifier is putting out.

TERRY SEJNOWSKI
Well, that's a good analogy.  In other words, there's a dynamic range for every process in the brain and if it gets too high or too low, you could disrupt the function of that channel - the information channel.  The idea of a ratio is basically that you have an excitable component - the sodium channel - and you have the potassium current, which basically sets the baseline.  So it's the two working together, interacting with each other, that forms the ratio.  They could both be high or both be low - that's fine, as long as the ratio is the same.

SHANE HUNTINGTON
This is Up Close, coming to you from the University of Melbourne, Australia.  I'm Shane Huntington.  Our guest today is Professor Terry Sejnowski and we're talking about aspects of computational neurobiology.
Terry, there are many complex systems that we model using these sorts of techniques - does the brain compare to any of them?  Is it more complex than all of them, or is it comparable with things like global weather?

TERRY SEJNOWSKI
Nature is complex at every spatial scale.  Take, for example, a single neuron.  A neuron is a cell and has roughly the same components as every cell in your body.  It has some specialised ion channels, but basically it has to create energy - ATP - and it has mitochondria to do that.  It has the complete set of metabolic pathways that create ATP from glucose, so in a sense it is like a little city with an economy.  That internal economy is maintained by thousands of different enzymes and cell-biological mechanisms.  It turns out to be very, very complex in terms of all the interactions, and one of the exciting areas of biology right now is called systems biology.  Systems biologists are trying to create models of all those interactions and to understand something about the way the cell responds to signals from the outside - the way it can change the internal expression of its different genes in response to those signals.
Interestingly, as they write down the equations for those interactions, lo and behold, they are the same equations that we worked with 20 years ago when we were first writing down simple models of neurons.  You have interactions occurring - those are like the chemical interactions - and you have activation functions that look like sigmoids - that's like the firing rate of the neuron.  So the same mathematics, and a lot of the same conceptual framework that we developed back in the 1980s and '90s, is very applicable today to the work the systems biologists are doing to model even a single cell.
It's really an interesting situation: nature has discovered ways of organising very complex systems, and it's using similar approaches at different levels of complexity.
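
The shared mathematics he describes - a state variable relaxing toward a sigmoid of its weighted inputs - can be written in a few lines.  Read r as neuron firing rates and w as synaptic weights, or read r as gene activities and w as regulatory interactions; all the numbers here are arbitrary:

```python
import numpy as np

# The same equation wears two hats:
#   dr/dt = (-r + sigmoid(w . r + drive)) / tau
# In a neural network r is firing rate and w is synaptic strength; in a
# systems-biology model r is gene/protein activity and w is regulation.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
n, tau, dt = 5, 10.0, 0.1
w = rng.normal(0.0, 1.0, (n, n))    # interaction strengths (arbitrary)
drive = rng.normal(0.0, 1.0, n)     # external input / outside signal
r = np.zeros(n)

for _ in range(2000):                # integrate to steady state
    r += dt * (-r + sigmoid(w @ r + drive)) / tau

print("steady-state activities:", np.round(r, 3))
```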

SHANE HUNTINGTON
Now, when biologists consider these issues around the brain and other parts of the body, they will first look at simpler creatures - often something like a mouse model or other alternatives to studying humans directly.  What's the comparator in computational neurobiology?

TERRY SEJNOWSKI
Well, I think it made a lot of sense to first try to understand creatures that have many fewer neurons than we do.  We have about 100 billion neurons, so if you can study, say, a honey bee, which only has a million neurons, it might be possible to make progress - and indeed it has been.
However, what we've discovered along the way is that nature, in having to work with fewer and fewer neurons, has made each one of them more and more specialised.  So in a sense, a single neuron in the bee brain has to accomplish what in our brain may take thousands of neurons.
It helps because it focuses you on a specific function of a specific neuron, but it also means that you have to dive deeply into every single part of the neuron - the different branches of the dendritic tree of a single neuron, for example, could each be doing different things.  What we've come to appreciate is that the so-called simple brains turn out to be even more complex than ours, with respect to their degree of miniaturisation and specialisation.

SHANE HUNTINGTON
You mentioned before the work you've been doing on MS.  Are there other conditions that you've also been studying with these techniques - conditions that many people around the world continue to struggle with?

TERRY SEJNOWSKI
Some of the most debilitating mental disorders are ones that involve changes in the way we think.  One out of every 100 babies is born with the potential to become schizophrenic, and schizophrenia is very, very pernicious - it doesn't manifest until late adolescence or early adulthood.  Just at the time when someone is becoming an adult, they can be very much affected, both in their own thinking patterns becoming bizarre - auditory hallucinations, cognitive problems - and in the impact that has on their families and on a society that has to support people who are no longer capable of really making decisions on their own.
This is, I think, an area where we can make some progress, and the reason is that unlike Alzheimer's disease and other neurodegenerative diseases, where the neurons are dying and cannot be resurrected, in the case of schizophrenics the neurons don't die - they just malfunction.  We need to find out, first of all, at what point during development that occurs; second, which neurons are involved; and third, how we can understand the impact those alterations have on the function of the neural circuit, and in turn on your ability to think and perceive.
And we've made progress on all of those.  Just within the last few years, we've identified a particular type of neuron - a fast-spiking, parvalbumin-positive interneuron called the basket cell - which is very important for feedback inhibition in the cortical circuit.  These neurons constitute about five per cent of all the neurons in your cortex.  We have a mouse model where we're able to show that we can produce similar behavioural deficits in mice by injecting a particular drug - a street drug used by kids when they're partying, called Special K, or ketamine - delivered just twice, on two successive days, at sub-anaesthetic doses.  It's used in veterinary medicine as an anaesthetic, but at smaller doses it produces hallucinations and out-of-body experiences, which is presumably what the kids who take it are experiencing.  But often, after a party weekend, they will come into the emergency room with schizophrenic symptoms - psychotic episodes that are identical to those seen in schizophrenics.  Fortunately that resolves in a few days, so it's reversible.  But the fact that you can produce similar thought disorders is an indication that we're on the right track.  We've reproduced that in the mouse, we've identified this particular type of interneuron, and now the question is: what's happening?
Well, what's happening is very, very intriguing.  There's a gene in that neuron responsible for producing the inhibitory neurotransmitter GABA, which is released from the nerve terminal and inhibits the postsynaptic neuron - specifically the pyramidal cells that produce the spikes that travel long distances.
Now if you knock out the enzyme that produces GABA, the neuron can no longer inhibit the pyramidal cell, and that means you've changed the balance of excitation and inhibition.  Once you've changed that balance, the circuit no longer works the way it normally would.
So now we can go in and ask: why is that gene being turned off?  We've recently done some experiments - Marga Behrens, who's a staff scientist in my lab, has discovered that it may be a methylation mark on that gene.
Methylation is an epigenetic way of regulating genes.  It's known that early in development, when the neuron is being specialised down a certain track, certain genes will be turned on and others turned off permanently, and that's done by methylating the DNA.
It looks as if in these neurons, early in development, there's an event - perhaps an infection, perhaps something else that disrupts the biochemical machinery - that sends the neuron down a path where that gene gets methylated and becomes ineffective, producing a neuron that is unable to integrate into the circuit.
Why it takes so long for that to manifest itself is still a mystery, but we do know that if you deliver this drug, ketamine, not to an adult but in the first or second week of the mouse's postnatal life, you can permanently disable 40 per cent of the basket cells that are responsible for this inhibitory circuit.
So I think we're now beginning to understand a little bit more about the genetic and molecular mechanisms that could be the very first steps in making a brain susceptible to a schizophrenic state.  Our task now is, number one, to try to prove that, and number two, to find out how we can reverse some of those changes.  Because the neurons are still alive, it might just be possible to rescue them.
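
As an illustration of the excitation-inhibition balance argument (not the lab's actual model), here is a hypothetical two-population firing-rate sketch: weakening the basket-cell-like inhibitory weight lets the excitatory population's activity run up.  Every parameter value is invented:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two coupled populations: excitatory pyramidal cells (e) and inhibitory
# basket-like interneurons (i).  All parameter values are invented.
def steady_state(w_ie, w_ee=2.0, w_ei=2.5, drive=0.5,
                 tau=10.0, dt=0.1, steps=5000):
    e = i = 0.1
    for _ in range(steps):
        e += dt * (-e + sigmoid(w_ee * e - w_ie * i + drive)) / tau
        i += dt * (-i + sigmoid(w_ei * e)) / tau
    return e, i

# w_ie is the strength of basket-cell inhibition onto pyramidal cells;
# reducing it mimics the loss of GABA described above.
for w_ie in [3.0, 1.5, 0.5]:
    e, i = steady_state(w_ie)
    print(f"inhibitory weight {w_ie:3.1f} -> "
          f"excitatory activity {e:.3f}, inhibitory activity {i:.3f}")
```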

SHANE HUNTINGTON
I'm Shane Huntington and my guest today is Professor Terry Sejnowski.  We're talking about a range of aspects of computational neurobiology here on Up Close, coming to you from the University of Melbourne, Australia.
Terry, we've been talking about your studies of the human brain.  I was thinking we would now turn that around and talk a little bit about how the brain is teaching us more about computers and computational systems.  What are we learning from the way the brain operates?

TERRY SEJNOWSKI
One of the big challenges right now in computer science is how to harness a large number of processors.  The traditional way you program a computer is by writing a program that does one step at a time.  That's fine if you have one processor, but what if you have 100, or 1,000, or 100,000?  How do you take a problem - for example, recognising an object in an image - and distribute it over thousands of processors so that each one is doing a little part of the problem?  Well, that's how the brain works.
The brain has hundreds of billions of neurons, interacting together very tightly in these neural networks.  By looking at the architecture of the brain - what the connectivity pattern is, what the signals are, how the information about a particular part of an object is distributed over hundreds of thousands of neurons - we gain insights and principles that may allow us to go back and design better ways to program the large parallel machines we are now manufacturing, so that they can perceive and see in the same way that we do.
Now it's going to be in a very different material - these chips are made out of silicon - but it's the principles that we're trying to extract from the brain and then program into those chips.
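
In software terms this is data parallelism.  A toy sketch with Python's standard library - the tile sizes and the local "feature" computed are arbitrary stand-ins for whatever each processor does on its patch:

```python
import numpy as np
from multiprocessing import Pool

# Toy data parallelism: split an "image" into tiles and let a pool of
# worker processes each handle its own tile, the way cortical neurons
# each see only a small patch of the visual field.
def tile_feature(tile):
    # Stand-in for local feature extraction (here: mean edge strength).
    gy, gx = np.gradient(tile.astype(float))
    return float(np.hypot(gx, gy).mean())

def main():
    rng = np.random.default_rng(3)
    image = rng.integers(0, 256, (512, 512))
    tiles = [image[r:r + 128, c:c + 128]
             for r in range(0, 512, 128)
             for c in range(0, 512, 128)]
    with Pool(processes=4) as pool:          # 4 workers, 16 tiles
        features = pool.map(tile_feature, tiles)
    print(f"{len(tiles)} tiles processed; "
          f"global feature = {sum(features) / len(features):.3f}")

if __name__ == "__main__":
    main()
```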

SHANE HUNTINGTON
Many of our listeners of course will have heard of artificial intelligence, but one area of particular interest is what's called machine learning.  Can you give us an idea of what is meant by that term?

TERRY SEJNOWSKI
Yes - in the early days of artificial intelligence, going back to the 1950s, the hope was that you could engineer an intelligent system just by writing a program.  That meant you had to have complete knowledge of the domain you were trying to understand.  You needed to be able to define exactly what rules you have to follow in order to, say, recognise an object like a cup.
Now, the problem turned out to be much more difficult than anybody imagined, and the reason is that cups come in many different sizes and shapes, and you can look at one from the top and the bottom and the side.  Writing down all the rules to recognise all the cups would take your whole lifetime - and that's just one object.
So it's clear that it's very inefficient to use rules that have to be handcrafted for every particular problem you have to solve.  There are some simple problems where you can do that.  A good example is playing chess, where the rules are very well defined and it's just a matter of going through and testing all the different moves that could be made - a brute-force way to solve the problem.  But a brain solves it in a much more interesting way.  The brain uses pattern recognition.  The brain is able to look and very rapidly come up with a probability for what the object is, and it does that through learning - and that turns out to be the key component of intelligence that wasn't appreciated early on.
By learning I mean learning from examples, through experience.  You've experienced hundreds, thousands of cups in your lifetime, and all those examples are encoded in your brain and used to predict the next time you see something in that category.  Even if you can't define what a cup is, you sort of know it when you see it.  That's really what we've developed over the last 20 years: more and more algorithms that can handle very large data sets - very high-dimensional spaces with many different features - and very efficiently sort through those examples and then generalise from them.
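
A minimal sketch of that learning-from-examples idea: classification by similarity to stored examples rather than by handwritten rules.  The two "cup" features here are invented purely for illustration:

```python
import numpy as np

# Learning from examples rather than rules: a nearest-neighbour toy.
# The two feature axes (aspect ratio, handle score) are made up.
rng = np.random.default_rng(4)

cups = rng.normal([1.2, 0.9], 0.15, (50, 2))       # remembered cups
not_cups = rng.normal([0.5, 0.2], 0.15, (50, 2))   # everything else
examples = np.vstack([cups, not_cups])
labels = np.array(["cup"] * 50 + ["not cup"] * 50)

def classify(x, k=5):
    # No hand-written rules: just vote among the k most similar
    # remembered examples.
    nearest = np.argsort(np.linalg.norm(examples - x, axis=1))[:k]
    votes = list(labels[nearest])
    return max(set(votes), key=votes.count)

print(classify(np.array([1.1, 0.8])))   # cup-like features -> "cup"
print(classify(np.array([0.4, 0.1])))   # -> "not cup"
```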
That field has really exploded in the last five or 10 years, primarily because of the internet - because there are so many images out on the internet, and so many different data sets have become available.  For example, faces - faces are very important for social communication, for being able to recognise people.  But until recently, we didn't have very efficient ways of first identifying where the faces are located in a photograph, and then, beyond that, coming up with some estimate of the person's facial expression and age.
That problem has been solved.  It's a solved problem in machine learning.  We have very efficient algorithms.  We can pick out all the faces.
One of my former students, Marni Stewart-Bartlett, has written a program that is capable of recognising changing facial expressions in real time, which means a computer can now recognise whether you're happy or sad, surprised, angry or disgusted.
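
Face detection of the kind he describes is now a few lines with off-the-shelf libraries - for instance, using OpenCV's bundled Haar cascade (the image path below is a placeholder):

```python
import cv2

# Off-the-shelf face detection with OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder path; supply your own image.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) found")
cv2.imwrite("faces.jpg", image)
```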

SHANE HUNTINGTON
Terry, when humans make decisions, a big part of that of course is this sort of data that we store over time in our learning, but another part of it is our emotional state.  How do we go about addressing that particular sort of capacity that we have, when we're trying to get computers to do the same sort of thing?

TERRY SEJNOWSKI
Emotions have had a really interesting history, I think, both in psychology and in robotics and machine learning.  The belief used to be that intelligence was all about cognition - about thinking and deducing consequences.  Somewhere along the way that has shifted, and now we are beginning to appreciate that a large part of intelligence is not deduction but induction - by which I mean: how do you develop a system that can assign probabilities to different outcomes?  That's of course what learning is all about: experiencing the different outcomes and finding out in which context each one of them may happen.
I think that emotions are an indication of your internal state that is responding to the challenges of the outside world and it's a very important part of intelligence to be able to have the right stance.  So if you're in a dangerous situation, you want to be alert.  If you're in a friendly situation, you want to be open and if you're surprised, you need to be able to communicate that to the other people around you, because otherwise they're not going to be able to interpret your actions.
So there's a very interesting part of the brain that is involved in creating emotions and emotional expressions on the face for communication, and we're beginning to model those systems.  It's possible to go in and look at the neuromodulatory systems - dopamine, for example.  The reward system is a very powerful one that is involved in motivation; it's powered by positive experiences - food, a good dinner, friends, sex - and that then changes the probabilities with which you'll do things in the future.  So this is a very interesting part of the mystery of who we are: trying to understand how the emotions fit into our thinking and affect our decisions.
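
The dopamine reward system he mentions is widely modelled as signalling a reward prediction error; a minimal Rescorla-Wagner-style sketch of that idea, with invented numbers:

```python
# Reward prediction error: the quantity dopamine neurons are widely
# modelled as signalling.  All numbers here are invented.
alpha = 0.1          # learning rate
value = 0.0          # learned prediction of reward for some cue

for trial in range(1, 21):
    reward = 1.0                      # the cue is followed by a good dinner
    delta = reward - value            # prediction error ("surprise")
    value += alpha * delta            # positive surprise -> do it again
    if trial in (1, 5, 20):
        print(f"trial {trial:2d}: prediction {value:.3f}, error {delta:.3f}")
```

As the prediction approaches the actual reward, the error (and the modelled dopamine burst) shrinks - learning changes the probabilities with which you'll do things in the future.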

SHANE HUNTINGTON
Terry, just to finish up, how close are we to getting a computer that essentially mimics the human brain?  Is that ever going to happen?

TERRY SEJNOWSKI
I think that we all have this image of a robot that walks around and talks and acts like a human, and I think that to some extent that will happen.  But I don't think it will necessarily take the path that humans have.  That is to say, the machines we're developing are going to get embedded into the woodwork.  In other words, we'll be interacting with machines - we do all the time.  You pick up your iPhone, you dial into something, and you're getting information back and forth.
Well, at some point your iPhone is going to become intelligent.  It will be a social robot; it will recognise you.  All the appliances we have will become intelligent appliances, and it's not just the things we interact with directly - it will be things like the radar systems the military uses and the power systems that regulate electricity flow throughout the world.  That's all going to become much more sophisticated and energy efficient, and it will run on the same algorithms that your brain uses.

SHANE HUNTINGTON
Professor Terry Sejnowski, thank you very much for being our guest on Up Close today and giving us such a great understanding of some of the things going on in computational neurobiology.

TERRY SEJNOWSKI
Well thank you very much.  This is a very exciting era that we're living through and I'm really pleased to be here.

SHANE HUNTINGTON
That was Professor Terry Sejnowski, head of the Computational Neurobiology Laboratory and Francis Crick Professor at the Salk Institute for Biological Studies, La Jolla, San Diego, California.
This episode of Up Close was supported by the Melbourne Festival of Ideas 2011.  For more information about the Festival, visit ideas.unimelb.edu.au.  Relevant links, a full transcript and more info on this episode can be found on our website at upclose.unimelb.edu.au.  Up Close is a production of the University of Melbourne, Australia.  This episode was recorded on Thursday, the 10th March, 2011.  Our producers for this episode were Kelvin Param and Eric van Bemmel.  Audio Engineering by Gavin Nebauer.  Background research by Dyani Lewis.
Up Close is created by Eric van Bemmel and Kelvin Param.  I'm Shane Huntington.  Until next time, goodbye.

VOICEOVER
You've been listening to Up Close.  For more information, visit upclose.unimelb.edu.au.  Copyright 2011, The University of Melbourne.

