One of my earliest exposures to neuroscience was reading “The Astonishing Hypothesis” by Francis Crick, the famous DNA scientist, in which he elegantly posited:
“You—your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will—are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”
As I read the book, I was instantly captivated by Crick’s foray into studying visual consciousness as a step towards understanding human consciousness. In addition to the commonly mentioned contemporary neuroscientific methods, I was introduced to quite a few interesting novel approaches. One such example was the use of artificial neural networks. In fact, I can still remember the eerie feeling I had when I heard NETtalk (which you can hear here), a neural network developed in the 1980s that learnt pronunciation rules without having them explicitly programmed. The prospect of computer-based simulations, in a book about neuroscience, was very new to me. I cannot recall whether it was the child-like voice of NETtalk or just the sci-fi-ness of it all, but it was terribly haunting. Despite the cool demonstration, however, it seemed far-fetched that the brain could really be understood in such a way. After all, the brain is a biological organ, so surely everything needed to be justified in molecular terms. In medical school, when I was studying brain function, I was constantly trying to keep track of which parts connected to which, and how they worked together. But when I was reading Crick’s book, what really struck me was a schematic diagram of the primate visual system (see below).
Having never given much thought to how higher-order cognitive functions (such as sight) arise from single neurons, I found the complex wiring of the schematic diagram baffling. The text was littered with terms such as hierarchical processing, bidirectionality, feedback loops, and many other complicated words that made me feel very uninformed. That feeling grew into a sinking, desolate one deep inside me as I looked down this chasm of the unknown.
Thinking about how cognition could be generated from a multitude of smaller units, I was overwhelmed by a stream of endless questions. For vision, we experience the final product through the lens of our eyes, but how does the brain generate it? Adding to the difficulty: how do we scientifically tackle this problem? Looking at the complex wiring diagram, I contemplated how one could spend an entire lifetime trying to find causal relationships in just a handful of these connections. And once the scientific community had produced thousands and thousands of papers about the circuitry, I wondered how it would all be put together. This is also reminiscent of the difficulty I faced with my master’s education. I was in an integrative neuroscience program, one with a huge faculty, each member teaching their own set of specialties in the field, ranging from theoretical approaches to fundamental bench work. Perhaps I was asking for too much, but I felt a strong discord between what the program wanted to achieve and what it was doing. The program was supposed to be integrative; however, it seemed more like a buffet, where we went around tasting vastly different things. Ultimately, I walked away from the hypothetical buffet with an overloaded, hodgepodge dish. What I really wanted to know, and what I feel is fundamentally important, is how these different approaches coalesce to further our understanding. How do the blind men feeling out different parts of the elephant communicate? How do we come full circle with the enormous amounts of information and shape our research programs to develop a cohesive picture of how the brain works? Quite often, my pessimism would surface and leave me with the dismaying sense that our scientific progress was no more than the search for linearities in a largely non-linear, complex system, a means of harvesting low-hanging fruit on a seasonal basis.
Though I have a long way to go in better understanding the fundamental problems facing neuroscience, I am currently drawn towards approaches that prioritize the integration and formalization of neuroscientific knowledge. The central paradigm is an experimentally informed, theory-laden approach. To carry it out, big data and theory must go hand in hand. We will need to break model after model, develop the mathematics required to aptly describe the brain’s complexity, and even draw inspiration from fields that appear removed from mainstream neuroscience. But one thing is certain: to deal with the unwieldy amounts of information, we need a means of centralizing our knowledge, and this is where modelling and theoretical approaches shine. Recently, I had the pleasure of reading “Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain” by Grace Lindsay, which is essentially an overview of the many personalities and approaches concerning the mind from a quantitative perspective. Beyond all else, the greatest gem (at least for me) in the book lay in the opening chapter, an exposition on why mathematics is important:
“Mathematics is a form of extended cognition.
When a scientist, mathematician or engineer writes down an equation, they are expanding their own mental capacity. They are offloading their knowledge of a complicated relationship on to symbols on a page. By writing these symbols down, they leave a trail of their thinking for others and for themselves in the future. Cognitive scientists hypothesize that spiders and other small animals rely on extended cognition because their brains are too limited to do all the complex mental tasks required to thrive in their environment. We are no different. Without tools like mathematics our ability to think and act effectively in the world is severely limited.”
Just as physicists use mathematics to understand the complexities of the universe, we neuroscientists must use it to understand the universe within the constraints of our minds. The history of physics certainly extends back long before we were even sticking electrodes into brains, but now is as good a time as any to start formalizing neuroscientific work in search of broader theoretical frameworks. The winds of uncertainty certainly hit hard; however, as Grace Lindsay aptly writes in her final remarks: “But neuroscience is not physics. It must avoid playing the role of the kid sibling, trying to follow exactly in the footsteps of this older discipline. The principles that guide physics and the strategies that have led it to success won’t always work when applied to biology.” Therefore, we cannot just dream up elegant theories and models to justify how the brain might work; we need to know how it actually works within our skulls. And for that, experimentation is key. The modelling work needs to hold up against the experimentally grounded data that will hone our intuition for neural systems. Theorists need to be able to communicate with experimentalists, and vice versa; everyone therefore needs some literacy in the other end of the spectrum. To drive the point home, I quote György Buzsáki, currently one of the most highly cited systems neuroscientists, on this topic:
“I gained a lot more insights about oscillations from models over the past decade than from my own experiments. Experiments provide the constraints, and the model should provide alternatives.”
Piecing together all the different nuggets that neuroscience has on offer may be a real ordeal, but we had better get started if we want to understand the layers of complexity that the brain possesses. With the dawn of the big-data age and marvels like machine learning, we can begin to see structure within data in ways that were previously not humanly possible. This has led to many discoveries that would have been counterintuitive under conventional approaches. Working backwards from the data, we have found new insights about neural systems rather than being restricted by our (commonly misinformed) intuition. We can now abstract the intricate (and messy) distribution and activity of neurons as oscillators (dancing blobs, basically) in an immensely connected network by using differential equations and graph theory. Differential equations let us describe mathematically how ensembles of neurons fire in relation to one another, and graph theory connects these masses of neurons to one another for a more holistic perspective. This, of course, is just one approach out of many, but the heart of the matter is that wrestling with complexity needs mathematics.
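To make the oscillators-on-a-network idea concrete, here is a minimal, hypothetical sketch (not from the original article) using the classic Kuramoto model: each node is a phase oscillator standing in for a neural population, a differential equation governs how its phase evolves, and a random adjacency matrix supplies the graph structure. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                                         # number of oscillator nodes (assumed)
omega = rng.normal(1.0, 0.1, n)                # natural frequencies of each node
A = (rng.random((n, n)) < 0.4).astype(float)   # random adjacency matrix (the "graph theory" part)
np.fill_diagonal(A, 0)                         # no self-connections
K, dt, steps = 2.0, 0.01, 5000                 # coupling strength, time step, duration

theta = rng.uniform(0, 2 * np.pi, n)           # initial phases
for _ in range(steps):
    # Kuramoto differential equation, integrated with Euler's method:
    # dtheta_i/dt = omega_i + (K/n) * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * coupling)

# Order parameter r in [0, 1]: how synchronized the whole network is
r = abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```

With sufficiently strong coupling the phases tend to lock together and r approaches 1; weakening K or sparsifying A lets the oscillators drift apart, which is exactly the kind of qualitative behavior such network models are used to probe.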
As each day passes, many new revolutionary approaches are emerging on the frontier of scientific progress. Beyond interesting analytical methods and statistics, an entire domain of science known as complex systems has been forming over the last few decades. The study of complex systems tackles the very idea of a multitude of small components working together to give rise to a system that is fundamentally more than the sum of its parts. The applications of complex systems are incredibly vast, but neuroscience, bursting with its billions of neurons giving rise to our sense of sorrow, ambition, and personal identity, certainly hits the criteria dead on. As a final thought: to make sense of the enormous orders of complexity of the brain, we need to move beyond current mainstream brute-force tactics towards identifying architectural motifs using both theory and experimentation. If we find those motifs, that would be great; and if we don’t, we will still have our sights on understanding the theoretical fabric of all the threads of knowledge that we, as a scientific community, weave on a daily basis.
I would like to thank Katrina Deane for her edits and insights on the article.
Featured Image: Human Vectors by Vecteezy