How does the brain train its neural network?

One question that came up while learning how artificial neural networks work: how does the brain train its own neural network?

When we train an artificial neural network, the problem behind it is essentially a minimization problem. The math behind it is logical and pretty easy, but it is still math, so a computer can calculate it by doing millions of iterations. The brain surely can't do that (I would be surprised if it could).

So, how does the brain solve this task? Trial and error? Something we don't yet know? Or is there an even more complex system behind it?

Thanks in advance. GA


The answer to this question is probably Hebbian Learning.

Hebbian learning can be nicely summarised with "Cells that fire together, wire together". So basically the synapses of neurons are strengthened if they fire in sync, and weakened otherwise.

One can easily see that (a) this kind of local learning mechanism makes a lot more sense for the brain than a global method like gradient descent, and (b) this mechanism leads to stable representations of patterns.

In artificial neural networks this kind of learning is modelled in Hopfield networks and Restricted Boltzmann Machines.
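To make the rule concrete, here is a minimal sketch of a Hebbian update in Python (the learning rate, the decay term that keeps weights bounded, and the toy firing patterns are all illustrative assumptions, not part of Hebb's original formulation):

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian step: strengthen connections where pre- and
    post-synaptic activity coincide; a small decay weakens the rest."""
    return w + lr * np.outer(post, pre) - decay * w

# toy example: 3 presynaptic cells driving 2 postsynaptic cells
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])    # input firing pattern
post = np.array([1.0, 0.0])        # output firing pattern
for _ in range(100):
    w = hebbian_update(w, pre, post)
print(w)   # weights grew only where pre and post fired together
```

Note that the update is purely local: each weight change depends only on the activity of the two cells it connects, which is exactly the property that makes it biologically plausible.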

Of course this simple rule barely scratches the surface of what goes on in the human brain when we are learning something. A complete picture would probably involve complex feedback mechanisms of inhibitory and excitatory connections between neurons, functional modules of neurons, and different parts of the brain.

But I fear these details are not yet well understood…


Human behaviour: is it all in the brain – or the mind?

Y ou've seen the headlines: This is your brain on love. Or God. Or envy. Or happiness. And they're reliably accompanied by pictures of colour-drenched brains – scans capturing Buddhist monks meditating, addicts craving cocaine, and students choosing Coke over Pepsi. The media – and even some neuroscientists, it seems – love to invoke the neural foundations of human behaviour to explain everything from the Bernie Madoff financial fiasco to slavish devotion to our iPhones, the sexual indiscretions of politicians, conservatives' dismissal of global warming, and an obsession with self-tanning.

Brains are big on campus, too. Take a map of any major university and you can trace the march of neuroscience from research labs and medical centres into schools of law and business and departments of economics and philosophy. In recent years, neuroscience has merged with a host of other disciplines, spawning such new areas of study as neurolaw, neuroeconomics, neurophilosophy, neuromarketing and neurofinance. The brain has wandered into such unlikely redoubts as English departments, where professors debate whether scanning subjects' brains as they read passages from Jane Austen novels represents: a) a fertile inquiry into the power of literature, or b) a desperate attempt to inject novelty into a field that has exhausted its romance with psychoanalysis and postmodernism. As a newly minted cultural artefact, the brain is portrayed in paintings, sculptures and tapestries and put on display in museums and galleries. As one science pundit noted, "if Warhol were around today, he'd have a series of silkscreens dedicated to the cortex; the amygdala would hang alongside Marilyn Monroe".

Clearly, brains are hot. The prospect of solving the deepest riddle humanity has ever contemplated – itself – by studying the brain has captivated scholars and scientists for centuries. But never before has the brain so vigorously engaged the public imagination. The prime impetus behind this enthusiasm is a form of brain imaging called fMRI, or functional magnetic resonance imaging, an instrument that measures brain activity and converts it into the now iconic vibrant images one sees in the science pages of the daily newspaper.

As a tool for exploring the biology of the mind, neuroimaging has given brain science a strong cultural presence. As one scientist remarked, brain images are now "replacing Bohr's planetary atom as the symbol of science". With its implied promise of decoding the brain, it is easy to see why brain imaging would beguile almost anyone interested in pulling back the curtain on the mental lives of others: politicians hoping to manipulate voter attitudes, agents of the law seeking an infallible lie detector, marketers tapping the brain to learn what consumers really want to buy, addiction researchers trying to gauge the pull of temptations, and defence attorneys fighting to prove that their clients lack malign intent or even free will.

The problem is that brain imaging cannot do any of these things – at least not yet.

Author Tom Wolfe was characteristically prescient when he wrote of fMRI in 1996, just a few years after its introduction: "Anyone who cares to get up early and catch a truly blinding 21st-century dawn will want to keep an eye on it." Now, we can't look away.

Why the fixation? First, of course, there is the very subject of the scans: the brain itself, the organ of our deepest self. More complex than any structure in the known cosmos, the brain is a masterwork of nature endowed with cognitive powers that far outstrip the capacity of any silicon machine built to emulate it. Containing roughly 80bn brain cells, or neurons, each of which communicates with thousands of other neurons, the 3lb universe cradled between our ears has more connections than there are stars in the Milky Way. How this enormous neural edifice gives rise to subjective feelings is one of the greatest mysteries of science and philosophy.

Brain scan images are not what they seem. They are not photographs of the brain in action in real time. Scientists can't just look "in" the brain and see what it does. Those beautiful colour-dappled images are actually representations of particular areas in the brain that are working the hardest – as measured by increased oxygen consumption – when a subject performs a task such as reading a passage or reacting to stimuli, such as pictures of faces. The powerful computer located within the scanning machine transforms changes in oxygen levels into the familiar candy-coloured splotches indicating the brain regions that become especially active during the subject's performance. Despite well-informed inferences, the greatest challenge of imaging is that it is very difficult for scientists to look at a fiery spot on a brain scan and conclude with accuracy what is going on in the mind of the person.

Barack Obama shortly after winning the 2008 US presidential election. Research undertaken by neuroscientists suggested that he would fail to engage with voters. Photograph: Tannen Maury/EPA

Years ago, as the 2008 presidential election season was gearing up, a team of neuroscientists from UCLA sought to solve the riddle of the undecided, or swing, voter. They scanned the brains of swing voters as they reacted to photos and video footage of the candidates. The researchers translated the resultant brain activity into the voters' unspoken attitudes and, together with three political consultants from a Washington DC-based firm called FKF Applied Research, presented their findings in the New York Times in an op-ed titled, "This is Your Brain on Politics". Readers could view scans dotted with tangerine and neon-yellow hotspots indicating regions that "lit up" when the subjects were exposed to images of Hillary Clinton, John Edwards, Rudy Giuliani, and other candidates. Revealed in these activity patterns, the authors claimed, were "some voter impressions on which this election may well turn". Among those impressions was that two candidates had utterly failed to "engage" with swing voters. Who were these unpopular politicians? John McCain and Barack Obama, the two eventual nominees for president.

University press offices are notorious for touting sensational details in their media-friendly releases: here's a spot that lights up when subjects think of God ("Religion centre found!") or researchers find a region for love ("Love found in the brain"). Neuroscientists themselves sometimes refer disparagingly to these studies as "blobology", their tongue-in-cheek label for studies that show which brain areas become activated as subjects experience x or perform y task.

Skilled science journalists cringe when they read accounts claiming that scans can capture the mind itself in action. Serious science writers take pains to describe quality neuroscience research accurately. Indeed, an eddy of discontent is forming. "Neuromania", "neurohubris" and "neurohype" – "neuro-bollocks", if you're a Brit – are just some of the labels that have been bandied about, sometimes by frustrated neuroscientists themselves.

Reading too much into brain scans can become truly consequential when real-world concerns hang in the balance. Consider the law. When a person commits a crime, who is at fault: the perpetrator or his brain? Now, of course, this is a false choice. If biology has taught us anything, it is that "my brain" versus "me" is a false distinction. Still, if biological roots can be identified – and better yet, captured on a brain scan as juicy blotches of colour – it is too easy for non-professionals to assume that the behaviour under scrutiny must be "biological" and therefore "hardwired", involuntary or uncontrollable. Criminal lawyers, no surprise, are increasingly drawing on brain images supposedly showing a biological defect that "made" their clients commit murder.

Looking to the future, some neuroscientists envisage a dramatic transformation of criminal law. Neuroscientist David Eagleman, for one, welcomes a time when "we may some day find that many types of bad behaviour have a basic biological explanation [and] eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease". As this comes to pass, he predicts, "more juries will place defendants on the not-blameworthy side of the line". But is this the correct conclusion to draw from neuroscience data? After all, if every behaviour is eventually traced to detectable correlates of brain activity, does this mean we can one day write off all unwanted behaviour on a don't-blame-me-blame-my-brain theory of crime?

Scientists have made great strides in reducing the organisational complexity of the brain from the intact organ to its constituent neurons, the proteins they contain, genes, and so on. Using this template, we can see how human thought and action unfold at a number of explanatory levels, working upwards from the most basic elements. At one of the lower tiers in this hierarchy is the neurobiological level, which comprises the brain and its constituent cells. Genes direct neuronal development; neurons assemble into brain circuits. Information processing, or computation, and neural network dynamics hover above. At the middle level are conscious mental states, such as thoughts, feelings, perceptions, knowledge and intentions. Social and cultural contexts, which play a powerful role in shaping our mental contents and behaviour, occupy the highest landings of the hierarchy.

According to neuroscientist Sam Harris, inquiry into the brain will eventually and exhaustively explain the mind and, hence, human nature. Ultimately, he says, neuroscience will – and should – dictate human values. Semir Zeki, the British neuroscientist, and legal scholar Oliver Goodenough hail a "'millennial' future, perhaps only decades away [when] a good knowledge of the brain's system of justice and of how the brain reacts to conflicts may provide critical tools in resolving international political and economic conflicts". No less a towering figure than neuroscientist Michael Gazzaniga hopes for a "brain-based philosophy of life" based on an ethics that is "built into our brains. A lot of suffering, war and conflict could be eliminated if we could agree to live by them more consciously".

The brain is said to be the final scientific frontier, and rightly so in our view. Yet, in many quarters, brain-based explanations appear to be granted a kind of inherent superiority over all other ways of accounting for human behaviour. We call this assumption "neurocentrism" – the view that human experience and behaviour can be best explained from the predominant or even exclusive perspective of the brain. From this popular vantage point, the study of the brain is somehow more "scientific" than the study of human motives, thoughts, feelings and actions. By making the hidden visible, brain imaging has been a spectacular boon to neurocentrism.

Consider addiction. "Understanding the biological basis of pleasure leads us to fundamentally rethink the moral and legal aspects of addiction," writes neuroscientist David Linden. This is popular logic among addiction experts yet, to us, it makes little sense. Granted, there may be good reason to reform the way the criminal justice system deals with addicts, but the biology of addiction is not one of them. Why? Because the fact that addiction is associated with neurobiological changes is not, in itself, proof that the addict is unable to choose. Just look at American actor Robert Downey Jr. He was once a poster boy for drug excess. "It's like I have a loaded gun in my mouth and my finger's on the trigger, and I like the taste of gunmetal," he said. It seemed only a matter of time before he'd meet a horrible end. But Downey Jr entered rehab and decided to change his life. Why did Robert Downey Jr use drugs? Why did he decide to stop and to remain clean and sober? An examination of the brain, no matter how sophisticated, cannot tell us that at this time, and probably never will. The key problem with neurocentrism is that it devalues the importance of psychological explanations and environmental factors, such as familial chaos, stress and widespread access to drugs in sustaining addiction.

New neuroenthusiasms are endemic to modern brain science, seemingly sprouting up on a daily basis. Aspiring entrepreneurs now bone up on neuromanagement books such as Your Brain and Business: The Neuroscience of Great Leaders, which advises nervous CEOs "to be aware that anxiety centres in the brain connect to thinking centres, including the PFC [prefrontal cortex] and ACC [anterior cingulate cortex]". The mind is ephemeral and mysterious, but the brain is concrete. It promises to yield objective findings that will harden even the softest science, or so the dubious logic goes. Parents and teachers are easy marks for "brain gyms", "brain-compatible education" and "brain-based parenting", not to mention dozens of other techniques. But, mostly, these slick enterprises merely dress up good advice or repackage bromides with neuroscientific findings that add nothing to the overall programme. As one cognitive psychologist quipped: "Unable to persuade others about your viewpoint? Take a Neuro-Prefix – influence grows or your money back."

Ours is an age in which brain research is flourishing – a time of truly great expectations. Yet it is also a time of mindless neuroscience that leads us to overestimate how much neuroscience can improve legal, clinical and marketing practices, let alone inform social policy. The naive media, the slick neuroentrepreneur, and even the occasional overzealous neuroscientist exaggerate the capacity of scans to reveal the contents of our minds, exalt brain physiology as inherently the most valuable level of explanation for understanding behaviour, and rush to apply underdeveloped, if dazzling, science for commercial and forensic use.

The neurobiological domain is one of brains and physical causes; the psychological domain is one of people and their motives. Both are essential to a full understanding of why we act as we do. But the brain and the mind are different frameworks for explaining human experience. And the distinction between them is hardly an academic matter: it bears crucial implications for how we think about human nature, as well as how to best alleviate human suffering.

Developments in brain nanotechnology could lead to the possibility of brain prostheses or even synthetic brains. Photograph: Istock


Bioinformatics

6.01.1 Introduction

Artificial intelligence algorithms have long been used for modeling decision-making systems, as they provide automated knowledge extraction and high inference accuracy. Artificial neural networks (ANNs) are a class of artificial intelligence algorithms that emerged in the 1980s from developments in cognitive and computer science research. Like other artificial intelligence algorithms, ANNs were developed to address different aspects or elements of learning, such as how to learn, how to induce, and how to deduce. For such problems, ANNs can help draw conclusions from case observations and address the issues of prediction and interpretation.

Strictly speaking, most learning algorithms used by ANNs are rooted in classical statistical pattern analysis. Most of them are based on data distribution, unlike rough set algorithms (Komorowski, Chapter 6.02). ANNs introduce a new way to handle and analyze highly complex data. Most ANN algorithms share two common features. First, the network is composed of many mutually connected artificial neurons. The connections are called parameters, and the knowledge learned from a data set is represented by these model parameters. This feature makes an ANN model similar to a human brain. Second, an ANN model typically does not make any prior assumptions about data distribution before learning. This greatly broadens the usability of ANNs in various applications.

The study of ANNs has undergone several important stages. In the early days, ANN studies were mainly motivated by theoretical interests, that is, investigating whether a machine can replace humans for decision-making and pattern recognition. The pioneering researchers (McCulloch and Pitts, 1943) showed the possibility of constructing a net of neurons that can interact with each other. The net was based on symbolic logic relations. This earlier idea of McCulloch and Pitts was not theoretically rigorous, as indicated by Fitch (1944). Later, in 1949, Hebb gave more concrete and rigorous evidence of how and why the McCulloch–Pitts model works (Hebb, 1949). He showed how neural pathways are strengthened once activated. In 1954, Marvin Minsky completed his doctoral study on neural networks, and his discussion on ANNs later appeared in his seminal book (Minsky, 1954). This was instrumental in bringing about a wide-scale interest in ANN research. In 1958, Frank Rosenblatt built a computer at Cornell University called the perceptron (later called the single-layer perceptron (SLP)), capable of learning new skills by trial and error through mimicking the human thought process. However, Minsky (1969) demonstrated its inability to deal with complex data; this somewhat dampened ANN research activity for many subsequent years.

In the 1970s and 1980s, ANN research did not in fact cease completely. For instance, the self-organizing map (SOM) (Kohonen, 2001) and the Hopfield net (Hopfield, 1982) were widely studied. In 1974, Paul Werbos conducted his doctoral study at Harvard University on a training process called backpropagation of errors; this was later published in his book (Werbos, 1994). This important contribution led to the work of David Rumelhart and his colleagues in the 1980s on the backpropagation algorithm, implemented for supervised learning problems (Rumelhart and McClelland, 1987). Since then, ANNs have become very popular in both theoretical studies and practical applications.

In this chapter, we focus on two particular ANN models – Rumelhart's multilayer perceptron (MLP) and Kohonen's SOM. The former is a standard ANN for supervised learning, while the latter is one for unsupervised learning. Both adopt a trial-and-error learning process. MLP aims to build a function that maps one type of observation to another (e.g., from genotypes to phenotypes), while SOM explores the internal structure within one data set (genotypic data only).

In contrast to Rosenblatt's SLP, Rumelhart's MLP introduces hidden neurons corresponding to hidden variables. An MLP model is in fact a hierarchical composition of several SLPs. For instance, let us consider a three-layer MLP for mapping genotypes to phenotypes. If we have two variables x1 and x2 describing genotypic status, we can build up two SLPs, z1 = f(x1,x2) and z2 = f(x1,x2), for some specified function f(·). Based on z1 and z2, a higher-level SLP is built, y = f(z1,z2), where y is called the model output, corresponding to collected phenotypic data denoted by t. x1, x2, and t are observed data (collected through an experiment) while z1 and z2 are unobserved – they are hidden variables. For this example, the MLP models the nonlinear relationship between genotypic and phenotypic data without knowing what the true function between them is. Both SLP and MLP are supervised learning models, so during learning, observations of phenotypes act as a teacher to supervise parameter estimation.
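To make this composition concrete, here is a minimal sketch in Python of the three-layer example above (the logistic activation and the random parameter values are illustrative assumptions, not choices made in the chapter):

```python
import numpy as np

def f(x, w, b):
    """A single-layer perceptron: weighted sum plus logistic activation."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

rng = np.random.default_rng(0)
x = np.array([0.0, 1.0])                  # genotypic inputs x1, x2

# hidden layer: two SLPs computing z1 = f(x1, x2) and z2 = f(x1, x2)
w1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
z = f(x, w1, b1)                          # hidden variables z1, z2 (unobserved)

# output layer: a higher-level SLP computing y = f(z1, z2)
w2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)
y = f(z, w2, b2)                          # model output, compared against phenotype t
print(z, y)
```

During training, w1, b1, w2, and b2 would be adjusted so that y approaches the observed phenotype t for each training example.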

Kohonen's net, on the other hand, is an unsupervised learning algorithm. The objective of SOM is to reveal how observations (instances or samples) are partitioned. This is similar to cluster analysis, except that cluster analysis does not infer how clusters correlate, whereas SOM can provide information on how clusters correlate. SOM is an unsupervised learning algorithm because it does not use phenotypic data for model parameter estimation.

We will discuss parameter estimation, learning rules, and learning algorithms for both MLP and SOM. The parameter optimization process is commonly based on minimizing an error function chosen for a specific problem. We will show how the learning rules are derived for MLP and SOM based on their error functions and then discuss some biological applications of these two ANN algorithms.
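As a preview of the SOM learning rule, here is a minimal sketch in Python (the Gaussian neighbourhood function, grid size, and learning rate are illustrative assumptions; the chapter derives the exact rule from the error function):

```python
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU) for input x,
    then pull the BMU and its grid neighbours towards x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    d = np.linalg.norm(grid - grid[bmu], axis=1)    # grid distance to BMU
    h = np.exp(-d**2 / (2 * sigma**2))              # neighbourhood strength
    return weights + lr * h[:, None] * (x - weights)

# a 3x3 map of units, each with a 2-dimensional weight vector
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = np.random.default_rng(0).normal(size=(9, 2))
for x in np.random.default_rng(1).normal(size=(200, 2)):
    weights = som_step(weights, grid, x)
```

After training, nearby units on the grid respond to similar inputs, which is what lets SOM show how clusters relate to one another.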


Relation to reality

Morrison uses computers to simulate neural networks involving around 100,000 cells, in an attempt to model the human brain. However, since it is still not known how the gigantic chaos of cells and synapses in the human brain learns and thinks, Morrison always proposes a hypothesis first. In concrete terms this means: “For example, I think about how a network learns a certain task,” said Morrison. “Then I model this task using the simulation software to find out whether the network can actually learn this particular task.” How should the individual cells be connected with each other? Which activities enable the circuits to consolidate and which cause them to weaken?

Even if the computer model works, this does not necessarily mean that Morrison has discovered something about the real brain. The theoretical system modelled by the computer needs to prove its worth compared with empirical findings. One of Morrison’s projects with her research partners dealt with a network that was able to learn to spatially orient itself. This has an equivalent in the real world. Imagine putting a mouse into a tank of water. A platform that the mouse can step on to is just below the surface of the water where the mouse cannot see it. After a few attempts, the mouse will work out where the platform is and will then be able to find it quite quickly when it needs to. “We have implemented a more abstract version of this learning algorithm into our network,” said Morrison who found that the properties of the individual neurons of the network corresponded neatly with the properties of real neurons characterised in a number of empirical projects.

Approach used to simulate a neural network that is able to learn a spatial orientation task. © Prof. Dr. Abigail Morrison


What is a neural network and how does its operation differ from that of a digital computer? (In other words, is the brain like a computer?)

Artificial neural networks are parallel computational models, comprising densely interconnected adaptive processing units. These networks are composed of many simple processors (relative, say, to a PC, which generally has a single, powerful processor) acting in parallel to model nonlinear static or dynamic systems, where a complex relationship exists between an input and its corresponding output.

A very important feature of these networks is their adaptive nature, in which "learning by example" replaces "programming" in solving problems. Here, "learning" refers to the automatic adjustment of the system's parameters so that the system can generate the correct output for a given input; this adaptation process is reminiscent of the way learning occurs in the brain via changes in the synaptic efficacies of neurons. This feature makes these models very appealing in application domains where one has little or an incomplete understanding of the problem to be solved, but where training data is available.

One example would be to teach a neural network to convert printed text to speech. Here, one could pick several articles from a newspaper and generate hundreds of training pairs — an input and its associated, "desired" output sound — as follows: the input to the neural network would be a string of three consecutive letters from a given word in the text. The desired output that the network should generate could then be the sound of the second letter of the input string. The training phase would then consist of cycling through the training examples and adjusting the network parameters — essentially, learning — so that any error in output sound would be gradually minimized for all input examples. After training, the network could then be tested on new articles. The idea is that the neural network would "generalize" by being able to properly convert new text to speech.
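A toy sketch of how such training pairs might be generated (the three-letter window comes from the example above, but using the raw middle letter as the "sound" is a stand-in assumption; a real system would use phonetic labels):

```python
def make_training_pairs(text):
    """Slide a 3-letter window over the text; the input is the window,
    the desired output is the sound of the letter at its centre."""
    text = text.lower()
    return [(text[i:i + 3], text[i + 1]) for i in range(len(text) - 2)]

pairs = make_training_pairs("neural")
print(pairs)   # [('neu', 'e'), ('eur', 'u'), ('ura', 'r'), ('ral', 'a')]
```

The network would then be trained by cycling through pairs like these and nudging its parameters to reduce the output error on each one.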

Another key feature is the intrinsic parallel architecture, which allows for fast computation of solutions when these networks are implemented on parallel digital computers or, ultimately, when implemented in customized hardware. In many applications, however, they are implemented as programs that run on a PC or computer workstation.

Artificial neural networks are viable models for a wide variety of problems, including pattern classification, speech synthesis and recognition, adaptive interfaces between humans and complex physical systems, function approximation, image compression, forecasting and prediction, and nonlinear system modeling.

These networks are "neural" in the sense that they may have been inspired by the brain and neuroscience, but not necessarily because they are faithful models of biological, neural or cognitive phenomena. In fact, many artificial neural networks are more closely related to traditional mathematical and/or statistical models, such as nonparametric pattern classifiers, clustering algorithms, nonlinear filters and statistical regression models, than they are to neurobiological models.


3 Answers 3

One probable hardware limiting factor is internal bandwidth. A human brain has $10^{15}$ synapses. Even if each is only exchanging a few bits of information per second, that's on the order of $10^{15}$ bytes/sec internal bandwidth. A fast GPU (like those used to train neural networks) might approach $10^{11}$ bytes/sec of internal bandwidth. You could have 10,000 of these together to get something close to the total internal bandwidth of the human brain, but the interconnects between the nodes would be relatively slow, and would bottleneck the flow of information between different parts of the "brain."

Another limitation might be raw processing power. A modern GPU has maybe 5,000 math units. Assume each unit has a cycle time of ~1 ns (a ~1 GHz clock), and that it takes ~1000 cycles to do the equivalent processing work one neuron does in ~1/10 second (this value is totally pulled from the air; we don't really know the most efficient way to match brain processing in silicon). So, a single GPU might be able to match $5 \times 10^8$ neurons in real time. You would optimally need ~200 of them to match the processing power of the brain.

This back-of-the-envelope calculation shows that internal bandwidth is probably a more severe constraint.
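Restating the back-of-the-envelope arithmetic (every input below is one of the rough guesses from the text, not a measured value):

```python
# bandwidth estimate
synapses = 1e15                 # synapses in a human brain
bytes_per_syn = 1.0             # a few bits/s per synapse, rounded to ~1 byte/s
brain_bw = synapses * bytes_per_syn        # ~1e15 bytes/s total
gpu_bw = 1e11                              # fast GPU internal bandwidth, bytes/s
print(brain_bw / gpu_bw)                   # ~10,000 GPUs to match bandwidth

# compute estimate
units = 5_000                   # math units on a modern GPU
clock = 1e9                     # ~1 GHz, i.e. 1e9 cycles/s per unit
cycles_per_neuron_s = 1_000 / 0.1          # 1000 cycles per 0.1 s of neuron work
neurons_per_gpu = units * clock / cycles_per_neuron_s    # = 5e8
print(1e11 / neurons_per_gpu)              # ~200 GPUs for ~1e11 neurons
```

On these numbers, matching the brain's bandwidth takes roughly 50 times more hardware than matching its compute, which is why bandwidth looks like the binding constraint.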

This has been my field of research. I've seen the previous answers that suggest that we don't have sufficient computational power, but this is not entirely true.

The computational estimate for the human brain ranges from 10 petaFLOPS ($1 \times 10^{16}$) to 1 exaFLOPS ($1 \times 10^{18}$). Let's use the most conservative number. The TaihuLight can do 90 petaFLOPS, which is $9 \times 10^{16}$.

We see that the human brain is perhaps 11x more powerful. So, if the computational theory of mind were true, then the TaihuLight should be able to match the reasoning ability of an animal about 1/11th as intelligent as a human.

If we look at a list of animals by number of cortical neurons, the squirrel monkey has about 1/12th the number of neurons in its cerebral cortex as a human. With AI, we cannot match the reasoning ability of a squirrel monkey.

A dog has about 1/30th the number of neurons. With AI, we cannot match the reasoning ability of a dog.

A brown rat has about 1/500th the number of neurons. With AI, we cannot match the reasoning ability of a rat.

This gets us down to 2 petaFLOPS or 2,000 teraFLOPS. There are 67 supercomputers worldwide that should be capable of matching this.

A mouse has half the number of neurons as a brown rat. There are 190 supercomputers that should be able to match its reasoning ability.

A frog or non-schooling fish is about 1/5th of this. All of the top 500 supercomputers are 2.5x as powerful as this. Yet, none is capable of matching these animals.

What exactly is the obstacle we are facing?

The problem is that a cognitive system cannot be defined using only Church-Turing. AI should be capable of matching non-cognitive animals like arthropods, roundworms, and flatworms but not larger fish or most reptiles.

I guess I need to give more concrete examples. The NEST system has demonstrated 1 second of operation of 520 million neurons and 5.8 trillion synapses in 5.2 minutes on the 5 petaFLOPS BlueGene/Q. The current thinking is that, if they could scale the system by 200 to an exaFLOPS, then they could simulate the human cerebral cortex at the same 1/300th normal speed. This might sound reasonable, but it doesn't actually make sense.

A mouse has 1/1000th as many neurons as a human cortex. So this same system should be capable today of simulating a mouse brain at 1/60th normal speed. So, why aren't they doing it?


Currently, there are two areas of study of neural networks.

  1. Creation of computer models that faithfully reproduce the functioning of neurons in the real brain. This makes it possible both to explain the mechanisms of real brain operation and to improve the diagnosis and treatment of diseases and injuries of the central nervous system. In ordinary life, for example, it allows us to learn more about what a person prefers (by collecting and analyzing data) and to get closer to the human by creating more personalized interfaces, etc.
  2. Creation of computer models that abstractly reproduce the functioning of neurons in the real brain. This makes it possible to use the advantages of the real brain, such as noise immunity and energy efficiency, in the analysis of large amounts of data. Here, for example, deep learning is gaining popularity.

Like the human brain, neural networks consist of a large number of interconnected elements that mimic neurons. Deep neural networks are based on algorithms by which computers learn from their own experience, forming in the learning process multi-level, hierarchical representations of the world.

The architecture of the British DeepMind programs is, according to one of the co-founders, based on the functioning principles of the brains of different animals. Having worked in the game industry, he went on to get a doctorate at UCL and studied how autobiographical memory works and how hippocampal damage causes amnesia. The head of Facebook AI Research also sees the future of machine learning in the further study of the functioning principles of living neural systems and their transfer to artificial networks. He draws this analogy: we are not trying to make mechanical bats, but we do study the physical laws of airflow around the wing while building airplanes — the same principle should be used to improve neural networks.

Deep learning developers always take into account the human brain's features — the construction of its neural networks, its learning and memory processes, etc. — trying to apply the principles of their operation and to model the structure of billions of interconnected neurons. As a result, deep learning is a step-by-step process similar to a human's learning process. To achieve this, it is necessary to provide the neural network with a huge amount of data to train the system to classify data clearly and accurately.

In fact, the network receives a series of impulses as inputs and gives outputs, just like the human brain. At each moment, each neuron has a certain value (analogous to the electric potential of biological neurons) and, if this value exceeds a threshold, the neuron sends a single impulse, after which its value drops below the average for 2–30 ms (an analog of the recovery process in biological neurons, the so-called refractory period). When pushed out of equilibrium, the neuron's potential smoothly tends back towards the average value.
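A minimal sketch of the spiking behaviour just described (the time step, threshold, reset depth, and decay constant are illustrative assumptions):

```python
import numpy as np

def simulate(drive, threshold=1.0, rest=0.0, reset=-0.5, tau=10.0, dt=1.0):
    """Integrate the input drive; when the value crosses the threshold,
    emit an impulse, drop below the average level, then decay back."""
    v, spike_times = rest, []
    for t, i_in in enumerate(drive):
        v += dt * ((rest - v) / tau + i_in)   # leak towards rest + input
        if v >= threshold:
            spike_times.append(t)             # single impulse
            v = reset                         # refractory-like dip below average
    return spike_times

print(simulate(np.full(100, 0.15)))   # regular spiking under constant drive
```

This is essentially a leaky integrate-and-fire unit, the simplest common abstraction of the potential dynamics described above.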

In general, deep learning is very similar to the process of human learning, with a staged process of abstraction. Each layer has a different "weighting", and this weighting reflects what is known about the components of the images. The higher the layer level, the more specific the components it captures. Like in the human brain, the source signal in deep learning passes through successive processing layers, moving from a partial, shallow understanding to a general, deep abstraction, at which point it can perceive the object.

An important part of creating and training neural networks is also the understanding and application of cognitive science. This is a sphere that studies the mind and the processes in it, combining the elements of philosophy, psychology, linguistics, anthropology, and neurobiology. Many scientists believe that the creation of artificial intelligence is just another way of applying cognitive science, demonstrating how human thinking can be modeled in machines. A striking example of cognitive science is the Kahneman decision-making model, determining how a person makes a choice at any given moment — consciously or not (now often used in marketing AI).

At the moment, the biggest challenges for deep learning lie in the field of understanding language and conducting dialogue: systems must learn to operate on abstract meanings that are described semantically (creativity and understanding the meaning of speech). And yet, despite the rapid development of this area, the human brain is still considered the most advanced "device" among neural networks: 100 trillion synaptic connections, organized into the most complex architecture.
Still, scientists believe that within the next half-century (forecasts vary greatly — from 10 to 100 years), we will be able to take the step towards artificial neural networks that exceed human capabilities.


An Introduction to Spiking Neural Networks (Part 1)

Recently, the Elon Musk-owned neurotech startup Neuralink announced its ambitious plans to enhance the human brain's computational capabilities by implanting minuscule, flexible robotic electrodes onto the surface of the brain. These nanomachines would then effectively be able to function as a part of your brain, making us superhuman cyborgs if all goes according to plan!

That brings us to the question, how would these nanomachines be able to process the signals in our brain and further contribute additional signals to enhance the brain’s capabilities? In order to understand this, let’s first take a look at how the neurons in the brain are wired and how information is represented and transmitted by them.

We will also then see how the biological neural networks in our brain compare to the artificial neural networks (ANNs) that have led to the emergence of deep learning. Finally, we will explore whether modeling neural networks using more biologically realistic neuron models and learning rules could be the next big step in the world of deep learning.

I am writing this article mainly for readers who are fascinated by the seemingly magical powers of deep learning and have little to no background in biology, so pardon me if some of the next information seems very elementary to some of you!

Understanding how biological neurons create and transmit information

Although there are hundreds of different types of neurons in the brain, the standard diagram found in most introductory textbooks can be considered a good functional representation of a neuron.

The biochemical conditions in our brain are such that concentrations of ions are unequal inside and outside the membrane of a neuron which leads to the development of a potential difference. There is a higher concentration of positively charged ions outside the cell membrane, which means that there is a negative potential difference between the inside and the outside.

Neurons communicate with each other through voltage spikes! That is, from an engineer's point of view, a neuron is nothing but a battery with a switch that can be closed only for the tiniest instant, thus producing a voltage spike and then opening again!

Biologists will complain that this is a very crude way to describe neuron function, but it works. Although there are beautiful biochemical mechanisms at play behind all of this, essentially a biological neuron is a battery with a switch that stays closed only momentarily. If you are interested in understanding how the sodium-potassium pump works and how voltage spikes are generated, I would recommend watching Khan Academy's video series on this topic:

You might be thinking, "Wait, that sounds easy enough, but when does the switch close?" In order to answer that, allow me to introduce you to the concept of the 'receptive field' of a neuron.

Receptive field

Let us consider the neurons in your eyes (known as photoreceptors) to understand this concept. The photoreceptors are connected to the visual cortex (the part of your brain that processes visual input) through the optic nerve and some other neurons in the eye.

Now read this carefully! Each photoreceptor's 'switch' closes when light is incident upon it in a particular manner. The exact way in which light has to be incident on a photoreceptor to close its switch is different for each photoreceptor.

For example, one photoreceptor may respond to red light falling on it at 60 degrees. Another photoreceptor may be receptive to the intensity of light; its switch may close when light below a certain intensity level falls on it at any angle. The particular way in which light needs to fall upon a photoreceptor is called its 'receptive field', and when that condition is met we say the neuron fires an 'action potential' (a fancy biologists' way of saying a spike of high voltage). If the manner in which light is incident on a neuron does not align with its receptive field, the neuron will not be as 'activated', and its switch is much less likely to close than when the light does align with its receptive field.

Each neuron in the nervous system has a receptive field! It is easy for us to figure out what the receptive field is for neurons that directly receive external input from the surroundings of the organism. However, as we go deeper inside the nervous system, the receptive fields of neurons become more complex. For example, there may be a neuron in your brain that fires an action potential when you see a picture of Cristiano Ronaldo or one that fires when you hear high pitched screaming as you are walking through the woods!

But how does that neuron deep inside your brain come to know that you are looking at a picture of Cristiano Ronaldo? Drawing a parallel to ConvNets, our neuron here has learned to detect a feature. But what learning rule is at play here? Does our brain actually carry out backpropagation?! Before I answer that, let me explain to you how action potentials are transmitted to other neurons.

When a neuron fires an action potential, the voltage spike travels along the axon of the neuron (which essentially acts like a conducting wire; for the biochemical mechanism, refer to Khan Academy). When the action potential reaches an axon terminal (the end part of an axon), it causes the release of neurotransmitters.

What are neurotransmitters?

Neurotransmitters are chemicals stored in bags called synaptic vesicles in the axon terminals. These bags burst when an action potential reaches the terminal. The chemicals travel through the region between the axon terminal of the initial neuron (the presynaptic neuron) and the next neuron's dendrites (the postsynaptic neuron). The connection between two neurons is called a synapse. The neurotransmitters attach to receptors present on the dendrites of the postsynaptic neuron. The effect of the neurotransmitters attaching to receptors depends on the type of neurotransmitter. Let's see what the types are.

Types of neurotransmitters

Excitatory and inhibitory are the two types of neurotransmitters. When excitatory neurotransmitters attach onto receptors, they cause a net flow of positive ions into the neuron, which is called depolarization. The openings in the neuron’s cell membrane through which the positive ions flow are called ion channels.

Initially, the voltage changes occurring due to the opening of ion channels are linear and additive in nature. However, once enough positive ions have come inside and the potential difference has reached a value called the 'threshold potential', there is a sudden mass entry of positive ions, which results in an action potential (i.e., the closing of the switch). After an action potential has occurred, the neuron returns to its initial stable state over a period of time.

On the other hand, if an inhibitory neurotransmitter attaches to the receptors, it causes the opening of ion channels that allow negatively charged ions to flow in or positively charged ions to flow out. Due to this, the potential difference between the exterior and interior becomes more negative; this process is called hyperpolarization.

How different types of neurotransmitters together determine neuron function

Almost all neurons in the brain have thousands of synapses (connections to other neurons). Neurons themselves can be classified as excitatory or inhibitory. Excitatory neurons contain only excitatory neurotransmitters in their synaptic vesicles, whereas inhibitory neurons contain only inhibitory neurotransmitters. It is reasonable to assume that the average neuron in the brain is connected to tens or maybe hundreds of neurons of each type. As a result, at any point in time, a neuron in an active organism would be receiving inputs at multiple synapses. The net effect is that inhibitory and excitatory presynaptic neurons compete to influence the activity of our postsynaptic neuron. Sometimes the excitatory neurons win and the postsynaptic neuron fires a spike; at other times the inhibitory neurons manage to shut it up for quite a long time!

Now here's what makes it even more interesting. If the neuron that fired a spike is inhibitory, it will try to suppress the other neurons it is connected to. Thus, even though its presynaptic excitatory neurons won the battle and managed to get it to spike, they might lose the war, because our current neuron may suppress other excitatory neurons from firing and slow down the flow of information!

Strength of connections between neurons

The strength of a connection between two neurons decides to what extent the presynaptic neuron is able to influence the spiking behavior of the postsynaptic neuron. The strength depends upon the amount of neurotransmitter the presynaptic neuron releases and how receptive the postsynaptic neuron is to it. The strength of a connection between two neurons is not fixed. I will discuss how the magnitudes of connections are changed in the next part of this series :).

What happens after an action potential is fired?

Earlier in this post I said that after a neuron fires an action potential, it returns to its stable state. However, this is not an instantaneous process. The time period after the end of an action potential is called the refractory period.

During the refractory period, it is difficult or in some cases impossible to trigger another action potential in the neuron no matter how strong the excitatory inputs to it are. This happens because the sodium ion channels which play a crucial role in the depolarization phase that causes the action potential are in an inactivated state and do not open at any membrane potential.

This is how it is ensured that a neuron receiving constant excitatory input does not go on a crazy firing spree and end up producing a lot more information for other neurons to process than necessary.

Computational models of neurons

How can we make computer models of biological neurons? Note that all information transmission related processes in biological neurons occur due to change in concentrations of ions. Could we model these biochemical processes using differential equations?

In 1952, two scientists, Hodgkin and Huxley, attempted to do so and ended up winning a Nobel Prize for their efforts! They modeled the cell membrane of the neuron as an electric circuit, similar to the one shown below:

They considered only the two most common ions involved in the generation of action potentials: sodium (Na+) and potassium (K+). They also incorporated a 'leak current' into their model, which basically accounts for the cell membrane not being completely impermeable to ions at all times. Using Ohm's Law (V = IR), they wrote down the following circuit equations:
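The equations themselves did not survive the page conversion; in their standard Hodgkin–Huxley form, the three ionic currents are

$$I_{Na} = g_{Na} m^3 h (V - E_{Na}), \qquad I_K = g_K n^4 (V - E_K), \qquad I_L = g_L (V - E_L)$$

where the $g$ terms are maximal conductances, the $E$ terms are reversal potentials, and $m$, $h$, $n$ are the gating variables governing how open each channel is.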

We know that the current flowing through the circuit is the sum of these three currents and the current through the capacitor. Using the current-voltage relation for a capacitor we get:
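Again the equation itself is missing here; in its standard form it reads

$$C_m \frac{dV}{dt} = I_{ext} - I_{Na} - I_K - I_L$$

where $C_m$ is the membrane capacitance and $I_{ext}$ is the externally applied current. Solving this differential equation numerically reproduces the shape of the action potential.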


Our Brain Can Be Used As A Pattern

The neuron, also called the "brain cell", is an electrically excitable cell that processes and transmits information by electrochemical signals. The synapse is a junction between a pair of neurons where the impulse is transmitted. Neurons rarely divide and are generally not replaced by new ones. The synapses, on the other hand, can be modified in such a way that we can make connections where signals had never been transmitted before. The interplay of these processes generates the systematically changing structures of our brain.

Neurons – brain cells (Source: Pixabay)

Like biological neurons, an artificial neural network's simulated neurons work together.

To each connection between one synthesized neuron and another, we assign a value called a weight. This number represents the strength of the linkage and defines the internal procedures of the machine.

Every time we use the NN, the system re-modifies the weights, targeting the minimization of error, which, in turn, gets us one step closer to our goal – the correct classification of the data. This means that the outputs are correct or have a good probability of being the correct solutions.
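A minimal sketch of this error-driven re-modification of the weights, for a single linear unit (the learning rate and the squared-error objective are illustrative choices):

```python
import numpy as np

def train_step(w, x, target, lr=0.1):
    """Nudge the weights so the output moves towards the target:
    one gradient-descent step on the squared error 0.5*(output-target)^2."""
    error = np.dot(w, x) - target
    return w - lr * error * x

w = np.zeros(3)
x, target = np.array([1.0, 0.5, -0.2]), 1.0
for _ in range(50):
    w = train_step(w, x, target)
print(np.dot(w, x))   # ~1.0: the error has been minimized
```

Each pass over the data repeats this adjustment for every example, which is what "re-modifying the weights" amounts to in practice.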

Machine learning is intended for a larger purpose: achieving a high level of intelligence.

Using an innovative artificial intelligence tool, the NN learns how to generate contextually relevant reviews. For example, if we ask for the best food around us, the system will answer, but the language will include various adjectives that are not consistent with our way of talking.

If we use every single input available, a valuable result with high performance could be obtained.


Discussion

We here present an initial framework for quantitatively comparing any artificial neural network to the brain’s neural network for visual processing. With even the relatively small number of brain benchmarks that we have included so far, the framework already reveals interesting patterns: It extends prior work showing that performance correlates with brain similarity, and our analysis of state-of-the-art networks yielded DenseNet-169, CORnet-S and ResNet-101 as the current best models of the primate visual stream. On the other hand, we also find a potential disconnect between ImageNet performance and Brain-Score: many of the best ImageNet models fall behind other models on Brain-Score, with the winning DenseNet-169 not being the best ImageNet model, and even small networks (“BaseNets”) with poor ImageNet performance achieving reasonable scores.

We do not believe that our initial set of chosen metrics is perfect, and we expect the metrics to evolve in several ways:

By including more data of the same type used here

More neural sites collected with even the same set of images will provide more independent data samples, ensuring that models do not implicitly overfit a single set of benchmarks. Moreover, more data from more individuals will allow us to better estimate between-participant variability (i.e., the noise ceiling), establishing the upper bound of where models could possibly be (see below).

By acquiring the same types of data using new images

Presently, our datasets use naturalistic images, generated by pasting objects on random backgrounds. While these datasets are already extremely challenging, we will be able to test a model's ability to generalize beyond its training set more stringently by expanding our datasets to more classes of images (e.g., photographs, distorted images (Geirhos et al., 2018), artistic renderings (Kubilius et al., 2018a), images optimized for neural responses (Bashivan et al., 2018)).

By acquiring the same types of data from other brain regions

The current benchmarks include V4, IT and behavioral readouts, but visual stimuli are first processed by the retina, LGN, V1 and V2 in the ventral stream. Including spiking neural data from these regions further constrains models in their early processing. Moreover, top-down modulation and control warrants recordings outside the ventral stream in regions such as PFC.

By adding qualitatively new types of data

Our current set of neural responses consists of recordings from implanted electrode arrays, but in humans, fMRI recordings are much more common. Local Field Potential (LFP), ECoG, and EEG/MEG could also be valuable sources of data. Moreover, good models of the primate brain should not only predict neural and behavioral responses but should also match brain structure (anatomy) in terms of number of layers, their order, connectivity patterns, ratios of numbers of neurons in different areas, and so on. Finally, to scale this framework to a more holistic view of the brain, adding benchmarks for other tasks and domains outside of core object recognition is essential.

By providing better experimental estimates of the ceilings of each component score

Note that it is still difficult to establish whether the ANN models are truly plateauing in their brain similarity – as implied in the results presented above – or if we are observing the limitations of our experimental datasets. For instance, neural ceilings only reflect the internal consistency of individual neurons and, in that sense, are only an upper bound on the ceiling. That is, those neural responses are collected from individual monkeys, and it may be unreasonable to expect a single model to correctly predict every monkey's neuron responses. A more reasonable ceiling might therefore need to reflect the consistency of an average monkey, leaving individual variabilities aside. However, in typical neuroscience experiments, recordings from only two monkeys are obtained, making it currently impossible to directly estimate these potentially lower ceilings.

Behavioral ceilings, on the other hand, might not be prone to such ceiling effects, as they are already estimated using multiple humans' responses (i.e., the "pooled" human data; see Rajalingham et al. (2015, 2018)). However, reaching consistency with the pooled human behavioral data may not be the only way that one might want to use ANN models to inform brain science, as across-subject variation is also an important aspect of the data that models should aim to inform on.

By developing new ways to compute the similarity between models and data

Besides neural predictivity, there are multiple possible ways of computing the similarity between models and data, each with its own parameter choices. Others have used, for instance, different versions of linear regression (Agrawal et al., 2014), RDMs (Khaligh-Razavi and Kriegeskorte, 2014; Cichy et al., 2016) or GLMs (Cadena et al., 2017). We see neural predictivity as the currently strongest form of comparing neural responses because it maps between the two systems and makes specific predictions on a spike-rate level. One could also use entirely new types of comparison, such as the precise temporal dynamics of neural responses, which are ignored here even though they are likely to play an important role in brain function (Wang et al., 2018), or causal manipulations that may constrain models more strongly (Rajalingham and DiCarlo, 2018).
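As an illustration of regression-based neural predictivity, here is a minimal sketch (plain least squares stands in for the regression actually used in the benchmark, and the array shapes and synthetic data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 50))   # 100 images x 50 model features
resp = feats @ rng.normal(size=(50, 20)) + 0.1 * rng.normal(size=(100, 20))

# fit a linear map from model features to each neural site on a train split
train, test = slice(0, 80), slice(80, 100)
coef, *_ = np.linalg.lstsq(feats[train], resp[train], rcond=None)
pred = feats[test] @ coef

# score: per-site correlation between predicted and held-out responses
r = [np.corrcoef(pred[:, i], resp[test, i])[0, 1] for i in range(20)]
print(np.median(r))   # one number summarizing neural predictivity
```

In the actual benchmark, such scores are additionally normalized by the noise ceilings discussed above, so a model is not penalized for variance the recordings themselves cannot resolve.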

By developing brain scores that are tuned separately for the non-human primate and the human

Our current set of benchmarks consists of recordings in macaques and behavioral measurements in humans, and models are thus implicitly assumed to fit both of these primates. We do not believe that one ANN model should ultimately fit both species, so we imagine future versions of Brain-Score will treat them separately.

We caution that while Brain-Score reveals that one model is better than another, it does not yet reveal why that is the case. Due to current experimental constraints, we are not yet able to use Brain-Score to actually train a model. Both of these are key goals of our ongoing work.

To aid future efforts of aligning neural networks and the brain, we are building tools that allow researchers to quickly get a sense of how their model scores against the available brain data on multiple dimensions, as well as to compare it against other models. Researchers can use our online platform Brain-Score.org to obtain all available brain data, submit new data, and score their models on standardized benchmarks. The online platform provides an interface for submitting candidate models, which are then automatically run on the current version of all benchmarks (code open-sourced at github.com/brain-score), after which the submitting user is notified of the scores.

By providing this initial set of benchmarks we hope to ignite a discussion and further community-wide efforts around even better metrics, brain data and models. In this respect, our field is far closer to the beginning than the end, but it is important to get started and this is our version of such a start. We hope that Brain-Score will become a way of keeping track of computational models of the brain in terms of “how close we are” and quickly identifying the strongest model for a specific benchmark.


Beyond System 1 neural networks

One thing that can’t be denied, however, is that humans do in fact extract rules from their environment and develop abstract thoughts and concepts that they use to process and analyze new information. This complex symbol manipulation enables humans to compare and draw analogies between different tasks and perform efficient transfer learning. Understanding and applying causality remain among the unique features of the human brain.

“It is certainly the case that humans can learn abstract rules and extrapolate to new contexts in a way that exceeds modern ANNs. Calculus is perhaps the best example of learning to apply rules across different contexts. Discovering natural laws in physics is another example, where you learn a very general rule from a set of limited observations,” Hasson and Nastase say.

These are the kind of capabilities that emerge not from the activations and interactions of a single neural network but are the result of the accumulated knowledge across many minds and generations.

This is one area where direct-fit models fall short, Hasson and Nastase acknowledge. In the scientific literature, this is referred to as System 1 and System 2 thinking. System 1 refers to the kind of tasks that can be learned by rote, such as recognizing faces, walking, running, driving. You can perform most of these capabilities subconsciously, while also performing some other task (e.g., walking and talking to someone else at the same time, driving and listening to the radio). System 2, however, requires concentration and conscious thinking (can you solve a differential equation while jogging?).

“In the paper, we distinguish fast and automatic System 1 capacities from the slow and deliberate cognitive functions,” Hasson and Nastase say. “While direct fit allows the brain to be competent while being blind to the solution it learned (similar to all evolved functional solutions in biology), and while it explains the ability of System 1 to learn to perceive and act across many contexts, it still doesn’t fully explain a subset of human functions attributed to System 2 which seems to gain some explicit understanding of the underlying structure of the world.”

So what do we need to develop AI algorithms that have System 2 capabilities? This is one area where there’s much debate in the research community. Some scientists, including deep learning pioneer Yoshua Bengio, believe that pure neural network-based systems will eventually lead to System 2 level AI. New research in the field shows that advanced neural network structures manifest the kind of symbol manipulation capabilities that were previously thought to be off-limits for deep learning.

In “Direct Fit to Nature,” the authors support the pure neural network–based approach. In their paper, they write: “Although the human mind inspires us to touch the stars, it is grounded in the mindless billions of direct-fit parameters of System 1. Therefore, direct-fit interpolation is not the end goal but rather the starting point for understanding the architecture of higher-order cognition. There is no other substrate from which System 2 could arise.”

An alternative view is the creation of hybrid systems that incorporate classic symbolic AI with neural networks. The area has drawn much attention in the past year, and there are several projects that show that rule-based AI and neural networks can complement each other to create systems that are stronger than the sum of their parts.

“Although non-neural symbolic computing—in the vein of von Neumann’s model of a control unit and arithmetic logic units—is useful in its own right and may be relevant at some level of description, the human System 2 is a product of biological evolution and emerges from neural networks,” Hasson and Nastase wrote in their comments to TechTalks.

In their paper, Hasson and Nastase expand on some of the possible components that might develop higher capabilities for neural networks. One interesting suggestion is providing a physical body for neural networks to experience and explore the world like other living beings.