
Neural computation


Neural computation is the information processing performed by networks of neurons. Neural computation is affiliated with the philosophical tradition known as the computational theory of mind, also referred to as computationalism, which advances the thesis that neural computation explains cognition. The first people to propose an account of neural activity as computational were Warren McCulloch and Walter Pitts in their seminal 1943 paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity".


There are three general branches of computationalism: classicism, connectionism, and computational neuroscience. All three branches agree that cognition is computation; however, they disagree on what sorts of computations constitute cognition. The classicist tradition holds that computation in the brain is digital, analogous to digital computing. Both connectionism and computational neuroscience do not require that the computations realizing cognition be digital.

The Parallel Distributed Processing (PDP) volumes introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside the input and output units, and the use of a sigmoid activation function instead of the old "all-or-nothing" function. Their work built upon that of John Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions. From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone.
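As a minimal illustration of that change in activation functions, the sketch below (Python; the function names and sample inputs are invented for illustration) contrasts the perceptron's all-or-nothing threshold with the smooth sigmoid used in PDP-style networks:

```python
import math

def step(x: float) -> float:
    """All-or-nothing activation used in the original perceptron."""
    return 1.0 if x >= 0.0 else 0.0

def sigmoid(x: float) -> float:
    """Smooth, differentiable activation popularized by the PDP work."""
    return 1.0 / (1.0 + math.exp(-x))

# The step function jumps from 0 to 1; the sigmoid changes gradually,
# which is what makes gradient-based training of hidden layers possible.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  step={step(x):.0f}  sigmoid={sigmoid(x):.3f}")
```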

In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify nonlinearly separable pattern classes. In 1972, Shun'ichi Amari produced an early example of a self-organizing network. There was some conflict among artificial intelligence researchers as to what neural networks are useful for. Around the late 1960s there was a widespread lull in research and publications on neural networks, "the neural network winter", which lasted through the 1970s.

McCulloch and Pitts approached neural circuitry through formal, mathematical means, and Frank Rosenblatt published the 1958 paper "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain" in Psychological Review, while working at the Cornell Aeronautical Laboratory. The first wave ended with the 1969 book about the limitations of the original perceptron idea, written by Marvin Minsky and Seymour Papert, which contributed to discouraging major funding agencies in the US from investing in connectionist research.

McCulloch and Pitts pursued a formal and mathematical approach, showing how neural systems could implement first-order logic: their classic paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943) was important in this development. They were influenced by the work of Nicolas Rashevsky in the 1930s and by symbolic logic in the style of Principia Mathematica. Hebb contributed greatly to speculations about neural functioning and proposed a learning principle, Hebbian learning.
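As a rough sketch of the McCulloch-Pitts idea (not their exact formalism, which also included inhibitory inputs), threshold units can compute Boolean connectives; the thresholds below are illustrative:

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fires (1) iff enough inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# Boolean connectives as threshold units:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR = lambda a, b: mp_neuron([a, b], threshold=1)
NOT = lambda a: 1 - a  # inhibition simplified to negation here

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```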

Karl Lashley argued for distributed representations as a result of his failure to find anything like a localized engram in years of lesion experiments. Friedrich Hayek independently conceived the model, first in a brief unpublished manuscript in 1920, then expanded it into a book in 1952. The Perceptron machines were proposed and built by Frank Rosenblatt, who published the 1958 paper "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain" in Psychological Review.
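Hebb's principle is often glossed as "cells that fire together wire together." A minimal sketch, assuming a single rate-coded output unit and the simple rule Δw_i = η·x_i·y (the learning rate and toy data are invented for illustration):

```python
import random

def hebbian_update(w, x, y, lr=0.1):
    """Strengthen each weight in proportion to the co-activity of its
    presynaptic input x_i and the postsynaptic output y."""
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

random.seed(0)
w = [0.0, 0.0]
for _ in range(20):
    x = [random.randint(0, 1), random.randint(0, 1)]
    y = x[0]  # suppose the output unit happens to track the first input
    w = hebbian_update(w, x, y)

# The weight from the correlated input grows; the other stays smaller.
print("learned weights:", w)
```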

Computationalism holds that the mind is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine. Some researchers argued that the trend in connectionism represented a reversion toward associationism and the abandonment of the idea of a language of thought, something they saw as mistaken. In contrast, those very tendencies made connectionism attractive for other researchers. Connectionism and computationalism need not be at odds, but the debate in the late 1980s and early 1990s led to opposition between the two approaches.

There is a scientific journal dedicated to this subject, Neural Computation. Artificial neural networks (ANNs) are a subfield of the research area of machine learning. Work on ANNs has been somewhat inspired by knowledge of neural computation. Connectionism is the name of an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks. Connectionism has had many "waves" since its beginnings. The first wave appeared in 1943 with Warren Sturgis McCulloch and Walter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach.

An overview of this debate is given, for example, by Bechtel & Abrahamsen, Marcus, and Maurer. James Lloyd "Jay" McClelland, FBA (born December 1, 1948) is the Lucie Stern Professor at Stanford University, where he was formerly chair of the Psychology Department. He is best known for his work on statistical learning and Parallel Distributed Processing, applying connectionist models (or neural networks) to explain cognitive phenomena such as spoken word recognition and visual word recognition. McClelland is to a large extent responsible for the large increase in scientific interest in connectionism in the 1980s.

Some authors now argue that any such split is more conclusively characterized as one between computationalism and dynamical systems. In 2014, Alex Graves and others from DeepMind published a series of papers describing a novel deep neural network structure called the Neural Turing Machine, able to read symbols on a tape and store symbols in memory. Relational Networks, another deep network module published by DeepMind, are able to create object-like representations and manipulate them to answer complex questions. Relational Networks and Neural Turing Machines are further evidence that connectionism and computationalism need not be at odds. Smolensky's Subsymbolic Paradigm has to meet the Fodor-Pylyshyn challenge formulated by classical symbol theory.

On this view, the connectionist architecture is simply the manner in which organic brains happen to implement the symbol-manipulation system. This is logically possible, as it is well known that connectionist models can implement symbol-manipulation systems of the kind used in computationalist models, as indeed they must be able to if they are to explain the human ability to perform symbol-manipulation tasks. Several cognitive models combining both symbol-manipulative and connectionist architectures have been proposed, among them Paul Smolensky's Integrated Connectionist/Symbolic Cognitive Architecture (ICS) and Ron Sun's CLARION cognitive architecture.


McClelland was born on December 1, 1948, to Walter Moore and Frances (Shaffer) McClelland. He received a B.A. in Psychology from Columbia University in 1970 and a Ph.D. in Cognitive Psychology from the University of Pennsylvania in 1975. He married Heidi Marsha Feldman on May 6, 1978, and has two daughters. In 1986, McClelland published Parallel Distributed Processing: Explorations in the Microstructure of Cognition.

One area where connectionist models are thought to be biologically implausible is with respect to the error-propagation networks that are needed to support learning. However, error propagation can explain some of the biologically generated electrical activity seen at the scalp in event-related potentials such as the N400 and P600, and this provides some biological support for one of the key assumptions of connectionist learning procedures. Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear, dynamic systems approaches.

Parallel Distributed Processing (PDP), by James L. McClelland, David E. Rumelhart et al., introduced a couple of improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of a sigmoid activation function instead of the old "all-or-nothing" function. Hopfield approached the field from the perspective of statistical mechanics, providing some early forms of mathematical rigor that increased the perceived respectability of the field.

Rosenblatt published the 1958 paper "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain" in Psychological Review, while working at the Cornell Aeronautical Laboratory. He cited Hebb, Hayek, Uttley, and Ashby as main influences. Another form of connectionist model was the relational network framework developed by the linguist Sydney Lamb in the 1960s. The research group led by Widrow empirically searched for methods to train two-layered ADALINE networks (MADALINE), with limited success. A method to train multilayered perceptrons with arbitrary levels of trainable weights was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965.
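A minimal sketch of Rosenblatt-style perceptron training (the toy task, learning rate, and epoch count are invented for illustration): the error-driven rule w ← w + η(t − ŷ)x converges on linearly separable tasks like OR, but never on XOR, which is the limitation Minsky and Papert later emphasized:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt-style rule: nudge weights by the prediction error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable toy task (logical OR):
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(data))
```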

The "neural network winter" lasted through the 1970s, during which the field of artificial intelligence turned towards symbolic methods. The publication of Perceptrons (1969) is typically regarded as a catalyst of this event. The second wave began in the early 1980s. Some key publications included (John Hopfield, 1982), which popularized Hopfield networks; the 1986 paper that popularized backpropagation; and the 1986 two-volume book about Parallel Distributed Processing.
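A minimal sketch of the Hopfield network idea popularized in 1982 (the pattern, size, and update schedule are illustrative): one pattern is stored with the Hebbian outer-product rule, then recalled from a corrupted cue by asynchronous threshold updates, each of which cannot increase the network's energy E = -(1/2) sᵀWs:

```python
import numpy as np

def hopfield_weights(patterns):
    """Hebbian outer-product storage; no self-connections."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Asynchronous sign updates descend the network's energy."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = hopfield_weights([pattern])
noisy = [1, -1, 1, -1, -1, -1]  # one flipped bit
print(recall(W, noisy))         # recovers the stored pattern
```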

The Fodor-Pylyshyn challenge, formulated by classical symbol theory, sets the bar for a convincing theory of cognition in modern connectionism. In order to be an adequate alternative theory of cognition, Smolensky's Subsymbolic Paradigm would have to explain the existence of systematicity, or systematic relations in language cognition, without the assumption that cognitive processes are causally sensitive to the classical constituent structure of mental representations. The subsymbolic paradigm, or connectionism in general, would thus have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture.

Parallel Distributed Processing: Explorations in the Microstructure of Cognition, written with David Rumelhart, is still regarded by some as a bible for cognitive scientists. Geoffrey Hinton was a member of the PDP group. McClelland's present work focuses on learning, memory processes, and psycholinguistics, still within the framework of connectionist models. He is a former chair of the Rumelhart Prize committee, having collaborated with Rumelhart for many years, and himself received the award in 2010.

Perceptrons discouraged major funding agencies in the US from investing in connectionist research. With a few noteworthy exceptions, most connectionist research entered a period of inactivity until the mid-1980s. The term connectionist model was reintroduced in a 1982 paper in the journal Cognitive Science by Jerome Feldman and Dana Ballard. The second wave blossomed in the late 1980s, following the 1986 two-volume book about Parallel Distributed Processing by James L. McClelland, David E. Rumelhart et al., which introduced improvements to the simple perceptron idea, such as hidden layers and sigmoid activation functions.

He received the award in 2010 at the Cognitive Science Society Annual Conference in Portland, Oregon. McClelland and David Rumelhart are known for their debate with Steven Pinker and Alan Prince regarding the necessity of a language-specific learning module. In fall 2006 McClelland moved to Stanford University from Carnegie Mellon University, where he had been a professor of psychology and cognitive neuroscience. He also holds

Neurons send signals to a succeeding layer, in the case of a feedforward network, or back to a previous layer, in the case of a recurrent network. The discovery of non-linear activation functions enabled the second wave of connectionism. Neural networks follow two basic principles: any mental state can be described as a vector of numeric activation values over neural units, and memory and learning are implemented by modifying the weights of the connections between units. Most of the variety among the models comes from the interpretation of units, the definition of activation, and the learning algorithm. Connectionist work in general does not need to be biologically realistic. One area where connectionist models are thought to be biologically implausible is in the error-propagation networks needed to support learning.
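The two signal-flow regimes just described can be sketched as follows (the weights, layer sizes, and random seed are invented for illustration): in the feedforward case activation moves only toward the next layer, while in the recurrent case the hidden state also feeds back into itself, so the internal state evolves over time:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 2))   # input-to-hidden weights
W_rec = rng.normal(size=(3, 3))  # hidden-to-hidden (recurrent) weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward_step(x):
    """Signals flow only to the succeeding layer."""
    return sigmoid(W_in @ x)

def recurrent_step(x, h):
    """The hidden state also feeds back into itself on the next step,
    so the network's internal state changes over time."""
    return sigmoid(W_in @ x + W_rec @ h)

h = np.zeros(3)
for t in range(3):
    h = recurrent_step(np.array([1.0, 0.0]), h)
    print(f"t={t}", h.round(3))
```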


If cognitive processes were grounded in the classical constituent structure of mental representations, the theory of cognition the Subsymbolic Paradigm develops would be, at best, an implementation architecture of the classical model of symbol theory and thus not a genuine alternative (connectionist) theory of cognition. The classical model of symbolism is characterized by (1) a combinatorial syntax and semantics of mental representations and (2) mental operations as structure-sensitive processes, based on the fundamental principle of the syntactic and semantic constituent structure of mental representations.

Connectionism and computational neuroscience do not require that the computations realizing cognition be digital. However, the two branches greatly disagree upon which sorts of experimental data should be used to construct explanatory models of cognitive phenomena. Connectionists rely upon behavioral evidence to construct models to explain cognitive phenomena, whereas computational neuroscience leverages neuroanatomical and neurophysiological information to construct mathematical models that explain cognition. When comparing the three traditions, it is helpful to define what is meant by computation in a general sense.

Researchers such as the connectionist Paul Smolensky have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear, dynamic systems approaches. Precursors of the connectionist principles can be traced to early work in psychology, such as that of William James. Psychological theories based on knowledge about the human brain were fashionable in the late 19th century. As early as 1869, the neurologist John Hughlings Jackson argued for multi-level, distributed systems.

Units in the network could represent neurons and the connections could represent synapses, as in the human brain. This principle has been seen as an alternative to GOFAI and the classical theories of mind based on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception. Internal states of any network change over time due to neurons sending a signal to a succeeding layer of neurons, in the case of a feedforward network, or to a previous layer, in the case of a recurrent network.

The debate in the late 1980s and early 1990s led to opposition between the two approaches. Throughout the debate, some researchers have argued that connectionism and computationalism are fully compatible, though full consensus on this issue has not been reached. The two approaches differ in several respects. Despite these differences, some theorists have proposed that the connectionist architecture is simply the manner in which organic brains happen to implement the symbol-manipulation system.

The debate rests on whether this symbol manipulation forms the foundation of cognition in general, so this is not a potential vindication of computationalism. Nonetheless, computational descriptions may be helpful high-level descriptions of, for example, the cognition of logic. The debate was largely centred on logical arguments about whether connectionist networks could produce the syntactic structure observed in this sort of reasoning. This was later achieved, although using fast-variable binding abilities outside of those standardly assumed in connectionist models.

These disadvantages included the difficulty in deciphering how ANNs process information or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level. The current (third) wave has been marked by advances in deep learning, which have made possible the creation of large language models. The success of deep-learning networks in the past decade has greatly increased the popularity of this approach.

Connectionism would have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture. This challenge implies a dilemma: if the Subsymbolic Paradigm could contribute nothing to the systematicity and compositionality of mental representations, it would be insufficient as a basis for an alternative theory of cognition. However, if the Subsymbolic Paradigm's contribution to systematicity requires mental processes grounded in the classical constituent structure of mental representations, it is a mere implementation of the classical architecture rather than an alternative.

These structure-sensitive processes rest on the fundamental principle of the syntactic and semantic constituent structure of mental representations, as used in Fodor's "Language of Thought" (LOT). This can be used to explain the following closely related properties of human cognition: (1) productivity, (2) systematicity, (3) compositionality, and (4) inferential coherence. This challenge has been met in modern connectionism, for example, not only by Smolensky's "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture", but also by Werning and Maye's "Oscillatory Networks". An overview of this debate is given, for example, by Bechtel & Abrahamsen, Marcus, and Maurer.

In the late 1980s, some researchers (including Jerry Fodor, Steven Pinker and others) reacted against connectionism. They argued that connectionism, as then developing, threatened to obliterate what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism. Computationalism is a specific form of cognitivism that argues that mental activity is computational.


Connectionist models may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.), or in unhelpfully low-level terms. In this sense, connectionist models may instantiate, and thereby provide evidence for, a broad theory of cognition (i.e., connectionism), without representing a helpful theory of the particular process that is being modelled. In this sense, the debate might be considered as to some extent reflecting a mere difference in the level of analysis in which particular theories are framed.

Some researchers suggest that the analysis gap is the consequence of connectionist mechanisms giving rise to emergent phenomena that may be describable in computational terms. In the 2000s, the popularity of dynamical systems in philosophy of mind has added a new perspective on the debate; some authors now argue that any split between connectionism and computationalism is more conclusively characterized as a split between computationalism and dynamical systems.

From the late 1980s to the mid-1990s, connectionism took on an almost revolutionary tone when Schneider, Terence Horgan and Tienson posed the question of whether connectionism represented a fundamental shift in psychology and so-called "good old-fashioned AI", or GOFAI. Some advantages of the second-wave connectionist approach included its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity for graceful degradation. Its disadvantages included the difficulty in deciphering how ANNs process information and account for the compositionality of mental representations.

The neurologist John Hughlings Jackson argued for multi-level, distributed systems. Following this lead, Herbert Spencer's Principles of Psychology, 3rd edition (1872), and Sigmund Freud's Project for a Scientific Psychology (composed 1895) propounded connectionist or proto-connectionist theories. These tended to be speculative theories. But by the early 20th century, Edward Thorndike was writing about human learning in ways that posited a connectionist-type network.

The neuron either fires an action potential or it does not. Accordingly, neural spike trains can be seen as strings of digits. Alternatively, analog computing systems perform manipulations on non-discrete, irreducibly continuous variables, that is, entities that vary continuously as a function of time. These sorts of operations are characterized by systems of differential equations. Neural computation can be studied, for example, by building models of neural computation. There is a scientific journal dedicated to this subject, Neural Computation.
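An illustrative contrast between the two views described above (the spike train and the leaky-integrator constants are invented): the digital view treats activity as a string of discrete digits, while the analog view tracks a continuous variable governed by a differential equation, here dv/dt = (-v + I)/tau integrated with Euler steps:

```python
import numpy as np

# Digital view: a spike train as a string of binary digits.
spikes = np.array([0, 1, 1, 0, 1, 0, 0, 1])
print("spike train as digits:", "".join(map(str, spikes)))

# Analog view: a membrane potential v(t) as a continuous variable,
# modeled as a leaky integrator dv/dt = (-v + I) / tau.
tau, dt, I = 10.0, 1.0, 1.5  # illustrative constants
v = 0.0
trace = []
for _ in range(20):
    v += dt * (-v + I) / tau  # Euler integration step
    trace.append(round(v, 3))
print("membrane potential over time:", trace)
```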

This work increased the perceived respectability of the field. Another important series of publications proved that neural networks are universal function approximators, which also provided some mathematical respectability. Some early popular demonstration projects appeared during this time. NETtalk (1987) learned to pronounce written English; it achieved popular success, appearing on the Today show. TD-Gammon (1992) reached top human level in backgammon. As connectionism became increasingly popular in the late 1980s, some researchers reacted against it.

The success of deep-learning networks has greatly increased the popularity of this approach, but the complexity and scale of such networks have brought with them increased interpretability problems. The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses, as in the human brain.

When comparing the three main traditions of the computational theory of mind, as well as the different possible forms of computation in the brain, it is helpful to define what we mean by computation in a general sense. Computation is the processing of information, otherwise known as variables or entities, according to a set of rules. A rule in this sense is simply an instruction for executing a manipulation on the current state of the variable, in order to produce a specified output.

In other words, a rule dictates which output to produce given a certain input to the computing system. A computing system is a mechanism whose components must be functionally organized to process the information in accordance with the established set of rules. The types of information processed by a computing system determine which type of computations it performs. Traditionally, in cognitive science there have been two proposed types of computation related to neural activity, digital and analog, with the vast majority of theoretical work incorporating a digital understanding of cognition.

Computing systems that perform digital computation are functionally organized to execute operations on strings of digits with respect to the type and location of the digit in the string. It has been argued that neural spike-train signaling implements some form of digital computation, since neural spikes may be considered as discrete units or digits, like 0 or 1: the neuron either fires an action potential or it does not.


This was later achieved, although using fast-variable binding abilities outside of those standardly assumed in connectionist models. Part of the appeal of computational descriptions is that they are relatively easy to interpret, and thus may be seen as contributing to our understanding of particular mental processes, whereas connectionist models are in general more opaque, to the extent that they may be describable only in very general terms (such as specifying the learning algorithm, the number of units, etc.).

The Group Method of Data Handling, a method to train multilayered perceptrons with arbitrary numbers of trainable layers, was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965. This method employs incremental layer-by-layer training based on regression analysis, where useless units in hidden layers are pruned with the help of a validation set. The first multilayered perceptrons trained by stochastic gradient descent were published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned useful internal representations.
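A very rough sketch of the layer-by-layer idea (true GMDH fits low-order polynomial units over pairs of inputs; here each candidate unit is simplified to a linear least-squares fit on a pair of features, and the data, shapes, and "keep" count are all invented): the network grows one layer at a time by regression, and units that do not help on a held-out validation set are pruned:

```python
import itertools
import numpy as np

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """Fit one GMDH-style layer: each candidate unit is a least-squares
    fit on a pair of inputs; units are ranked by validation error and
    the useless ones are pruned."""
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        A_tr = np.column_stack([np.ones(len(X_tr)), X_tr[:, i], X_tr[:, j]])
        A_va = np.column_stack([np.ones(len(X_va)), X_va[:, i], X_va[:, j]])
        coef, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
        err = np.mean((A_va @ coef - y_va) ** 2)  # validation MSE
        candidates.append((err, A_tr @ coef, A_va @ coef))
    candidates.sort(key=lambda c: c[0])  # prune the worst candidate units
    best = candidates[:keep]
    return (np.column_stack([c[1] for c in best]),
            np.column_stack([c[2] for c in best]),
            best[0][0])

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X[:, 0] * 2.0 + X[:, 1] - 0.5
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]
for layer in range(3):  # incremental, layer-by-layer growth
    X_tr, X_va, err = gmdh_layer(X_tr, y_tr, X_va, y_va)
    print(f"layer {layer}: validation MSE {err:.4f}")
```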

Edward Thorndike was writing about human learning in a way that posited a connectionist-type network. Hopfield networks had precursors in the Ising model due to Wilhelm Lenz (1920) and Ernst Ising (1925), though the Ising model as conceived by them did not involve time. Monte Carlo simulations of the Ising model required the advent of computers in the 1950s. The first wave began in 1943 with Warren Sturgis McCulloch and Walter Pitts, both focusing on comprehending neural circuitry through a formal and mathematical approach.
