
Spectro-temporal receptive field

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.

The spectro-temporal receptive field or spatio-temporal receptive field (STRF) of a neuron represents which types of stimuli excite or inhibit that neuron. "Spectro-temporal" refers most commonly to audition, where the neuron's response depends on frequency versus time, while "spatio-temporal" refers to vision, where the neuron's response depends on spatial location versus time. Thus they are not exactly the same concept, but both are referred to as STRF and serve a similar role in the analysis of neural responses.
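The linear STRF model described in this article predicts a neuron's time-varying firing rate by weighting the recent stimulus history with the STRF. A minimal sketch in Python with NumPy; the toy STRF, the spectrogram, and all dimensions here are illustrative assumptions, not data from any real neuron:

```python
import numpy as np

def strf_response(strf, spectrogram):
    """Linear STRF model: the firing rate at time t is the stimulus
    spectrogram over the last n_lag time bins, weighted by the STRF
    (equivalently, a convolution of the stimulus with the STRF along time)."""
    n_freq, n_lag = strf.shape
    assert spectrogram.shape[0] == n_freq
    n_time = spectrogram.shape[1]
    rate = np.zeros(n_time)
    for t in range(n_time):
        # Stimulus history for the lag window, zero-padded at the start;
        # column n_lag-1 is the current bin, column 0 is the oldest lag.
        start = t - n_lag + 1
        hist = np.zeros((n_freq, n_lag))
        lo = max(start, 0)
        hist[:, lo - start:] = spectrogram[:, lo:t + 1]
        rate[t] = np.sum(strf * hist)
    return rate

# Toy STRF: pure excitation in frequency band 2 at lag 0 (the current bin)
strf = np.zeros((4, 3))
strf[2, -1] = 1.0
spec = np.zeros((4, 10))
spec[2, 5] = 1.0          # a brief tone in band 2 at time bin 5
rate = strf_response(strf, spec)
print(rate.argmax())      # 5: the response peaks when the tone is heard
```

A real analysis would estimate the STRF from recorded responses (e.g. by reverse correlation) rather than specify it by hand; this sketch only shows the forward prediction step.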


If linearity is assumed, the neuron can be modeled as having a time-varying firing rate equal to the convolution of the stimulus with the STRF. The example STRF here is for an auditory neuron from area CM (caudal medial) of a male zebra finch, when played conspecific birdsong. The colour of this plot shows the effect of sound on this neuron: this neuron tends to be excited by sound from about 2.5 kHz to 7 kHz heard by

a₀ = 1, the above function is considered affine in linear algebra (i.e. not linear). A Boolean function is linear if one of the following holds for the function's truth table: Another way to express this is that each variable either always makes a difference in the truth value of the operation or never makes a difference. Negation, logical biconditional, exclusive or, tautology, and contradiction are linear functions. In physics, linearity

ax) = aF(x) for scalar a. This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems

a bigger amplitude than any of the components individually; this is called constructive interference. In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the amplitude of the wave gets smaller. For examples of phenomena that arise when

a certain operating region. For example, a high-fidelity amplifier may distort a small signal, but sufficiently little to be acceptable (acceptable but imperfect linearity), yet may distort very badly if the input exceeds a certain value. For an electronic device (or other physical device) that converts one quantity to another, Bertram S. Kolts writes: There are three basic definitions for integral linearity in common use: independent linearity, zero-based linearity, and terminal, or end-point, linearity. In each case, linearity defines how well

a continuation of Chapter 8 [Interference]. On the other hand, few opticians would regard the Michelson interferometer as an example of diffraction. Some of the important categories of diffraction relate to the interference that accompanies division of the wavefront, so Feynman's observation to some extent reflects the difficulty that we may have in distinguishing division of amplitude and division of wavefront. The phenomenon of interference between waves

a function such as f(x) = ax + b is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from

a ket vector |ψᵢ⟩ into a superposition of component ket vectors |φⱼ⟩ as: |ψᵢ⟩ = Σⱼ Cⱼ|φⱼ⟩, where

a linear function is the function defined by f(x) = (ax, bx) that maps the real line to a line in the Euclidean plane R² that passes through the origin. An example of a linear polynomial in the variables X, Y and Z

a quantum mechanical state is a ray in projective Hilbert space, not a vector. According to Dirac: "if the ket vector corresponding to a state is multiplied by any complex number, not zero, the resulting ket vector will correspond to the same state [italics in original]." However, the sum of two rays to compose a superpositioned ray is undefined. As a result, Dirac himself uses ket vector representations of states to decompose or split, for example,

a superposition is interpreted as a vector sum. If the superposition holds, then it automatically also holds for all linear operations applied on these functions (due to definition), such as gradients, differentials or integrals (if they exist). By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific and simple form, often the response becomes easier to compute. For example, in Fourier analysis,


a superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle holds (which is often, but not always, the case; see nonlinear optics), the behavior of any light wave can be understood as a superposition of the behavior of these simpler plane waves. Waves are usually described by variations in some parameter through space and time: for example, height in

a water wave, pressure in a sound wave, or the electromagnetic field in a light wave. The value of this parameter is called the amplitude of the wave, and the wave itself is a function specifying the amplitude at each point. In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or affect the wave) and the initial conditions of

a wavefront into infinitesimal coherent wavelets (sources), the effect is called diffraction. That is, the difference between the two phenomena is [a matter] of degree only; basically, they are two limiting cases of superposition effects. Yet another source concurs: In as much as the interference fringes observed by Young were the diffraction pattern of the double slit, this chapter [Fraunhofer diffraction] is, therefore,

is aX + bY + cZ + d. Linearity of a mapping is closely related to proportionality. Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are nonlinear. Generalized for functions in more than one dimension, linearity means

is (to put it abstractly) finding a function y that satisfies some equation F(y) = 0 with some boundary specification G(y) = z. For example, in Laplace's equation with Dirichlet boundary conditions, F would be the Laplacian operator in a region R, G would be an operator that restricts y to

is a high-fidelity audio amplifier, which must amplify a signal without changing its waveform. Others are linear filters, and linear amplifiers in general. In most scientific and technological applications, as distinct from mathematical ones, something may be described as linear if its characteristic is approximately but not exactly a straight line; and linearity may be valid only within

is a nonlinear function. By the additive state decomposition, the system can be additively decomposed into
ẋ₁ = Ax₁ + Bu₁ + ϕ(y_d),  x₁(0) = x₀,
ẋ₂ = Ax₂ + Bu₂ + ϕ(cᵀx₁ + cᵀx₂) − ϕ(y_d),  x₂(0) = 0,
with x = x₁ + x₂. This decomposition can help to simplify controller design. According to Léon Brillouin,

is a property of the differential equations governing many systems; for instance, the Maxwell equations or the diffusion equation. Linearity of a homogeneous differential equation means that if two functions f and g are solutions of the equation, then any linear combination af + bg is, too. In instrumentation, linearity means that a given change in an input variable gives the same change in

is based on this idea. When two or more waves traverse the same space, the net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-canceling headphones, the summed variation has a smaller amplitude than the component variations; this is called destructive interference. In other cases, such as in a line array, the summed variation will have
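The two interference cases can be illustrated with a toy numerical example; the frequency, sample count, and phase offset below are arbitrary choices for illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
wave = np.sin(2 * np.pi * 5 * t)               # a 5 Hz component wave

# In-phase waves reinforce: the sum has twice the amplitude (constructive)
constructive = wave + np.sin(2 * np.pi * 5 * t)
# A half-cycle (pi) phase shift cancels the wave entirely (destructive)
destructive = wave + np.sin(2 * np.pi * 5 * t + np.pi)

print(constructive.max())         # ~2.0: larger than either component
print(np.abs(destructive).max())  # ~0.0: the waves cancel (up to rounding)
```

Real noise-canceling systems approximate the destructive case by generating an anti-phase copy of the ambient sound.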

is only available for linear systems. However, the additive state decomposition can be applied to both linear and nonlinear systems. Next, consider a nonlinear system
ẋ = Ax + B(u₁ + u₂) + ϕ(cᵀx),  x(0) = x₀,
where ϕ


is that they are easier to analyze mathematically; there is a large body of applicable mathematical techniques, including frequency-domain linear transform methods such as the Fourier and Laplace transforms, and linear operator theory. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behavior. The superposition principle applies to any linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved,

is the branch of mathematics concerned with systems of linear equations. In Boolean algebra, a linear function is a function f for which there exist a₀, a₁, …, aₙ ∈ {0, 1} such that f(b₁, …, bₙ) = a₀ ⊕ (a₁ ∧ b₁) ⊕ ⋯ ⊕ (aₙ ∧ bₙ) for all b₁, …, bₙ ∈ {0, 1}. Note that if

is the sum (or integral) of all the individual sinusoidal responses. As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely many impulse functions, and the response is then a superposition of impulse responses. Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as

is the sum of the responses that would have been caused by each stimulus individually. So that if input A produces response X, and input B produces response Y, then input (A + B) produces response (X + Y). A function F(x) that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties: additivity F(x₁ + x₂) = F(x₁) + F(x₂) and homogeneity F(
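These two properties can be checked numerically for any candidate function. A minimal sketch for a linear map F(x) = Ax; the matrix A, the test vectors, and the scalar are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # an arbitrary matrix: F(x) = A @ x

def F(x):
    return A @ x

x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
a = 2.5

# Additivity: F(x1 + x2) = F(x1) + F(x2)
additive = np.allclose(F(x1 + x2), F(x1) + F(x2))
# Homogeneity: F(a x) = a F(x)
homogeneous = np.allclose(F(a * x1), a * F(x1))
print(additive, homogeneous)           # True True
```

Replacing F with an affine map such as x ↦ Ax + c (nonzero c) makes the additivity check fail, matching the distinction between linear and affine functions drawn elsewhere in this article.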

is to write it as a superposition (called "quantum superposition") of (possibly infinitely many) other wave functions of a certain type: stationary states, whose behavior is particularly simple. Since the Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle this way. The projective nature of quantum-mechanical state space causes some confusion, because

the Cⱼ) phase change on the Cⱼ does not affect the equivalence class of the |ψᵢ⟩. There are exact correspondences between the superposition presented in the main text of this page and

the Cⱼ ∈ C. The equivalence class of the |ψᵢ⟩ allows a well-defined meaning to be given to the relative phases of the Cⱼ, but an absolute (same amount for all

the animal 12 ms ago, but it is inhibited by sound in the same frequency range from about 18 ms ago. A computational theory for early auditory receptive fields can be expressed from normative physical, mathematical and perceptual arguments, permitting axiomatic derivation of auditory receptive fields in two stages. The shapes of the receptive field functions in these models can be determined by necessity from structural properties of

the boundary of R, and z would be the function that y is required to equal on the boundary of R. If F and G are both linear operators, then the superposition principle says that a superposition of solutions to the first equation is another solution to the first equation:
F(y₁) = F(y₂) = ⋯ = 0  ⇒  F(y₁ + y₂ + ⋯) = 0,
while

the boundary values superpose: G(y₁) + G(y₂) = G(y₁ + y₂). Using these facts, if a list can be compiled of solutions to the first equation, then these solutions can be carefully put into a superposition such that it will satisfy


the context. The word linear comes from Latin linearis, "pertaining to or resembling a line". In mathematics, a linear map or linear function f(x) is a function that satisfies the two properties of additivity, f(x + y) = f(x) + f(y), and homogeneity, f(αx) = αf(x). These properties are known as the superposition principle. In this definition, x is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with

the definition of linear map, is used in elementary mathematics (see below). Additivity alone implies homogeneity for rational α, since f(x + x) = f(x) + f(x) implies f(nx) = nf(x) for any natural number n by mathematical induction, and then nf(x) = f(nx) = f(m(n/m)x) = mf((n/m)x) implies f((n/m)x) = (n/m)f(x). The density of

the device's actual performance across a specified operating range approximates a straight line. Linearity is usually measured in terms of a deviation, or non-linearity, from an ideal straight line, and it is typically expressed in terms of percent of full scale, or in ppm (parts per million) of full scale. Typically, the straight line is obtained by performing a least-squares fit of the data. The three definitions vary in
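The least-squares variant can be sketched concretely: fit a straight line to measured input/output data and report the worst-case deviation as a percent of full scale. The data points below are made-up numbers for a hypothetical nearly linear device:

```python
import numpy as np

# Hypothetical transfer data from a nearly linear device (made-up numbers)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.02, 1.01, 2.05, 2.98, 4.03, 5.00])

# Least-squares straight-line fit, y ~ m*x + b
m, b = np.polyfit(x, y, 1)
residuals = y - (m * x + b)

# Worst-case deviation from the fitted line, as a percent of full scale
full_scale = y.max() - y.min()
nonlinearity = 100.0 * np.abs(residuals).max() / full_scale
print(f"non-linearity: {nonlinearity:.2f}% of full scale")
```

The zero-based and terminal (end-point) definitions mentioned above would instead constrain the reference line to pass through the origin or through the two endpoints, changing only how the line is chosen, not how the deviation is measured.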

the environment combined with requirements about the internal structure of the auditory system to enable theoretically well-founded processing of sound signals at different temporal and log-spectral scales.

Linearity

In mathematics, the term linear is used in two distinct senses for two different properties: An example of

the equation up into smaller pieces, solving each of those pieces, and summing the solutions. In a different usage from the above definition, a polynomial of degree 1 is said to be linear, because the graph of a function of that form is a straight line. Over the reals, a simple example of a linear equation is given by y = mx + b, where m is often called the slope or gradient, and b the y-intercept, which gives

the linear operating region of a device, for example a transistor, is where an output dependent variable (such as the transistor collector current) is directly proportional to an input dependent variable (such as the base current). This ensures that an analog output is an accurate representation of an input, typically with higher amplitude (amplified). A typical example of linear equipment

the manner in which the straight line is positioned relative to the actual device's performance. Also, all three of these definitions ignore any gain or offset errors that may be present in the actual device's performance characteristics.

Superposition principle

The superposition principle, also known as the superposition property, states that, for all linear systems, the net response caused by two or more stimuli

the other side. With regard to wave superposition, Richard Feynman wrote: No-one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two, interfering, then

the output of the measurement apparatus: this is highly desirable in scientific work. In general, instruments are close to linear over a certain range, and most useful within that range. In contrast, human senses are highly nonlinear: for instance, the brain completely ignores incoming light unless it exceeds a certain absolute threshold number of photons. Linear motion traces a straight-line trajectory. In electronics,

the point of intersection between the graph of the function and the y-axis. Note that this usage of the term linear is not the same as in the section above, because linear polynomials over the real numbers do not in general satisfy either additivity or homogeneity. In fact, they do so if and only if the constant term (b in the example) equals 0. If b ≠ 0, the function is called an affine function (see, in greater generality, affine transformation). Linear algebra


the principle of superposition was first stated by Daniel Bernoulli in 1753: "The general motion of a vibrating system is given by a superposition of its proper vibrations." The principle was rejected by Leonhard Euler and then by Joseph Lagrange. Bernoulli argued that any sonorous body could vibrate in a series of simple modes with a well-defined frequency of oscillation. As he had earlier indicated, these modes could be superposed to produce more complex vibrations. In his reaction to Bernoulli's memoirs, Euler praised his colleague for having best developed

the property of a function of being compatible with addition and scaling, also known as the superposition principle. Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line. In the term "linear equation", the word refers to the linearity of the polynomials involved. Because

the quantum superposition. For example, the Bloch sphere representing the pure states of a two-level quantum mechanical system (qubit) is also known as the Poincaré sphere, which represents different types of classical pure polarization states. Nevertheless, on the topic of quantum superposition, Kramers writes: "The principle of [quantum] superposition ... has no analogy in classical physics." According to Dirac: "

the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear. The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking

the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used. Other authors elaborate: The difference is one of convenience and convention. If the waves to be superposed originate from a few coherent sources, say, two, the effect is called interference. On the other hand, if the waves to be superposed originate by subdividing

the second equation. This is one common method of approaching boundary-value problems. Consider a simple linear system:
ẋ = Ax + B(u₁ + u₂),  x(0) = x₀.
By the superposition principle, the system can be decomposed into
ẋ₁ = Ax₁ + Bu₁,  x₁(0) = x₀,
ẋ₂ = Ax₂ + Bu₂,  x₂(0) = 0,
with x = x₁ + x₂. Superposition principle
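The decomposition of a linear system into subsystems can be checked numerically: simulate the full system and the two subsystems with the same Euler steps and confirm that x = x₁ + x₂. The matrices A and B, the inputs u₁ and u₂, and the step size below are arbitrary assumptions for illustration:

```python
import numpy as np

# Arbitrary example system: x' = A x + B (u1 + u2), x(0) = x0
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([0.0, 1.0])
x0 = np.array([1.0, 0.0])
dt, steps = 0.001, 5000

def u1(t): return np.sin(t)     # illustrative inputs, chosen arbitrarily
def u2(t): return 0.3

x = x0.copy()        # full system, x(0) = x0
x1 = x0.copy()       # subsystem driven by u1 alone, x1(0) = x0
x2 = np.zeros(2)     # subsystem driven by u2 alone, x2(0) = 0
for k in range(steps):
    t = k * dt
    x = x + dt * (A @ x + B * (u1(t) + u2(t)))
    x1 = x1 + dt * (A @ x1 + B * u1(t))
    x2 = x2 + dt * (A @ x2 + B * u2(t))

print(np.allclose(x, x1 + x2))  # True: the subsystems sum to the full state
```

Because each Euler update is itself a linear operation, the identity x = x₁ + x₂ holds at every step up to floating-point rounding; for a nonlinear ϕ term, as in the additive state decomposition discussed in this article, the split must be constructed differently.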

the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle, each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the superposition principle, the response to the original stimulus
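The parenthetical claim can be illustrated with a one-pole low-pass filter, a simple linear system; the sample rate, frequency, and filter coefficient below are arbitrary assumptions:

```python
import numpy as np

fs = 1000.0                             # sample rate, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t)         # a 5 Hz sinusoidal stimulus

# One-pole low-pass filter, a simple linear system:
#   y[n] = a*x[n] + (1 - a)*y[n-1]
a = 0.05
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = a * x[n] + (1 - a) * y[n - 1]

# Past the initial transient, the response is still a 5 Hz sinusoid,
# just attenuated and phase-shifted relative to the stimulus.
steady = y[len(y) // 2:]
print(steady.max())                     # < 1.0: same frequency, smaller amplitude
```

The attenuation and phase shift depend on frequency (that dependence is the filter's frequency response), but no new frequencies appear, which is exactly what makes sinusoids convenient building blocks for linear systems.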

the superposition principle does not exactly hold, see the articles nonlinear optics and nonlinear acoustics. In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach to computing the behavior of a wave function

the superposition that occurs in quantum mechanics is of an essentially different nature from any occurring in the classical theory [italics in original]." Though Dirac's reasoning includes atomicity of observation, which is valid, as for phase, what is actually meant is phase translation symmetry, derived from time translation symmetry, which is also applicable to classical states, as shown above with classical polarization states. A common type of boundary value problem

the system. In many cases (for example, in the classic wave equation), the equation describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the individual waves separately. For example, two waves traveling towards each other will pass right through each other without any distortion on
