
Kernel (linear algebra)


In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the codomain; the kernel is always a linear subspace of the domain. That is, given a linear map $L : V \to W$ between two vector spaces $V$ and $W$, the kernel of $L$ is the vector space of all elements $\mathbf{v}$ of $V$ such that $L(\mathbf{v}) = \mathbf{0}$, where $\mathbf{0}$ denotes the zero vector in $W$, or more symbolically:

$$\ker(L) = \left\{ \mathbf{v} \in V \mid L(\mathbf{v}) = \mathbf{0} \right\} = L^{-1}(\mathbf{0}).$$
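For concreteness, the kernel of a small matrix can be computed directly. The following is a minimal sketch assuming Python with SymPy, reusing the $2 \times 3$ example matrix that appears later in this article; SymPy may scale the basis vector differently than the article does.

```python
from sympy import Matrix

# The linear map L : R^3 -> R^2 represented by the article's example matrix.
A = Matrix([[ 2, 3, 5],
            [-4, 2, 3]])

# nullspace() returns a basis of ker(A) = { v : A v = 0 }.
for v in A.nullspace():
    print(v.T)        # a kernel basis vector (a scalar multiple of (-1, -26, 16))
    print((A * v).T)  # A v is the zero vector, confirming v lies in the kernel
```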


The kernel of $L$ is a linear subspace of the domain $V$. In the linear map $L : V \to W$, two elements of $V$ have the same image in $W$ if and only if their difference lies in the kernel of $L$, that is,

$$L(\mathbf{v}_1) = L(\mathbf{v}_2) \quad \text{if and only if} \quad L(\mathbf{v}_1 - \mathbf{v}_2) = \mathbf{0}.$$

From this, it follows by

$\langle e_i, e_i \rangle = \|e_i\|^2 = 1$ for each index $i$. This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection $E = \{ e$

$\langle e_a, e_a \rangle = \|e_a\|^2 = 1$ for all $a, b \in A$. Using an infinite-dimensional analog of the Gram–Schmidt process one may show: Theorem. Any separable inner product space has an orthonormal basis. Using

$$A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.$$

Here, $\mathbf{a}_1, \ldots, \mathbf{a}_m$ denote the rows of the matrix $A$. It follows that $\mathbf{x}$

$$\langle x, y \rangle := x^{\operatorname{T}} \mathbf{M} y = [x_1, x_2] \begin{bmatrix} a & b \\ b & d \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = a x_1 y_1 + b x_1 y_2 + b x_2 y_1 + d x_2 y_2.$$

As mentioned earlier, every inner product on $\mathbb{R}^2$

$E = \left\{ e_a \right\}_{a \in A}$ is a basis for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by

$(a_1, b_1, \ldots, a_n, b_n) \in \mathbb{R}^{2n}$), then the dot product

$$x \cdot y = (x_1, \ldots, x_{2n}) \cdot (y_1, \ldots, y_{2n}) := x_1 y_1 + \cdots + x_{2n} y_{2n}$$

defines

$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \end{aligned}$$

$$\begin{aligned} a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \end{aligned}$$

$a_{m1} x_1 + a_{m2} x_2 + \cdots +$

$$A\mathbf{x} = \mathbf{0} \;\;\Leftrightarrow\;\; \begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0. \end{aligned}$$

Thus

$$\left\{ \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in K^n : \begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0 \end{aligned} \right\}.$$

For example,


$$A\mathbf{x} = \mathbf{b} \quad\text{or}\quad \begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m \end{aligned}$$

If $\mathbf{u}$ and $\mathbf{v}$ are two possible solutions to

$a, b \in F$. If the positive-definiteness condition is replaced by merely requiring that $\langle x, x \rangle \geq 0$ for all $x$, then one obtains the definition of a positive semi-definite Hermitian form. A positive semi-definite Hermitian form $\langle \cdot, \cdot \rangle$

$[a, b]$. The inner product is

$$\langle f, g \rangle = \int_a^b f(t) \overline{g(t)} \, \mathrm{d}t.$$

This space is not complete; consider for example, for

$$\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0,$$

which illustrates that vectors in
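These two products are easy to check numerically; a one-line verification, assuming Python with NumPy:

```python
import numpy as np

A = np.array([[ 2, 3, 5],
              [-4, 2, 3]])
v = np.array([-1, -26, 16])  # the kernel basis vector from the example

print(A @ v)  # -> [0 0]: each row of A is orthogonal to v
```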

a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in $\langle a, b \rangle$. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which

a Hamel basis $E \cup F$ for $K$, where $E \cap F = \varnothing$. Since it is known that the Hamel dimension of $K$ is $c$,

$a$ and $b$ are arbitrary scalars. Over $\mathbb{R}$, conjugate-symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a positive-definite symmetric bilinear form. The binomial expansion of a square becomes

$$\langle x + y, x + y \rangle = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle.$$

Some authors, especially in physics and matrix algebra, prefer to define inner products and sesquilinear forms with linearity in

a field $K$ (typically $\mathbb{R}$ or $\mathbb{C}$), that is operating on column vectors $\mathbf{x}$ with $n$ components over $K$. The kernel of this linear map is the set of solutions to the equation $A\mathbf{x} = \mathbf{0}$, where $\mathbf{0}$ is understood as the zero vector. The dimension of the kernel of $A$ is called

a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an $n$-space that passes through the origin. A natural description of

a partial order on the set of all subspaces (of any dimension). A subspace cannot lie in any subspace of lesser dimension. If $\dim U = k$, a finite number, and $U \subset W$, then $\dim W = k$ if and only if $U = W$. Given subspaces $U$ and $W$ of a vector space $V$, their intersection $U \cap W := \{ v \in V : v \text{ is an element of both } U \text{ and } W \}$


a symmetric positive-definite matrix $\mathbf{M}$ such that $\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y$ for all $x, y \in \mathbb{R}^n$. If $\mathbf{M}$

a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span. The solution set to any homogeneous system of linear equations with $n$ variables is a subspace in the coordinate space $K^n$:

a 1-subspace is the scalar multiplication of one non-zero vector $\mathbf{v}$ to all possible scalar values. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from another with scalar multiplication:

$$\exists c \in K : \mathbf{v}' = c\mathbf{v}.$$

This idea is generalized for higher dimensions with linear span, but criteria for equality of $k$-spaces specified by sets of $k$ vectors are not so simple. A dual description

a basis of the kernel of $A$. Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that $\begin{bmatrix} A \\ \hline I \end{bmatrix}$ reduces to $\begin{bmatrix} B \\ \hline C \end{bmatrix}$ means that there exists an invertible matrix $P$ such that $\begin{bmatrix} A \\ \hline I \end{bmatrix} P = \begin{bmatrix} B \\ \hline C \end{bmatrix}$, with $B$ in column echelon form. Thus $AP = B$, $IP = C$, and $AC = B$. A column vector $\mathbf{v}$ belongs to
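The construction can be sketched in code. The sketch below assumes Python with SymPy and realizes column operations on the stacked matrix as row operations on its transpose (via rref); the resulting basis may therefore differ from the article's by ordering and scaling.

```python
from sympy import Matrix, eye, zeros

def kernel_basis(A):
    """Column-reduce [A; I] and read a basis of ker(A) from the lower block."""
    m, n = A.shape
    M = A.col_join(eye(n))  # stack A on top of the n x n identity
    # Row-reducing M.T applies invertible column operations to M,
    # giving M P = [B; C] with the upper block B in column echelon form.
    R = M.T.rref()[0].T
    B, C = R[:m, :], R[m:, :]
    # Columns of C sitting below zero columns of B form a basis of ker(A).
    return [C[:, j] for j in range(n) if all(x == 0 for x in B[:, j])]

A = Matrix([[1, 0, -3, 0,  2, -8],
            [0, 1,  5, 0, -1,  4],
            [0, 0,  0, 1,  7, -9],
            [0, 0,  0, 0,  0,  0]])
for v in kernel_basis(A):
    assert A * v == zeros(4, 1)  # each basis vector is mapped to 0
    print(v.T)
```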

a bijection. Then there is a linear transformation $T : K \to L$ such that $Tf = \varphi(f)$ for $f \in F$, and $Te = 0$ for $e \in E$. Let $V = K \oplus L$ and let $G = \{ (k, Tk) : k \in K \}$ be

a direct sum $U \oplus W$ is the same as the sum of subspaces, but may be shortened because the dimension of the trivial subspace is zero:

$$\dim(U \oplus W) = \dim(U) + \dim(W).$$

Inner product space

In mathematics, an inner product space (or, rarely,

a homogeneous system of linear equations involving $x$, $y$, and $z$:

$$\begin{aligned} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{aligned}$$

The same linear equations can also be written in matrix form as:

$$\left[ \begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array} \right].$$

Through Gauss–Jordan elimination,

a matrix $A$ consists of all column vectors $\mathbf{x}$ such that $\mathbf{x}^{\operatorname{T}} A = \mathbf{0}^{\operatorname{T}}$, where $\operatorname{T}$ denotes the transpose of a matrix. The left null space of $A$ is the same as the kernel of $A^{\operatorname{T}}$. The left null space of $A$ is the orthogonal complement to the column space of $A$, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of $A$ are

a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references). Let $K$ be a Hilbert space of dimension $\aleph_0$ (for instance, $K = \ell^2(\mathbb{N})$). Let $E$ be an orthonormal basis of $K$, so $|E| = \aleph_0$. Extend $E$ to

a real inner product on the real vector space $V_{\mathbb{R}}$. Every inner product on a real vector space is a bilinear and symmetric map. For example, if $V = \mathbb{C}$ with inner product $\langle x, y \rangle = x\overline{y}$, where $V$


a real inner product on this space. The unique complex inner product $\langle \cdot, \cdot \rangle$ on $V = \mathbb{C}^n$ induced by the dot product is the map that sends $c = (c_1, \ldots, c_n), d = (d_1, \ldots, d_n) \in \mathbb{C}^n$ to

$$\langle c, d \rangle := c_1 \overline{d_1} + \cdots + c_n \overline{d_n}$$

(because

a single vector equation:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = t_1 \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix} + t_2 \begin{bmatrix} 3 \\ -4 \\ 2 \end{bmatrix}.$$

The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace. In general, a linear combination of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ is any vector of the form $t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k$. The set of all possible linear combinations is called the span:

$$\operatorname{Span}\{ \mathbf{v}_1, \ldots, \mathbf{v}_k \} = \left\{ t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k : t_1, \ldots, t_k \in K \right\}.$$

If

a vector and a covector. Every inner product space induces a norm, called its canonical norm, that is defined by

$$\|x\| = \sqrt{\langle x, x \rangle}.$$

With this norm, every inner product space becomes a normed vector space. So, every general property of normed vector spaces applies to inner product spaces. In particular, one has
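As a numeric aside, the canonical norm and the standard inequalities it obeys can be checked directly. A minimal sketch assuming Python with NumPy; the matrix M is an arbitrary symmetric positive-definite example, not taken from the article:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # symmetric positive-definite

def inner(x, y):
    return x @ M @ y        # <x, y> = x^T M y

def norm(x):
    return np.sqrt(inner(x, x))  # the canonical norm induced by <., .>

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

print(norm(x + y) <= norm(x) + norm(y))       # triangle inequality
print(abs(inner(x, y)) <= norm(x) * norm(y))  # Cauchy-Schwarz inequality
```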

a vector space over $\mathbb{C}$ that becomes an inner product space with the inner product

$$\langle x, y \rangle := x\overline{y} \quad \text{for } x, y \in \mathbb{C}.$$

Unlike with

a vector space over a field $K$, a subset $W$ of $V$ is a linear subspace of $V$ if it is a vector space over $K$ for the operations of $V$. Equivalently, a linear subspace of $V$ is a nonempty subset $W$ such that, whenever $w_1, w_2$ are elements of $W$ and $\alpha, \beta$ are elements of $K$, it follows that $\alpha w_1 + \beta w_2$ is in $W$. The singleton set consisting of

a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a meaningful result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. A state of

is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as $U \oplus W$. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum. The dimension of

is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a continuous function. For real random variables $X$ and $Y$, the expected value of their product $\langle X, Y \rangle = \mathbb{E}[XY]$

is a free variable ranging over all real numbers, this can be expressed equally well as:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}.$$

The kernel of $A$ is precisely

is a linear subspace of $\overline{H}$, the inner product of $H$ is the restriction of that of $\overline{H}$, and $H$ is dense in $\overline{H}$ for


is a continuous linear operator that satisfies $\langle x, Ax \rangle = 0$ for all $x \in V$, then $A = 0$. This statement is no longer true if $\langle \cdot, \cdot \rangle$

is a linear combination of the corresponding columns of $C$. The problem of computing the kernel on a computer depends on the nature of the coefficients. If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces

is a linear map (linear for both $V$ and $V_{\mathbb{R}}$) that denotes rotation by $90^{\circ}$ in the plane. Because $x$ and $Ax$ are perpendicular vectors and $\langle x, Ax \rangle_{\mathbb{R}}$

is a real vector space then

$$\langle x, y \rangle = \operatorname{Re} \langle x, y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right)$$

and
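This polarization identity is easy to confirm numerically for the dot product on $\mathbb{R}^n$; a short check assuming Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)

lhs = x @ y  # the real inner product (here: the dot product)
rhs = 0.25 * (np.sum((x + y)**2) - np.sum((x - y)**2))
print(np.isclose(lhs, rhs))  # True: <x, y> = (||x+y||^2 - ||x-y||^2) / 4
```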

is a vector space over the field $\mathbb{C}$, then $V_{\mathbb{R}} = \mathbb{R}^2$ is a vector space over $\mathbb{R}$ and $\langle x, y \rangle_{\mathbb{R}}$

is also a subspace of $V$. Proof: For every vector space $V$, the set $\{0\}$ and $V$ itself are subspaces of $V$. If $U$ and $W$ are subspaces, their sum is the subspace

$$U + W = \left\{ \mathbf{u} + \mathbf{w} : \mathbf{u} \in U, \mathbf{w} \in W \right\}.$$

For example,

is also equivalent to consider linear combinations of two elements at a time. In a topological vector space $X$, a subspace $W$ need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals). Descriptions of subspaces include the solution set to

is an inner product if and only if for all $x$, if $\langle x, x \rangle = 0$ then $x = \mathbf{0}$. In the following properties, which result almost immediately from the definition of an inner product, $x$, $y$ and $z$ are arbitrary vectors, and

is an inner product. In this case, $\langle X, X \rangle = 0$ if and only if $\mathbb{P}[X = 0] = 1$ (that is, $X = 0$ almost surely), where $\mathbb{P}$ denotes

is an isometric linear map $V \to \ell^2$ with a dense image. This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided $\ell^2$


is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping

$$f \mapsto \frac{1}{\sqrt{2\pi}} \left\{ \int_{-\pi}^{\pi} f(t) e^{-ikt} \, \mathrm{d}t \right\}_{k \in \mathbb{Z}}$$
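As an aside, these coefficients and the associated Parseval identity can be approximated numerically. A sketch assuming Python with NumPy; the test function $f(t) = t$ and the discretization parameters are arbitrary choices:

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 20001)
dt = t[1] - t[0]
f = t  # test function f(t) = t on [-pi, pi]

def coeff(k):
    # <e_k, f> = (1/sqrt(2 pi)) * integral of f(t) e^{-ikt} dt (Riemann sum)
    return np.sum(f * np.exp(-1j * k * t)) * dt / np.sqrt(2 * np.pi)

# Parseval: sum_k |<e_k, f>|^2 should approach the integral of |f(t)|^2 dt.
lhs = sum(abs(coeff(k))**2 for k in range(-200, 201))
rhs = np.sum(f**2) * dt  # = 2 pi^3 / 3 for f(t) = t
print(lhs, rhs)          # close, up to truncating the sum at |k| <= 200
```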

is considered as a real vector space in the usual way (meaning that it is identified with the $2n$-dimensional real vector space $\mathbb{R}^{2n}$, with each $(a_1 + i b_1, \ldots, a_n + i b_n) \in \mathbb{C}^n$ identified with

is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series: Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$. Then the sequence (indexed on the set of all integers) of continuous functions

$$e_k(t) = \frac{e^{ikt}}{\sqrt{2\pi}}$$

is denoted $\mathbf{0}$ to distinguish it from the scalar $0$. An inner product space is a vector space $V$ over the field $F$ together with an inner product, that is, a map that satisfies the following three properties for all vectors $x, y, z \in V$ and all scalars

is dense in $V$. Finally, $\{ (e, 0) : e \in E \}$ is a maximal orthonormal set in $G$; if

$$0 = \langle (e, 0), (k, Tk) \rangle = \langle e, k \rangle + \langle 0, Tk \rangle = \langle e, k \rangle$$

for all $e \in E$ then $k = 0$, so $(k, Tk) = (0, 0)$

is equivalent to a homogeneous system of linear equations:

$$A\mathbf{x} = \mathbf{0} \;\;\Leftrightarrow\;\; a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = 0, \quad a_{21} x_1 +$$

is in column echelon form, $B\mathbf{w} = \mathbf{0}$ if and only if the nonzero entries of $\mathbf{w}$ correspond to the zero columns of $B$. By multiplying by $C$, one may deduce that this is the case if and only if $\mathbf{v} = C\mathbf{w}$

is in the kernel of $A$ if and only if $\mathbf{x}$ is orthogonal (or perpendicular) to each of the row vectors of $A$ (since orthogonality is defined as having a dot product of 0). The row space, or coimage, of a matrix $A$ is the span of the row vectors of $A$. By the above reasoning, the kernel of $A$ is the orthogonal complement to the row space. That is, a vector $\mathbf{x}$ lies in the kernel of $A$ if and only if it

is instead a real inner product, as this next example shows. Suppose that $V = \mathbb{C}$ has the inner product $\langle x, y \rangle := x\overline{y}$ mentioned above. Then the map $A : V \to V$ defined by $Ax = ix$

is just the dot product, $\langle x, Ax \rangle_{\mathbb{R}} = 0$ for all vectors $x$; nevertheless, this rotation map $A$ is certainly not identically $0$. In contrast, using


is known as the Hermitian form and is given by

$$\langle x, y \rangle = y^{\dagger} \mathbf{M} x = \overline{x^{\dagger} \mathbf{M} y},$$

where $\mathbf{M}$ is any Hermitian positive-definite matrix and $y^{\dagger}$
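A quick numeric illustration of such a Hermitian form, assuming Python with NumPy; the matrix M below is an arbitrary Hermitian positive-definite example:

```python
import numpy as np

M = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])  # Hermitian, with positive eigenvalues

def inner(x, y):
    return np.conj(y) @ M @ x      # <x, y> = y^dagger M x

rng = np.random.default_rng(1)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)

print(np.isclose(inner(x, y), np.conj(inner(y, x))))  # conjugate symmetry
print(inner(x, x).real > 0)                           # positive-definiteness
```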

is not defined in $V_{\mathbb{R}}$, the vector in $V$ denoted by $ix$ is nevertheless still also an element of $V_{\mathbb{R}}$). For the complex inner product, $\langle x, ix \rangle = -i\|x\|^2$, whereas for

is of this form (where $b \in \mathbb{R}$, $a > 0$ and $d > 0$ satisfy $ad > b^2$). The general form of an inner product on $\mathbb{C}^n$

is perpendicular to every vector in the row space of $A$. The dimension of the row space of $A$ is called the rank of $A$, and the dimension of the kernel of $A$ is called the nullity of $A$. These quantities are related by the rank–nullity theorem

$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$$

The left null space, or cokernel, of
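The theorem is easy to check numerically on a concrete matrix; a sketch assuming Python with NumPy, reusing the article's example matrix:

```python
import numpy as np

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
n = A.shape[1]

rank = np.linalg.matrix_rank(A)
s = np.linalg.svd(A, compute_uv=False)
# The nullity is n minus the number of (numerically) nonzero singular values.
nullity = n - np.count_nonzero(s > 1e-10 * s.max())

print(rank, nullity, rank + nullity == n)  # 2, 1, True (rank-nullity)
```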

is positive-definite (which happens if and only if $\det \mathbf{M} = ad - b^2 > 0$ and one/both diagonal elements are positive) then for any $x := [x_1, x_2]^{\operatorname{T}}, y := [y_1, y_2]^{\operatorname{T}} \in \mathbb{R}^2$, $\langle x, y \rangle := x^{\operatorname{T}} \mathbf{M} y =$

is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional $F$ specifies its kernel subspace $F = 0$ of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from another with scalar multiplication (in the dual space):

$$\exists c \in K : F' = cF.$$

It is generalized for higher codimensions with

is the conjugate transpose of $y$. For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights, up to an orthogonal transformation. The article on Hilbert spaces has several examples of inner product spaces, wherein

is the dot product $x \cdot y$, where $x = a + ib \in V = \mathbb{C}$ is identified with the point $(a, b) \in V_{\mathbb{R}} = \mathbb{R}^2$ (and similarly for $y$); thus

is the identity matrix then $\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y$ is the dot product. For another example, if $n = 2$ and $\mathbf{M} = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$

is the transpose of $x$. A function $\langle \cdot, \cdot \rangle : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is an inner product on $\mathbb{R}^n$ if and only if there exists

is the zero vector in $G$. Hence the dimension of $G$ is $|E| = \aleph_0$, whereas it is clear that the dimension of $V$ is $c$. This completes

is thus a one-to-one correspondence between complex inner products on a complex vector space $V$, and real inner products on $V$. For example, suppose that $V = \mathbb{C}^n$ for some integer $n > 0$. When $V$

the symmetric map $\langle x, y \rangle = xy$ (rather than the usual conjugate symmetric map $\langle x, y \rangle = x\overline{y}$), then its real part $\langle x, y \rangle_{\mathbb{R}}$ would not be

the Cartesian plane $\mathbb{R}^2$. Take $W$ to be the set of points $(x, y)$ of $\mathbb{R}^2$ such that $x = y$. Then $W$ is a subspace of $\mathbb{R}^2$. Proof: In general, any subset of the real coordinate space $\mathbb{R}^n$ that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was $z = 0$, and the equation in example II was $x = y$.) Again take

the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis. That is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{ e_1, \ldots, e_n \}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for every $i \neq j$ and $\langle e_i, e_i \rangle = \| e$
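As an aside, the finite-dimensional Gram–Schmidt process itself is only a few lines of code. A sketch assuming Python with NumPy and the standard dot product; the input vectors are an arbitrary example:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical G-S)."""
    basis = []
    for v in vectors:
        w = v - sum((e @ v) * e for e in basis)  # subtract projections onto basis
        basis.append(w / np.linalg.norm(w))      # normalize to unit length
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
E = gram_schmidt(vs)
G = np.array([[ei @ ej for ej in E] for ei in E])
print(np.allclose(G, np.eye(3)))  # True: <e_i, e_j> = 1 if i = j else 0
```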

the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that Theorem. Any complete inner product space has an orthonormal basis. The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is

the coordinates $t_1, \ldots, t_k$ for a vector in the span are uniquely determined. A basis for a subspace $S$ is a set of linearly independent vectors whose span is $S$. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more). The set-theoretical inclusion binary relation specifies

the dot product is an inner product space, an example of a Euclidean vector space:

$$\left\langle \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} \right\rangle = x^{\operatorname{T}} y = \sum_{i=1}^{n} x_i y_i = x_1 y_1 + \cdots + x_n y_n,$$

where $x^{\operatorname{T}}$

the first isomorphism theorem that the image of $L$ is isomorphic to the quotient of $V$ by the kernel:

$$\operatorname{im}(L) \cong V / \ker(L).$$

In the case where $V$ is finite-dimensional, this implies the rank–nullity theorem:

$$\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),$$

where

the four fundamental subspaces associated with the matrix $A$. The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:

$$A\mathbf{x} = \mathbf{b} \quad\text{or}\quad a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1$$

the imaginary part (also called the complex part) of $\langle \cdot, \cdot \rangle$ is always $0$. Assume for the rest of this section that $V$ is a complex vector space. The polarization identity for complex vector spaces shows that

$$\langle x, y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right) + \frac{i}{4}\left(\|x + iy\|^2 - \|x - iy\|^2\right).$$

The map defined by $\langle x \mid y \rangle = \langle y, x \rangle$ for all $x, y \in V$ satisfies

the nullity of $A$. In set-builder notation,

$$\operatorname{N}(A) = \operatorname{Null}(A) = \ker(A) = \left\{ \mathbf{x} \in K^n \mid A\mathbf{x} = \mathbf{0} \right\}.$$

The matrix equation

the orthogonal complement in $V$ of $\ker(L)$. This is the generalization to linear operators of the row space, or coimage, of a matrix. The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of

the probability of the event. This definition of expectation as inner product can be extended to random vectors as well. The inner product for complex square matrices of the same size is the Frobenius inner product $\langle A, B \rangle := \operatorname{tr}\left(A B^{\dagger}\right)$. Since trace and transposition are linear and

the topology defined by the norm. In this article, $F$ denotes a field that is either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. A scalar is thus an element of $F$. A bar over an expression representing a scalar denotes the complex conjugate of this scalar. A zero vector

the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space. In the vector space $V = \mathbb{R}^3$ (the real coordinate space over the field $\mathbb{R}$ of real numbers), take $W$ to be the set of all vectors in $V$ whose last component is 0. Then $W$ is a subspace of $V$. Proof: Let the field be $\mathbb{R}$ again, but now let the vector space $V$ be

the Frobenius inner product is positive definite too, and so is an inner product. On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism $V \to V^{*}$), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors, not simply of

the above equation, then

$$A(\mathbf{u} - \mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.$$

Thus, the difference of any two solutions to the equation $A\mathbf{x} = \mathbf{b}$ lies in the kernel of $A$. It follows that any solution to
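This decomposition of solutions into "particular solution plus kernel element" can be illustrated in code. A sketch assuming Python with SymPy, reusing the article's example matrix; the right-hand side b and the scalar 7 are arbitrary choices:

```python
from sympy import Matrix

A = Matrix([[ 2, 3, 5],
            [-4, 2, 3]])
b = Matrix([1, 1])

v = A.pinv() * b         # one particular solution of A x = b
for k in A.nullspace():  # adding any kernel element gives another solution
    print((A * (v + 7 * k)).T)  # still equals b
```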

the art software for this purpose is the LAPACK library.

Linear subspace

In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces. If $V$ is
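Regarding the LAPACK remark above: NumPy's SVD routine is LAPACK-backed, so a numerically robust kernel computation can be sketched as follows (SciPy ships essentially this as scipy.linalg.null_space); the tolerance is an arbitrary choice:

```python
import numpy as np

def null_space(A, rtol=1e-10):
    """Orthonormal basis of ker(A) via the singular value decomposition."""
    u, s, vh = np.linalg.svd(A)
    rank = np.count_nonzero(s > rtol * s.max())
    # Right singular vectors for (numerically) zero singular values span ker(A).
    return vh[rank:].conj().T

A = np.array([[ 2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)
print(np.allclose(A @ N, 0))  # True: the columns of N span the kernel
```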

the assignment $x \mapsto \sqrt{\langle x, x \rangle}$ would not define a norm. The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable. For instance, if $\langle x, y \rangle = 0$ then $\langle x, y \rangle_{\mathbb{R}} = 0$, but

the axioms of the inner product except that it is antilinear in its first, rather than its second, argument. The real parts of both $\langle x \mid y \rangle$ and $\langle x, y \rangle$ are equal to $\operatorname{Re} \langle x, y \rangle$ but

the cardinality of the continuum, it must be that $|F| = c$. Let $L$ be a Hilbert space of dimension $c$ (for instance, $L = \ell^2(\mathbb{R})$). Let $B$ be an orthonormal basis for $L$ and let $\varphi : F \to B$ be

the complex inner product $\langle x, y \rangle$ is the map

$$\langle x, y \rangle_{\mathbb{R}} = \operatorname{Re} \langle x, y \rangle \;:\; V_{\mathbb{R}} \times V_{\mathbb{R}} \to \mathbb{R},$$

which necessarily forms

the complex inner product gives $\langle x, Ax \rangle = -i\|x\|^2$, which (as expected) is not identically zero. Let $V$ be a finite-dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using

the conjugation is on the second matrix, it is a sesquilinear operator. We further get Hermitian symmetry by

$$\langle A, B \rangle = \operatorname{tr}\left(A B^{\dagger}\right) = \overline{\operatorname{tr}\left(B A^{\dagger}\right)} = \overline{\langle B, A \rangle}.$$

Finally, since for $A$ nonzero, $\langle A, A \rangle = \sum_{ij} |A_{ij}|^2 > 0$, we get that
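Both identities are straightforward to verify numerically; a sketch assuming Python with NumPy, on arbitrary random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def frob(A, B):
    return np.trace(A @ B.conj().T)  # <A, B> = tr(A B^dagger)

print(np.isclose(frob(A, B), np.conj(frob(B, A))))   # Hermitian symmetry
print(np.isclose(frob(A, A), np.sum(np.abs(A)**2)))  # <A, A> = sum |A_ij|^2 > 0
```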

the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set $W$ is a subspace if and only if every linear combination of finitely many elements of $W$ also belongs to $W$. The equivalent definition states that it

the dimension of the subspace in $K^n$ will be the dimension of the null set of $A$, the composite matrix of the $n$ functions. In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation $A\mathbf{x} = \mathbf{0}$. The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix. Every subspace of $K^n$ can be described as

the dot product; furthermore, without the complex conjugate, if $x \in \mathbb{C}$ but $x \notin \mathbb{R}$ then $\langle x, x \rangle = xx = x^2 \notin [0, \infty)$ so

the equation $A\mathbf{x} = \mathbf{b}$ can be expressed as the sum of a fixed solution $\mathbf{v}$ and an arbitrary element of the kernel. That is, the solution set to the equation $A\mathbf{x} = \mathbf{b}$ is

$$\left\{ \mathbf{v} + \mathbf{x} \mid A\mathbf{v} = \mathbf{b} \land \mathbf{x} \in \operatorname{Null}(A) \right\}.$$

Geometrically, this says that

the field to be $\mathbb{R}$ again, but now let the vector space $V$ be the set $\mathbb{R}^{\mathbb{R}}$ of all functions from $\mathbb{R}$ to $\mathbb{R}$. Let $C(\mathbb{R})$ be the subset consisting of continuous functions. Then $C(\mathbb{R})$ is a subspace of $\mathbb{R}^{\mathbb{R}}$. Proof: Keep the same field and vector space as before, but now consider the set $\operatorname{Diff}(\mathbb{R})$ of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis. From

the following properties: Suppose that $\langle \cdot, \cdot \rangle$ is an inner product on $V$ (so it is antilinear in its second argument). The polarization identity shows that the real part of the inner product is

$$\operatorname{Re} \langle x, y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right).$$

If $V$

the graph of $T$. Let $\overline{G}$ be the closure of $G$ in $V$; we will show $\overline{G} = V$. Since for any $e \in E$ we have $(e, 0) \in G$, it follows that $K \oplus 0 \subseteq \overline{G}$. Next, if $b \in B$, then $b = Tf$ for some $f \in F \subseteq K$, so $(f, b) \in G \subseteq \overline{G}$; since $(f, 0) \in \overline{G}$ as well, we also have $(0, b) \in \overline{G}$. It follows that $0 \oplus L \subseteq \overline{G}$, so $\overline{G} = V$, and $G$

the inner product is the dot product or scalar product of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898. An inner product naturally induces an associated norm (denoted $|x|$ and $|y|$ in

the inner product). Say that $E$ is an orthonormal basis for $V$ if it is a basis and $\langle e_a, e_b \rangle = 0$ if $a \neq b$ and $\langle e$

the inner products differ in their complex part: The last equality is similar to the formula expressing a linear functional in terms of its real part. These formulas show that every complex inner product is completely determined by its real part. Moreover, this real part defines an inner product on $V$, considered as a real vector space. There

the interval $[-1, 1]$ the sequence of continuous "step" functions $\{ f_k \}_k$, defined by:

$$f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ 1 & t \in \left[ \tfrac{1}{k}, 1 \right] \\ kt & t \in \left( 0, \tfrac{1}{k} \right) \end{cases}$$

This sequence

the kernel can be further expressed in parametric vector form, as follows:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix} \quad (\text{where } c \in \mathbb{R}).$$

Since $c$

the kernel of $A$ (that is, $A\mathbf{v} = \mathbf{0}$) if and only if $B\mathbf{w} = \mathbf{0}$, where $\mathbf{w} = P^{-1}\mathbf{v} = C^{-1}\mathbf{v}$. As $B$

the kernel of $A$ are orthogonal to each of the row vectors of $A$. These two (linearly independent) row vectors span the row space of $A$, a plane orthogonal to the vector $(-1, -26, 16)$. With the rank 2 of $A$, the nullity 1 of $A$, and the dimension 3 of $A$, we have an illustration of the rank–nullity theorem. A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an $m \times n$ matrix $A$, we construct first

the kernel of $A$ consists of the nonzero columns of $C$ such that the corresponding column of $B$ is a zero column. In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero. For example, suppose that

$$A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

Then

$$\begin{bmatrix} A \\ \hline I \end{bmatrix} = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

Putting

the kernel of $A$ is the same as the solution set to the above homogeneous equations. The kernel of an $m \times n$ matrix $A$ over a field $K$ is a linear subspace of $K^n$. That is, the kernel of $A$, the set $\operatorname{Null}(A)$, has the following three properties: it always contains the zero vector $\mathbf{0}$; if $\mathbf{x}, \mathbf{y} \in \operatorname{Null}(A)$ then $\mathbf{x} + \mathbf{y} \in \operatorname{Null}(A)$; and if $\mathbf{x} \in \operatorname{Null}(A)$ and $c \in K$ then $c\mathbf{x} \in \operatorname{Null}(A)$. The product $A\mathbf{x}$ can be written in terms of the dot product of vectors as follows: $A\mathbf{x} =$

the kernel. Consider the matrix

$$A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.$$

The kernel of this matrix consists of all vectors $(x, y, z) \in \mathbb{R}^3$ for which

$$\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

which can be expressed as

the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply. If $V$ and $W$ are topological vector spaces such that $W$ is finite-dimensional, then a linear operator $L : V \to W$ is continuous if and only if the kernel of $L$ is a closed subspace of $V$. Consider a linear map represented as an $m \times n$ matrix $A$ with coefficients in

the matrix can be reduced to:

$$\left[ \begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array} \right].$$

Rewriting the matrix in equation form yields:

$$\begin{aligned} x &= -\frac{1}{16} z \\ y &= -\frac{13}{8} z. \end{aligned}$$

The elements of
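This reduction can be reproduced mechanically; a one-call sketch assuming Python with SymPy:

```python
from sympy import Matrix

# Augmented matrix of the homogeneous system from the example.
M = Matrix([[ 2, 3, 5, 0],
            [-4, 2, 3, 0]])
R, pivots = M.rref()  # Gauss-Jordan elimination with exact rationals
print(R)              # Matrix([[1, 0, 1/16, 0], [0, 1, 13/8, 0]])
```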

the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space $C([a, b])$ of continuous complex-valued functions $f$ and $g$ on the interval

the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimensions of the intersection and the sum are related by the following equation:

$$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W).$$

A set of subspaces

the next example shows that the converse is in general not true. Given any $x \in V$, the vector $ix$ (which is the vector $x$ rotated by 90°) belongs to $V$ and so also belongs to $V_{\mathbb{R}}$ (although scalar multiplication of $x$ by $i = \sqrt{-1}$

the null space of some matrix (see § Algorithms below for more). The subset of $K^n$ described by a system of homogeneous linear parametric equations is a subspace: For example, the set of all vectors $(x, y, z)$ parameterized by the equations

$$x = 2t_1 + 3t_2, \quad y = 5t_1 - 4t_2, \quad z = -t_1 + 2t_2$$

is a two-dimensional subspace of $K^3$, if $K$ is a number field (such as real or rational numbers). In linear algebra, the system of parametric equations can be written as

the picture); so, every inner product space is a normed vector space. If this normed space is also complete (that is, a Banach space) then the inner product space is a Hilbert space. If an inner product space $H$ is not a Hilbert space, it can be extended by completion to a Hilbert space $\overline{H}$. This means that $H$

the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number. Even for

the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication). For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware. For matrices whose entries are floating-point numbers,

the proof. Parseval's identity leads immediately to the following theorem: Theorem. Let $V$ be a separable inner product space and $\{ e_k \}_k$ an orthonormal basis of $V$. Then the map

$$x \mapsto \bigl\{ \langle e_k, x \rangle \bigr\}_{k \in \mathbb{N}}$$

the rank–nullity theorem can be restated as

$$\operatorname{Rank}(L) + \operatorname{Nullity}(L) = \dim\left(\operatorname{domain} L\right).$$

When $V$ is an inner product space, the quotient $V / \ker(L)$ can be identified with

the real inner product the value is always $\langle x, ix \rangle_{\mathbb{R}} = 0$. If $\langle \cdot, \cdot \rangle$ is a complex inner product and $A : V \to V$

the real numbers, the assignment $(x, y) \mapsto xy$ does not define a complex inner product on $\mathbb{C}$. More generally, the real $n$-space $\mathbb{R}^n$ with

the real part of this map $\langle \cdot, \cdot \rangle$ is equal to the dot product).

Real vs. complex inner products

Let $V_{\mathbb{R}}$ denote $V$ considered as a vector space over the real numbers rather than complex numbers. The real part of

the row augmented matrix $\begin{bmatrix} A \\ \hline I \end{bmatrix}$, where $I$ is the $n \times n$ identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix $\begin{bmatrix} B \\ \hline C \end{bmatrix}$. A basis of

the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. Bra–ket notation in quantum mechanics also uses slightly different notation, i.e. $\langle \cdot | \cdot \rangle$, where $\langle x | y \rangle := (y, x)$. Several notations are used for inner products, including $\langle \cdot, \cdot \rangle$, $(\cdot, \cdot)$, $\langle \cdot | \cdot \rangle$ and $(\cdot | \cdot)$, as well as

the set of all vectors $(x, y, z)$ (over real or rational numbers) satisfying the equations

$$x + 3y + 2z = 0 \quad\text{and}\quad 2x - 4y + 5z = 0$$

is a one-dimensional subspace. More generally, that is to say that given a set of $n$ independent functions,

the solution set to $A\mathbf{x} = \mathbf{b}$ is the translation of the kernel of $A$ by the vector $\mathbf{v}$. See also Fredholm alternative and flat (geometry). The following is a simple illustration of the computation of the kernel of a matrix (see § Computation by Gaussian elimination below for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to

the solution set to these equations (in this case, a line through the origin in $\mathbb{R}^3$). Here, the vector $(-1, -26, 16)$ constitutes a basis of the kernel of $A$; the nullity of $A$ is 1. The following dot products are zero:

$$\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0$$

the standard inner product $\langle x, y \rangle = x\overline{y}$ on $\mathbb{C}$ is an "extension" of the dot product. Also, had $\langle x, y \rangle$ been instead defined to be

the subspace of $K^3$ spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the $xz$-plane, with each point on the plane described by infinitely many different values of $t_1, t_2, t_3$. In general, vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are called linearly independent if

$$t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k \neq u_1 \mathbf{v}_1 + \cdots + u_k \mathbf{v}_k$$

for $(t_1, t_2, \ldots, t_k) \neq (u_1, u_2, \ldots, u_k)$. If $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly independent, then

the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

$$\max(\dim U, \dim W) \leq \dim(U + W) \leq \dim(U) + \dim(W).$$

Here,

the term rank refers to the dimension of the image of $L$, $\dim(\operatorname{im} L)$, while nullity refers to the dimension of the kernel of $L$, $\dim(\ker L)$. That is,

$$\operatorname{Rank}(L) = \dim(\operatorname{im} L) \quad\text{and}\quad \operatorname{Nullity}(L) = \dim(\ker L),$$

so that

the three last vectors of $C$,

$$\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix}$$

are

the upper part in column echelon form by column operations on the whole matrix gives

$$\begin{bmatrix} B \\ \hline C \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

The last three columns of $B$ are zero columns. Therefore,

the usual dot product. Among the simplest examples of inner product spaces are $\mathbb{R}$ and $\mathbb{C}$. The real numbers $\mathbb{R}$ are a vector space over $\mathbb{R}$ that becomes an inner product space with arithmetic multiplication as its inner product:

$$\langle x, y \rangle := xy \quad \text{for } x, y \in \mathbb{R}.$$

The complex numbers $\mathbb{C}$ are

the vector $\mathbf{x}$. In linear algebra, this subspace is known as the column space (or image) of the matrix $A$. It is precisely the subspace of $K^n$ spanned by the column vectors of $A$. The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below). In general, a subspace of $K^n$ determined by $k$ parameters (or spanned by $k$ vectors) has dimension $k$. However, there are exceptions to this rule. For example,

the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ have $n$ components, then their span is a subspace of $K^n$. Geometrically, the span is the flat through the origin in $n$-dimensional space determined by the points $\mathbf{v}_1, \ldots, \mathbf{v}_k$. A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation $\mathbf{x} = A\mathbf{t}$. In this case, the subspace consists of all possible values of
