# Rank–nullity theorem

*"Rank theorem" redirects here. For the rank theorem of multivariable calculus, see constant rank theorem.*

The **rank–nullity theorem** is a theorem in linear algebra which asserts that the dimension of the domain of a linear map is the sum of its rank (the dimension of its image) and its nullity (the dimension of its kernel).[1][2][3][4]

## Stating the theorem

Let $T \colon V \to W$ be a linear transformation between two vector spaces, where the domain $V$ of $T$ is finite-dimensional. Then

$$\operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim V,$$

where

$$\operatorname{Rank}(T) := \dim(\operatorname{Image}(T)) \qquad \text{and} \qquad \operatorname{Nullity}(T) := \dim(\operatorname{Ker}(T)).$$

In other words,

$$\dim(\operatorname{im} T) + \dim(\ker T) = \dim(\operatorname{domain}\, T).$$

This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since $T$ induces an isomorphism from $V/\operatorname{Ker}(T)$ to $\operatorname{Image}(T)$, the existence of a basis for $V$ that extends any given basis of $\operatorname{Ker}(T)$ implies, via the splitting lemma, that $\operatorname{Image}(T) \oplus \operatorname{Ker}(T) \cong V$. Taking dimensions, the rank–nullity theorem follows.

## Matrices

Since $\operatorname{Mat}_{m \times n}(\mathbb{F}) \cong \operatorname{Hom}\left(\mathbb{F}^{n}, \mathbb{F}^{m}\right)$,[5] matrices immediately come to mind when discussing linear maps. In the case of an $m \times n$ matrix, the dimension of the domain is $n$, the number of columns in the matrix. Thus the rank–nullity theorem for a given matrix $M \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ immediately becomes

$$\operatorname{Rank}(M) + \operatorname{Nullity}(M) = n.$$

## Proofs

Here we provide two proofs. The first[2] operates in the general case, using linear maps. The second proof[6] looks at the homogeneous system $\mathbf{Ax} = \mathbf{0}$ for $\mathbf{A} \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ with rank $r$, and shows explicitly that there exists a set of $n - r$ linearly independent solutions that span the kernel of $\mathbf{A}$.
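The matrix form of the theorem is easy to check numerically. The following sketch uses NumPy on an illustrative $3 \times 4$ matrix (chosen here as an assumption; any matrix would do): the rank is computed directly, and the nullity follows as the number of columns minus the rank.

```python
import numpy as np

# An illustrative 3x4 matrix over the reals (m = 3 rows, n = 4 columns).
# The third row is the sum of the first two, so the rank is less than 3.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0],
              [1.0, 3.0, 1.0, 3.0]])

m, n = A.shape
rank = np.linalg.matrix_rank(A)

# The nullity is the dimension of the null space; by rank-nullity it is n - rank.
nullity = n - rank

assert rank + nullity == n  # Rank(M) + Nullity(M) = n
```

Here `np.linalg.matrix_rank` counts the singular values above a numerical tolerance, which is the standard way to obtain the rank in floating-point arithmetic.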

While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.

## First proof

Let $V, W$ be vector spaces over some field $\mathbb{F}$, and let $T$ be defined as in the statement of the theorem, with $\dim V = n$.

Since $\operatorname{Ker} T \subset V$ is a subspace, there exists a basis for it. Suppose $\dim \operatorname{Ker} T = k$ and let $\mathcal{K} := \{v_{1}, \ldots, v_{k}\} \subset \operatorname{Ker}(T)$ be such a basis.

We may now, by the Steinitz exchange lemma, extend $\mathcal{K}$ with $n - k$ linearly independent vectors $w_{1}, \ldots, w_{n-k}$ to form a full basis of $V$.

Let $\mathcal{S} := \{w_{1}, \ldots, w_{n-k}\} \subset V \setminus \operatorname{Ker}(T)$ such that $\mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_{1}, \ldots, v_{k}, w_{1}, \ldots, w_{n-k}\} \subset V$ is a basis for $V$. From this, we know that

$$\operatorname{Im} T = \operatorname{Span} T(\mathcal{B}) = \operatorname{Span}\{T(v_{1}), \ldots, T(v_{k}), T(w_{1}), \ldots, T(w_{n-k})\} = \operatorname{Span}\{T(w_{1}), \ldots, T(w_{n-k})\} = \operatorname{Span} T(\mathcal{S}).$$

We now claim that $T(\mathcal{S})$ is a basis for $\operatorname{Im} T$. The above equality already states that $T(\mathcal{S})$ is a generating set for $\operatorname{Im} T$; it remains to be shown that it is also linearly independent to conclude that it is a basis.

Suppose $T(\mathcal{S})$ is not linearly independent, and let

$$\sum_{j=1}^{n-k} \alpha_{j} T(w_{j}) = 0_{W}$$

for some $\alpha_{j} \in \mathbb{F}$.

Then, owing to the linearity of $T$, it follows that

$$T\left(\sum_{j=1}^{n-k} \alpha_{j} w_{j}\right) = 0_{W} \implies \left(\sum_{j=1}^{n-k} \alpha_{j} w_{j}\right) \in \operatorname{Ker} T = \operatorname{Span} \mathcal{K} \subset V.$$

This is a contradiction to $\mathcal{B}$ being a basis, unless all $\alpha_{j}$ are equal to zero. This shows that $T(\mathcal{S})$ is linearly independent, and more specifically that it is a basis for $\operatorname{Im} T$.

To summarize, we have $\mathcal{K}$, a basis for $\operatorname{Ker} T$, and $T(\mathcal{S})$, a basis for $\operatorname{Im} T$.

Finally we may state that

$$\operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim \operatorname{Im} T + \dim \operatorname{Ker} T = |T(\mathcal{S})| + |\mathcal{K}| = (n - k) + k = n = \dim V.$$

This concludes our proof.

## Second proof

Let $\mathbf{A} \in \operatorname{Mat}_{m \times n}(\mathbb{F})$ with $r$ linearly independent columns (i.e. $\operatorname{Rank}(\mathbf{A}) = r$). We will show that:

1. There exists a set of $n - r$ linearly independent solutions to the homogeneous system $\mathbf{Ax} = \mathbf{0}$.
2. Every other solution is a linear combination of these $n - r$ solutions.

To do this, we will produce a matrix $\mathbf{X} \in \operatorname{Mat}_{n \times (n-r)}(\mathbb{F})$ whose columns form a basis of the null space of $\mathbf{A}$.

Without loss of generality, assume that the first $r$ columns of $\mathbf{A}$ are linearly independent. So, we can write $\mathbf{A} = \begin{pmatrix} \mathbf{A}_{1} & \mathbf{A}_{2} \end{pmatrix}$, where $\mathbf{A}_{1} \in \operatorname{Mat}_{m \times r}(\mathbb{F})$ has $r$ linearly independent column vectors, and $\mathbf{A}_{2} \in \operatorname{Mat}_{m \times (n-r)}(\mathbb{F})$, each of whose $n - r$ columns is a linear combination of the columns of $\mathbf{A}_{1}$.

This means that $\mathbf{A}_{2} = \mathbf{A}_{1} \mathbf{B}$ for some $\mathbf{B} \in \operatorname{Mat}_{r \times (n-r)}$ (see rank factorization) and, thus, $\mathbf{A} = \begin{pmatrix} \mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B} \end{pmatrix}$. Let

$$\mathbf{X} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix},$$

where $\mathbf{I}_{n-r}$ is the $(n-r) \times (n-r)$ identity matrix. We note that $\mathbf{X} \in \operatorname{Mat}_{n \times (n-r)}(\mathbb{F})$ satisfies

$$\mathbf{A}\mathbf{X} = \begin{pmatrix} \mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B} \end{pmatrix} \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} = -\mathbf{A}_{1}\mathbf{B} + \mathbf{A}_{1}\mathbf{B} = \mathbf{0}_{m \times (n-r)}.$$

Therefore, each of the $n - r$ columns of $\mathbf{X}$ is a particular solution of $\mathbf{Ax} = \mathbf{0}_{\mathbb{F}^{m}}$.
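This construction can be carried out directly. The sketch below builds $\mathbf{A} = \begin{pmatrix}\mathbf{A}_1 & \mathbf{A}_1\mathbf{B}\end{pmatrix}$ from an illustrative choice of $\mathbf{A}_1$ and $\mathbf{B}$ (both are assumptions for the example), forms $\mathbf{X} = \begin{pmatrix}-\mathbf{B} \\ \mathbf{I}_{n-r}\end{pmatrix}$, and verifies both $\mathbf{AX} = \mathbf{0}$ and the independence of the columns of $\mathbf{X}$.

```python
import numpy as np

# Illustrative choices: A1 is m x r with independent columns, B is r x (n-r).
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])          # m = 3, r = 2
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])           # r x (n-r), so n - r = 2

A = np.hstack([A1, A1 @ B])          # A = [A1, A1 B], an m x n matrix with n = 4
n_minus_r = B.shape[1]

X = np.vstack([-B, np.eye(n_minus_r)])  # X = [-B; I_{n-r}], n x (n-r)

# Each column of X solves Ax = 0 ...
assert np.allclose(A @ X, 0)
# ... and the identity block at the bottom forces the columns to be independent.
assert np.linalg.matrix_rank(X) == n_minus_r
```

The identity block is what makes the independence argument in the next step immediate: the last $n - r$ rows of $\mathbf{Xu}$ are just $\mathbf{u}$.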

Furthermore, the $n - r$ columns of $\mathbf{X}$ are linearly independent, because $\mathbf{Xu} = \mathbf{0}_{\mathbb{F}^{n}}$ implies $\mathbf{u} = \mathbf{0}_{\mathbb{F}^{n-r}}$ for $\mathbf{u} \in \mathbb{F}^{n-r}$:

$$\mathbf{X}\mathbf{u} = \mathbf{0}_{\mathbb{F}^{n}} \implies \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} \mathbf{u} = \mathbf{0}_{\mathbb{F}^{n}} \implies \begin{pmatrix} -\mathbf{B}\mathbf{u} \\ \mathbf{u} \end{pmatrix} = \begin{pmatrix} \mathbf{0}_{\mathbb{F}^{r}} \\ \mathbf{0}_{\mathbb{F}^{n-r}} \end{pmatrix} \implies \mathbf{u} = \mathbf{0}_{\mathbb{F}^{n-r}}.$$

Therefore, the column vectors of $\mathbf{X}$ constitute a set of $n - r$ linearly independent solutions of $\mathbf{Ax} = \mathbf{0}_{\mathbb{F}^{m}}$.

We next prove that any solution of {Anzeigestil mathbf {Ax} =mathbf {0} _{mathbb {F} ^{m}}} must be a linear combination of the columns of {Anzeigestil mathbf {X} } .

For this, let $\mathbf{u} = \begin{pmatrix} \mathbf{u}_{1} \\ \mathbf{u}_{2} \end{pmatrix} \in \mathbb{F}^{n}$ be any vector such that $\mathbf{Au} = \mathbf{0}_{\mathbb{F}^{m}}$. Note that since the columns of $\mathbf{A}_{1}$ are linearly independent, $\mathbf{A}_{1}\mathbf{x} = \mathbf{0}_{\mathbb{F}^{m}}$ implies $\mathbf{x} = \mathbf{0}_{\mathbb{F}^{r}}$.

Therefore,

$$\begin{array}{rcl} \mathbf{A}\mathbf{u} &=& \mathbf{0}_{\mathbb{F}^{m}} \\ \implies \begin{pmatrix} \mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B} \end{pmatrix} \begin{pmatrix} \mathbf{u}_{1} \\ \mathbf{u}_{2} \end{pmatrix} = \mathbf{A}_{1}\mathbf{u}_{1} + \mathbf{A}_{1}\mathbf{B}\mathbf{u}_{2} = \mathbf{A}_{1}(\mathbf{u}_{1} + \mathbf{B}\mathbf{u}_{2}) &=& \mathbf{0}_{\mathbb{F}^{m}} \\ \implies \mathbf{u}_{1} + \mathbf{B}\mathbf{u}_{2} &=& \mathbf{0}_{\mathbb{F}^{r}} \\ \implies \mathbf{u}_{1} &=& -\mathbf{B}\mathbf{u}_{2} \end{array}$$

$$\implies \mathbf{u} = \begin{pmatrix} \mathbf{u}_{1} \\ \mathbf{u}_{2} \end{pmatrix} = \begin{pmatrix} -\mathbf{B} \\ \mathbf{I}_{n-r} \end{pmatrix} \mathbf{u}_{2} = \mathbf{X}\mathbf{u}_{2}.$$

This proves that any vector $\mathbf{u}$ that is a solution of $\mathbf{Ax} = \mathbf{0}$ must be a linear combination of the $n - r$ special solutions given by the columns of $\mathbf{X}$. And we have already seen that the columns of $\mathbf{X}$ are linearly independent. Hence, the columns of $\mathbf{X}$ constitute a basis for the null space of $\mathbf{A}$. Therefore, the nullity of $\mathbf{A}$ is $n - r$. Since $r$ equals the rank of $\mathbf{A}$, it follows that $\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n$. This concludes our proof.
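The final step, that every null vector $\mathbf{u}$ is recovered as $\mathbf{X}\mathbf{u}_2$ from its last $n - r$ coordinates, can also be checked numerically. The sketch below reuses an illustrative rank factorization (the concrete $\mathbf{A}_1$ and $\mathbf{B}$ are assumptions for the example).

```python
import numpy as np

# Illustrative rank factorization: A = [A1, A1 B] with r = 2, n - r = 2.
A1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.hstack([A1, A1 @ B])
r, nr = B.shape
X = np.vstack([-B, np.eye(nr)])

# Take an arbitrary combination of the special solutions as a null vector u.
u = X @ np.array([5.0, -7.0])
assert np.allclose(A @ u, 0)

# u is determined by its last n - r coordinates alone: u = X @ u2.
u2 = u[r:]
assert np.allclose(X @ u2, u)
```

The key point mirrored here is the proof's use of the identity block: the bottom of $\mathbf{X}$ copies $\mathbf{u}_2$ verbatim, so $\mathbf{u}_1 = -\mathbf{B}\mathbf{u}_2$ is forced.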

## Reformulations and generalizations

This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that

$$0 \rightarrow U \rightarrow V \mathbin{\overset{T}{\rightarrow}} R \rightarrow 0$$

is a short exact sequence of vector spaces, then $U \oplus R \cong V$, hence

$$\dim(U) + \dim(R) = \dim(V).$$

Here $R$ plays the role of $\operatorname{im} T$ and $U$ is $\ker T$, i.e.

$$0 \rightarrow \ker T \mathbin{\hookrightarrow} V \mathbin{\overset{T}{\rightarrow}} \operatorname{im} T \rightarrow 0.$$

In the finite-dimensional case, this formulation is susceptible to a generalization: if

$$0 \rightarrow V_{1} \rightarrow V_{2} \rightarrow \cdots \rightarrow V_{r} \rightarrow 0$$

is an exact sequence of finite-dimensional vector spaces, then[7]

$$\sum_{i=1}^{r} (-1)^{i} \dim(V_{i}) = 0.$$

The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T \in \operatorname{Hom}(V, W)$, where $V$ and $W$ are finite-dimensional, is defined by

$$\operatorname{index} T = \dim \operatorname{Ker}(T) - \dim \operatorname{Coker} T.$$

Intuitively, $\dim \operatorname{Ker} T$ is the number of independent solutions $v$ of the equation $Tv = 0$, and $\dim \operatorname{Coker} T$ is the number of independent restrictions that have to be put on $w$ to make $Tv = w$ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement

$$\operatorname{index} T = \dim V - \dim W.$$

We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
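The index formulation can be illustrated with a quick computation. For any matrix representing $T \colon \mathbb{F}^n \to \mathbb{F}^m$ one has $\dim\operatorname{Ker} T = n - r$ and $\dim\operatorname{Coker} T = m - r$, so the index is $n - m$ regardless of $T$ itself; the sketch below (with an arbitrary random matrix as an illustrative assumption) checks this.

```python
import numpy as np

# An arbitrary map T: R^5 -> R^3, represented by a random matrix.
rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((m, n))

rank = np.linalg.matrix_rank(A)
dim_ker = n - rank     # nullity of T
dim_coker = m - rank   # codimension of the image in R^3
index = dim_ker - dim_coker

# (n - rank) - (m - rank) = n - m, independent of the particular matrix.
assert index == n - m
```

This independence from the specific map is exactly the point of the index: it is determined by the spaces alone.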

## Citations

1. Axler (2015), p. 63, §3.22
2. Friedberg, Insel & Spence (2014), p. 70, §2.1, Theorem 2.3
3. Katznelson & Katznelson (2008), p. 52, §2.5.1
4. Valenza (1993), p. 71, §4.3
5. Friedberg, Insel & Spence (2014), pp. 103–104, §2.4, Theorem 2.20
6. Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
7. Zaman, Ragib. "Dimensions of vector spaces in an exact sequence". Mathematics Stack Exchange. Retrieved 27 October 2015.

## References

- Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
- Banerjee, Sudipto; Roy, Anindya (2014). Linear Algebra and Matrix Analysis for Statistics. Texts in Statistical Science (1st ed.). Chapman and Hall/CRC. ISBN 978-1420095388.
- Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (2014). Linear Algebra (4th ed.). Pearson Education. ISBN 978-0130084514.
- Meyer, Carl D. (2000). Matrix Analysis and Applied Linear Algebra. SIAM. ISBN 978-0-89871-454-8.
- Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
- Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.


