Proof for $\dim(U+W)$


























I was studying linear algebra today when I came across a formula that gives the dimension of a sum of two subspaces:
$$
\dim(U+W) = \dim(U) + \dim(W) - \dim(U \cap W)
$$
There's a proof below it, but before reading the proof I wanted to give it a try. Here's what I've got:



$U$ and $W$ are subspaces.
$t = \dim(U)$ and $s = \dim(W)$



1) My proof for the direct sum:



If I have $V = U \oplus W$, I can assume that $U \cap W = \{0\}$, and any $z \in V$ can be written as a linear combination of the basis vectors of $U$ and $W$:
$$
z = \sum_{i=1}^{t} \beta_{i}\cdot U_{i} + \sum_{i=1}^{s} \gamma_{i}\cdot W_{i}, \quad U_{i} \in U,\ W_{i} \in W,\ \beta_i,\gamma_i \in \mathbb{R}
$$

Because of these two facts, the basis vectors of $U$ and $W$ taken together are linearly independent, so they form a basis for $V$, and $\dim(V) = \dim(U) + \dim(W) = t+s$.



2) My proof for the sum with intersection:



Now if I have $V = U+W$, I can assume that $U \cap W \neq \{0\}$, so there are some nonzero vectors that can be written as a linear combination of the basis vectors of $U$ and simultaneously as a linear combination of the basis vectors of $W$:
$$
z = \sum_{i=1}^{t} \beta_{i}\cdot U_{i} = \sum_{i=1}^{s} \gamma_{i}\cdot W_{i}, \quad U_{i} \in U,\ W_{i} \in W,\ \beta_i,\gamma_i \in \mathbb{R}
$$

Now, I think the best thing to do is to find the solutions for $z$, which give me the set of vectors lying in the intersection of $U$ and $W$.



Knowing the number of vectors in that intersection, I can see that putting the basis of $U$ together with the basis of $W$ gives a linearly dependent set (because of the intersection $z$), and since a basis for $V$ must be linearly independent, I need to remove some dependent vectors, which are directly related to the intersection. That's why:
$$
\dim(U+W) = \dim(U) + \dim(W) - \dim(U \cap W)
$$



That's what I've got from my intuition and knowledge at the moment. Please correct me, because I know I'm not being rigorous, and tell me what you think. Am I on the right track? Is this a good approach to the real proof?



Thanks
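(An illustrative aside, not part of the original question: the formula can be sanity-checked numerically, using NumPy matrix ranks as a stand-in for dimensions, with the column span of a set of vectors playing the role of a subspace. The helper `dim_span` and the specific subspaces below are hypothetical choices for the sketch.)

```python
import numpy as np

def dim_span(*vecs):
    """Dimension of the span of the given vectors = rank of the matrix they form."""
    return np.linalg.matrix_rank(np.column_stack(vecs))

# Hypothetical example in R^3: U = span{e1, e2}, W = span{e2, e3},
# so U ∩ W = span{e2} and U + W = R^3.
e1, e2, e3 = np.eye(3)
dim_U = dim_span(e1, e2)             # 2
dim_W = dim_span(e2, e3)             # 2
dim_sum = dim_span(e1, e2, e2, e3)   # dim(U + W): span of both bases together
dim_cap = dim_span(e2)               # dim(U ∩ W) = 1, by inspection of this example

assert dim_sum == dim_U + dim_W - dim_cap  # 3 == 2 + 2 - 1
```

Here the combined list of basis vectors contains $e_2$ twice, which is exactly the linear dependence the question points at: the rank (dimension) counts it only once.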

































  • Your intuition for the case $U\cap W\neq 0$ is correct, but I would probably avoid starting with bases of $U$ and $W$, because what you really need is a basis for $U\cap W$, so that you can show that the formula actually holds (one way would be to construct $U\oplus (W/(U\cap W))$).
    – Justin Benfield, Aug 31 '16 at 2:11












  • Thank you for commenting! But is the proof for $U \oplus W$ correct?
    – Bruno Reis, Aug 31 '16 at 2:15










  • Yes: the set composed of a basis for $U$ and a basis for $W$ forms a basis for $V$ (the important point being that a basis is a set of linearly independent vectors that spans the space) because those spaces have trivial intersection. For the general case, the essential challenge is how to achieve that same sort of situation without the trivial intersection handing you a basis for $V$ for free.
    – Justin Benfield, Aug 31 '16 at 2:19
















linear-algebra dimension-theory






edited Dec 11 '18 at 0:45









amWhy

asked Aug 31 '16 at 1:53









Bruno Reis

1 Answer

To spell out what I am suggesting in my comments: let $V$ be a finite-dimensional vector space with subspaces $U$ and $W$ such that $V=U+W$, but not necessarily $U\cap W = \{0\}$.



Let $\{b_i\}$ be a basis for $U\cap W$. In particular, since $U\cap W$ is a subspace of $U$, the set $\{b_i\}$ spans a subspace of $U$. We construct a basis for $U$ extending this set as follows:



If $\operatorname{span}\{b_i\}\neq U$, then we find a vector $u_1\in U$ which is not in $\operatorname{span}\{b_i\}$ and append it to the set, so we now have $\{b_i,u_1\}$. If this set spans $U$, we are done; otherwise we repeat the above process to obtain a $u_2$. This process is continued until $\operatorname{span}\{b_i,u_j\}=U$, which happens in finitely many steps, since $U$ is a subspace of the finite-dimensional space $V$ and is therefore finite-dimensional itself (see the note at the end).



Hence the set $\{b_i,u_j\}$ is a basis for $U$. We perform the analogous construction for $W$, obtaining a basis $\{b_i,w_k\}$ for $W$. Now the set $\{b_i,u_j,w_k\}$ must span $V$, because every vector in $V$ is a sum of a vector in $U$ and a vector in $W$, and hence can be expressed as a linear combination of the form



$v=\sum_{i\in I} r_ib_i+\sum_{j\in J}s_ju_j+\sum_{k\in K}t_kw_k$



(Side note: all of the $s_j$'s or all of the $t_k$'s can be $0$, depending on which of $U$ or $W$ the vector lives in.)



Now we remark that $\dim V=|I|+|J|+|K|$, where $|A|$ denotes the cardinality (i.e. "size") of $A$. We further observe that $\dim U=|I|+|J|$, $\dim W=|I|+|K|$, and $\dim(U\cap W)=|I|$. Therefore, we conclude that



$\dim U+\dim W-\dim(U\cap W)=(|I|+|J|)+(|I|+|K|)-|I|=|I|+|J|+|K|=\dim V$.



Note: I used the finite-dimensionality of $V$ to construct the basis. For infinite-dimensional $V$, this theorem still holds (edit: as long as $\dim(U\cap W)<\dim V$), if you interpret $+$ and $-$ as cardinal arithmetic operators instead of real-number arithmetic operators. But you need a different method to get the required basis (or a different approach to proving the theorem altogether).
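(An illustrative aside, not part of the original answer: the greedy basis-extension step above can be sketched numerically. The helper `extend_basis` and the concrete subspaces of $\mathbb{R}^3$ are hypothetical choices; NumPy rank checks stand in for "is this vector already in the span".)

```python
import numpy as np

def extend_basis(base, candidates):
    """Greedily append candidates that enlarge the span, mirroring the
    construction in the answer: a candidate is kept only if it raises the rank."""
    basis = list(base)
    for v in candidates:
        if np.linalg.matrix_rank(np.column_stack(basis + [v])) > len(basis):
            basis.append(v)
    return basis

# Hypothetical example in R^3: U ∩ W = span{e2}, U = span{e1, e2}, W = span{e2, e3}.
e1, e2, e3 = np.eye(3)
cap_basis = [e2]
basis_U = extend_basis(cap_basis, [e1, e2])  # extends {b_i} to a basis {b_i, u_j} of U
basis_W = extend_basis(cap_basis, [e2, e3])  # extends {b_i} to a basis {b_i, w_k} of W

# Combined set {b_i, u_j, w_k}; in this example it is linearly independent and spans V.
combined = cap_basis + basis_U[1:] + basis_W[1:]
assert np.linalg.matrix_rank(np.column_stack(combined)) == len(combined)
```

In this example the combined set happens to be linearly independent; proving that independence in general is the remaining step of the argument.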






























  • You still need to prove that the big set of size $|I|+|J|+|K|$ is linearly independent. Also, there is no minus operator for cardinal numbers.
    – darij grinberg, Nov 9 '18 at 2:27











edited Nov 9 '18 at 2:24









darij grinberg

answered Aug 31 '16 at 2:49









Justin Benfield
