Prove the following statements: Linear algebra (Vector spaces)
Let $V$ be a vector space and $P \subseteq V$ a subset. Prove that the following statements are equivalent:
(i) $P$ is linearly independent.
(ii) Each vector in $\operatorname{vect}(P)$ can be uniquely expressed as a linear combination of vectors in $P$.
Hint: Prove (i) $\Rightarrow$ (ii) by contradiction, assuming that a vector can be expressed as two different linear combinations of vectors from $P$.
So let's assume a vector $x = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$ can be expressed as two different linear combinations of vectors from $P$.
This implies that there isn't a unique way to express that vector as a linear combination of vectors from $P$.
This contradicts (ii). I don't really know what is required for a sufficient proof. Corrections would be appreciated.
linear-algebra vector-spaces proof-writing alternative-proof
You need to show that the negation of (ii) contradicts (i), not (ii) (which it trivially does)
– Poon Levi
Jul 16 at 9:51
Well yeah, just like in the example on the Wikipedia page here.
– Anonymous I
Jul 16 at 9:54
But how do you do that here? Is one allowed to say it is trivial and just move on? Because the best thing I can think of is to just write a particular vector like my $x$ and say it can be written in terms of two other vectors of $P$.
– Anonymous I
Jul 16 at 10:02
You have to show that if a vector is a linear combination of the elements in $P$ in two distinct ways, then $P$ is not linearly independent. This is by no means trivial and requires proof.
– Matthias Klupsch
Jul 16 at 10:14
You're almost there. Try and contradict (i). It's not trivial. Use the definition of linear independence too.
– Jalapeno Nachos
Jul 16 at 10:19
3 Answers
For $(i)\implies (ii)$ we have
$$a_1\vec v_1+\ldots+a_n\vec v_n=b_1\vec v_1+\ldots+b_n\vec v_n \implies (a_1-b_1)\vec v_1+\ldots+(a_n-b_n)\vec v_n=\vec 0 \implies a_1=b_1,\ldots,a_n=b_n.$$
For $(ii)\implies (i)$, suppose $P$ is not linearly independent; then there exists a relation
$$c_1\vec v_1+\ldots+c_n\vec v_n=\vec 0$$
for some $c_i$ not all equal to zero. Therefore for any $\vec w\in \operatorname{vect}(P)$ we have
$$\vec w=a_1\vec v_1+\ldots+a_n\vec v_n$$
and
$$\vec w=\vec w+\vec 0=(a_1+c_1)\vec v_1+\ldots+(a_n+c_n)\vec v_n,$$
which is a contradiction.
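To make the second implication concrete, here is a hand-picked example (my addition, not part of the original answer): take $P=\{\vec v_1,\vec v_2,\vec v_3\}=\{(1,0),(0,1),(1,1)\}$ in $\mathbb R^2$, which satisfies the nontrivial relation
$$1\cdot(1,0)+1\cdot(0,1)+(-1)\cdot(1,1)=\vec 0,$$
so that, for instance, $\vec w=(2,3)$ has the two expressions
$$\vec w=2(1,0)+3(0,1)+0(1,1)=3(1,0)+4(0,1)+(-1)(1,1).$$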
Please correct my last comment.
– Anonymous I
Jul 16 at 10:31
Oh, ok, I see now. I'm not used to constructing good logical proofs, only induction-type ones.
– Anonymous I
Jul 16 at 10:34
@AnonymousI Thanks for the edit, I added the vector symbol to the zero vector as well!
– gimusi
Jul 16 at 11:50
$\lnot$(i) $\implies\lnot$(ii): If $P$ is linearly dependent, $0$ can be expressed in multiple ways as a linear combination of elements of $P$.
$\lnot$(ii) $\implies\lnot$(i): If a vector $v$ can be expressed by two different linear combinations of elements of $P$, subtract these to arrive at a nontrivial linear combination resulting in $0$.
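Spelled out, the first direction amounts to one line (my paraphrase, not text from the answer): if $\sum_{u\in P}c_u u=0$ is a nontrivial relation, then
$$0=\sum_{u\in P}0\cdot u \qquad\text{and}\qquad 0=\sum_{u\in P}c_u u$$
are two distinct expressions of $0$ as a linear combination of elements of $P$, so uniqueness fails.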
Just like a contraposition in the Wikipedia articles I mentioned.
– Anonymous I
Jul 16 at 10:38
First, it is not stipulated that the vector space is $K^3$ ($K$ being the base field), nor that it has finite dimension.
Second, the proof is not really by contradiction, but by contrapositive.
The hint suggests assuming some vector $v$ can be written as two different linear combinations, with finite support, of the vectors in $P$:
$$v=\sum_{u\in P}\lambda_u u=\sum_{u\in P}\mu_u u, \tag{1}$$
and deducing that the set of vectors $P$ is not linearly independent. But that is obvious, since you can rewrite eq. $(1)$ as
$$\sum_{u\in P}(\lambda_u-\mu_u)\,u= 0,$$
which is a non-trivial linear relation between the elements of $P$, since not all coefficients $\lambda_u,\mu_u$ are equal.
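As an aside on the "finite support" convention used above (my addition, not part of the answer): even when $P$ is infinite, each linear combination involves only finitely many nonzero coefficients. For instance, with $P=\{1,x,x^2,\ldots\}$ in the polynomial space $K[x]$, a typical element of $\operatorname{vect}(P)$ looks like
$$v=\lambda_0\cdot 1+\lambda_1 x+\lambda_2 x^2,\qquad \lambda_u=0 \text{ for every other } u\in P.$$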
Cf. my reference link in the comments.
– Anonymous I
Jul 16 at 10:41
Yes. I wanted to insist on the difference with proofs by contradiction. Quite often, so-called ‘proofs by contradiction’ are really proofs by contrapositive.
– Bernard
Jul 16 at 10:48