Prove the following statements: Linear algebra (Vector spaces)

Let $V$ be a vector space and $P \subseteq V$ a subset. Prove that the following statements are equivalent:



(i) $P$ is linearly independent.
(ii) Each vector in $\operatorname{vect}(P)$ can be uniquely expressed as a linear combination of vectors in $P$.



Hint: Use contradiction for (i) $\Rightarrow$ (ii) by presuming that a vector can be expressed as two linear combinations of vectors from $P$.
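
For reference, here is the definition of linear independence in play, stated in my own words (with the usual convention that linear combinations have finite support): $P$ is linearly independent when the only finite linear combination of distinct vectors of $P$ equal to $\vec 0$ is the trivial one,

$$c_1 v_1 + \ldots + c_n v_n = \vec 0 \text{ with } v_1,\ldots,v_n \in P \text{ distinct} \implies c_1 = \ldots = c_n = 0.$$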



So let's assume a vector $x = \begin{bmatrix}a_1 \\ a_2 \\ a_3\end{bmatrix}$ can be expressed as two linear combinations of vectors from $P$.
This implies that there isn't a unique way to express a vector as a linear combination of vectors from $P$.
This contradicts (ii). I don't really know what is required for a sufficient proof. Corrections would be appreciated.







asked Jul 16 at 9:49 by Anonymous I, edited Jul 16 at 11:18 by caffeinemachine

  • You need to show that the negation of (ii) contradicts (i), not (ii) (which it trivially does).
    – Poon Levi
    Jul 16 at 9:51










  • Well yeah, just like in the example on this wikipedia page here.
    – Anonymous I
    Jul 16 at 9:54










  • But how do you do that here? Is one allowed to say it is trivial and just move on? Because the best thing I can think of is just to write a particular vector like my $x$ and say it can be written in two different ways using vectors of $P$.
    – Anonymous I
    Jul 16 at 10:02










  • You have to show that if a vector is a linear combination of the elements in $P$ in two distinct ways, then $P$ is not linearly independent. This is by no means trivial and requires proof.
    – Matthias Klupsch
    Jul 16 at 10:14










  • You're almost there. Try and contradict (i). It's not trivial. Use the definition of linear independence too.
    – Jalapeno Nachos
    Jul 16 at 10:19














3 Answers

















Accepted answer (score 3), by gimusi, answered Jul 16 at 10:30, edited Jul 16 at 11:51

For $(i)\implies (ii)$ we have



$$a_1\vec v_1+\ldots+a_n\vec v_n=b_1\vec v_1+\ldots+b_n\vec v_n \implies (a_1-b_1)\vec v_1+\ldots+(a_n-b_n)\vec v_n=\vec 0 \implies a_1=b_1,\ldots,a_n=b_n,$$

where the last implication holds because the $\vec v_i$ are linearly independent, so each coefficient $a_i-b_i$ must vanish.



For $(ii)\implies (i)$, suppose $P$ is not linearly independent; then there exists a relation



$$c_1\vec v_1+\ldots+c_n\vec v_n=\vec 0$$



for some $c_i$ not all equal to zero. Therefore for any $\vec w\in \operatorname{vect}(P)$ we have



$$\vec w=a_1\vec v_1+\ldots+a_n\vec v_n$$



and



$$\vec w=\vec w+\vec 0=(a_1+c_1)\vec v_1+\ldots+(a_n+c_n)\vec v_n,$$



which is a contradiction.
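
As a concrete illustration of this second direction (the example is mine, not part of the answer): in $V=\mathbb R^2$ take the linearly dependent set $P=\{\vec v_1,\vec v_2,\vec v_3\}$ with

$$\vec v_1=\begin{bmatrix}1\\0\end{bmatrix},\quad \vec v_2=\begin{bmatrix}0\\1\end{bmatrix},\quad \vec v_3=\begin{bmatrix}1\\1\end{bmatrix},\qquad \vec v_1+\vec v_2-\vec v_3=\vec 0.$$

Then

$$\begin{bmatrix}1\\1\end{bmatrix}=1\cdot\vec v_1+1\cdot\vec v_2+0\cdot\vec v_3=0\cdot\vec v_1+0\cdot\vec v_2+1\cdot\vec v_3,$$

two distinct representations of the same vector, so uniqueness in (ii) fails exactly as in the proof above.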






  • Please correct my last comment.
    – Anonymous I
    Jul 16 at 10:31










  • Oh, OK, I see now. I'm not used to constructing good logical proofs, only induction-type ones.
    – Anonymous I
    Jul 16 at 10:34










  • @AnonymousI Thanks for the edit; I added the vector symbol to the zero vector as well!
    – gimusi
    Jul 16 at 11:50

















Answer (score 2), by Berci, answered Jul 16 at 10:36

$\lnot$(i) $\implies$ $\lnot$(ii): If $P$ is linearly dependent, $0$ can be expressed in multiple ways as a linear combination of elements of $P$.



$\lnot$(ii) $\implies$ $\lnot$(i): If a vector $v$ can be expressed by two different linear combinations of elements of $P$, subtract these to arrive at a nontrivial linear combination resulting in $0$.
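
To spell out the first direction (my elaboration, not part of this answer): if $c_1\vec v_1+\ldots+c_n\vec v_n=\vec 0$ with some $c_i\neq 0$, then

$$\vec 0 = 0\cdot\vec v_1+\ldots+0\cdot\vec v_n = c_1\vec v_1+\ldots+c_n\vec v_n$$

gives two distinct expressions of $\vec 0$ as a linear combination of elements of $P$, which is precisely the failure of uniqueness required by $\lnot$(ii).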






  • Just like a contraposition in the wikipedia articles I mentioned.
    – Anonymous I
    Jul 16 at 10:38

















Answer (score 1), by Bernard, answered Jul 16 at 10:40, edited Jul 16 at 10:46

First, it is not stipulated that the vector space is $K^3$ ($K$ being the base field), nor that it has finite dimension.



Second, the proof is not really by contradiction, but by contrapositive.
The hint suggests to assume some vector $v$ can be written as two different linear combinations, with finite support, of the vectors in $P$:
$$v=\sum_{u\in P}\lambda_u u=\sum_{u\in P}\mu_u u, \tag{1}$$
and to deduce that the set of vectors $P$ is not linearly independent. But that is obvious, since you can rewrite eq. $(1)$ as
$$\sum_{u\in P}(\lambda_u-\mu_u)\, u= 0,$$
which is a non-trivial linear relation between the elements of $P$, since not all coefficients $\lambda_u, \mu_u$ are equal.
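
A note on the finite-support convention used above (this formalization is mine): when $P$ may be infinite, a sum $\sum_{u\in P}\lambda_u u$ is only meaningful if all but finitely many coefficients vanish, i.e.

$$\lambda_u = 0 \text{ for all but finitely many } u \in P,$$

and uniqueness in (ii) means that two such coefficient families representing the same vector must agree at every $u \in P$.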






  • Cf. my reference link in the comments.
    – Anonymous I
    Jul 16 at 10:41










  • Yes. I wanted to insist on the difference with proofs by contradiction. Quite often, so-called ‘proofs by contradiction’ are really proofs by contrapositive.
    – Bernard
    Jul 16 at 10:48









