Why is $T^* - \bar\lambda I$ not one-to-one?
























From Friedberg's Linear Algebra:

Let $T$ be a linear operator on a finite-dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$.

Proof. Suppose that $v$ is an eigenvector of $T$ with corresponding eigenvalue $\lambda$. Then for any $x \in V$,

$0 = \langle 0,x \rangle = \langle (T - \lambda I)(v),x \rangle = \langle v,(T-\lambda I)^*(x)\rangle = \langle v, (T^*-\bar\lambda I)(x)\rangle$,

and hence $v$ is orthogonal to the range of $T^* - \bar\lambda I$. So $T^* - \bar\lambda I$ is not onto and hence is not one-to-one. Thus $T^* - \bar\lambda I$ has a nonzero null space, and any nonzero vector in this null space is an eigenvector of $T^*$ with corresponding eigenvalue $\bar\lambda$.

I'm unable to see why $T^* - \bar\lambda I$ is not one-to-one, and why $T^* - \bar\lambda I$ has a nonzero null space.
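The orthogonality step in the quoted proof can be checked numerically. This is a minimal sketch, not part of Friedberg's argument: the matrix `T` and the dimension 4 are illustrative assumptions, with $T^*$ taken as the conjugate transpose under the standard inner product on $\Bbb C^4$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 4x4 complex matrix standing in for T on V = C^4.
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Pick an eigenpair (lam, v) of T.
eigvals, eigvecs = np.linalg.eig(T)
lam, v = eigvals[0], eigvecs[:, 0]

# S = T* - conj(lam) I, where T* is the conjugate transpose (the adjoint
# with respect to the standard inner product on C^4).
S = T.conj().T - np.conj(lam) * np.eye(4)

# v is orthogonal to every vector in the range of S: <v, S x> = 0 for all x.
X = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
print(np.max(np.abs(v.conj() @ (S @ X))))  # ~0 up to rounding

# Hence the range of S is a proper subspace, so S is not onto, and by
# rank-nullity it is then not one-to-one either:
print(np.linalg.matrix_rank(S))  # 3, strictly less than 4
```

The deficient rank is exactly the content of the proof's last step: a square matrix that is not onto cannot be one-to-one.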







  • If the range of $T^* - \bar\lambda I$ were all of $V$, then how could $v$ be orthogonal to all of it? Unless $v$ is zero, but it cannot be zero because it is an eigenvector of something, and zero vectors are not eigenvectors of anything.
    – Marcus Aurelius
    Jul 22 at 16:49

  • I thought that would mean that it's not onto, but I still can't see why it's not one-to-one.
    – K.M
    Jul 22 at 17:14

  • This is just the dimension formula: dimension of kernel + dimension of image = dimension of $V$ (or more generally, the domain).
    – Marcus Aurelius
    Jul 22 at 17:15














asked Jul 22 at 16:44
K.M







2 Answers

















Let $\lambda$ be an eigenvalue of $T$, i.e. there exists a non-zero $x \in V$ such that
$$Tx = \lambda x,$$
$$(T-\lambda I)x = 0.$$

Now let $y \in V$ be arbitrary. By the defining property of the adjoint,

$$\langle Tx,y \rangle = \langle x,T^*y \rangle,$$

and on the other hand

$$\langle Tx,y \rangle = \langle \lambda x,y \rangle = \langle x,\overline{\lambda} y \rangle.$$

Therefore
$$\langle x,T^*y \rangle = \langle x, \overline{\lambda} y \rangle$$
$$\implies \langle x, (T^*-\overline{\lambda} I )y \rangle = 0.$$

Since $y$ was arbitrary, $x \in (R(T^*-\overline{\lambda} I))^\perp$ with $x \ne 0$, so the range of $T^*-\overline{\lambda} I$ is a proper subspace of $V$. By rank–nullity, $T^*-\overline{\lambda} I$ is then not one-to-one, so its null space contains some non-zero $z$, i.e.

$$T^*z = \overline{\lambda} z.$$
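The conclusion can be illustrated numerically. A minimal sketch (NumPy; the $5\times 5$ random matrix is an illustrative assumption, not from the answer): for a complex matrix $A$, the spectrum of the adjoint $A^*$ is exactly the set of conjugates of the eigenvalues of $A$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative check (not the proof above): for a random complex matrix A,
# the eigenvalues of the adjoint A* (conjugate transpose) are exactly the
# conjugates of the eigenvalues of A.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

spec_A = np.sort_complex(np.linalg.eigvals(A))
spec_Astar = np.sort_complex(np.conj(np.linalg.eigvals(A.conj().T)))

print(np.allclose(spec_A, spec_Astar))  # True
```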






answered Jul 22 at 21:01
Vizag












Lauds to our colleague Vizag for his elegant demonstration that

$\lambda \; \text{an eigenvalue of} \; T \Longrightarrow \bar \lambda \; \text{an eigenvalue of} \; T^\dagger; \tag{1}$

however, his work on this subject leaves unaddressed the title question, that is,

$\text{"Why is} \; T^\dagger - \bar \lambda I \; \text{not one-to-one?"} \tag{2}$

I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result:

"A linear map $S:V \to V$ from a finite-dimensional vector space to itself is one-to-one if and only if it is onto."

Note: in what follows we allow $V$ to be a vector space over any base field $\Bbb F$.

Proof: The argument is based upon elementary notions of basis and linear independence.

We first assume $S:V \to V$ is onto. Let $w_1, w_2, \dots, w_n$ be a basis for $V$ over the field $\Bbb F$; since $S$ is surjective, there must be a set of vectors $v_i$, $1 \le i \le n$, with

$Sv_i = w_i, \; 1 \le i \le n. \tag{3}$

I claim the set $\{ v_i \mid 1 \le i \le n \}$ is linearly independent over $\Bbb F$; for if not, there would exist $\alpha_i \in \Bbb F$, not all zero, with

$\displaystyle \sum_1^n \alpha_i v_i = 0; \tag{4}$

then

$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S \left (\sum_1^n \alpha_i v_i \right ) = S(0) = 0; \tag{5}$

but the linear independence of the $w_i$ then forces

$\alpha_i = 0, \; 1 \le i \le n, \tag{6}$

contradicting our assumption that not all the $\alpha_i = 0$; therefore the $v_i$ are linearly independent over $\Bbb F$ and hence form a basis for $V$; then any $x \in V$ may be written

$x = \displaystyle \sum_1^n x_i v_i, \; x_i \in \Bbb F. \tag{7}$

Now suppose $S$ were not injective. Then we could find $x_1 \ne x_2 \in V$ with

$Sx_1 = Sx_2; \tag{8}$

if, in accord with (7), we set

$x_1 = \displaystyle \sum_1^n \alpha_i v_i, \tag{9}$

$x_2 = \displaystyle \sum_1^n \beta_i v_i, \tag{10}$

then from (8)-(10),

$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S \left (\sum_1^n \alpha_i v_i \right ) = S \left (\sum_1^n \beta_i v_i \right) = \sum_1^n \beta_i Sv_i = \sum_1^n \beta_i w_i, \tag{11}$

whence

$\displaystyle \sum_1^n (\alpha_i - \beta_i) w_i = 0; \tag{12}$

now the linear independence of the $w_i$ forces

$\alpha_i = \beta_i, \; 1 \le i \le n, \tag{13}$

whence again via (9)-(10)

$x_1 = x_2, \tag{14}$

and we see that $S$ is injective.

Going the other way, we now suppose $S$ is injective, and let the set $\{ v_i \mid 1 \le i \le n \}$ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, \ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $\alpha_i \in \Bbb F$, not all zero, such that

$\displaystyle S \left ( \sum_1^n \alpha_i v_i \right ) = \sum_1^n \alpha_i Sv_i = 0; \tag{15}$

now with $S$ injective this forces

$\displaystyle \sum_1^n \alpha_i v_i = 0, \tag{16}$

impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$ do form a basis and hence any $y \in V$ may be written

$y = \displaystyle \sum_1^n \beta_i Sv_i = S \left ( \sum_1^n \beta_i v_i \right ); \tag{17}$

thus every $y \in V$ lies in the image of $S$, which is at last seen to be onto. End of proof.

If we apply this result to $T^\dagger - \bar \lambda I$ as in the body of the question, we see that, having shown that $T^\dagger - \bar \lambda I$ is not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $\{ 0 \}$, since if $x_1 \ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 \ne 0$ but

$S(x_1 - x_2) = Sx_1 - Sx_2 = 0, \tag{18}$

whence $0 \ne x_1 - x_2 \in \ker S \ne \{ 0 \}$.
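The injective-iff-onto equivalence proved above is easy to see in coordinates via rank–nullity. A minimal numerical sketch (NumPy; the matrices and the dimension $n = 4$ are illustrative assumptions): a square matrix is onto exactly when it has full rank, and one-to-one exactly when its nullity is zero, so the two properties hold or fail together.

```python
import numpy as np

rng = np.random.default_rng(2)

# V = R^4; S_full is a generic (invertible) map, S_defect is deliberately
# non-surjective: its last column is made a copy of its first.
n = 4
S_full = rng.standard_normal((n, n))
S_defect = S_full.copy()
S_defect[:, -1] = S_defect[:, 0]

for S in (S_full, S_defect):
    rank = np.linalg.matrix_rank(S)
    onto = rank == n            # range is all of R^n
    nullity = n - rank          # rank-nullity: dim ker = n - rank
    one_to_one = nullity == 0
    print(onto, one_to_one)     # always equal: True True, then False False
```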






    share|cite|improve this answer























      Your Answer




      StackExchange.ifUsing("editor", function ()
      return StackExchange.using("mathjaxEditing", function ()
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      );
      );
      , "mathjax-editing");

      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "69"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: true,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );








       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2859561%2fwhy-is-t-bar-lambdai-not-one-to-one%23new-answer', 'question_page');

      );

      Post as a guest






























      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      2
      down vote













      Let $lambda$ be a non-zero eigen value of $T$, i.e. $exists$ non-zero $x in V$ such that
      $$Tx = lambda x$$
      $$(T-lambda I)x = 0$$



      Now let $y in V$ be non-zero.



      $$langle Tx,y rangle = langle x,T^*y rangle $$



      $$langle Tx,y rangle =langle lambda x,y rangle = langle x,overlinelambda y rangle$$



      And therefore,
      $$langle x,T^*y rangle = langle x,
      overlinelambda y rangle$$.
      $$implies langle x, (T^*-overlinelambda I )y rangle = 0$$



      Therefore $x in (Rg(T^*-overlinelambda I))^perp$
      And we know that $(Rg(T))^perp = N(T)$



      Therefore, $x in N(T^*-overlinelambda I)$
      i.e.



      $$T^*x = overlinelambda x$$






      share|cite|improve this answer

























        up vote
        2
        down vote













        Let $lambda$ be a non-zero eigen value of $T$, i.e. $exists$ non-zero $x in V$ such that
        $$Tx = lambda x$$
        $$(T-lambda I)x = 0$$



        Now let $y in V$ be non-zero.



        $$langle Tx,y rangle = langle x,T^*y rangle $$



        $$langle Tx,y rangle =langle lambda x,y rangle = langle x,overlinelambda y rangle$$



        And therefore,
        $$langle x,T^*y rangle = langle x,
        overlinelambda y rangle$$.
        $$implies langle x, (T^*-overlinelambda I )y rangle = 0$$



        Therefore $x in (Rg(T^*-overlinelambda I))^perp$
        And we know that $(Rg(T))^perp = N(T)$



        Therefore, $x in N(T^*-overlinelambda I)$
        i.e.



        $$T^*x = overlinelambda x$$






        share|cite|improve this answer























          up vote
          2
          down vote










          up vote
          2
          down vote









          Let $lambda$ be a non-zero eigen value of $T$, i.e. $exists$ non-zero $x in V$ such that
          $$Tx = lambda x$$
          $$(T-lambda I)x = 0$$



          Now let $y in V$ be non-zero.



          $$langle Tx,y rangle = langle x,T^*y rangle $$



          $$langle Tx,y rangle =langle lambda x,y rangle = langle x,overlinelambda y rangle$$



          And therefore,
          $$langle x,T^*y rangle = langle x,
          overlinelambda y rangle$$.
          $$implies langle x, (T^*-overlinelambda I )y rangle = 0$$



          Therefore $x in (Rg(T^*-overlinelambda I))^perp$
          And we know that $(Rg(T))^perp = N(T)$



          Therefore, $x in N(T^*-overlinelambda I)$
          i.e.



          $$T^*x = overlinelambda x$$






          share|cite|improve this answer













          Let $lambda$ be a non-zero eigen value of $T$, i.e. $exists$ non-zero $x in V$ such that
          $$Tx = lambda x$$
          $$(T-lambda I)x = 0$$



          Now let $y in V$ be non-zero.



          $$langle Tx,y rangle = langle x,T^*y rangle $$



          $$langle Tx,y rangle =langle lambda x,y rangle = langle x,overlinelambda y rangle$$



          And therefore,
          $$langle x,T^*y rangle = langle x,
          overlinelambda y rangle$$.
          $$implies langle x, (T^*-overlinelambda I )y rangle = 0$$



          Therefore $x in (Rg(T^*-overlinelambda I))^perp$
          And we know that $(Rg(T))^perp = N(T)$



          Therefore, $x in N(T^*-overlinelambda I)$
          i.e.



          $$T^*x = overlinelambda x$$







          share|cite|improve this answer













          share|cite|improve this answer



          share|cite|improve this answer











          answered Jul 22 at 21:01









          Vizag

          271111




          271111




















              up vote
              0
              down vote













              Lauds to our colleague Vizag for his elegant demonstration that



              $lambda ; textan eigenvalue of ; T Longrightarrow bar lambda ; textan eigenvalue of ; T^dagger; tag 1$



              however, his work on this subject leaves unaddressed the title question, that is,



              $text"Why is ; T^dagger - bar lambda I ; textnot one-to-one?" tag 2$



              I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result,



              "A linear map $S:V to V$ from a finite dimensional vector space to itself is one-to-one if and only if it is onto."



              Note: in what follows we allow $V$ to be a vector space over any base field $Bbb F$.



              Proof: The argument is based upon elementary notions of basis and linear independence.



              We first assume $S:V to V$ is onto. Then we let $w_1, w_2, dots w_n$ be a basis for $V$ over the field $Bbb F$, and we see, since $S$ is surjective, that there must be a set of vectors $v_i$, $1 le i le n$, with



              $Sv_i = w_i, ; 1 le i le n; tag 3$



              I claim the set $ v_i mid 1 le i le n $ is linearly independent over $Bbb F$; for if not, there would exist $alpha_i in Bbb F$, not all zero, with



              $displaystyle sum_1^n alpha_i v_i = 0; tag 4$



              then



              $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n alpha_i v_i right ) = S(0) = 0; tag 5$



              but this contradicts the linear independence of the $w_i$ unless



              $alpha_i = 0, ; 1 le i le n; tag 6$



              but condition (6) is precluded by our assumption that not all the $alpha_i = 0$; therefore the $v_i$ are linearly independent over $Bbb F$ and hence form a basis for $V$; then any $x in V$ may be written



              $x = displaystyle sum_1^n x_i v_i, ; x_i in Bbb F; tag 7$



              now suppose $S$ were not injective. Then we could find $x_1, x_2 in V$ with



              $Sx_1 = Sx_2; tag 8$



              if, in accord with (7) we set



              $x_1 = displaystyle sum_1^n alpha_i v_i, tag 9$



              $x_2 = displaystyle sum_1^n beta_i v_i, tag10$



              then from (8)-(10),



              $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n
              alpha_i v_i right ) = S left (sum_1^n beta_i v_i right) = sum_1^n beta_i Sv_i = sum_1^n beta_i w_i, tag11$



              whence



              $displaystyle sum_1^n (alpha_i - beta_i) w_i = 0; tag12$



              now the linear independence of the $w_i$ forces



              $alpha_i = beta_i, ; 1 le i le n, tag13$



              whence again via (9)-(10)



              $x_1 = x_2, tag14$



              and we see that $S$ is injective.



              Going the other way, we now suppose $S$ is injective; and let the set
              $v_i mid 1 le i le n $ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $alpha_i in Bbb F$ such that



              $displaystyle S left ( sum_1^n alpha_i v_i right ) = sum_1^n alpha_i Sv_i = 0; tag15$



              now with $S$ injective this forces



              $displaystyle sum_1^n alpha_i v_i = 0, tag16$



              impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$ do form a basis and hence any $y in V$ may be written



              $y = displaystyle sum_1^n beta_i Sv_i = S left ( sum_1^n beta_i v_i right ); tag17$



              thus every $y in V$ lies in the image of $S$ which at last seen to be onto.
              End: Proof.



              If we apply this result to $T^dagger - bar lambda I$ as in the body of the question, we see that, having shown that $T^dagger - bar lambda I$ is not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $ 0 $, since if $x_1 ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 ne 0$ but



              $S(x_1 - x_2) = Sx_1 - Sx_2 = 0, tag18$



              whence $0 ne x_1 - x_2 in ker S ne 0 $.






              share|cite|improve this answer



























                up vote
                0
                down vote













                Lauds to our colleague Vizag for his elegant demonstration that



                $lambda ; textan eigenvalue of ; T Longrightarrow bar lambda ; textan eigenvalue of ; T^dagger; tag 1$



                however, his work on this subject leaves unaddressed the title question, that is,



                $text"Why is ; T^dagger - bar lambda I ; textnot one-to-one?" tag 2$



                I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result,



                "A linear map $S:V to V$ from a finite dimensional vector space to itself is one-to-one if and only if it is onto."



                Note: in what follows we allow $V$ to be a vector space over any base field $Bbb F$.



                Proof: The argument is based upon elementary notions of basis and linear independence.



                We first assume $S:V to V$ is onto. Then we let $w_1, w_2, dots w_n$ be a basis for $V$ over the field $Bbb F$, and we see, since $S$ is surjective, that there must be a set of vectors $v_i$, $1 le i le n$, with



                $Sv_i = w_i, ; 1 le i le n; tag 3$



                I claim the set $ v_i mid 1 le i le n $ is linearly independent over $Bbb F$; for if not, there would exist $alpha_i in Bbb F$, not all zero, with



                $displaystyle sum_1^n alpha_i v_i = 0; tag 4$



                then



                $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n alpha_i v_i right ) = S(0) = 0; tag 5$



                but this contradicts the linear independence of the $w_i$ unless



                $alpha_i = 0, ; 1 le i le n; tag 6$



                but condition (6) is precluded by our assumption that not all the $alpha_i = 0$; therefore the $v_i$ are linearly independent over $Bbb F$ and hence form a basis for $V$; then any $x in V$ may be written



                $x = displaystyle sum_1^n x_i v_i, ; x_i in Bbb F; tag 7$



                now suppose $S$ were not injective. Then we could find $x_1, x_2 in V$ with



                $Sx_1 = Sx_2; tag 8$



                if, in accord with (7) we set



                $x_1 = displaystyle sum_1^n alpha_i v_i, tag 9$



                $x_2 = displaystyle sum_1^n beta_i v_i, tag10$



                then from (8)-(10),



                $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n
                alpha_i v_i right ) = S left (sum_1^n beta_i v_i right) = sum_1^n beta_i Sv_i = sum_1^n beta_i w_i, tag11$



                whence



                $displaystyle sum_1^n (alpha_i - beta_i) w_i = 0; tag12$



                now the linear independence of the $w_i$ forces



                $alpha_i = beta_i, ; 1 le i le n, tag13$



                whence again via (9)-(10)



                $x_1 = x_2, tag14$



                and we see that $S$ is injective.



                Going the other way, we now suppose $S$ is injective; and let the set
                $v_i mid 1 le i le n $ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $alpha_i in Bbb F$ such that



                $displaystyle S left ( sum_1^n alpha_i v_i right ) = sum_1^n alpha_i Sv_i = 0; tag15$



                now with $S$ injective this forces



                $displaystyle sum_1^n alpha_i v_i = 0, tag16$



                impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$ do form a basis and hence any $y in V$ may be written



                $y = displaystyle sum_1^n beta_i Sv_i = S left ( sum_1^n beta_i v_i right ); tag17$



                thus every $y in V$ lies in the image of $S$ which at last seen to be onto.
                End: Proof.



                If we apply this result to $T^dagger - bar lambda I$ as in the body of the question, we see that, having shown that $T^dagger - bar lambda I$ is not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $ 0 $, since if $x_1 ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 ne 0$ but



                $S(x_1 - x_2) = Sx_1 - Sx_2 = 0, tag18$



                whence $0 ne x_1 - x_2 in ker S ne 0 $.






                share|cite|improve this answer

























                  up vote
                  0
                  down vote










                  up vote
                  0
                  down vote









                  Lauds to our colleague Vizag for his elegant demonstration that



                  $lambda ; textan eigenvalue of ; T Longrightarrow bar lambda ; textan eigenvalue of ; T^dagger; tag 1$



                  however, his work on this subject leaves unaddressed the title question, that is,



                  $text"Why is ; T^dagger - bar lambda I ; textnot one-to-one?" tag 2$



                  I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result,



                  "A linear map $S:V to V$ from a finite dimensional vector space to itself is one-to-one if and only if it is onto."



                  Note: in what follows we allow $V$ to be a vector space over any base field $Bbb F$.



                  Proof: The argument is based upon elementary notions of basis and linear independence.



                  We first assume $S:V to V$ is onto. Then we let $w_1, w_2, dots w_n$ be a basis for $V$ over the field $Bbb F$, and we see, since $S$ is surjective, that there must be a set of vectors $v_i$, $1 le i le n$, with



                  $Sv_i = w_i, ; 1 le i le n; tag 3$



                  I claim the set $ v_i mid 1 le i le n $ is linearly independent over $Bbb F$; for if not, there would exist $alpha_i in Bbb F$, not all zero, with



                  $displaystyle sum_1^n alpha_i v_i = 0; tag 4$



                  then



                  $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n alpha_i v_i right ) = S(0) = 0; tag 5$



                  but this contradicts the linear independence of the $w_i$ unless



                  $alpha_i = 0, ; 1 le i le n; tag 6$



                  but condition (6) is precluded by our assumption that not all the $alpha_i = 0$; therefore the $v_i$ are linearly independent over $Bbb F$ and hence form a basis for $V$; then any $x in V$ may be written



                  $x = displaystyle sum_1^n x_i v_i, ; x_i in Bbb F; tag 7$



                  now suppose $S$ were not injective. Then we could find $x_1, x_2 in V$ with



                  $Sx_1 = Sx_2; tag 8$



                  if, in accord with (7) we set



                  $x_1 = displaystyle sum_1^n alpha_i v_i, tag 9$



                  $x_2 = displaystyle sum_1^n beta_i v_i, tag10$



                  then from (8)-(10),



                  $displaystyle sum_1^n alpha_i w_i = sum_1^n alpha_i Sv_i = S left (sum_1^n
                  alpha_i v_i right ) = S left (sum_1^n beta_i v_i right) = sum_1^n beta_i Sv_i = sum_1^n beta_i w_i, tag11$



                  whence



                  $displaystyle sum_1^n (alpha_i - beta_i) w_i = 0; tag12$



                  now the linear independence of the $w_i$ forces



                  $alpha_i = beta_i, ; 1 le i le n, tag13$



                  whence again via (9)-(10)



                  $x_1 = x_2, tag14$



                  and we see that $S$ is injective.



                  Going the other way, we now suppose $S$ is injective; and let the set
                  $v_i mid 1 le i le n $ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $alpha_i in Bbb F$ such that



                  $displaystyle S left ( sum_1^n alpha_i v_i right ) = sum_1^n alpha_i Sv_i = 0; tag15$



                  now with $S$ injective this forces



                  $displaystyle sum_1^n alpha_i v_i = 0, tag16$



                  impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$ do form a basis and hence any $y in V$ may be written



                  $y = displaystyle sum_1^n beta_i Sv_i = S left ( sum_1^n beta_i v_i right ); tag17$



                  thus every $y in V$ lies in the image of $S$ which at last seen to be onto.
                  End: Proof.



                  If we apply this result to $T^dagger - bar lambda I$ as in the body of the question, we see that, having shown that $T^dagger - bar lambda I$ is not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $ 0 $, since if $x_1 ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 ne 0$ but



                  $S(x_1 - x_2) = Sx_1 - Sx_2 = 0, tag18$



                  whence $0 ne x_1 - x_2 in ker S ne 0 $.






                  share|cite|improve this answer















                  Lauds to our colleague Vizag for his elegant demonstration that



                  $lambda ; textan eigenvalue of ; T Longrightarrow bar lambda ; textan eigenvalue of ; T^dagger; tag 1$



                  however, his work on this subject leaves unaddressed the title question, that is,



                  $text"Why is ; T^dagger - bar lambda I ; textnot one-to-one?" tag 2$



                  I wish to take up this specific topic here, and provide a sort of "classic" answer; specifically, I wish to demonstrate the essential and well-known result,



                  "A linear map $S:V to V$ from a finite dimensional vector space to itself is one-to-one if and only if it is onto."



Note: in what follows we allow $V$ to be a vector space over any base field $\Bbb F$.
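Before the proof, here is a quick numeric sanity check of the claim (my own illustration, not part of the original argument), taking $\Bbb F = \Bbb R$ and $2 \times 2$ matrices chosen ad hoc: a singular matrix fails to be onto and simultaneously fails to be one-to-one, while an invertible one is both.

```python
# Sketch (illustrative matrices chosen ad hoc): for a square matrix,
# det != 0 <=> invertible <=> one-to-one <=> onto.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def apply2(m, v):
    """Apply the 2x2 matrix m to the vector v = (v1, v2)."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

S_good = [[1, 2], [3, 4]]   # det = -2, so bijective
S_bad  = [[1, 2], [2, 4]]   # det = 0: rows dependent, so singular

# Not one-to-one: two different vectors with the same image under S_bad.
x1, x2 = (2, 0), (0, 1)
assert apply2(S_bad, x1) == apply2(S_bad, x2) == (2, 4)

# Equivalently, x1 - x2 = (2, -1) is a nonzero vector in the null space.
assert apply2(S_bad, (2, -1)) == (0, 0)

# The invertible matrix separates the same pair of vectors.
assert apply2(S_good, x1) != apply2(S_good, x2)
```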



                  Proof: The argument is based upon elementary notions of basis and linear independence.



We first assume $S:V \to V$ is onto. Then we let $w_1, w_2, \ldots, w_n$ be a basis for $V$ over the field $\Bbb F$, and we see, since $S$ is surjective, that there must be a set of vectors $v_i$, $1 \le i \le n$, with



$Sv_i = w_i, \; 1 \le i \le n; \tag{3}$



I claim the set $\{v_i \mid 1 \le i \le n\}$ is linearly independent over $\Bbb F$; for if not, there would exist $\alpha_i \in \Bbb F$, not all zero, with



$\displaystyle \sum_1^n \alpha_i v_i = 0; \tag{4}$



                  then



$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S\left(\sum_1^n \alpha_i v_i\right) = S(0) = 0; \tag{5}$



                  but this contradicts the linear independence of the $w_i$ unless



$\alpha_i = 0, \; 1 \le i \le n; \tag{6}$



but condition (6) is precluded by our assumption that not all the $\alpha_i$ are $0$; therefore the $v_i$ are linearly independent over $\Bbb F$ and hence form a basis for $V$; then any $x \in V$ may be written



$x = \displaystyle \sum_1^n x_i v_i, \; x_i \in \Bbb F; \tag{7}$



now suppose $x_1, x_2 \in V$ are any vectors with



$Sx_1 = Sx_2; \tag{8}$



                  if, in accord with (7) we set



$x_1 = \displaystyle \sum_1^n \alpha_i v_i, \tag{9}$



$x_2 = \displaystyle \sum_1^n \beta_i v_i, \tag{10}$



                  then from (8)-(10),



$\displaystyle \sum_1^n \alpha_i w_i = \sum_1^n \alpha_i Sv_i = S\left(\sum_1^n \alpha_i v_i\right) = S\left(\sum_1^n \beta_i v_i\right) = \sum_1^n \beta_i Sv_i = \sum_1^n \beta_i w_i, \tag{11}$



                  whence



$\displaystyle \sum_1^n (\alpha_i - \beta_i) w_i = 0; \tag{12}$



                  now the linear independence of the $w_i$ forces



$\alpha_i = \beta_i, \; 1 \le i \le n, \tag{13}$



                  whence again via (9)-(10)



$x_1 = x_2, \tag{14}$



                  and we see that $S$ is injective.



Going the other way, we now suppose $S$ is injective, and let the set $\{v_i \mid 1 \le i \le n\}$ form a basis for $V$. I claim that the vectors $Sv_1, Sv_2, \ldots, Sv_n$ also form a basis; for if not, they must be linearly dependent and we may find $\alpha_i \in \Bbb F$, not all zero, such that



$\displaystyle S\left(\sum_1^n \alpha_i v_i\right) = \sum_1^n \alpha_i Sv_i = 0; \tag{15}$



                  now with $S$ injective this forces



$\displaystyle \sum_1^n \alpha_i v_i = 0, \tag{16}$



                  impossible by the assumed linear independence of the $v_i$; thus the $Sv_i$ do form a basis and hence any $y in V$ may be written



$y = \displaystyle \sum_1^n \beta_i Sv_i = S\left(\sum_1^n \beta_i v_i\right); \tag{17}$



thus every $y \in V$ lies in the image of $S$, which is at last seen to be onto.
End: Proof.
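The "injective $\Longrightarrow$ onto" direction is constructive in coordinates: once the images of a basis are known to be a basis, any target $y$ can be expressed as in (17). A minimal sketch of this (my own, with $\Bbb F$ taken as the rationals and an ad-hoc $2 \times 2$ injective matrix, solved via Cramer's rule):

```python
# Sketch of "injective => onto" in coordinates: S is injective
# (det = 1 != 0), so every y has a preimage x with S @ x = y.
from fractions import Fraction

S = [[2, 1], [1, 1]]        # injective: det = 2*1 - 1*1 = 1

def solve2(m, y):
    """Solve m @ x = y for a 2x2 m with det(m) != 0 (Cramer's rule)."""
    d = Fraction(m[0][0] * m[1][1] - m[0][1] * m[1][0])
    x1 = (y[0] * m[1][1] - m[0][1] * y[1]) / d
    x2 = (m[0][0] * y[1] - y[0] * m[1][0]) / d
    return (x1, x2)

# Every target y has a preimage, i.e. S is onto.
y = (5, 3)
x = solve2(S, y)
assert (S[0][0] * x[0] + S[0][1] * x[1],
        S[1][0] * x[0] + S[1][1] * x[1]) == y
```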



If we apply this result to $T^\dagger - \bar\lambda I$ as in the body of the question, we see that, having shown that $T^\dagger - \bar\lambda I$ is not onto, we may conclude it is also not injective by the preceding basic demonstration; but not injective implies the null space is not $\{0\}$, since if $x_1 \ne x_2$ but $Sx_1 = Sx_2$, we have $x_1 - x_2 \ne 0$ but



$S(x_1 - x_2) = Sx_1 - Sx_2 = 0, \tag{18}$



whence $0 \ne x_1 - x_2 \in \ker S \ne \{0\}$.
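Tying this back to the title question, here is a small numeric illustration (my own example, not from the text): for a matrix $T$ with eigenvalue $\lambda$, the conjugate transpose $T^\dagger$ is annihilated on some nonzero vector by $T^\dagger - \bar\lambda I$, so $\bar\lambda$ is indeed an eigenvalue of $T^\dagger$.

```python
# Illustration with an ad-hoc 2x2 complex matrix: T is upper
# triangular, so its eigenvalues sit on the diagonal; we check that
# T^dagger - conj(lam) * I is singular.

T = [[1j, 1], [0, 2]]             # eigenvalues: 1j and 2
lam = 1j

# Conjugate transpose of T.
Td = [[T[0][0].conjugate(), T[1][0].conjugate()],
      [T[0][1].conjugate(), T[1][1].conjugate()]]

# M = T^dagger - conj(lam) * I
M = [[Td[0][0] - lam.conjugate(), Td[0][1]],
     [Td[1][0], Td[1][1] - lam.conjugate()]]

# Singular: zero determinant, hence a nontrivial null space.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det == 0

# An explicit nonzero null-space vector of M ...
v = (-(2 + 1j), 1)
assert (M[0][0] * v[0] + M[0][1] * v[1],
        M[1][0] * v[0] + M[1][1] * v[1]) == (0, 0)

# ... is an eigenvector of T^dagger with eigenvalue conj(lam).
assert (Td[0][0] * v[0] + Td[0][1] * v[1],
        Td[1][0] * v[0] + Td[1][1] * v[1]) \
       == (lam.conjugate() * v[0], lam.conjugate() * v[1])
```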






























                  edited Jul 24 at 0:46


























                  answered Jul 24 at 0:28









                  Robert Lewis























                       
