Help showing a Markov chain with a doubly stochastic matrix has a uniform limiting distribution
























I have a lot of difficulty with proofs; could someone help me with this question, which I really cannot solve? I would also appreciate pointers to material for working through this kind of question, and some recommended reading on Markov chains. Thanks in advance.



"A stochastic matrix is called doubly stochastic if its columns sum to $1$. Let $X_0, X_1, \dots$ be a Markov chain on $\{1, \dots, k\}$ with a doubly stochastic transition matrix and initial distribution that is uniform on $\{1, \dots, k\}$. Show that the distribution of $X_n$ is uniform on $\{1, \dots, k\}$ for all $n \ge 0$."
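For concreteness, the defining property is easy to check numerically. Below is a minimal sketch with a small illustrative matrix (my own example, not part of the quoted exercise), assuming NumPy is available:

```python
import numpy as np

# A 3x3 doubly stochastic matrix: every row AND every column sums to 1.
# (The matrix itself is an illustrative example, not from the exercise.)
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.4, 0.4],
    [0.3, 0.3, 0.4],
])

print(np.allclose(P.sum(axis=1), 1.0))  # True: rows sum to 1 (stochastic)
print(np.allclose(P.sum(axis=0), 1.0))  # True: columns also sum to 1 (doubly stochastic)
```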







edited Jul 15 at 6:08 by BruceET

asked Jul 14 at 23:34 by Lucas Oliveira Freitas




















2 Answers

















$X_1 = X_0 P$, where $P$ is the transition matrix. As $X_0 = [1/k, \ldots, 1/k]$, one has $X_1^{(i)} = \frac{1}{k}(P_{1i} + P_{2i} + \ldots + P_{ki}) = \frac{1}{k}$ by the doubly stochastic property (the entries of each column sum to $1$). The general result follows by induction.
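The induction can be checked numerically: starting from the uniform distribution, each step $x \mapsto xP$ of a doubly stochastic chain leaves $x$ uniform. A minimal sketch with an illustrative matrix of my own (not from the answer), assuming NumPy:

```python
import numpy as np

# Numerical check of the induction step: if the distribution at time n is
# uniform, one step of a doubly stochastic chain leaves it uniform.
# (The matrix is an illustrative example, not from the answer.)
k = 3
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.4, 0.4],
    [0.3, 0.3, 0.4],
])  # rows and columns each sum to 1 (doubly stochastic)

x = np.full(k, 1.0 / k)          # X_0 ~ uniform on {1, ..., k}
for n in range(10):
    x = x @ P                    # distribution of X_{n+1} as a row vector
print(np.allclose(x, 1.0 / k))   # True: still uniform after 10 steps
```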






answered Jul 15 at 1:32 by Momo




















If a Markov chain with state space $\{1, 2, \dots, k\}$ is ergodic, then its stationary distribution is its limiting distribution. As I read your question, you are trying to show that a chain with a doubly stochastic transition matrix has a uniform stationary distribution on the state space.

You are using the convention that element $p_{ij}$, for $1 \le i, j \le k$, of the transition matrix $\mathbf{P}$ satisfies $p_{ij} = P(X_n = j \mid X_{n-1} = i)$.
Let $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_k)$ be a row vector. If $\sigma$ is uniform, then $\sigma_i = 1/k$ for $i = 1, \dots, k$, and
$$(\sigma \mathbf{P})_j = \sum_{i=1}^k \frac{1}{k}\, p_{ij} =
\frac{1}{k} \sum_{i=1}^k p_{ij} = \frac{1}{k}(1) = \frac{1}{k},$$

where the last equality is due to the doubly stochastic property of the transition matrix (each column sums to $1$). The argument is the same for all $j = 1, \dots, k$, so
$\sigma \mathbf{P} = \sigma$, and $\sigma$ is a stationary distribution.

If your question is not covered by this answer and the answer of @Momo (+1), please
explain what part you don't understand, and maybe someone here can help.
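The stationarity computation above is easy to sanity-check numerically. Here is a short sketch with an illustrative doubly stochastic matrix of my own (not from the answer), assuming NumPy:

```python
import numpy as np

# Sanity check of stationarity: with sigma uniform,
# (sigma P)_j = (1/k) * (column-j sum of P) = 1/k, so sigma P == sigma.
# (The matrix is an illustrative example, not from the answer.)
P = np.array([
    [0.1, 0.6, 0.3],
    [0.5, 0.2, 0.3],
    [0.4, 0.2, 0.4],
])  # doubly stochastic: rows and columns each sum to 1
k = P.shape[0]
sigma = np.full(k, 1.0 / k)           # uniform distribution

print(np.allclose(sigma @ P, sigma))  # True: sigma is stationary
```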







answered Jul 15 at 6:35 by BruceET






















                               

                              draft saved


                              draft discarded


























                               


                              draft saved


                              draft discarded














                              StackExchange.ready(
                              function ()
                              StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f2852052%2fhelp-showing-amarkov-chain-with-a-doubly-stochastic-matrix-has-uniform-limiting%23new-answer', 'question_page');

                              );

                              Post as a guest













































































                              Comments

                              Popular posts from this blog

                              What is the equation of a 3D cone with generalised tilt?

                              Relationship between determinant of matrix and determinant of adjoint?

                              Color the edges and diagonals of a regular polygon