Existence of Solution, System of Equations

Suppose $P(\lambda, i)$ is the probability that a Poisson random variable with average $\lambda$ is equal to $i$, i.e. $\frac{\lambda^i}{e^{\lambda}\, i!}$.



I think the following system of equations always has a solution in non-negative real numbers $x$ and $y$, for any $\alpha>0$ and $k\in \mathbb{N}_+$:



$$\begin{cases}
\alpha=\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i) \\
\alpha=\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i+1)
\end{cases}$$



where the necessary condition $\alpha\leq P(k+1, k+1)$ holds. It is easy to prove that this is indeed a necessary condition, equivalent to the condition that $\alpha=P(\lambda,k+1)$ has a solution. It is also easy to see that the solution $y$ of the system is less than or equal to $\lambda$, the largest solution of the equation $\alpha=P(\lambda,k+1)$. Experiments show that for each fixed $\alpha$ and $k$ there is a solution, but I did not manage to prove it analytically.
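The claim behind the necessary condition can be sanity-checked numerically (a minimal sketch; the grid step and the sample value $k=2$ are arbitrary choices of mine):

```python
import math

def P(lam, i):
    """Poisson pmf: lam^i * e^(-lam) / i!, computed in log space for stability."""
    if lam == 0:
        return 1.0 if i == 0 else 0.0
    return math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))

# lam -> P(lam, k+1) is maximized at lam = k+1, so the equation
# alpha = P(lam, k+1) is solvable exactly when alpha <= P(k+1, k+1).
k = 2
peak = max(P(lam / 100.0, k + 1) for lam in range(1, 3001))
print(peak, P(k + 1, k + 1))  # the two agree: the maximizer lam = k+1 lies on the grid
```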



Is there an analogue of the mean value theorem for multidimensional functions?
Any suggestions for proof directions will be appreciated.







asked 12 hours ago
Tina

2 Answers

Let $a:=\alpha$ and
\begin{equation*}
F_k(x,y):=\sum_{j=0}^\infty \frac{x^j}{j!}\,\frac{y^{k+j}}{(k+j)!}\,e^{-x-y},
\end{equation*}
assuming the standard convention $0^0:=1$.
We have to consider the existence of a solution in $x$ and $y$ of the system
\begin{equation*}
a=F_k(x,y)=F_{k+1}(x,y). \tag{1}
\end{equation*}
We shall prove the following.




Theorem 1. Take any natural $k$ and any
\begin{equation*}
a\in(0,a_k],\quad\text{where}\quad a_k:=\sup_{x,y\ge0}F_{k+1}(x,y). \tag{1.5}
\end{equation*}
Then the system (1) has a solution $x,y\ge0$.




Remark 1. Since $F_k>0$, the condition $a\in(0,a_k]$ is obviously necessary in Theorem 1.



Proof of Theorem 1. Note that $F_k(x,y)\ge0$ for any real $x,y\ge0$ and $F_k(x,y)$ is continuous in real $x,y\ge0$. The crucial observation is the identity
\begin{equation*}
\partial_y F_{k+1}(x,y)=F_k(x,y)-F_{k+1}(x,y) \tag{2}
\end{equation*}
for real $x,y$.



Next, fix for a moment any real $x\ge0$. Then $F_{k+1}(x,0)=0$ and, by dominated convergence, $F_{k+1}(x,\infty-)=0$. So, $F_{k+1}(x,y)$ attains its maximum in $y$ at some real point $y=y_x\ge0$. At this point, we have $\partial_y F_{k+1}(x,y)=0$. So, by (2),
\begin{equation*}
F_k(x,y_x)=F_{k+1}(x,y_x)=\max_{y\ge0}F_{k+1}(x,y)=:M_{k+1}(x), \tag{3}
\end{equation*}
for all real $x\ge0$.



Next,
\begin{align*}
M_k(x)&\le\sum_{j=0}^\infty \frac{x^j}{j!}\max_{y\ge0}\frac{y^{k+j}}{(k+j)!}\,e^{-x-y} \\
&=\sum_{j=0}^\infty \frac{x^j}{j!}\,e^{-x}\,\frac{(k+j)^{k+j}}{(k+j)!}\,e^{-k-j} \tag{4} \\
&\ll\sum_{j=0}^\infty \frac{x^j}{j!}\,e^{-x}\,\frac{1}{\sqrt{k+j}}=E\,\frac{1}{\sqrt{k+\Pi_x}}
\underset{x\to\infty}{\longrightarrow}0
\end{align*}
by dominated convergence and because $\Pi_x\underset{x\to\infty}{\longrightarrow}\infty$ in probability, where $\Pi_x$ is a Poisson random variable with parameter $x$. So,
$M_k(\infty-)=0$. It is also not hard to see that
$F_k(x,y)$ is continuous in real $x\ge0$ uniformly in real $y\ge0$ (see the Appendix), so that $M_k(x)$ is continuous in $x\ge0$. So, $M_{k+1}(x)$ attains its maximum in $x\ge0$ (equal to $a_k$, by (1.5)) and takes all values in the interval $(0,a_k]$. Now Theorem 1 follows by (3). $\qquad\Box$
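The proof is constructive and can be checked numerically (a sketch; the truncation point `jmax`, the search intervals, and the sample values $k=1$, $a=0.1$ are my own choices, and the ternary search relies on $F_{k+1}(x,\cdot)$ being unimodal, which identity (2) suggests and which holds numerically):

```python
import math

def F(k, x, y, jmax=200):
    """F_k(x,y) = sum_{j>=0} x^j/j! * y^(k+j)/(k+j)! * e^(-x-y), truncated at jmax."""
    if y <= 0.0:
        return 0.0
    s = 0.0
    for j in range(jmax):
        if x == 0.0:
            if j > 0:
                break
            lx = 0.0
        else:
            lx = j * math.log(x)
        s += math.exp(lx - math.lgamma(j + 1)
                      + (k + j) * math.log(y) - math.lgamma(k + j + 1) - x - y)
    return s

def argmax_y(k1, x, hi=80.0):
    """Ternary search for the maximizer y_x of F_{k1}(x, .), assuming unimodality."""
    lo = 0.0
    for _ in range(100):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if F(k1, x, m1) < F(k1, x, m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

k, a = 1, 0.1  # sample values; any a in (0, a_k] should work
# Bisect on x for M_{k+1}(x) = a: here M_{k+1}(0) = a_k > a while M_{k+1}(50) < a,
# and M_{k+1} is continuous, so the bracket always contains a crossing.
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = (lo + hi) / 2
    if F(k + 1, mid, argmax_y(k + 1, mid)) > a:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2
y = argmax_y(k + 1, x)
# By (2) and (3), at (x, y_x) we get F_k = F_{k+1} = M_{k+1}(x) = a.
print(F(k, x, y), F(k + 1, x, y))  # both approximately 0.1
```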



Appendix. Similarly to (2),
\begin{equation*}
\partial_x F_k(x,y)=F_{k+1}(x,y)-F_k(x,y)
\end{equation*}
for real $x,y$. Therefore, and because $0\le F_k\le1$, we have $|\partial_x F_k(x,y)|\le1$ for real $x,y$, so that $F_k(x,y)$ is indeed continuous in real $x\ge0$ uniformly in real $y\ge0$.



Added: Let us now show that
\begin{equation*}
a_k=c_{k+1},\quad\text{where}\quad c_k:=\frac{k^k}{k!}\,e^{-k}
\sim\frac{1}{\sqrt{2\pi k}}
\end{equation*}
as $k\to\infty$. To this end, note first that
\begin{equation*}
c_{k+1}/c_k=(1+1/k)^k/e<1,
\end{equation*}
and so, $c_k$ is decreasing in $k$.
So, recalling (4), we have
\begin{equation*}
M_k(x)\le\sum_{j=0}^\infty \frac{x^j}{j!}\,e^{-x}\,c_{k+j}
\le\sum_{j=0}^\infty \frac{x^j}{j!}\,e^{-x}\,c_k=c_k=M_k(0).
\end{equation*}
Thus, in view of (1.5) and (2),
\begin{equation*}
a_k=\max_{x\ge0}M_{k+1}(x)=M_{k+1}(0)=c_{k+1},
\end{equation*}
as desired.



In particular, for $k=0,1,2,3$ the values of $a_k$ are $\approx 0.367879, 0.270671, 0.224042, 0.195367$.
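The closed form $a_k=c_{k+1}$ makes these values easy to reproduce (a quick check):

```python
import math

def c(k):
    # c_k = k^k / k! * e^(-k), the maximum of the Poisson(k) pmf at its mode
    return k**k / math.factorial(k) * math.exp(-k)

for k in range(4):
    print(k, round(c(k + 1), 6))  # a_k = c_{k+1}
```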






answered 8 hours ago, edited 5 hours ago (accepted)
Iosif Pinelis

          • I have added Remark 2 concerning the values of $a_k$.
            – Iosif Pinelis
            7 hours ago











          • The proof of $a_k=P(k+1,k+1)$ is very easy: if $\alpha>P(k+1,k+1)$, then $\alpha>P(\lambda, k+1)$ for each $\lambda$, since $\lambda\mapsto P(\lambda,k+1)$ is maximized at $\lambda=k+1$. On the other hand, by induction we can prove that $\alpha>P(\lambda,k+1+i)$ for any $i>0$ and $\lambda$; therefore the second equation does not have a solution!
            – Tina
            6 hours ago










          • @Tina : I am afraid I don't understand your comment: (i) So, what if $\alpha>P(\lambda,k+1)$? What does this imply? (ii) What is your $\alpha$ in the inequality $\alpha>P(\lambda,k+1+i)$? (iii) How do you prove this inequality? (iv) What does this latter inequality imply?
            – Iosif Pinelis
            6 hours ago










          • (i) If $\alpha>P(\lambda, k+1)$ for any $\lambda$, this implies that $\alpha>P(\lambda,k+2)$ and, more generally, $\alpha>P(\lambda,k+1+i)$ for any $i>0$. (ii) $\alpha$ is the parameter from the system of equations. (iii) The proof is by induction: if $\alpha>P(\lambda,k+1)$, this means $\alpha\cdot e^{\lambda} - \frac{\lambda^{k+1}}{(k+1)!}>0$; then $\alpha\cdot e^{\lambda} - \frac{\lambda^{k+2}}{(k+2)!}>0$, since the latter holds for $\lambda=0$ and its derivative is the former, always positive. (iv) This implies that the second equation of the system is not solvable, since $\sum_{i=0}^{\infty}P(x,i)=1$.
            – Tina
            6 hours ago











          • @Tina : I am sorry, I still don't understand your comment.
            – Iosif Pinelis
            5 hours ago

















The sums over $P(\lambda,i)=\frac{\lambda^i}{e^{\lambda}\, i!}$ are evaluated in terms of a Bessel function as
$$\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i)=y^k e^{-x-y} \left(xy\right)^{-k/2} I_k\left(2 \sqrt{xy}\right)$$
$$\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i+1)=(y/x)^{1/2}\,y^k e^{-x-y} \left(xy\right)^{-k/2} I_{k+1}\left(2 \sqrt{xy}\right)$$
For any positive integer $k$, these two expressions should be equal for some $x,y>0$. (For $x=y=0$ both expressions are identically zero.) So the function
$$F_k(x,y)=\sqrt{x}\, I_k\left(2 \sqrt{xy}\right)-\sqrt{y}\, I_{k+1}\left(2 \sqrt{xy}\right)$$
should pass through zero in the quadrant $x,y>0$ for any positive integer $k$.
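The two closed forms can be checked against the partial sums (a sketch; $I_k$ is summed from its power series, and the sample point $x=1.3$, $y=2.1$, $k=3$ is an arbitrary choice of mine):

```python
import math

def I(k, z, terms=60):
    """Modified Bessel function of the first kind, from its power series."""
    return sum((z / 2) ** (k + 2 * m) / (math.factorial(m) * math.factorial(k + m))
               for m in range(terms))

def P(lam, i):
    """Poisson pmf: lam^i * e^(-lam) / i!."""
    return lam ** i * math.exp(-lam) / math.factorial(i)

x, y, k = 1.3, 2.1, 3
s1 = sum(P(x, i) * P(y, k + i) for i in range(80))
s2 = sum(P(x, i) * P(y, k + i + 1) for i in range(80))
b1 = y**k * math.exp(-x - y) * (x * y) ** (-k / 2) * I(k, 2 * math.sqrt(x * y))
b2 = (y / x) ** 0.5 * y**k * math.exp(-x - y) * (x * y) ** (-k / 2) * I(k + 1, 2 * math.sqrt(x * y))
print(abs(s1 - b1), abs(s2 - b2))  # both at roundoff level
```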



For large $z=2\sqrt{xy}$, both Bessel functions $I_k(z)$ and $I_{k+1}(z)$ grow as $(2\pi z)^{-1/2}e^z$, so by making $x$ much larger than $y$ the function $F_k(x,y)$ is positive, and by making $y$ much larger than $x$ it is negative; hence it must go through zero when $x\approx y$.
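This asymptotic growth can also be checked numerically (a sketch; $z=40$ and $k=2$ are arbitrary sample values, and $I_k$ is summed in log space to avoid overflow):

```python
import math

def logI(k, z, terms=120):
    """log of the modified Bessel function I_k(z), summing its series in log space."""
    logs = [(k + 2 * m) * math.log(z / 2) - math.lgamma(m + 1) - math.lgamma(k + m + 1)
            for m in range(terms)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(L - mx) for L in logs))

z, k = 40.0, 2  # the approximation improves as z grows
asymptotic = z - 0.5 * math.log(2 * math.pi * z)  # log of (2*pi*z)^(-1/2) * e^z
print(logI(k, z) - asymptotic, logI(k + 1, z) - asymptotic)  # both small compared to z
```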




I had not appreciated that $\alpha$ is fixed from the beginning, like $k$, not a variable like $x$ and $y$. So we also need to show that $x\approx y\gg 1$ allows the sum to reach any $\alpha>0$, so
$$\alpha=e^{-2x} I_k\left(2x\right)\approx (4\pi x)^{-1/2},\;\; x\gg 1.$$
This is possible only for $\alpha\ll 1$. The OP lists the necessary condition $\alpha\leq P(k+1,k+1)$; it is not clear to me that this is sufficient.





























          • Thank you! But I do not understand why both terms are equal to $\alpha$?
            – Tina
            11 hours ago










          • ah wait, $\alpha$ is fixed from the beginning like $k$ and not a variable like $x$ and $y$?
            – Carlo Beenakker
            9 hours ago






          • Yes, it is a fixed parameter of the system, like $k$. We should somehow use the necessary condition from the statement.
            – Tina
            9 hours ago










          Your Answer




          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "504"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          convertImagesToLinks: true,
          noModals: false,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );








           

          draft saved


          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmathoverflow.net%2fquestions%2f307593%2fexistence-of-solution-system-of-equations%23new-answer', 'question_page');

          );

          Post as a guest






























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          4
          down vote



          accepted










          Let $a:=alpha$ and
          beginequation*
          F_k(x,y):=sum_j=0^infty fracx^jj!fracy^k+j(k+j)!,e^-x-y,
          endequation*
          assuming the standard convention $0^0:=1$.
          We have to consider the existence of a solution in $x$ and $y$ of the system
          beginequation*
          a=F_k(x,y)=F_k+1(x,y). tag1
          endequation*
          We shall prove the following.




          Theorem 1. Take any natural $k$ and any
          beginequation*
          ain(0,a_k],quadtextwherequad a_k:=sup_x,yge0F_k+1(x,y). tag1.5
          endequation*
          Then the system (1) has a solution $x,yge0$.




          Remark 1. Since $F_k>0$, the condition $ain(0,a_k]$ is obviously necessary in Theorem 1.



          Proof of Theorem 1. Note that $F_k(x,y)ge0$ for any real $x,yge0$ and $F_k(x,y)$ is continuous in real $x,yge0$. The crucial observation is the identity
          beginequation*
          partial_y F_k+1(x,y)=F_k(x,y)-F_k+1(x,y) tag2
          endequation*
          for real $x,y$.



          Next, fix for a moment any real $xge0$. Then $F_k+1(x,0)=0$ and, by dominated convergence, $F_k+1(x,infty-)=0$. So, $F_k+1(x,y)$ attains its maximum in $y$ at some real point $y=y_xge0$. At this point, we have $partial_y F_k+1(x,y)=0$. So, by (2),
          beginequation*
          F_k(x,y_x)=F_k+1(x,y_x)=max_yge0F_k+1(x,y)=:M_k+1(x), tag3
          endequation*
          for all real $xge0$.



          Next,
          beginalign*
          M_k(x)&lesum_j=0^infty fracx^jj!max_yge0fracy^k+j(k+j)!,e^-x-y \
          &=sum_j=0^infty fracx^jj!e^-xfrac(k+j)^k+j(k+j)!,e^-k-j tag4 \
          &llsum_j=0^infty fracx^jj!e^-xfrac1sqrtk+j=Efrac1sqrtk+Pi_x
          undersetxtoinftylongrightarrow0
          endalign*
          by dominated convergence and because $Pi_xundersetxtoinftylongrightarrowinfty$ in probability, where $Pi_x$ is a Poisson random variable with parameter $x$. So,
          $M_k(infty-)=0$. It is also not hard to see that
          $F_k(x,y)$ is continuous in real $xge0$ uniformly in real $yge0$ (see the Appendix), so that $M_k(x)$ is continuous in $xge0$. So, $M_k+1(x)$ attains its maximum in $xge0$ (equal $a_k$, by (1.5)) and takes all values in the interval $(0,a_k]$. Now Theorem 1 follows by (3). $qquadBox$



          Appendix. Similarly to (2),
          beginequation*
          partial_x F_k(x,y)=F_k+1(x,y)-F_k(x,y).
          endequation*
          for real $x,y$. Therefore and because $0le F_kle1$, we have $|partial_x F_k(x,y)|le1$ for real $x,y$, so that $F_k(x,y)$ is indeed continuous in real $xge0$ uniformly in real $yge0$.



          Added: Let us now show that
          beginequation*
          a_k=c_k+1,quadtextwherequad c_k:=frack^kk!,e^-k
          simfrac1sqrt2pi k
          endequation*
          as $ktoinfty$. To this end, note first that
          beginequation*
          c_k+1/c_k=(1+1/k)^k/e<1,
          endequation*
          and so, $c_k$ is decreasing in $k$.
          So, recalling (4), we have
          beginequation*
          M_k(x)lesum_j=0^infty fracx^jj!e^-x,c_k+j
          lesum_j=0^infty fracx^jj!e^-x,c_k=c_k=M_k(0).
          endequation*
          Thus, in view of (1.5) and (2),
          beginequation*
          a_k=max_xge0M_k+1(x)=M_k+1(0)=c_k+1,
          endequation*
          as desired.



          In particular, for $k=0,1,2,3$ the values of $a_k$ are $approx0.367879, 0.270671, 0.224042$.






          share|cite|improve this answer























          • I have added Remark 2 concerning the values of $a_k$.
            – Iosif Pinelis
            7 hours ago











          • The proof of $a_k=P(k+1,k+1)$ is very easy, if $alpha>P(k+1,k+1)$ then it implies $alpha>P(lambda, k+1)$ for each $lambda$, since $P(lambda,k+1)$ is maximized for $lambda=k+1$. On the other hand, by induction we can prove that $alpha>P(lambda,k+1+i)$ for any $i>0$ and $lambda$, therefore the second equation does not have a solution!
            – Tina
            6 hours ago










          • @Tina : I am afraid I don't understand your comment: (i) So, what if $alpha>P(lambda,k+1)$? What does this imply? (ii) What is your $alpha$ in the inequality $alpha>P(lambda,k+1+i)$? (iii) How do you prove this inequality? (iv) What does this latter inequality imply?
            – Iosif Pinelis
            6 hours ago










          • (i) if $alpha>P(lambda, k+1)$ for any $lambda$ this implies that $alpha>P(lambda,k+2)$ or for any $i>0$, $alpha>P(lambda,k+1+i)$. (ii) $alpha$ is the parameter from the system of equations. (iii) proof is by induction, if $alpha>P(lambda,k+1)$, this means $alphacdot e^lambda - fraclambda^k+1(k+1)!>0$, then $alphacdot e^lambda - fraclambda^k+2(k+2)!>0$, since the latter holds for $lambda=0$ and its derivative is the former, always positive. (iv) this implies that the second inequality of the system is not solvable, since $sum_i=0^inftyP(x,i)=1$.
            – Tina
            6 hours ago











          • @Tina : I am sorry, I still don't understand your comment.
            – Iosif Pinelis
            5 hours ago














          up vote
          4
          down vote



          accepted










          Let $a:=alpha$ and
          beginequation*
          F_k(x,y):=sum_j=0^infty fracx^jj!fracy^k+j(k+j)!,e^-x-y,
          endequation*
          assuming the standard convention $0^0:=1$.
          We have to consider the existence of a solution in $x$ and $y$ of the system
          beginequation*
          a=F_k(x,y)=F_k+1(x,y). tag1
          endequation*
          We shall prove the following.




          Theorem 1. Take any natural $k$ and any
          beginequation*
          ain(0,a_k],quadtextwherequad a_k:=sup_x,yge0F_k+1(x,y). tag1.5
          endequation*
          Then the system (1) has a solution $x,yge0$.




          Remark 1. Since $F_k>0$, the condition $ain(0,a_k]$ is obviously necessary in Theorem 1.



          Proof of Theorem 1. Note that $F_k(x,y)ge0$ for any real $x,yge0$ and $F_k(x,y)$ is continuous in real $x,yge0$. The crucial observation is the identity
          beginequation*
          partial_y F_k+1(x,y)=F_k(x,y)-F_k+1(x,y) tag2
          endequation*
          for real $x,y$.



          Next, fix for a moment any real $xge0$. Then $F_k+1(x,0)=0$ and, by dominated convergence, $F_k+1(x,infty-)=0$. So, $F_k+1(x,y)$ attains its maximum in $y$ at some real point $y=y_xge0$. At this point, we have $partial_y F_k+1(x,y)=0$. So, by (2),
          beginequation*
          F_k(x,y_x)=F_k+1(x,y_x)=max_yge0F_k+1(x,y)=:M_k+1(x), tag3
          endequation*
          for all real $xge0$.



          Next,
          beginalign*
          M_k(x)&lesum_j=0^infty fracx^jj!max_yge0fracy^k+j(k+j)!,e^-x-y \
          &=sum_j=0^infty fracx^jj!e^-xfrac(k+j)^k+j(k+j)!,e^-k-j tag4 \
          &llsum_j=0^infty fracx^jj!e^-xfrac1sqrtk+j=Efrac1sqrtk+Pi_x
          undersetxtoinftylongrightarrow0
          endalign*
          by dominated convergence and because $Pi_xundersetxtoinftylongrightarrowinfty$ in probability, where $Pi_x$ is a Poisson random variable with parameter $x$. So,
          $M_k(infty-)=0$. It is also not hard to see that
          $F_k(x,y)$ is continuous in real $xge0$ uniformly in real $yge0$ (see the Appendix), so that $M_k(x)$ is continuous in $xge0$. So, $M_k+1(x)$ attains its maximum in $xge0$ (equal $a_k$, by (1.5)) and takes all values in the interval $(0,a_k]$. Now Theorem 1 follows by (3). $qquadBox$



          Appendix. Similarly to (2),
          beginequation*
          partial_x F_k(x,y)=F_k+1(x,y)-F_k(x,y).
          endequation*
          for real $x,y$. Therefore and because $0le F_kle1$, we have $|partial_x F_k(x,y)|le1$ for real $x,y$, so that $F_k(x,y)$ is indeed continuous in real $xge0$ uniformly in real $yge0$.



          Added: Let us now show that
          beginequation*
          a_k=c_k+1,quadtextwherequad c_k:=frack^kk!,e^-k
          simfrac1sqrt2pi k
          endequation*
          as $ktoinfty$. To this end, note first that
          beginequation*
          c_k+1/c_k=(1+1/k)^k/e<1,
          endequation*
          and so, $c_k$ is decreasing in $k$.
          So, recalling (4), we have
          beginequation*
          M_k(x)lesum_j=0^infty fracx^jj!e^-x,c_k+j
          lesum_j=0^infty fracx^jj!e^-x,c_k=c_k=M_k(0).
          endequation*
          Thus, in view of (1.5) and (2),
          beginequation*
          a_k=max_xge0M_k+1(x)=M_k+1(0)=c_k+1,
          endequation*
          as desired.



          In particular, for $k=0,1,2,3$ the values of $a_k$ are $approx0.367879, 0.270671, 0.224042$.






          share|cite|improve this answer























          • I have added Remark 2 concerning the values of $a_k$.
            – Iosif Pinelis
            7 hours ago











          • The proof of $a_k=P(k+1,k+1)$ is very easy, if $alpha>P(k+1,k+1)$ then it implies $alpha>P(lambda, k+1)$ for each $lambda$, since $P(lambda,k+1)$ is maximized for $lambda=k+1$. On the other hand, by induction we can prove that $alpha>P(lambda,k+1+i)$ for any $i>0$ and $lambda$, therefore the second equation does not have a solution!
            – Tina
            6 hours ago










          • @Tina : I am afraid I don't understand your comment: (i) So, what if $alpha>P(lambda,k+1)$? What does this imply? (ii) What is your $alpha$ in the inequality $alpha>P(lambda,k+1+i)$? (iii) How do you prove this inequality? (iv) What does this latter inequality imply?
            – Iosif Pinelis
            6 hours ago










          • (i) if $alpha>P(lambda, k+1)$ for any $lambda$ this implies that $alpha>P(lambda,k+2)$ or for any $i>0$, $alpha>P(lambda,k+1+i)$. (ii) $alpha$ is the parameter from the system of equations. (iii) proof is by induction, if $alpha>P(lambda,k+1)$, this means $alphacdot e^lambda - fraclambda^k+1(k+1)!>0$, then $alphacdot e^lambda - fraclambda^k+2(k+2)!>0$, since the latter holds for $lambda=0$ and its derivative is the former, always positive. (iv) this implies that the second inequality of the system is not solvable, since $sum_i=0^inftyP(x,i)=1$.
            – Tina
            6 hours ago











          • @Tina : I am sorry, I still don't understand your comment.
            – Iosif Pinelis
            5 hours ago












          up vote
          4
          down vote



          accepted







          up vote
          4
          down vote



          accepted






          Let $a:=alpha$ and
          beginequation*
          F_k(x,y):=sum_j=0^infty fracx^jj!fracy^k+j(k+j)!,e^-x-y,
          endequation*
          assuming the standard convention $0^0:=1$.
          We have to consider the existence of a solution in $x$ and $y$ of the system
          beginequation*
          a=F_k(x,y)=F_k+1(x,y). tag1
          endequation*
          We shall prove the following.




          Theorem 1. Take any natural $k$ and any
          beginequation*
          ain(0,a_k],quadtextwherequad a_k:=sup_x,yge0F_k+1(x,y). tag1.5
          endequation*
          Then the system (1) has a solution $x,yge0$.




          Remark 1. Since $F_k>0$, the condition $ain(0,a_k]$ is obviously necessary in Theorem 1.



          Proof of Theorem 1. Note that $F_k(x,y)ge0$ for any real $x,yge0$ and $F_k(x,y)$ is continuous in real $x,yge0$. The crucial observation is the identity
          beginequation*
          partial_y F_k+1(x,y)=F_k(x,y)-F_k+1(x,y) tag2
          endequation*
          for real $x,y$.



          Next, fix for a moment any real $xge0$. Then $F_k+1(x,0)=0$ and, by dominated convergence, $F_k+1(x,infty-)=0$. So, $F_k+1(x,y)$ attains its maximum in $y$ at some real point $y=y_xge0$. At this point, we have $partial_y F_k+1(x,y)=0$. So, by (2),
          beginequation*
          F_k(x,y_x)=F_k+1(x,y_x)=max_yge0F_k+1(x,y)=:M_k+1(x), tag3
          endequation*
          for all real $xge0$.



          Next,
          beginalign*
          M_k(x)&lesum_j=0^infty fracx^jj!max_yge0fracy^k+j(k+j)!,e^-x-y \
          &=sum_j=0^infty fracx^jj!e^-xfrac(k+j)^k+j(k+j)!,e^-k-j tag4 \
          &llsum_j=0^infty fracx^jj!e^-xfrac1sqrtk+j=Efrac1sqrtk+Pi_x
          undersetxtoinftylongrightarrow0
          endalign*
          by dominated convergence and because $Pi_xundersetxtoinftylongrightarrowinfty$ in probability, where $Pi_x$ is a Poisson random variable with parameter $x$. So,
          $M_k(infty-)=0$. It is also not hard to see that
          $F_k(x,y)$ is continuous in real $xge0$ uniformly in real $yge0$ (see the Appendix), so that $M_k(x)$ is continuous in $xge0$. So, $M_k+1(x)$ attains its maximum in $xge0$ (equal $a_k$, by (1.5)) and takes all values in the interval $(0,a_k]$. Now Theorem 1 follows by (3). $qquadBox$



          Appendix. Similarly to (2),
          beginequation*
          partial_x F_k(x,y)=F_k+1(x,y)-F_k(x,y).
          endequation*
          for real $x,y$. Therefore and because $0le F_kle1$, we have $|partial_x F_k(x,y)|le1$ for real $x,y$, so that $F_k(x,y)$ is indeed continuous in real $xge0$ uniformly in real $yge0$.



          Added: Let us now show that
          beginequation*
          a_k=c_k+1,quadtextwherequad c_k:=frack^kk!,e^-k
          simfrac1sqrt2pi k
          endequation*
          as $ktoinfty$. To this end, note first that
          beginequation*
          c_k+1/c_k=(1+1/k)^k/e<1,
          endequation*
          and so, $c_k$ is decreasing in $k$.
          So, recalling (4), we have
          beginequation*
          M_k(x)lesum_j=0^infty fracx^jj!e^-x,c_k+j
          lesum_j=0^infty fracx^jj!e^-x,c_k=c_k=M_k(0).
          endequation*
          Thus, in view of (1.5) and (2),
          beginequation*
          a_k=max_xge0M_k+1(x)=M_k+1(0)=c_k+1,
          endequation*
          as desired.



          In particular, for $k=0,1,2,3$ the values of $a_k$ are $approx0.367879, 0.270671, 0.224042$.






          share|cite|improve this answer















          Let $a:=alpha$ and
          beginequation*
          F_k(x,y):=sum_j=0^infty fracx^jj!fracy^k+j(k+j)!,e^-x-y,
          endequation*
          assuming the standard convention $0^0:=1$.
          We have to consider the existence of a solution in $x$ and $y$ of the system
          beginequation*
          a=F_k(x,y)=F_k+1(x,y). tag1
          endequation*
          We shall prove the following.




          Theorem 1. Take any natural $k$ and any
          beginequation*
          ain(0,a_k],quadtextwherequad a_k:=sup_x,yge0F_k+1(x,y). tag1.5
          endequation*
          Then the system (1) has a solution $x,yge0$.




          Remark 1. Since $F_k>0$, the condition $ain(0,a_k]$ is obviously necessary in Theorem 1.



          Proof of Theorem 1. Note that $F_k(x,y)ge0$ for any real $x,yge0$ and $F_k(x,y)$ is continuous in real $x,yge0$. The crucial observation is the identity
          beginequation*
          partial_y F_k+1(x,y)=F_k(x,y)-F_k+1(x,y) tag2
          endequation*
          for real $x,y$.



          Next, fix for a moment any real $xge0$. Then $F_k+1(x,0)=0$ and, by dominated convergence, $F_k+1(x,infty-)=0$. So, $F_k+1(x,y)$ attains its maximum in $y$ at some real point $y=y_xge0$. At this point, we have $partial_y F_k+1(x,y)=0$. So, by (2),
          beginequation*
          F_k(x,y_x)=F_k+1(x,y_x)=max_yge0F_k+1(x,y)=:M_k+1(x), tag3
          endequation*
          for all real $xge0$.



          Next,
          beginalign*
          M_k(x)&lesum_j=0^infty fracx^jj!max_yge0fracy^k+j(k+j)!,e^-x-y \
          &=sum_j=0^infty fracx^jj!e^-xfrac(k+j)^k+j(k+j)!,e^-k-j tag4 \
          &llsum_j=0^infty fracx^jj!e^-xfrac1sqrtk+j=Efrac1sqrtk+Pi_x
          undersetxtoinftylongrightarrow0
          endalign*
          by dominated convergence and because $Pi_xundersetxtoinftylongrightarrowinfty$ in probability, where $Pi_x$ is a Poisson random variable with parameter $x$. So,
          $M_k(infty-)=0$. It is also not hard to see that
          $F_k(x,y)$ is continuous in real $xge0$ uniformly in real $yge0$ (see the Appendix), so that $M_k(x)$ is continuous in $xge0$. So, $M_k+1(x)$ attains its maximum in $xge0$ (equal $a_k$, by (1.5)) and takes all values in the interval $(0,a_k]$. Now Theorem 1 follows by (3). $qquadBox$



          Appendix. Similarly to (2),
          beginequation*
          partial_x F_k(x,y)=F_k+1(x,y)-F_k(x,y).
          endequation*
          for real $x,y$. Therefore and because $0le F_kle1$, we have $|partial_x F_k(x,y)|le1$ for real $x,y$, so that $F_k(x,y)$ is indeed continuous in real $xge0$ uniformly in real $yge0$.



          Added: Let us now show that
          beginequation*
          a_k=c_k+1,quadtextwherequad c_k:=frack^kk!,e^-k
          simfrac1sqrt2pi k
          endequation*
          as $ktoinfty$. To this end, note first that
          beginequation*
          c_k+1/c_k=(1+1/k)^k/e<1,
          endequation*
          and so, $c_k$ is decreasing in $k$.
          So, recalling (4), we have
          beginequation*
          M_k(x)lesum_j=0^infty fracx^jj!e^-x,c_k+j
          lesum_j=0^infty fracx^jj!e^-x,c_k=c_k=M_k(0).
          endequation*
          Thus, in view of (1.5) and (2),
          beginequation*
          a_k=max_xge0M_k+1(x)=M_k+1(0)=c_k+1,
          endequation*
          as desired.



          In particular, for $k=0,1,2,3$ the values of $a_k$ are $approx0.367879, 0.270671, 0.224042$.







          edited 5 hours ago

          answered 8 hours ago

          Iosif Pinelis
          • I have added Remark 2 concerning the values of $a_k$.
            – Iosif Pinelis
            7 hours ago











          • The proof of $a_k=P(k+1,k+1)$ is very easy: if $\alpha>P(k+1,k+1)$ then $\alpha>P(\lambda, k+1)$ for each $\lambda$, since $P(\lambda,k+1)$ is maximized at $\lambda=k+1$. On the other hand, by induction we can prove that $\alpha>P(\lambda,k+1+i)$ for any $i>0$ and $\lambda$, therefore the second equation does not have a solution!
            – Tina
            6 hours ago










          • @Tina : I am afraid I don't understand your comment: (i) So, what if $\alpha>P(\lambda,k+1)$? What does this imply? (ii) What is your $\alpha$ in the inequality $\alpha>P(\lambda,k+1+i)$? (iii) How do you prove this inequality? (iv) What does this latter inequality imply?
            – Iosif Pinelis
            6 hours ago










          • (i) if $\alpha>P(\lambda, k+1)$ for any $\lambda$, this implies that $\alpha>P(\lambda,k+2)$, or, for any $i>0$, $\alpha>P(\lambda,k+1+i)$. (ii) $\alpha$ is the parameter from the system of equations. (iii) the proof is by induction: if $\alpha>P(\lambda,k+1)$, this means $\alpha\cdot e^\lambda - \frac{\lambda^{k+1}}{(k+1)!}>0$; then $\alpha\cdot e^\lambda - \frac{\lambda^{k+2}}{(k+2)!}>0$, since the latter holds for $\lambda=0$ and its derivative is the former, always positive. (iv) this implies that the second equation of the system is not solvable, since $\sum_{i=0}^\infty P(x,i)=1$.
            – Tina
            6 hours ago











          • @Tina : I am sorry, I still don't understand your comment.
            – Iosif Pinelis
            5 hours ago
















          up vote
          2
          down vote













          The sums over $P(\lambda,i)=\frac{\lambda^i}{e^\lambda i!}$ are evaluated in terms of a Bessel function as
          $$\sum_{i=0}^\infty P(x, i)\cdot P(y, k+i)=y^k e^{-x-y} \left(xy\right)^{-k/2} I_k\!\left(2 \sqrt{xy}\right)$$
          $$\sum_{i=0}^\infty P(x, i)\cdot P(y, k+i+1)=(y/x)^{1/2}\,y^k e^{-x-y} \left(xy\right)^{-k/2} I_{k+1}\!\left(2 \sqrt{xy}\right)$$
          For any positive integer $k$ these two expressions should be equal for some $x,y>0$. (For $x=y=0$ both expressions are identically zero.) So the function
          $$F_k(x,y)=\sqrt{x}\, I_k\!\left(2 \sqrt{xy}\right)-\sqrt{y}\, I_{k+1}\!\left(2 \sqrt{xy}\right)$$
          should pass through zero in the quadrant $x,y>0$ for any positive integer $k$.
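The first Bessel-function identity can be verified numerically straight from the power series of $I_k$. A sketch (the truncation lengths and the sample point $x,y,k$ are my choices):

```python
import math

def P(lam, i):
    # Poisson pmf: lam^i * e^{-lam} / i!
    return lam**i * math.exp(-lam) / math.factorial(i)

def bessel_I(k, z, terms=80):
    # modified Bessel function of the first kind, I_k(z), via its power series
    return sum(
        (z / 2) ** (2 * m + k) / (math.factorial(m) * math.factorial(m + k))
        for m in range(terms)
    )

x, y, k = 1.3, 2.1, 3
lhs = sum(P(x, i) * P(y, k + i) for i in range(60))
rhs = y**k * math.exp(-x - y) * (x * y) ** (-k / 2) * bessel_I(k, 2 * math.sqrt(x * y))
assert abs(lhs - rhs) < 1e-12
```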



          For large $z=2\sqrt{xy}$ both Bessel functions $I_k(z)$ and $I_{k+1}(z)$ grow as $(2\pi z)^{-1/2}e^z$, so by making $x$ much larger than $y$ the function $F_k(x,y)$ is positive, and by making $y$ much larger than $x$ it is negative; hence it must go through zero when $x\approx y$.
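The sign change of $F_k$ is easy to observe numerically with the same series for $I_k$; a sketch (the sample points $(x,y)$ are mine, chosen so one clearly dominates the other):

```python
import math

def bessel_I(k, z, terms=80):
    # I_k(z) via its power series
    return sum(
        (z / 2) ** (2 * m + k) / (math.factorial(m) * math.factorial(m + k))
        for m in range(terms)
    )

def F(k, x, y):
    # F_k(x, y) = sqrt(x) I_k(2 sqrt(xy)) - sqrt(y) I_{k+1}(2 sqrt(xy))
    z = 2 * math.sqrt(x * y)
    return math.sqrt(x) * bessel_I(k, z) - math.sqrt(y) * bessel_I(k + 1, z)

# positive when x dominates, negative when y dominates: a zero in between
assert F(3, 4.0, 1.0) > 0
assert F(3, 1.0, 9.0) < 0
```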




          I had not appreciated that $\alpha$ is fixed from the beginning like $k$, not a variable like $x$ and $y$. So we also need to show that $x\approx y\gg 1$ allows the sum to reach any $\alpha>0$, so
          $$\alpha=e^{-2x} I_k\!\left(2x\right)\approx (4\pi x)^{-1/2},\;\;x\gg 1.$$
          This is possible only for $\alpha\ll 1$. The OP lists $\alpha\leq P(k+1,k+1)$ as a necessary condition; it is not clear to me that this is sufficient.
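The estimate $e^{-2x}I_k(2x)\approx(4\pi x)^{-1/2}$ can also be checked numerically; a sketch that evaluates each series term in log space so that $e^{-2x}I_k(2x)$ never overflows (the truncation length and tolerance are my choices):

```python
import math

def diag_value(k, x, terms=600):
    # e^{-2x} * I_k(2x), using I_k(2x) = sum_m x^(2m+k) / (m! (m+k)!),
    # with each term exp((2m+k)log(x) - log(m!) - log((m+k)!) - 2x)
    return sum(
        math.exp((2 * m + k) * math.log(x)
                 - math.lgamma(m + 1) - math.lgamma(m + k + 1) - 2 * x)
        for m in range(terms)
    )

x = 50.0
approx = (4 * math.pi * x) ** -0.5
assert abs(diag_value(1, x) - approx) < 0.02 * approx
```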






          • Thank you! But I do not understand why both terms are equal to $\alpha$?
            – Tina
            11 hours ago










          • ah wait, $\alpha$ is fixed from the beginning like $k$ and not a variable like $x$ and $y$?
            – Carlo Beenakker
            9 hours ago






          • 2




            Yes, it is a fixed parameter of the system, like $k$. We should somehow use the necessary condition from the statement.
            – Tina
            9 hours ago














          edited 9 hours ago

          answered 11 hours ago

          Carlo Beenakker
           
