Maximum likelihood using simulated annealing

I have a sequence of random variables $Y_i = X_i\,\eta_i$, where the $X_i \sim \mathcal{P}(\lambda)$ are independent, the $\eta_i \sim \mathcal{N}(0,1)$ are independent as well, and a set of observed values $(y_1,\dots,y_n)$. I would like to find the parameter $\lambda$ that maximizes the likelihood function, defined as

$$\mathcal{L}_\lambda(y_1,\dots,y_n)=\prod_{i=1}^{n}\mathcal{L}_\lambda(y_i)=\prod_{i=1}^{n}\int\mathcal{L}_\lambda(y_i \mid x_i)\,\mathcal{L}_\lambda(x_i)\,dx_i=\prod_{i=1}^{n}\int\frac{e^{-\frac{y_i^2}{2x_i^2}}}{\sqrt{2\pi}\,x_i}\,\lambda e^{-\lambda x_i}\,dx_i=\prod_{i=1}^{n}\mathbb{E}\!\left(\frac{e^{-\frac{y_i^2}{2X_i^2}}}{\sqrt{2\pi}\,X_i}\right)=\mathbb{E}\!\left(\prod_{i=1}^{n}\frac{e^{-\frac{y_i^2}{2X_i^2}}}{\sqrt{2\pi}\,X_i}\right),$$

where the last equality uses the independence of the $X_i$.

The first problem I am encountering is this: when I plot the expected value above against different values of $\lambda$, using the Monte Carlo approximation $\frac{1}{N}\sum\frac{e^{-\frac{y_i^2}{2X_i^2}}}{\sqrt{2\pi}\,X_i}$, I find that it is a decreasing function, so a value of $\lambda$ close to $0$ appears to be the solution. This seems strange, because I am asked to find the best parameter $\lambda$ using the simulated annealing algorithm. Also, the computations required to evaluate the likelihood for a given $(y_1,\dots,y_n)$ are really heavy, which makes the simulated annealing algorithm impractical.
Can anyone help me? Thanks.
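For reference, here is a minimal Mathematica sketch of the Monte Carlo evaluation described above. It assumes the corrected model $X_i \sim \text{Exponential}(\lambda)$ from the comments below; the data `y` and simulation size `nSim` are illustrative placeholders, not values from the post:

(* Hypothetical data; in practice y would be the observed values *)
SeedRandom[1];
y = RandomVariate[ExponentialDistribution[3], 50] RandomVariate[NormalDistribution[0, 1], 50];

(* Monte Carlo estimate of the log likelihood at a given \[Lambda]:
   draw nSim copies of X ~ Exponential[\[Lambda]] and average the
   integrand for each y[[i]], then sum the logs *)
mcLogL[\[Lambda]_?Positive, nSim_: 10^4] :=
 Module[{x = RandomVariate[ExponentialDistribution[\[Lambda]], nSim]},
  Total[Log[Mean[Exp[-#^2/(2 x^2)]/(Sqrt[2 Pi] x)]] & /@ y]]

(* Inspect the shape over a grid of \[Lambda] values *)
Table[{\[Lambda], mcLogL[\[Lambda]]}, {\[Lambda], 0.5, 5, 0.5}]

Note that a fresh, fairly large sample of $X$ is needed at every candidate $\lambda$, which is exactly the computational burden the question describes.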

  • Have you considered a method of moments estimator? For this, the estimate of $\lambda$ is $\frac{1}{2}\left(\sqrt{4\,\text{Variance}[y]+1}-1\right)$. This assumes independence of the Poisson and normal random variables. Also, your notation above seems (to me) to assume that $X_i$ is continuous when it is discrete. And what happens when $x_i=0$?
    – JimB
    Jul 17 at 3:42
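(For reference, the quoted estimate follows from matching second moments: since $\mathbb{E}[Y]=0$,

$$\operatorname{Var}(Y)=\mathbb{E}[X^2]\,\mathbb{E}[\eta^2]=\lambda+\lambda^2,$$

and solving $\lambda^2+\lambda-\operatorname{Var}[y]=0$ for the positive root gives $\hat\lambda=\frac{1}{2}\left(\sqrt{4\operatorname{Var}[y]+1}-1\right)$.)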

  • I am sorry, I meant that $X$ follows an exponential distribution with parameter $\lambda$.
    – hfdhsbnd
    Jul 17 at 7:36

  • Sorry, I was not thinking and incorrectly assumed that $X_i \sim \text{Poisson}(\lambda)$. But the method of moments still works with $X_i \sim \text{Exponential}(\lambda)$. That estimator of $\lambda$ will be $\sqrt{2/s^2}$, where $s^2$ is just the sample variance of the $Y$ values.
    – JimB
    Jul 17 at 21:32
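(Likewise, with $X_i \sim \text{Exponential}(\lambda)$ one has $\mathbb{E}[X^2]=2/\lambda^2$, so

$$\operatorname{Var}(Y)=\mathbb{E}[X^2]\,\mathbb{E}[\eta^2]=\frac{2}{\lambda^2}\quad\Longrightarrow\quad\hat\lambda=\sqrt{\frac{2}{s^2}},$$

which is the Sqrt[2]/StandardDeviation[y] starting value used in the answer below.)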

  • For the data, do you not just have the $Y$ values? In other words, you integrate out the $X$ values, so how can you have $X$ values in the approximation?
    – JimB
    Jul 18 at 0:42

asked Jul 16 at 15:42 by hfdhsbnd

1 Answer
Here is an approach using Mathematica.



First construct the density function for $Y_i$:



PDF[TransformedDistribution[x z, {x \[Distributed] ExponentialDistribution[\[Lambda]],
   z \[Distributed] NormalDistribution[0, 1]}], y]


$$f_Y(y)=\begin{cases}
\dfrac{\lambda\, G_{0,3}^{3,0}\!\left(\dfrac{y^2 \lambda^2}{8}\;\middle|\;0,0,\tfrac{1}{2}\right)}{2\sqrt{2\pi}} & y\neq 0 \\[2ex]
\dfrac{\lambda\, G_{0,3}^{3,0}\!\left(\dfrac{y^2 \lambda^2}{8}\;\middle|\;0,0,\tfrac{1}{2}\right)}{\sqrt{2\pi}} & y=0
\end{cases}$$
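For $y_i\neq 0$, taking logs of the product of these densities gives the log likelihood

$$\log\mathcal{L}_\lambda = n\log\lambda - \frac{3n}{2}\log 2 - \frac{n}{2}\log\pi + \sum_{i=1}^{n}\log G_{0,3}^{3,0}\!\left(\frac{y_i^2\lambda^2}{8}\;\middle|\;0,0,\tfrac12\right).$$

(The constant term in the code below is written slightly differently; an additive constant in $\lambda$ does not change the maximizer, the derivative check, or the standard error.)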



In what follows, we assume that no value of $Y_i$ is exactly zero:



(* Generate some data *)
SeedRandom[12345];
\[Lambda]0 = 3;
n = 100;
x = RandomVariate[ExponentialDistribution[\[Lambda]0], n];
z = RandomVariate[NormalDistribution[0, 1], n];
y = x z;

(* Log likelihood *)
logL = n Log[\[Lambda]] - (3 n/2) Log[2] - n Log[\[Pi]] +
   Sum[Log[MeijerG[{{}, {}}, {{0, 0, 1/2}, {}}, (y[[i]]^2 \[Lambda]^2)/8]], {i, n}];

(* Method of moments estimate *)
\[Lambda]Initial = Sqrt[2]/StandardDeviation[y]
(* 3.159259259137787 *)

(* Maximum likelihood estimate *)
mle = FindMaximum[{logL, \[Lambda] > 0}, {\[Lambda], \[Lambda]Initial}]
(* {33.75465530200371, {\[Lambda] -> 2.7946001175877813}} *)
(* Alternative (but slower): *)
(* mle = NMaximize[{logL, \[Lambda] > 0}, \[Lambda], Method -> "SimulatedAnnealing"] *)

(* Check that the first derivative is approximately zero at the optimum *)
dlogL\[Lambda] = D[logL, \[Lambda]] /. mle[[2]]
(* 1.000782830828939*10^-6 *)

(* Estimate of the standard error from the observed information *)
se = Sqrt[-1/(D[logL, {\[Lambda], 2}] /. mle[[2]])]
(* 0.39489891632730545 *)

(* Another check: plot the log likelihood *)
Plot[logL, {\[Lambda], 1, 4}, Frame -> True,
  FrameLabel -> {"\[Lambda]", "Log(likelihood)"}]


[Plot: $\lambda$ versus log likelihood]
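Since the question asks specifically about simulated annealing, the commented-out alternative above can be spelled out as a sketch. It reuses logL from above; the search interval $0<\lambda<10$ is an assumed bound, and the "RandomSeed" suboption merely makes the run reproducible:

(* Simulated annealing via NMaximize, reusing logL defined above;
   the bounds 0 < \[Lambda] < 10 are an assumed search region *)
mleSA = NMaximize[{logL, 0 < \[Lambda] < 10}, \[Lambda],
  Method -> {"SimulatedAnnealing", "RandomSeed" -> 1}]

For a smooth, one-dimensional log likelihood such as this one, FindMaximum started from the method-of-moments estimate is faster; simulated annealing is mainly useful when the objective is multimodal or non-smooth.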

answered Jul 18 at 0:56 by JimB