How can I make optim() return estimates close to the true values?
First, I should clarify that I have read the following posts, but my problem still can't be solved:
R optim() L-BFGS-B needs finite values of 'fn' - Weibull
Optimization of optim() in R ( L-BFGS-B needs finite values of 'fn')
R optimize multiple parameters
optim function with infinite value
Below is the code to simulate the data and run the maximum likelihood estimation.
#simulation
#a0, a1, g1, b1 and d1 are my parameters
#set the true values of the parameters and
#simulate a data set of size 2000
#x is the simulated data
set.seed(5)
a0 = 2.3; a1 = 0.05; g1 = 0.68; b1 = 0.09; d1 = 2.0; n = 2000
x = h = rep(0, n)
h[1] = 6                          # starting intensity for the recursion
x[1] = rpois(1, h[1])
for (i in 2:n) {
  h[i] = (a0 + a1 * (abs(x[i-1] - h[i-1]) - g1 * (x[i-1] - h[i-1]))^d1 +
            b1 * h[i-1]^d1)^(1/d1)
  x[i] = rpois(1, h[i])
}
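To check that the recursion itself stays finite, I can quickly inspect the simulated intensities:

#quick sanity check on the simulated intensities
summary(h)            # should be finite and positive
any(!is.finite(h))    # expect FALSE
mean(x)               # sample mean of the simulated counts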
#this is my (negative) log-likelihood function
ll <- function(par) {
  h.n <- rep(0, n)
  a0 <- par[1]
  a1 <- par[2]
  g1 <- par[3]
  b1 <- par[4]
  d1 <- par[5]
  h.n[1] = x[1]                   # initialise the recursion at the first observation
  for (i in 2:n) {
    h.n[i] = (a0 + a1 * (abs(x[i-1] - h.n[i-1]) - g1 * (x[i-1] - h.n[i-1]))^d1 +
                b1 * h.n[i-1]^d1)^(1/d1)
  }
  -sum(dpois(x, h.n, log = TRUE)) # minimise the negative log-likelihood
}
#as my true values are a0 = 2.3, a1 = 0.05, g1 = 0.68, b1 = 0.09, d1 = 2.0,
#I set parscale to c(1, 0.01, 0.1, 0.01, 1) to match their magnitudes
ps <- c(1.0, 1e-02, 1e-01, 1e-02, 1.0)
#optimization to check whether the estimates
#come back near the true values
optim(par = c(0.1, 0.01, 0.1, 0.01, 0.1), ll, method = "L-BFGS-B",
      lower = c(1e-6, -10, -10, -10, 1e-6),
      control = list(maxit = 1000, parscale = ps, trace = 1))
Then I get the following result:
> iter 10 value 3172.782149
> iter 20 value 3172.371186
> iter 30 value 3171.952137
> iter 40 value 3171.525942
> iter 50 value 3171.174571
> iter 60 value 3171.095186
> Error in optim(par = c(0.1, 0.01, 0.1, 0.01, 0.1), ll, method = "L-BFGS-B", :
>   L-BFGS-B needs finite values of 'fn'
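A common suggestion for this error is to make the objective return a large finite value whenever the recursion produces an invalid intensity, instead of NaN or Inf. A rough sketch (ll.safe is just a name I made up):

#guarded objective: penalise parameter values that break the recursion
ll.safe <- function(par) {
  val <- ll(par)
  #dpois() needs a finite, non-negative mean; if the recursion blew up,
  #return a big finite penalty so L-BFGS-B can keep searching
  if (!is.finite(val)) return(1e10)
  val
}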
Rather than guarding the objective, though, I first tried changing the lower bound, and it returned:
> optim(par=c(0.1,0.01,0.1,0.01,0.1), ll, method = "L-BFGS-B", lower=c(1e-6,1e-6,-10,1e-6,1e-6), control=list(maxit=1000,parscale=ps,trace=1))
> iter 10 value 3172.782149
> iter 20 value 3172.371186
> iter 30 value 3171.952137
> iter 40 value 3171.525942
> iter 50 value 3171.174571
> iter 60 value 3171.095186
> iter 70 value 3171.076036
> iter 80 value 3171.044809
> iter 90 value 3171.014010
> iter 100 value 3170.991805
> iter 110 value 3170.971857
> iter 120 value 3170.954827
> iter 130 value 3170.941397
> iter 140 value 3170.925935
> iter 150 value 3170.915694
> iter 160 value 3170.904309
> iter 170 value 3170.894642
> iter 180 value 3170.887122
> iter 190 value 3170.880802
> iter 200 value 3170.874319
> iter 210 value 3170.870006
> iter 220 value 3170.866008
> iter 230 value 3170.865497
> final value 3170.865422 converged
> $par
> [1] 3.242429e+05 2.691999e-04 3.896417e-01 6.174022e-04 2.626361e+01
> $value
> [1] 3170.865
> $counts
> function gradient
>      291      291
> $convergence
> [1] 0
> $message
> [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
Clearly, the estimated parameters are far from the true values.
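As a sanity check, I can also compare the negative log-likelihood at the true values with the optimum that optim found:

#negative log-likelihood at the true parameter values
ll(c(2.3, 0.05, 0.68, 0.09, 2.0))
#compare with the converged value of 3170.865 above; if the two are close,
#the surface is very flat and many parameter vectors fit about equally well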
What can I do to get estimates close to the true values?
r nonlinear-optimization mle
edited Nov 21 '18 at 13:46
asked Nov 16 '18 at 8:17
Miyazaki
497
1 Answer
When an MLE is far from the true value, there are several possible explanations:
You don't have enough data to get an accurate estimate. Try using a much larger sample size and see if things come out closer (see the sketch after this list).
You have coded the likelihood incorrectly. This is harder to diagnose; basically you just want to read it over and check your coding.
- I'm not familiar with your model, but this looks likely in your case: in your simulation, h[1] is always 6 and x[1] is a random value with that mean; in your likelihood, you're assuming that h[1] is equal to x[1]. That's unlikely to be true.
Your likelihood doesn't have a unique maximum, because the parameters are not identifiable.
There are probably others, too.
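For the first point, a quick experiment is to regenerate the data with a much larger n and refit (a rough sketch reusing the question's own simulation code and its global true parameter values; only n changes):

#same simulation and fit as in the question, but with 10x the data
n <- 20000
x <- h <- rep(0, n)
h[1] <- 6
x[1] <- rpois(1, h[1])
for (i in 2:n) {
  h[i] <- (a0 + a1 * (abs(x[i-1] - h[i-1]) - g1 * (x[i-1] - h[i-1]))^d1 +
             b1 * h[i-1]^d1)^(1/d1)
  x[i] <- rpois(1, h[i])
}
#ll() reads x and n from the global environment, so the same call works
optim(par = c(0.1, 0.01, 0.1, 0.01, 0.1), ll, method = "L-BFGS-B",
      lower = c(1e-6, 1e-6, -10, 1e-6, 1e-6),
      control = list(maxit = 1000, parscale = ps, trace = 1))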
Thank you for your explanation, @user2554330. 1. I will try to increase the sample size. 2. I will put the model in the question. I used 6 as the initial value of h[1] so that the simulation could start generating finite data. In the likelihood I set h.n[1] equal to x[1] so that, when fitting the model to real data that might contain outliers, the first h.n[1] is exactly the same as the first data point in the graph. However, it would be alright to set h.n[1] to zero too. 3. I also suspect it does not have a unique maximum, but what does it mean for the parameters to be not identifiable?
– Miyazaki
Nov 18 '18 at 3:43
This is getting off-topic for Stack Overflow, but in answer to your question: I'd treat h[1] as a parameter to be estimated, or as a known value. Setting it to zero would be bad, because that says x[1] would always be zero, but presumably it's not. "Not identifiable" means that different parameter values give exactly the same distribution for the data, so there is no unique MLE.
– user2554330
Nov 18 '18 at 8:37
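Concretely, treating the initial intensity as a sixth parameter might look like this (a rough sketch; ll6 is just an illustrative name, and the starting value mean(x) is only a guess):

#negative log-likelihood with the initial intensity h.n[1] = par[6] estimated
ll6 <- function(par) {
  h.n <- rep(0, n)
  a0 <- par[1]; a1 <- par[2]; g1 <- par[3]; b1 <- par[4]; d1 <- par[5]
  h.n[1] <- par[6]              # initial intensity as a free parameter
  for (i in 2:n) {
    h.n[i] <- (a0 + a1 * (abs(x[i-1] - h.n[i-1]) - g1 * (x[i-1] - h.n[i-1]))^d1 +
                 b1 * h.n[i-1]^d1)^(1/d1)
  }
  -sum(dpois(x, h.n, log = TRUE))
}
optim(par = c(0.1, 0.01, 0.1, 0.01, 0.1, mean(x)), ll6, method = "L-BFGS-B",
      lower = c(1e-6, 1e-6, -10, 1e-6, 1e-6, 1e-6),
      control = list(maxit = 1000, parscale = c(ps, 1), trace = 1))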
Your Answer
StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f53333884%2fhow-to-enable-optimization-by-optim-return-close-estimate%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
answered Nov 16 '18 at 12:00
user2554330
10.1k