Ch. 17 Maximum Likelihood Estimation
The identification process having led to a tentative formulation for the model, we
then need to obtain efficient estimates of the parameters. After the parameters
have been estimated, the fitted model will be subjected to diagnostic checks.
This chapter contains a general account of the likelihood method for estimating
the parameters of the stochastic model.
Consider an ARMA model (arrived at through model identification) of the form
Yt = c + φ1Yt−1 + φ2Yt−2 + ... + φpYt−p + εt + θ1εt−1 + θ2εt−2 + ... + θqεt−q,
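As an illustration (my own sketch, not part of the original text), the process above can be simulated directly from its defining recursion. The helper name `simulate_arma` and its interface are hypothetical; it assumes Gaussian white noise and discards a burn-in so the path is near its stationary distribution:

```python
import numpy as np

def simulate_arma(c, phi, theta, sigma, T, rng=None, burn=200):
    """Simulate Y_t = c + sum_i phi_i Y_{t-i} + eps_t + sum_j theta_j eps_{t-j}.

    Hypothetical helper for illustration; eps_t ~ N(0, sigma^2) white noise.
    """
    rng = rng or np.random.default_rng(0)
    p, q = len(phi), len(theta)
    n = T + burn
    eps = rng.normal(0.0, sigma, size=n)
    y = np.zeros(n)
    for t in range(n):
        # AR part: only use lags that exist (pre-sample values treated as 0)
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        # MA part: lagged shocks
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        y[t] = c + ar + ma + eps[t]
    return y[burn:]  # drop burn-in
```

For a stationary ARMA(1,1) with c = 1 and φ = 0.5, the sample mean of a long path should be close to the stationary mean c/(1 − φ) = 2.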
with εt white noise:
E(εt) = 0,
E(εtετ) = σ² for t = τ, and 0 otherwise.
This chapter explores how to estimate the values of (c, φ1, ..., φp, θ1, ..., θq, σ²)
on the basis of observations on Y .
The primary principle on which estimation will be based is maximum likelihood
estimation. Let θ = (c, φ1, ..., φp, θ1, ..., θq, σ²)′ denote the vector of population
parameters. Suppose we have observed a sample of size T (y1, y2, ..., yT ). The
approach will be to calculate the joint probability density
fYT ,YT −1,...,Y1(yT , yT −1, ..., y1; θ), (1)
which might loosely be viewed as the probability of having observed this particular
sample. The maximum likelihood estimate (MLE) of θ is the value for which
this sample is most likely to have been observed; that is, it is the value of θ that
maximizes (1).
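To make the idea of maximizing (1) concrete, here is a minimal numerical sketch (my own illustration, not from the text): for an i.i.d. Gaussian sample the log-likelihood is maximized with a generic optimizer, which recovers the sample mean and the (1/T)-normalized sample variance as the MLEs.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=2.0, size=500)  # simulated sample

def neg_loglik(params):
    # Log-parameterize the variance so the optimizer stays in sigma2 > 0.
    mu, log_sigma2 = params
    sigma2 = np.exp(log_sigma2)
    # Negative Gaussian log-likelihood (constants included).
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]))
mu_hat, sigma2_hat = res.x[0], np.exp(res.x[1])
# Analytically, the MLEs are the sample mean and the (1/T) sample variance.
```

In practice numerical maximization like this is exactly how ARMA likelihoods are handled, since closed-form maximizers rarely exist once MA terms are present.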
This approach requires specifying a particular distribution for the white noise
process εt. Typically we will assume that εt is Gaussian white noise:
εt ∼ N(0, σ²).
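Under this assumption each disturbance has the standard normal density (stated here for completeness):

```latex
f(\varepsilon_t) = \frac{1}{\sqrt{2\pi\sigma^2}}
  \exp\!\left( -\frac{\varepsilon_t^{2}}{2\sigma^2} \right).
```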
1 MLE of a Gaussian AR(1) Process
Evaluating the Likelihood Function Using (Scalar) Conditional Density
A stationary Gaussian AR(1) process takes the form
Yt = c + φYt−1 + εt, (2)
with εt ∼ N(0, σ²) and |φ| < 1 (how do you know at this stage?). For this
case, θ = (c, φ, σ²)′.
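Anticipating where this section is heading, the exact likelihood of a stationary Gaussian AR(1) factors into the unconditional density of Y1 times the conditional densities of Y2, ..., YT given their predecessors. A minimal sketch (the function name and interface are my own, not from the text):

```python
import numpy as np

def ar1_exact_loglik(params, y):
    """Exact Gaussian log-likelihood of a stationary AR(1).

    params = (c, phi, sigma2); hypothetical helper for illustration.
    """
    c, phi, sigma2 = params
    if abs(phi) >= 1 or sigma2 <= 0:
        return -np.inf  # outside the stationary parameter region
    mu = c / (1 - phi)            # stationary mean of Y_1
    var1 = sigma2 / (1 - phi**2)  # stationary variance of Y_1
    # Unconditional density of the first observation.
    ll = -0.5 * (np.log(2 * np.pi * var1) + (y[0] - mu) ** 2 / var1)
    # Conditional densities: Y_t | Y_{t-1} ~ N(c + phi*y_{t-1}, sigma2).
    resid = y[1:] - c - phi * y[:-1]
    n = len(resid)
    ll += -0.5 * (n * np.log(2 * np.pi * sigma2) + np.sum(resid**2) / sigma2)
    return ll
```

The two pieces mirror the factorization f(y1, ..., yT; θ) = f(y1; θ) ∏ f(yt | yt−1; θ) that the conditional-density approach develops.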
Consider the distribution of Y1, the first observation in the sample. This is a random
variable with mean and variance
E(Y1) = µ = c/(1 − φ)
and
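The stated mean follows by taking expectations of (2) and using stationarity, E(Yt) = E(Yt−1) = µ together with E(εt) = 0:

```latex
\mu = c + \phi \mu
\quad\Longrightarrow\quad
\mu = \frac{c}{1-\phi}.
```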