Journal of Productivity Analysis, 10, 103–117 (1998)

© 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Numerical Tools for the Bayesian Analysis of
Stochastic Frontier Models
JACEK OSIEWALSKI
Department of Econometrics, Academy of Economics, 31-510 Kraków, Poland
MARK F. J. STEEL*
CentER and Department of Econometrics, Tilburg University, 5000 LE Tilburg, The Netherlands
Abstract
In this paper we describe the use of modern numerical integration methods for making
posterior inferences in composed error stochastic frontier models for panel data or individual
cross-sections. Two Monte Carlo methods have been used in practical applications. We
survey these two methods in some detail and argue that Gibbs sampling methods can greatly
reduce the computational difficulties involved in analyzing such models.
Keywords: Efficiency analysis, composed error models, posterior inference, Monte Carlo-importance sampling,
Gibbs sampling
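As a minimal illustration of the Gibbs sampling idea invoked in the abstract (alternating draws from full conditional distributions to simulate a joint posterior), the sketch below samples a bivariate normal with correlation rho, for which each conditional is N(rho·other, 1 − rho²). This toy target is an assumption chosen for clarity; it is not the stochastic frontier model analyzed in the paper.

```python
import math
import random
import statistics

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Alternates draws from the two full conditionals:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw x given current y
        y = rng.gauss(rho * x, sd)  # draw y given updated x
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
xs = [d[0] for d in draws[2000:]]  # discard burn-in draws
print(round(statistics.mean(xs), 2))   # marginal mean, close to 0
print(round(statistics.stdev(xs), 2))  # marginal std. dev., close to 1
```

The appeal, as the paper argues for frontier models, is that each conditional draw is cheap even when the joint posterior is intractable to sample directly.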
Introduction
The stochastic frontier composed error framework was first introduced in Meeusen and
van den Broeck (1977) and Aigner, Lovell and Schmidt (1977) and has been used in many
empirical applications. The reader is referred to Bauer (1990) for a survey of the literature.
In previous papers (van den Broeck, Koop, Osiewalski and Steel, 1994, hereafter BKOS;
Koop, Steel and Osiewalski, 1995; Koop, Osiewalski and Steel, 1994, 1997a,b,c, hereafter
KOS) we used Bayesian methods to analyze stochastic frontier models and argued that such
methods had several advantages over their classical counterparts in the treatment of these
models. Most importantly, the Bayesian methods we outlined enabled us to provide exact
finite sample results for any feature of interest and to take fully into account parameter
uncertainty. In addition, they made it relatively easy to treat uncertainty about which
model to use since they recommended taking weighted averages over all models, where the
weights were posterior model probabilities. We successfully app