Research articles

Basics of estimation

  • Helsinki Institute for Information Technology, Tampere University of Technology, Tampere 33720, Finland

Published date: 05 Sep 2010

Abstract

This paper outlines a theory of estimation in which optimality is defined for all sizes of data, not only asymptotically. A single principle covers the estimation of both real-valued parameters and their number. To achieve this, the traditional assumption that the observed data were generated by a “true” distribution, which estimation should recover from the data, must be abandoned. Instead, the objective in this theory is to fit models, as distributions, to the data in order to find their regular statistical features. The performance of a fitted model is measured by the probability it assigns to the data: a large probability means a good fit, a small probability a bad one. Equivalently, the negative logarithm of the probability, which has the interpretation of code length, should be minimized. There are three equivalent characterizations of optimal estimators: the first is defined by estimation capacity, the second by necessary conditions for optimality that hold for all data, and the third by the complete Minimum Description Length (MDL) principle.
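The equivalence between probability and code length mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper): the code length in bits that a model assigns to data is the negative base-2 logarithm of the probability, so the better-fitting of two candidate models is the one yielding the shorter code. Here two hypothetical Bernoulli models are compared on a toy binary sequence.

```python
import math

def code_length_bits(data, p):
    """Code length in bits: negative log2-probability of a binary
    sequence under a Bernoulli(p) model."""
    return sum(-math.log2(p if x == 1 else 1 - p) for x in data)

# Toy binary observations (illustrative, not from the paper).
data = [1, 1, 0, 1, 1, 1, 0, 1]

# Compare two candidate parameter values by the code length each
# assigns to the data; the shorter code length is the better fit.
for p in (0.5, 0.75):
    print(f"p = {p}: {code_length_bits(data, p):.3f} bits")
```

With 6 ones in 8 observations, the model with p = 0.75 assigns the data a higher probability and hence a shorter code length than p = 0.5, which matches the abstract's criterion for a good fit.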

Cite this article

Jorma RISSANEN. Basics of estimation [J]. Frontiers of Electrical and Electronic Engineering, 2010, 5(3): 274-280. DOI: 10.1007/s11460-010-0104-0
