Helsinki Institute for Information Technology, Tampere University of Technology, Tampere 33720, Finland
Published: 05 Sep 2010
Issue Date: 05 Sep 2010
Abstract
This paper outlines a theory of estimation in which optimality is defined for all sizes of data, not only asymptotically. Moreover, a single principle is needed to cover estimation of both the real-valued parameters and their number. To achieve this we must abandon the traditional assumption that the observed data have been generated by a "true" distribution and that the objective of estimation is to recover this distribution from the data. Instead, the objective in this theory is to fit models, as distributions, to the data in order to find the regular statistical features. The performance of a fitted model is measured by the probability it assigns to the data: a large probability means a good fit and a small probability a bad fit. Equivalently, the negative logarithm of the probability should be minimized, which has the interpretation of a code length. There are three equivalent characterizations of optimal estimators: the first defined by estimation capacity, the second by necessary conditions for optimality satisfied for all data, and the third by the complete Minimum Description Length (MDL) principle.
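The abstract's code-length criterion can be made concrete with a small sketch. The example below is not from the paper; it simply illustrates the standard identity that the code length of data under a model is the negative base-2 logarithm of the probability the model assigns to it, so the better-fitting model yields the shorter code. The Bernoulli models and the data sequence are hypothetical choices for illustration.

```python
import math

def code_length_bits(probs):
    # Code length in bits for an i.i.d. sequence: the negative log2 of the
    # probability the model assigns to the whole data sequence, i.e. the
    # sum of -log2 p(x) over the observations.
    return -sum(math.log2(p) for p in probs)

def bernoulli_probs(theta, xs):
    # Per-observation probabilities under a Bernoulli(theta) model.
    return [theta if x == 1 else 1.0 - theta for x in xs]

# Hypothetical binary data: six ones, two zeros.
data = [1, 1, 0, 1, 1, 0, 1, 1]

# Compare two candidate models by the code length they induce on the data;
# the model assigning the larger probability gives the shorter code.
for theta in (0.5, 0.75):
    bits = code_length_bits(bernoulli_probs(theta, data))
    print(f"theta={theta}: {bits:.3f} bits")
```

Here the model with theta = 0.75 (the relative frequency of ones in the data) assigns the data a larger probability and hence a shorter code length than the uniform model, matching the abstract's criterion that a good fit means a large probability, equivalently a small code length.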
Jorma RISSANEN. Basics of estimation. Front. Electr. Electron. Eng., 2010, 5(3): 274‒280. https://doi.org/10.1007/s11460-010-0104-0