An Introduction to Kristof’s Theorem for Solving Least-Square Optimization Problems <i>Without</i> Calculus

2018-01-11T16:46:15Z (GMT) by Niels Waller
<p>Kristof’s Theorem (Kristof, <a href="#cit0019" target="_blank">1970</a>) describes a matrix trace inequality that can be used to solve a wide class of least-square optimization problems without calculus. Considering its generality, it is surprising that Kristof’s Theorem is rarely used in statistical and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof’s (<a href="#cit0018" target="_blank">1964</a>, <a href="#cit0019" target="_blank">1970</a>) writings. In this article, I describe the underlying logic of Kristof’s Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem’s proof. I then show how Kristof’s Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, <a href="#cit0027" target="_blank">2017</a>) code to perform the calculations described in the text.</p>
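<p>The abstract names the trace inequality without stating it. As a rough numerical sketch (not the paper's general statement, and in Python rather than the supplement's R), the simplest special case of the inequality says that for a diagonal matrix &Delta; with nonnegative entries, tr(T&Delta;) over orthonormal T is maximized at T = I, where it equals tr(&Delta;). The code below checks this for 2&times;2 rotations and reflections; all names and the 2&times;2 setup are illustrative assumptions, not from the article.</p>

```python
import math

def trace_prod(T, d):
    # tr(T @ diag(d)) reduces to sum_i T[i][i] * d[i]
    return sum(T[i][i] * d[i] for i in range(len(d)))

def rotation(theta):
    # A 2x2 rotation matrix (orthonormal, determinant +1)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def reflection(theta):
    # A 2x2 reflection matrix (orthonormal, determinant -1)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

# Illustrative diagonal matrix Delta = diag(3, 1), so tr(Delta) = 4
d = [3.0, 1.0]

# Sweep the full orthogonal group O(2) on a fine grid of angles
thetas = [k * 2 * math.pi / 720 for k in range(720)]
candidates = [trace_prod(rotation(t), d) for t in thetas]
candidates += [trace_prod(reflection(t), d) for t in thetas]

best = max(candidates)
# The maximum is attained at theta = 0 (T = I), where tr(T @ diag(d)) = tr(diag(d)) = 4
```

<p>The grid search is only a sanity check, of course; the point of Kristof's Theorem is that such maxima are obtained analytically, without calculus or search.</p>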