Posted on 2010-01-01, 00:00. Authored by Ognjen Arandjelovic.
Linear subspace representations of appearance variation are pervasive in computer vision. In this paper we address the problem of robustly matching such subspaces (computing the similarity between them) when they correspond to sets of images of different scales, possibly greatly so. We show that the naïve solution of projecting the low-scale subspace into the high-scale image space is inadequate, especially at large scale discrepancies, and propose a successful approach instead. It consists of (i) an interpolated projection of the low-scale subspace into the high-scale space, followed by (ii) a rotation of this initial estimate within the bounds of the imposed “downsampling constraint”. The optimal rotation, which best aligns the high-scale reconstruction of the low-scale subspace with the reference subspace it is compared against, is found in closed form. The proposed method is evaluated on the problem of matching sets of face appearances under varying illumination. Compared with naïve matching, our algorithm is shown to greatly increase the separation between between-class and within-class similarities, and to produce far more meaningful modes of common appearance on which the match score is based.
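To make the matching pipeline concrete, below is a minimal sketch in Python (NumPy/SciPy assumed, function names hypothetical). It illustrates step (i), the interpolated projection, by bilinearly upsampling each basis image of the low-scale subspace and re-orthonormalizing, and it scores two subspaces by the cosines of their principal angles. The constrained closed-form rotation of step (ii), which refines this initial estimate within the “downsampling constraint”, is specific to the paper and is not reproduced here.

```python
import numpy as np
from scipy.ndimage import zoom  # bilinear interpolation for upsampling


def orthonormal_basis(X):
    # Thin QR factorization to orthonormalize the columns of X.
    Q, _ = np.linalg.qr(X)
    return Q


def upsample_subspace(B_low, shape_low, shape_high):
    """Step (i): interpolated projection of a low-scale basis into the
    high-scale image space. Each column of B_low is a vectorized basis
    image of size shape_low; each is bilinearly upsampled to shape_high
    and the resulting columns are re-orthonormalized."""
    h_lo, w_lo = shape_low
    h_hi, w_hi = shape_high
    cols = [
        zoom(B_low[:, j].reshape(h_lo, w_lo),
             (h_hi / h_lo, w_hi / w_lo), order=1).ravel()
        for j in range(B_low.shape[1])
    ]
    return orthonormal_basis(np.stack(cols, axis=1))


def subspace_similarity(B1, B2):
    """Similarity as the mean cosine of principal angles between two
    orthonormal bases (singular values of B1^T B2)."""
    s = np.linalg.svd(B1.T @ B2, compute_uv=False)
    return float(np.mean(np.clip(s, 0.0, 1.0)))


# Hypothetical usage: match a 16x16 appearance subspace against a
# 64x64 reference subspace (random bases stand in for real data).
rng = np.random.default_rng(0)
B_low = orthonormal_basis(rng.standard_normal((16 * 16, 5)))
B_ref = orthonormal_basis(rng.standard_normal((64 * 64, 5)))
B_up = upsample_subspace(B_low, (16, 16), (64, 64))
print(subspace_similarity(B_up, B_ref))
```

Note that without the paper's constrained rotation, this baseline corresponds to the interpolated projection alone; an unconstrained alignment would be meaningless here, since an arbitrary ambient-space rotation can align any two subspaces of equal dimension perfectly, which is precisely why the “downsampling constraint” is needed to bound the rotation.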