Unitary-Group Invariant Features

"Unitary-Group Invariant Kernels and Features from Transformed Unlabeled Data", Dipan K. Pal and Marios Savvides (under review at ICLR 2016)

 

"Technical Report for Invariant Features", Dipan K. Pal, Felix Juefei-Xu and Marios Savvides

One of the fundamental problems in pattern recognition, and in vision in general, is to factor out transformations, or to be selectively invariant or covariant to them. Most studies in the field, however, pursue this goal indirectly, by augmenting the data and solving heavy optimization problems. Although such an approach works in practice, it provides little insight into the nature of the problem itself. Further, most algorithmic studies of the human visual cortex suggest that the cortex does not solve a heavy optimization problem but rather simple ones, and that it relies heavily on memory.

 

One notable characteristic of the human/primate visual cortex is its ability to generalize extremely well to new samples given very few training instances. This can be attributed to the fact that the visual cortex works well even at low sample complexity. Most machine learning and statistical approaches to the field so far have focused on theory requiring high sample complexity. There thus arises a need to explore the theory and practice of low-sample-complexity methods.

 

It has been shown that variation in data arising from the combined effect of common transformations is one of the key factors raising the sample complexity of a problem. We thus explore approaches that handle such transformations by being explicitly invariant to them, and study the theory and practice of discriminative unitary-group invariant features. Unitary groups are important because most common transformations in vision can be modelled as unitary.
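
As a purely illustrative sketch (not the formulation from either paper below), the snippet here uses cyclic shifts, which act as unitary (orthogonal) operators, and builds an invariant feature by pooling an input's projections onto the shift orbit of a single unlabeled template. Shifting the input only permutes those projections, so pooled statistics such as moments do not change.

    import numpy as np

    # Illustrative only: cyclic shifts form a unitary (orthogonal) group, so pooling
    # an input's projections onto all shifted copies of one unlabeled template gives
    # a feature that is unchanged when the input itself is shifted.

    def shift_orbit(t):
        """All cyclic shifts of a template t (its orbit under the shift group)."""
        return np.stack([np.roll(t, k) for k in range(len(t))])

    def invariant_feature(x, t, n_moments=3):
        """Pool the projections <x, g.t> over the group into a few moments."""
        projections = shift_orbit(t) @ x  # one dot product per group element
        return np.array([np.mean(projections ** m) for m in range(1, n_moments + 1)])

    rng = np.random.default_rng(0)
    x = rng.standard_normal(16)   # input signal
    t = rng.standard_normal(16)   # unlabeled template
    print(invariant_feature(x, t))
    print(invariant_feature(np.roll(x, 5), t))  # equal up to floating-point rounding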

 

 

"Unitary-Group Invariant Kernels and Features from Transformed Unlabeled Data", Dipan K. Pal and Marios Savvides, (under review at ICLR 2016)

 

Abstract: The study of representations invariant to common transformations of the data is important to learning. Most techniques have focused on local approximate invariance implemented within expensive optimization frameworks lacking explicit theoretical guarantees. In this paper, we study kernels that are invariant to the unitary group while having theoretical guarantees in addressing practical issues such as (1) unavailability of transformed versions of labelled data and (2) not observing all transformations. We present a theoretically motivated alternate approach to the invariant kernel SVM. Unlike previous approaches to the invariant SVM, the proposed formulation solves both issues mentioned. We also present a kernel extension of a recent technique to extract linear unitary-group invariant features addressing both issues and extend some guarantees regarding invariance and stability. We present experiments on the UCI ML datasets to illustrate and validate our methods.
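
A minimal sketch of a unitary-group invariant kernel in the same spirit, assuming cyclic shifts as the group and a standard RBF base kernel (this is classical group averaging shown for illustration, not the alternate formulation proposed in the paper): averaging the base kernel over the orbit of one argument yields a kernel whose value is unchanged when either argument is transformed.

    import numpy as np

    def shift_orbit(y):
        """All cyclic shifts of y (its orbit under the unitary shift group)."""
        return np.stack([np.roll(y, k) for k in range(len(y))])

    def rbf(x, Y, gamma=0.5):
        """RBF base kernel between x and each row of Y."""
        return np.exp(-gamma * np.sum((x - Y) ** 2, axis=-1))

    def invariant_kernel(x, y, gamma=0.5):
        """Average the base kernel over the orbit of y; because the group is closed,
        the value is unchanged if x and/or y are replaced by shifted copies."""
        return float(np.mean(rbf(x, shift_orbit(y), gamma)))

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(12), rng.standard_normal(12)
    print(invariant_kernel(x, y))
    print(invariant_kernel(np.roll(x, 3), np.roll(y, 7)))  # equal up to rounding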


"Technical Report for Invariant Features", Dipan K. Pal, Felix Juefei-Xu and Marios Savvides

 

Abstract: We propose an explicitly discriminative and 'simple' approach to generate invariance to nuisance transformations modeled as unitary. In practice, the approach works well to handle non-unitary transformations as well. Our theoretical results extend the reach of a recent theory of invariance to discriminative and kernelized features based on unitary kernels. As a special case, a single common framework can be used to generate subject-specific pose-invariant features for face recognition and vice-versa for pose estimation. We show that our main proposed method can perform well under very challenging large-scale semi-synthetic face matching and pose estimation protocols with unaligned faces using no landmarking whatsoever. We additionally benchmark on CMU MPIE and outperform previous work in almost all cases on off-angle face matching, while we are on par with the previous state-of-the-art on the LFW unsupervised and image-restricted protocols, without any low-level image descriptors other than raw pixels.
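
A rough, hypothetical sketch of the "pose-invariant features for recognition and vice-versa for pose estimation" idea (the array names, shapes, and max pooling here are illustrative assumptions, not the report's exact method): with a bank of unlabeled templates indexed by subject and pose, pooling an input's similarities across poses gives an (approximately) pose-invariant, subject-selective signature, while pooling across subjects gives a subject-invariant, pose-selective one.

    import numpy as np

    # Hypothetical sketch: templates[subject, pose, :] is a bank of unlabeled template
    # features. Pooling similarities along the pose axis is (approximately) pose-invariant
    # and subject-selective; pooling along the subject axis is the reverse.

    def signature(x, templates, pool_axis):
        """Similarities of x to every template, max-pooled along pool_axis."""
        sims = templates @ x              # shape: (n_subjects, n_poses)
        return sims.max(axis=pool_axis)

    rng = np.random.default_rng(2)
    templates = rng.standard_normal((10, 7, 64))  # 10 subjects, 7 poses, 64-dim features
    x = rng.standard_normal(64)                   # a probe face's feature vector

    subject_signature = signature(x, templates, pool_axis=1)  # length 10: one score per subject
    pose_signature = signature(x, templates, pool_axis=0)     # length 7: one score per pose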