Abstract

When training a neural network, vector data is passed through a sequence of adjustable affine maps and nonlinear transformations. This process works well for visual data, and it is being extended to many other applications. However, little attention is paid to the abstract vector basis in which a given data format is expressed.
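As a minimal sketch of the layer structure described above (the function names and NumPy usage are illustrative assumptions, not part of the original text), each layer applies an adjustable affine map followed by a fixed nonlinearity:

```python
import numpy as np

def layer(x, W, b, sigma=np.tanh):
    """One network layer: adjustable affine map W @ x + b, then a nonlinearity."""
    return sigma(W @ x + b)

def forward(x, params):
    """Compose layers; `params` is a list of (W, b) pairs."""
    for W, b in params:
        x = layer(x, W, b)
    return x

# Example: a 4-2-1 network acting on a random input vector.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((2, 4)), rng.standard_normal(2)),
          (rng.standard_normal((1, 2)), rng.standard_normal(1))]
y = forward(rng.standard_normal(4), params)
```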

My research investigates machine learning in a basis-independent way. In particular, I have found that neural networks can be trained using a basis of discrete orthogonal Hahn polynomials. These special functions have many interesting mathematical properties that can aid the analysis of the linear operations inherent in the machine learning process.
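As an illustrative sketch only (not the author's implementation), the discrete Hahn polynomials Q_n(x; alpha, beta, N) can be evaluated from their terminating hypergeometric series, and a data vector can be re-expressed in the resulting orthogonal basis before it enters the affine layers; all names and parameter choices below are assumptions made for the example:

```python
import numpy as np
from math import factorial
from scipy.special import binom

def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k = a (a+1) ... (a+k-1)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def hahn(n, x, alpha, beta, N):
    """Hahn polynomial Q_n(x; alpha, beta, N) via its terminating 3F2 series."""
    return sum(poch(-n, k) * poch(n + alpha + beta + 1, k) * poch(-x, k)
               / (poch(alpha + 1, k) * poch(-N, k) * factorial(k))
               for k in range(n + 1))

def hahn_basis(N, alpha=0.0, beta=0.0):
    """Orthonormal basis matrix: row n is Q_n sampled on x = 0..N,
    scaled by the square root of the Hahn weight and normalised."""
    x = np.arange(N + 1)
    w = binom(alpha + x, x) * binom(beta + N - x, N - x)   # orthogonality weight
    B = np.array([[hahn(n, xi, alpha, beta, N) for xi in x]
                  for n in range(N + 1)])
    B *= np.sqrt(w)                    # fold the weight into the basis vectors
    B /= np.linalg.norm(B, axis=1, keepdims=True)
    return B

# Express a length-(N+1) data vector in the Hahn basis before it enters the network.
N = 7
B = hahn_basis(N, alpha=1.0, beta=1.0)
data = np.random.default_rng(1).standard_normal(N + 1)
coeffs = B @ data                      # coordinates in the Hahn basis
print(np.allclose(B @ B.T, np.eye(N + 1)))   # orthonormality check
```

Because the basis matrix is orthonormal, the change of coordinates preserves inner products, which is one way the orthogonality of these special functions can simplify the analysis of the network's linear operations.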