Sparse Inversion of Stacked Autoencoder Classification Machines

A. Sarishvili, M. Jirstrand, B. Adrian, A. Wirsen. In Proceedings of the 7th International Congress on Information and Communication Technology (ICICT 2022), London, UK, 21-24 February 2022.

Abstract

This paper describes a new approach to solving an inverse classification problem. In many applications of supervised learning it is of interest to generate synthetic high-dimensional model inputs from an equally high-dimensional output. Recent work has shown the benefit of using artificially augmented data in deep machine learning applications. Especially in image processing, augmentation algorithms such as generative adversarial networks (GANs), conditional GANs, variational autoencoders, restricted Boltzmann machines, geometric transformations (translation, rotation, scaling, fractional linear Möbius transformations), image mixing, and feature-space augmentation are used to address the problem of limited data. Furthermore, the classical inverse problem, e.g. the reconstruction of images from incomplete data, remains of high interest. Several studies have shown good results using deep learning methods to solve inverse problems; they have focused on image denoising, e.g. with stacked denoising autoencoders (SDA) that remove noisy patterns from images, and on image super-resolution using special types of deep convolutional networks. The proposed approach generates/reconstructs handwritten digits by using Compressed Sensing (CS) in combination with a stacked autoencoder softmax classification machine (SAE+SM) and Sparse Coding (SC) in the forward step.
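
The abstract describes the forward model as a stacked autoencoder with a softmax classification head (SAE+SM), which is then inverted using compressed sensing and sparse coding. As a rough illustration only (the layer sizes, names, and details below are assumptions and do not come from the paper), the following is a minimal PyTorch sketch of such a forward classifier for flattened 28x28 digit images:

# Minimal sketch (not the authors' code): a stacked autoencoder with a
# softmax classification head (SAE+SM) for flattened 28x28 digit images.
# All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class SAEClassifier(nn.Module):
    def __init__(self, input_dim=784, hidden_dims=(256, 64), num_classes=10):
        super().__init__()
        # Stacked encoder: successive dense layers compress the input.
        dims = (input_dim,) + tuple(hidden_dims)
        self.encoder = nn.Sequential(
            *[layer for i in range(len(dims) - 1)
              for layer in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())]
        )
        # Mirror-image decoder used for the reconstruction loss;
        # outputs are kept nonnegative by the final ReLU.
        rdims = tuple(reversed(dims))
        self.decoder = nn.Sequential(
            *[layer for i in range(len(rdims) - 1)
              for layer in (nn.Linear(rdims[i], rdims[i + 1]), nn.ReLU())]
        )
        # Classification head on the deepest code; returns logits
        # (softmax is applied implicitly by nn.CrossEntropyLoss).
        self.classifier = nn.Linear(hidden_dims[-1], num_classes)

    def forward(self, x):
        code = self.encoder(x)
        return self.classifier(code), self.decoder(code)

model = SAEClassifier()
x = torch.rand(8, 784)                 # a batch of 8 flattened digit images in [0, 1]
logits, recon = model(x)
print(logits.shape, recon.shape)       # torch.Size([8, 10]) torch.Size([8, 784])

The inversion step itself, i.e. recovering sparse digit inputs for a given class via CS/SC over a trained SAE+SM, is the subject of the paper and is not sketched here.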



