Federated Learning of Deep Neural Decision Forests

A. Sjöberg, E. Gustavsson, A. C. Koppisetty, M. Jirstrand. In Proceedings of the Fifth International Conference on Machine Learning, Optimization, and Data Science 2019, Siena, Italy, 10-13 September 2019.

Abstract

Modern technical products have access to large amounts of data, and with machine learning algorithms this data can be used to improve the usability and performance of the products. However, the data is likely to be both large in quantity and privacy-sensitive, which rules out sending and storing all of it centrally and, in turn, makes it difficult to train global machine learning models on the combined data of different devices. A decentralized approach known as federated learning solves this problem by letting devices, or clients, update a global model using their own data and send only the resulting changes to the model, so that no privacy-sensitive data needs to be communicated.
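As a minimal sketch of this scheme (not the paper's implementation), the federated averaging loop below keeps each client's data local and has the server aggregate only the returned weights; the least-squares `local_update`, client sizes, and learning rate are illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=5):
    # Stand-in for on-device training: a few epochs of gradient descent
    # on a least-squares objective. Only the updated weights leave the client.
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def federated_averaging(weights, clients, rounds=10):
    # Server loop: collect each client's locally updated weights and
    # average them, weighted by local dataset size, so raw data is
    # never communicated.
    for _ in range(rounds):
        updates = [local_update(weights, d) for d in clients]
        sizes = [len(d[1]) for d in clients]
        weights = np.average(updates, axis=0, weights=sizes)
    return weights

# Toy usage: three clients holding private linear-regression data.
rng = np.random.default_rng(0)
true_w = rng.normal(size=4)
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 4))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
w = federated_averaging(np.zeros(4), clients, rounds=20)
```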

Deep neural decision forests (DNDF), inspired by the versatile random forest algorithm, combine the divide-and-conquer principle with representation learning. In this paper we further develop DNDF to better suit the federated learning framework. By parameterizing the probability distributions in the prediction nodes of the forest and including all trees of the forest in the loss function, a gradient of the whole forest can be computed, which several federated learning algorithms utilize. We demonstrate the use of DNDF in federated learning in an empirical experiment with both homogeneous and heterogeneous data, and compare it against a baseline convolutional neural network with the same architecture as the DNDF.
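The PyTorch fragment below is a rough sketch of this modification, not the paper's code: it assumes sigmoid split functions on a plain linear layer (where DNDF would use deep convolutional features), parameterizes each leaf distribution as a softmax over free logits, and sums all trees into one loss so a single backward pass yields a gradient for the entire forest. All names and hyperparameters are chosen for illustration.

```python
import torch

class SoftTree(torch.nn.Module):
    """One soft decision tree: sigmoid routing at the 2^depth - 1 internal
    nodes and a free logit vector per leaf whose softmax is the leaf's
    class distribution, so the leaves receive gradients as well."""
    def __init__(self, in_dim, n_classes, depth=3):
        super().__init__()
        self.depth = depth
        self.decisions = torch.nn.Linear(in_dim, 2 ** depth - 1)
        self.leaf_logits = torch.nn.Parameter(torch.zeros(2 ** depth, n_classes))

    def forward(self, x):
        d = torch.sigmoid(self.decisions(x))   # split probabilities, heap order
        mu = x.new_ones(x.shape[0], 1)         # all routing mass starts at the root
        for k in range(self.depth):
            dk = d[:, 2 ** k - 1 : 2 ** (k + 1) - 1]
            # each node sends mass d to its left child and 1 - d to its right
            mu = torch.stack((mu * dk, mu * (1 - dk)), dim=2).reshape(x.shape[0], -1)
        # mix the parameterized leaf distributions by the routing mass
        return mu @ torch.softmax(self.leaf_logits, dim=1)

def forest_loss(trees, x, y):
    # Every tree enters the loss, so one backward pass produces a gradient
    # for the whole forest; this is the property gradient-based federated
    # learning algorithms can exploit.
    log_probs = [t(x).clamp_min(1e-9).log() for t in trees]
    return torch.stack([torch.nn.functional.nll_loss(p, y) for p in log_probs]).mean()

# Toy usage: a forest of three trees trained jointly by gradient descent.
trees = [SoftTree(in_dim=20, n_classes=5) for _ in range(3)]
opt = torch.optim.SGD([p for t in trees for p in t.parameters()], lr=0.1)
x, y = torch.randn(32, 20), torch.randint(0, 5, (32,))
opt.zero_grad()
forest_loss(trees, x, y).backward()
opt.step()
```

In the original DNDF formulation the leaf distributions are fitted by a separate optimization step; parameterizing them instead, as sketched here, turns the whole forest into a single differentiable model whose updates can be averaged like any other network in federated learning.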

Experimental results show that the modified DNDF, consisting of three to five decision trees, outperforms the baseline convolutional neural network.
