Welcome to IBM Federated Learning

IBM Federated Learning is a framework for federated learning in an enterprise environment. Federated Learning (FL) is a distributed machine learning process in which each participant node (or party) retains its data locally and interacts with the other participants via a learning protocol. The main driver behind FL is the need to avoid sharing data with others, mainly for privacy and confidentiality reasons.
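The round structure described above can be illustrated with a minimal sketch: each party runs local training on its private data, and only model weights (never raw data) are sent to an aggregator, which fuses them. This is a generic FedAvg-style illustration in plain NumPy; the function names and the weighted-average fusion rule are assumptions for exposition, not the IBM Federated Learning API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One party's local step: logistic-regression SGD on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # mean gradient of log-loss
        w -= lr * grad
    return w

def fuse(party_weights, party_sizes):
    """Aggregator fuses local models by a data-size-weighted average."""
    total = sum(party_sizes)
    return sum(w * (n / total) for w, n in zip(party_weights, party_sizes))

# Two parties; their raw data never leaves this scope -- only weights do.
rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float)
X2, y2 = rng.normal(size=(80, 3)), rng.integers(0, 2, 80).astype(float)

global_w = np.zeros(3)
for _ in range(3):                             # three federated rounds
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = fuse([w1, w2], [len(y1), len(y2)])
```

In practice the learning protocol also covers party registration, quorum handling, and secure transport, which this sketch omits.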

IBM Federated Learning provides a basic fabric for FL on which advanced features can be added. It is not dependent on any specific machine learning framework and supports different learning topologies (e.g., a shared aggregator) and protocols. It is a team effort of multiple IBM Research teams in Almaden, Yorktown Heights, and Dublin, and is meant to provide a solid basis for federated learning that enables a large variety of federated learning models, topologies, protocols, etc., in particular in enterprise settings.

Federated Learning is still in its infancy. Many aspects of machine learning have to be re-thought when data is not shared, including hyper-parameter tuning, privacy of models, optimization sensitive to privacy budgets, and many more. IBM Federated Learning provides a robust foundation for this advanced work, and a target to which you can contribute, for example, a threshold homomorphic encryption aggregator using a secure multi-party computation protocol, or noise and clipping functions for the implementation of differential privacy.
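The noise and clipping functions mentioned above are the two standard building blocks of differentially private aggregation: bound each party's update by clipping its L2 norm, then add Gaussian noise calibrated to that bound. The sketch below is illustrative only; the function and parameter names are assumptions, not IBM Federated Learning interfaces.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """DP-style treatment of a model update (illustrative sketch):
    1. scale the update down so its L2 norm is at most clip_norm;
    2. add Gaussian noise with std = noise_multiplier * clip_norm."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

Clipping bounds each party's influence on the fused model; the noise scale (together with the number of rounds) determines the privacy budget actually spent.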

IBM Federated Learning currently supports the following model implementations out of the box:

  • Neural networks (any neural network topology supported by Keras)

  • Linear classifiers/regressions (with regularizer): logistic regression, linear SVM, ridge regression and more

  • Decision Tree ID3

  • Deep Reinforcement Learning algorithms including DQN, DDPG, PPO and more

  • Naïve Bayes

Multiple fusion algorithms and advanced privacy features are also supported.

