Stacking, Boosting and Online Learning in distributed mode with Apache Ignite

The current implementation of ML algorithms in Spark has several drawbacks: the overhead of converting standard Spark SQL types to ML-specific types, limited adaptation of the algorithms to distributed computing, and a relatively slow pace of adding new algorithms to the library.

In addition, Spark ML does not natively support online learning for all algorithms, nor does it offer stacking, boosting, or the family of approximate ML algorithms that can provide a significant speedup in many cases.

If you choose Ignite ML, you can avoid most of the problems mentioned above.
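For instance, boosting is available out of the box. Below is a minimal sketch of training a gradient-boosted-trees classifier with Ignite ML; it assumes the Ignite 2.7-era API (a GDBBinaryClassifierOnTreesTrainer, a cache of double[] rows laid out as [label, features...], and explicit feature/label extractors), so exact package names, constructor parameters, and return types may differ in your Ignite version.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.ml.math.primitives.vector.VectorUtils;
import org.apache.ignite.ml.tree.boosting.GDBBinaryClassifierOnTreesTrainer;

import java.util.Arrays;

public class BoostingSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Illustrative cache: each value is a double[] row of [label, feature1, feature2].
            IgniteCache<Integer, double[]> dataCache = ignite.getOrCreateCache("trainingData");
            dataCache.put(0, new double[] {0.0, 1.0, 2.0});
            dataCache.put(1, new double[] {1.0, 3.0, 4.0});
            dataCache.put(2, new double[] {0.0, 1.5, 2.5});
            dataCache.put(3, new double[] {1.0, 3.5, 4.5});

            // Gradient boosting on decision trees (assumed constructor):
            // learning rate 1.0, 10 boosting iterations, tree depth 3, min impurity decrease 0.0.
            GDBBinaryClassifierOnTreesTrainer trainer =
                new GDBBinaryClassifierOnTreesTrainer(1.0, 10, 3, 0.0);

            // Training is distributed over the partitions of the cache.
            var mdl = trainer.fit(
                ignite,
                dataCache,
                (k, v) -> VectorUtils.of(Arrays.copyOfRange(v, 1, v.length)), // features
                (k, v) -> v[0]                                                // label
            );

            System.out.println("Trained boosted model: " + mdl);
        }
    }
}
```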

Apache Ignite currently ships with an Ignite ML module that includes many distributed ML algorithms, a set of approximate ML algorithms, and straightforward integration with TensorFlow via the TensorFlow Ignite Dataset (currently part of the TF.contrib package). In addition, every algorithm supports model updating, which enables online learning not only for KMeans and LinReg.
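To make the model-updating point concrete, here is a minimal sketch of online learning with Ignite ML, again assuming the Ignite 2.7-era trainer API: fit(...) builds an initial model and update(...) refines it on a cache of newly arrived rows. The cache names and the [label, features...] row layout are illustrative, and exact signatures may differ across Ignite versions.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.ml.clustering.kmeans.KMeansModel;
import org.apache.ignite.ml.clustering.kmeans.KMeansTrainer;
import org.apache.ignite.ml.math.primitives.vector.VectorUtils;

import java.util.Arrays;

public class OnlineLearningSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Illustrative caches: each value is a double[] row of [label, feature1, feature2].
            IgniteCache<Integer, double[]> initialData = ignite.getOrCreateCache("initialData");
            IgniteCache<Integer, double[]> freshData = ignite.getOrCreateCache("freshData");

            initialData.put(0, new double[] {0.0, 1.0, 1.0});
            initialData.put(1, new double[] {0.0, 1.2, 0.9});
            initialData.put(2, new double[] {1.0, 8.0, 8.0});
            initialData.put(3, new double[] {1.0, 8.2, 7.9});

            freshData.put(0, new double[] {0.0, 1.1, 1.1});
            freshData.put(1, new double[] {1.0, 7.9, 8.1});

            KMeansTrainer trainer = new KMeansTrainer();

            // Initial distributed training pass.
            KMeansModel mdl = trainer.fit(
                ignite,
                initialData,
                (k, v) -> VectorUtils.of(Arrays.copyOfRange(v, 1, v.length)),
                (k, v) -> v[0]
            );

            // Online learning: refine the existing model on newly arrived data
            // instead of retraining from scratch. An analogous update(...) call
            // is exposed by other Ignite ML trainers as well, not only KMeans.
            KMeansModel updatedMdl = trainer.update(
                mdl,
                ignite,
                freshData,
                (k, v) -> VectorUtils.of(Arrays.copyOfRange(v, 1, v.length)),
                (k, v) -> v[0]
            );

            System.out.println("Updated model: " + updatedMdl);
        }
    }
}
```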

We suggest using the Apache Ignite ML module to speed up your ML training and using Ignite as a backend for distributed TensorFlow computations.

You will see live demos of building ML pipelines with the Apache Ignite ML module, Apache Spark, TensorFlow, and more.

Schedule:

Room: Edward 1-4

Speakers:
Yury Babak, Head of Development at GridGain Systems