Delivering Machine Learning in Real-Time with In-Memory Computing

Schedule: June 21, 01:40pm
Room: Matterhorn 1

Predictive intelligence from machine learning has the potential to change everything in our day-to-day experiences, from education to entertainment, from travel to healthcare, from business to leisure and everything in between. Modern ML frameworks are batch-oriented by nature and cannot adapt on the fly to changing user data or situations. Many simple ML applications, such as those that enhance user experience, can benefit from robust, real-time predictive models that adapt on the fly.

Join this session to learn how common practices in machine learning, such as running a trained model in production, can be substantially accelerated and radically simplified by using Redis modules that natively store and execute common models generated by Spark ML and TensorFlow algorithms. We will also discuss the implementation of simple, real-time feed-forward neural networks with Neural Redis, and scenarios that can benefit from such efficient, accelerated artificial intelligence.
We will also discuss real-life implementations of these techniques at a large targeted-content-serving company that requires several trillion operations per second.
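
As a rough illustration of the workflow the session describes, the sketch below (Python with redis-py) loads the coefficients of an externally trained linear regression into the Redis-ML module and builds a tiny feed-forward regressor with Neural Redis. The command syntax (ML.LINREG.*, NR.*), key names and values are assumptions based on the modules' public READMEs, not material from the session, and should be verified against the modules' documentation.

    # A minimal sketch, assuming Redis is running locally with the Redis-ML
    # and Neural Redis modules loaded; NR.* and ML.LINREG.* syntax follows
    # the modules' READMEs as published and may differ in newer releases.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Serve a model trained elsewhere (e.g. in Spark ML): store the intercept
    # and coefficients of a linear regression, then predict inside Redis.
    r.execute_command("ML.LINREG.SET", "price_model", 0.5, 2.0, 3.0)
    print(r.execute_command("ML.LINREG.PREDICT", "price_model", 1.0, 4.0))

    # Neural Redis: create a small feed-forward regressor (2 inputs,
    # 3 hidden units, 1 output), feed it observations, train it, run it.
    r.execute_command("NR.CREATE", "net", "REGRESSOR", 2, 3, "->", 1,
                      "DATASET", 1000, "TEST", 500, "NORMALIZE")
    r.execute_command("NR.OBSERVE", "net", 0.5, 1.0, "->", 1.5)
    r.execute_command("NR.OBSERVE", "net", 2.0, 3.0, "->", 5.0)
    r.execute_command("NR.TRAIN", "net", "AUTOSTOP")  # training runs in a background thread
    print(r.execute_command("NR.RUN", "net", 1.0, 2.0))

Keeping both the stored model and the prediction call inside Redis is what avoids shipping features to a separate serving layer on every request, which is the acceleration the abstract refers to.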

Speakers
Kamran Yousaf, Solution Architect at Redis Labs

Kamran Yousaf is a Solution Architect at Redis Labs, specialising in the development of distributed, high-performance, low-latency architectures. He has worked with a wide range of technologies and architectures, from rule-based development, grid computing and low-latency applications to enterprise file sync and share. Most recently he was VP of Engineering at a UK start-up SME, a leader in enterprise file sync and share. Previously he worked at GigaSpaces, BEA and Versata.