Low-Latency Machine Learning at scale with an in-memory platform - 3 top architecture approaches and when to use them.

Are you ready to take your machine learning algorithms into real-time business applications? We will walk through an architecture for taking a machine learning model from training into deployment for inference within an open-source platform for real-time stream processing. We will discuss the typical workflow from data exploration to model training through to real-time model inference (aka scoring) on streaming data. We will also touch on important considerations for ensuring that deployments retain the flexibility to run in the top 3 architectures - Cloud-Native, Microservices and Edge/Fog. Finally, we'll discuss some real-world examples that illustrate these key patterns.
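To make the training-to-inference hand-off concrete, here is a minimal sketch of a streaming scoring pipeline, assuming Hazelcast Jet's Pipeline API (Jet 4.x). The transactions map, the ScoringModel class and the model path are hypothetical placeholders, not taken from the talk; the pattern shown is to load the trained model once per cluster member and then score events in-memory as they stream through.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.ServiceFactory;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

import java.io.Serializable;

import static com.hazelcast.jet.pipeline.JournalInitialPosition.START_FROM_CURRENT;

public class StreamingInference {

    // Hypothetical stand-in for a model exported from the training phase
    // (e.g. PMML, ONNX, or a plain serialized object).
    static class ScoringModel {
        static ScoringModel load(String path) { return new ScoringModel(); }
        double score(Transaction t) { return t.amount > 10_000 ? 0.9 : 0.1; }
    }

    // Hypothetical streaming event type.
    static class Transaction implements Serializable {
        double amount;
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Load the trained model once per cluster member and share it across
        // the pipeline's parallel workers, so scoring stays local and in-memory.
        ServiceFactory<?, ScoringModel> model =
                ServiceFactories.sharedService(ctx -> ScoringModel.load("/models/fraud-v1"));

        // Assumes the "transactions" IMap has its event journal enabled,
        // so updates to the map arrive as a stream of entries.
        p.readFrom(Sources.<String, Transaction>mapJournal("transactions", START_FROM_CURRENT))
         .withIngestionTimestamps()
         .mapUsingService(model, (m, entry) -> m.score(entry.getValue()))
         .writeTo(Sinks.logger());

        // Embedded member for local testing; in production the same job would
        // be submitted to a cluster (cloud, microservice sidecar, or edge node).
        JetInstance jet = Jet.newJetInstance();
        jet.newJob(p).join();
    }
}
```

Because the model is co-located with the streaming data in memory, each event is scored without a network hop to an external model server, which is what keeps latency low; the identical pipeline can be deployed to a cloud-native cluster, embedded inside a microservice, or run on an edge node.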

Speakers

John DesJardins, VP Solution Architecture and CTO for North America at Hazelcast
John DesJardins is currently VP Solution Architecture and CTO for North America at Hazelcast, where he champions the growth and adoption of its in-memory computing platform. His expertise in large-scale computing spans Microservices, Big Data, Internet of Things, Machine Learning and Cloud. He is an active blogger and speaker. John brings over 25 years of experience architecting and implementing global-scale computing solutions with top Global 2000 companies while at Hazelcast, Cloudera, Software AG and webMethods. He holds a BS in Economics from George Mason University, where he first built predictive models, long before that was considered cool.
