Low-Latency Machine Learning at Scale with an In-Memory Platform - 3 Top Architecture Approaches and When to Use Them
Are you ready to put your machine learning algorithms to work in real-time business applications? We will walk through an architecture for taking a machine learning model from training to deployment for inference on an open-source platform for real-time stream processing. We will cover the typical workflow from data exploration to model training through to real-time model inference (a.k.a. scoring) on streaming data. We will also touch on important considerations for deployments that need the flexibility to run in the top 3 architectures: Cloud-Native, Microservices, and Edge/Fog. Finally, we'll discuss some real-world examples to illustrate these key patterns.
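To make the training-to-inference workflow concrete, here is a minimal sketch of the general pattern described above: train and serialize a model offline, then load it once inside the stream-processing job and score each incoming event. It assumes scikit-learn and uses a plain Python loop as a stand-in for the streaming platform; all names and fields are illustrative, not part of any specific product.

```python
# Minimal sketch: train a model offline, then score streaming events with it.
# Assumes scikit-learn; the event loop stands in for a real streaming platform.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# --- Training phase (offline, e.g. in a notebook) ---
X, y = make_classification(n_samples=1_000, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
joblib.dump(model, "model.joblib")           # persist the trained model for deployment

# --- Inference phase (inside the stream-processing job) ---
model = joblib.load("model.joblib")          # loaded once at job start-up, kept in memory

def score(event: dict) -> float:
    """Return a probability score for a single streaming event (hypothetical schema)."""
    features = [[event["f1"], event["f2"], event["f3"], event["f4"]]]
    return float(model.predict_proba(features)[0, 1])

# Stand-in for the platform's event stream.
for event in [{"f1": 0.1, "f2": -1.2, "f3": 0.4, "f4": 2.0}]:
    print(score(event))
```

The same pattern carries across the three architectures discussed in the session: only where the serialized model lives and where the scoring function runs (cloud service, microservice, or edge node) changes.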