Make your data science actionable: real-time machine learning inference with stream processing.
Are you ready to make your machine learning algorithms operational within your business in real time? We will walk through an architecture for taking a machine learning model from training into deployment for inference on an open-source platform for real-time stream processing. We will discuss the typical workflow from data exploration to model training through to real-time model inference (also known as scoring) on streaming data. We will also touch on important considerations for deployments that need the flexibility to run in cloud-native, microservices, and edge/fog architectures.
We'll demonstrate a working example of a machine learning model scoring streaming data within Hazelcast Jet, an open-source platform for distributed stream processing.
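The train-offline-then-score-on-a-stream workflow described above can be sketched in a few lines. Note that Hazelcast Jet pipelines are written in Java; this framework-free Python sketch only illustrates the shape of the pattern, and the model, feature names, and event stream here are all hypothetical stand-ins.

```python
# Sketch of the train-then-score pattern: a model is fitted offline on
# historical data, then the resulting artifact is applied to each event
# as it arrives on the stream.

# --- "Training" phase (offline): fit a trivial linear model y ~ a*x + b ---
def train(samples):
    """Least-squares fit over (x, y) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return {"a": a, "b": b}  # the "deployed" model artifact

# --- "Inference" phase (streaming): score each event as it arrives -------
def score(model, event):
    return model["a"] * event["x"] + model["b"]

model = train([(1, 3), (2, 5), (3, 7)])      # historical data: y = 2x + 1
stream = ({"x": x} for x in [10, 20])        # stands in for a live source
predictions = [score(model, e) for e in stream]
print(predictions)  # → [21.0, 41.0]
```

In a real Jet deployment, the `score` step would run inside the distributed pipeline, with the trained model loaded once per processing node rather than per event.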