Using In-Memory Computing to convert Big Data into Fast Data

Room: Edward 1-4

Deploying Big Compute applications typically requires a wide range of tools and approaches to run large-scale business, science, and engineering workloads that consume large amounts of CPU and memory in a coordinated way. Big Compute usually implies one of two things: ordinary computing scaled out across a massively parallel cluster, or High-Performance Computing (HPC). The problem with the former is that it can only scale so far, and for it to really succeed, the data itself must be scaled just as wide. The more immediate problem is that in many cases the speed at which you get results matters just as much as the results themselves. For example, if you are using analytics to improve e-commerce, the best time to act on them is while the customer is still engaged in the transaction.

For a business to really get the value it needs from Big Compute, the paradigm and toolset must shift to a message-based, stateful data architecture with distributed computation.
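As a rough illustration of that shift (not the speaker's platform or any specific product's API), the sketch below shows a message-driven, stateful processor in plain Java. The class and event names, the bounded queue standing in for a messaging fabric, the map standing in for an in-memory state store, and the spend threshold are all hypothetical; the point is simply that state lives with the computation, so each incoming message is handled without a round trip to a remote database.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal sketch of a message-driven, stateful, in-memory processor.
 * The queue and map are illustrative stand-ins for a messaging layer
 * and a partitioned in-memory state store.
 */
public class InMemoryOrderProcessor {

    /** A purchase event arriving on the message bus (hypothetical shape). */
    record OrderEvent(String customerId, double amount) {}

    // Stand-in for the messaging layer the processor subscribes to.
    private final BlockingQueue<OrderEvent> inbound = new ArrayBlockingQueue<>(10_000);

    // State kept in memory alongside the computation: running spend per customer,
    // so each event can be scored while the customer is still in the transaction.
    private final Map<String, Double> spendByCustomer = new ConcurrentHashMap<>();

    public void publish(OrderEvent event) throws InterruptedException {
        inbound.put(event);
    }

    /** Consume events, update in-memory state, and react immediately. */
    public void run() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            OrderEvent event = inbound.take();
            double total = spendByCustomer.merge(event.customerId(), event.amount(), Double::sum);
            if (total > 1_000.0) { // illustrative threshold for an in-transaction offer
                System.out.printf("Offer loyalty discount to %s (total spend %.2f)%n",
                        event.customerId(), total);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        InMemoryOrderProcessor processor = new InMemoryOrderProcessor();
        Thread worker = new Thread(() -> {
            try { processor.run(); } catch (InterruptedException ignored) { }
        });
        worker.start();

        processor.publish(new OrderEvent("customer-42", 600.0));
        processor.publish(new OrderEvent("customer-42", 550.0));

        Thread.sleep(200);  // let the worker drain the queue before shutting down
        worker.interrupt();
    }
}
```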

In this session, we will walk through how that paradigm shift happened, where IMDGs, NoSQL stores, and in-memory databases stopped working, and what else was needed.


Speakers
Kevin Goldstein
Sr. Principal Architect at NEEVE RESEARCH
A Principal Architect and low-latency developer with over 17 years of experience building high-performance enterprise applications and trading systems. Kevin specializes in low-latency, messaging, and algorithmic trading, and has experience with front-, middle-, and back-office systems.