The In-Memory Computing sneak peek schedule is just a taste of the in-depth talks on offer at the conference this coming June. Make sure to reserve your spot now and register at the Super Saver rate.

Sven Beauprez, Lead Architect at The Glue
In this presentation I will show you how to combine Apache Ignite with Docker, not only to build an event-driven microservice platform but also to make it dynamically re-configurable without any downtime at all. The prerequisites for the platform are banking-grade NFRs such as Exactly-Once Processing of requests, High Availability (even in case of data center disasters) and no downtime _ever_. For this, all containers are linked in one big software-defined network in which the contained services form one big distributed grid, and in which new services register themselves during deploy time…
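As a rough sketch of what an Ignite-in-Docker setup like this can look like (not the speaker's actual platform; the container hostnames, ports and names below are hypothetical placeholders), each container can start an Ignite node configured so that members on a shared network discover one another and join the grid automatically:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

import java.util.Arrays;

public class GridNode {
    public static void main(String[] args) {
        // Hypothetical container hostnames on a shared Docker network;
        // in practice these could come from service discovery or env vars.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
                "ignite-node-1:47500..47509",
                "ignite-node-2:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi().setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setIgniteInstanceName("micro-grid")
                .setDiscoverySpi(discovery);

        // Starting the node makes this container join the distributed grid;
        // new containers started with the same configuration join dynamically.
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Nodes in grid: " + ignite.cluster().nodes().size());
    }
}
```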
Chitij Chauhan, Subject Matter Expert - Distributed Databases at Incedo
There are plenty of in-memory distributed databases on the market, such as MemSQL, VoltDB, SAP HANA, Exasol, Apache Ignite, Altibase, and Aerospike. Some of these fall under the NewSQL category, while others are NoSQL products. With so many offerings available, finding the right in-memory database for a given application is a challenge for organizations. The idea of this presentation is to showcase how to choose the right product for the application's needs. We will drill down into the respective characteristics…
Manuel Mourato, Big Data Engineer at Nomad Tech
Stock trading is a harsh reality for many investors, who often need to make critical decisions in a very short window of time. In a landscape where prices are constantly updated and investing at the right moment makes all the difference, having the right tools to collect, process, and analyze large volumes of data quickly becomes very important. This presentation introduces an architecture for a distributed application with in-memory capabilities to collect, process, classify, and visualize different equities…
Alexander Ermakov, CTO at Arenadata
In-memory computing technologies have already changed many spheres of IT, from relational databases to data science solutions. But can standalone in-memory grids accelerate traditional enterprise data warehouses (DWH)? In this talk we will show how Apache Ignite can be used together with the world's first open-source massively parallel processing database, Greenplum DB, to accelerate queries by 12x. Subtopics of the talk:
- Why traditional DWHs need an in-memory grid
- Integrating Apache Ignite with Greenplum DB: how to integrate one cluster system with another
- Using the power…
Ravikanth Durgavajhala, SSD Solutions Architect at Intel Corp
Intel Memory Drive Technology (IMDT) is a software-defined memory (SDM) product that allows for the expansion of system memory beyond DRAM by defining some of the PCIe-based Intel Optane SSD capacity as memory, instead of as storage. It executes directly on the hardware and below the operating system, and allows system memory to be assembled from DRAM and the PCIe-based Intel Optane SSD. It leverages the economic benefits of SSDs and operates transparently as volatile system memory. It is optimized for up to 8x system memory expansion over installed DRAM capacity and provides ultra-low…
William Bain, CEO at ScaleOut Software, Inc.
In use cases ranging from IoT to ecommerce, an ongoing challenge for stream-processing applications is to extract important insights from real-time systems as fast as possible and then generate effective feedback that optimizes operations or avoids costly failures. Unlike popular software platforms for streaming analytics (e.g., Apache Storm, Flink, Spark Streaming, and legacy CEP), which focus on extracting value from unfiltered data streams, in-memory data grids (IMDGs) have opened the door to stateful stream processing that correlates event streams by data sources using a “digital twin”…
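For readers unfamiliar with the pattern, here is a minimal, generic sketch of the digital-twin idea on an in-memory data grid. It is written against the open-source Apache Ignite API purely for illustration rather than ScaleOut's product, and the cache, class, and field names are hypothetical: each data source keeps one state object in the grid, and every event is applied next to that state.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;

import java.io.Serializable;

public class DigitalTwinSketch {
    /** Hypothetical per-device state ("digital twin") kept in the grid. */
    static class DeviceTwin implements Serializable {
        double lastReading;
        long eventCount;
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, DeviceTwin> twins = ignite.getOrCreateCache("twins");

        // An incoming event from one data source.
        String deviceId = "sensor-17";
        double reading = 98.6;

        // The update runs where the twin's state lives, so raw events never
        // have to be shipped out to a separate analytics job.
        CacheEntryProcessor<String, DeviceTwin, Void> applyEvent = (entry, args) -> {
            DeviceTwin twin = entry.exists() ? entry.getValue() : new DeviceTwin();
            twin.lastReading = reading;
            twin.eventCount++;
            entry.setValue(twin);  // write the updated state back into the grid
            return null;
        };
        twins.invoke(deviceId, applyEvent);

        System.out.println("Events seen for " + deviceId + ": "
                + twins.get(deviceId).eventCount);
    }
}
```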
Colin MacNaughton, Head of Engineering at Neeve Research
Big data is moving to a new stage of maturity — one that promises even greater business impact and industry disruption over the course of the next few years. As big data initiatives mature, organizations are now combining the agility of big data processes with the scale of artificial intelligence (AI) capabilities to accelerate the delivery of business value. The convergence of big data with AI has emerged as the single most important development that is shaping the future of how firms drive business value from their data and analytics capabilities. The availability of greater volumes and…
Alparslan Avci, Solutions Architect at Hazelcast
In-memory data grids aim to provide simple, scalable and redundant solutions for enterprise businesses. In today's world, enterprise applications keep most of their business-critical data on in-memory data grids rather than persistent stores in order to gain more performance with scalability. However, application life in a distributed environment is not easy because of the hard constraints. Disasters such as unexpected software or hardware crashes, power losses, and network issues can make it even harder to keep the data consistent. In this presentation, we will go over a bunch of disaster…
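As a small illustration of the kind of redundancy settings involved (a sketch against the open-source Hazelcast 3.x API; the map name and backup counts are examples only, not recommendations from the talk), backups can be configured per data structure so that a single member failure does not lose entries:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class RedundantCache {
    public static void main(String[] args) {
        Config config = new Config();
        // One synchronous and one asynchronous backup per partition, so a
        // single member crash does not lose data (values are illustrative).
        config.getMapConfig("orders")
              .setBackupCount(1)
              .setAsyncBackupCount(1);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<String, String> orders = hz.getMap("orders");
        orders.put("order-42", "pending");
        System.out.println("order-42 -> " + orders.get("order-42"));
    }
}
```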
David Rolfe, Director of Solutions Engineering at VoltDB Inc
Everything scales, at least at the whiteboard or PowerPoint level. But in reality scaling is never easy, and there are few things more painful than being ‘behind the curve’ as your system volumes increase. In this presentation the speaker will share the lessons he has learned from working with systems that have grown massively over time. Topics include:
- Things you need to know before you think about running at scale
- Scaling at the architectural level
- Scaling write-centric workloads
- Scaling in the real world
Lucas Beeler, Senior Consultant at GridGain Systems, Inc.
GridGain and Apache Ignite customers follow a capability-maturity model as they become more familiar with GridGain’s in-memory computing platform. When customers encounter the product, they often deploy it for simple caching use cases. At this simplest phase of maturity, the business case for GridGain is speed: the product is used to accelerate existing data access pipelines. But GridGain and Ignite offer customers more than caching. And as customers’ capability and maturity with the product grow, they begin to use the product’s other features, like Compute Grid and Service Grid. Some customers,…
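A tiny sketch of the two ends of that maturity curve, written against the open-source Apache Ignite APIs that GridGain builds on (the cache name and the work being broadcast are illustrative only, not examples from the talk):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class MaturityPhases {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Early phase: simple caching to accelerate an existing data access path.
        IgniteCache<Integer, String> customers = ignite.getOrCreateCache("customers");
        customers.put(1, "Alice");
        System.out.println("Cached value: " + customers.get(1));

        // Later phases: send the work to the data with the Compute Grid
        // instead of pulling the data out to the application.
        ignite.compute().broadcast(() -> System.out.println("Hello from a grid node"));
    }
}
```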