Memory-Centric Architecture - a New Approach to Distributed Systems

In this presentation, Dmitriy will show how to achieve the best performance and scale with the new memory-centric approach to distributed architectures. He will review traditional in-memory and disk-based systems, compare their strengths and weaknesses, and cover features such as ACID compliance, SQL compatibility, persistence, replication, security, fault tolerance, and more. He will also cover some of the most common use cases for distributed computing and analyze several large Apache Ignite and GridGain deployments.


In-Memory Computing Brings Operational Intelligence to Business Challenges

Every day, businesses face the challenge of analyzing and responding in the moment to fast-changing data, and the workloads are only getting larger. In diverse applications, including cable media, IoT, logistics, financial services, medical monitoring, and more, companies need “operational intelligence” – the ability to identify patterns and trends that impact competitiveness and then take immediate action. As legacy, database-centric software architectures have become increasingly overtaxed, in-memory computing (IMC) has stepped in to meet the need.


Things You Learn as You Massively Scale...

Everything scales, at least at the whiteboard or PowerPoint level. But in reality scaling is never easy, and there are few things more painful than being ‘behind the curve’ as your system volumes increase. In this presentation the speaker will share the lessons he has learned from working with systems that have grown massively over time. Topics include:

  • Things you need to know before you think about running at scale

  • Scaling at the architectural level


Ways to Recover In-Memory Data After a Disaster

In-memory data grids provide simple, scalable, and redundant solutions for enterprise businesses. In today's world, enterprise applications keep most of their business-critical data in in-memory data grids rather than in persistent stores in order to gain more performance along with scalability. However, life for an application in a distributed environment is not easy because of its hard constraints. Disasters such as unexpected software or hardware crashes, power losses, and network issues can make it even harder to keep the data consistent.


Speed-of-Light Faceted Search via the Oracle In-Memory Option

This talk is a real-life story about the search for a high-performance faceted search engine for one of our customers, and the findings along the way.

Before the In-Memory approach, the client's application suffered from performance degradation in both data ingestion and ad-hoc search; now it is in perfect shape.

The talk will briefly describe all the faceted search architectures and options that were tested before In-Memory, and explain why Oracle In-Memory was chosen.