Application caches in modern systems must be manually tuned and sized in response to changing application workloads. A balance must be struck between cost, performance, and the revenue lost to cache-sizing mismatches. However, caches are inherently nonlinear systems, making this exercise akin to solving a maze in the dark.
Until now! The industry’s first self-tuning, auto-scaling solution for application caches leverages breakthrough predictive algorithms. Imagine your cache tuning itself in real time in response to changing workloads, unlocking major improvements in cost, efficiency, performance, and observability.
In this session, we cover how an auto-scaling cache delivers impressive gains in performance and observability, together with a live demo of a multi-tiered cache scaling under a changing workload. Topics include:
Why do static caches leave so much performance on the table?
What is auto-scaling for caches, and how does the cache automatically adapt to changing application workloads?
The efficiency, QoS, performance SLAs/SLOs and cost tradeoffs that auto-scaling caches enable
A live demo of an auto-scaling multi-tiered cache as the workload changes
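To make the adaptation idea concrete, here is a minimal sketch of a cache that resizes itself from observed hit rate. This is purely illustrative: the fixed thresholds and doubling/halving policy are hypothetical stand-ins, not the predictive algorithms covered in the session.

```python
from collections import OrderedDict

class AutoScalingLRUCache:
    """Toy LRU cache that resizes itself based on observed hit rate.

    Illustrative sketch only: the hit-rate thresholds and fixed
    grow/shrink steps are hypothetical, not a real product's policy.
    """

    def __init__(self, capacity=4, window=10):
        self.capacity = capacity
        self.window = window          # requests between resize decisions
        self.hits = self.requests = 0
        self.data = OrderedDict()

    def get(self, key):
        self.requests += 1
        value = None
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)   # mark as most recently used
            value = self.data[key]
        if self.requests % self.window == 0:
            self._maybe_resize()
        return value

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        while len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

    def _maybe_resize(self):
        hit_rate = self.hits / self.requests
        if hit_rate < 0.5:                  # struggling: grow the cache
            self.capacity *= 2
        elif hit_rate > 0.9 and self.capacity > 4:
            self.capacity //= 2             # over-provisioned: shrink
            while len(self.data) > self.capacity:
                self.data.popitem(last=False)
        self.hits = self.requests = 0       # reset the observation window
```

A scan of cold keys drives the hit rate down and the capacity up; a hot, repetitive workload drives the hit rate up and lets the cache shrink back, trading a little performance headroom for cost.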
Irfan is an inventor on more than 40 granted patents. He has published at ACM SoCC, USENIX FAST, USENIX ATC, and IEEE IISWC, and has chaired HotStorage, HotCloud, and VMware’s R&D Innovation Conference. He serves on the steering committees for USENIX HotStorage and HotCloud, has served on program committees for FAST, MSST, HotCloud, and HotStorage, among others, and has reviewed for ACM Transactions on Storage. Irfan earned his pink tie from the University of Waterloo.