Does the data model fit for in-memory computing – Tales from SAP analytical software for financial industries
SAP built its classical analytical financial services software on the “NetWeaver on anyDB” concept. With SAP’s HANA strategy, the company pushed the migration of these legacy systems to the new database and column-store technology. However, the limits of this approach — based on single SELECTs and extensive joins over relational data models — soon became apparent.
The talk focuses on the question of how the data model and access APIs have to support the underlying in-memory database technology, in this case SAP HANA. It will show which remedies the software vendor can provide and which workarounds customers have to implement themselves. Finally, it will cover how the data model and functions of the next-generation analytical software for financial industries are designed to fit the new technology.
To provide examples, I will trace the path of SAP’s analytical software for financial industries from the anyDB concept, through HANA 1 acceleration, to a completely new implementation based on the HANA 2 Data Management Suite.
The talk draws on experience from HANA migration projects as well as implementation projects on the newest HANA Data Management Suite. Technical details such as preferred joins, buffering frameworks, and temporal versioning will be covered.
The key takeaway for the audience is the importance of a fitting data model, with corresponding APIs, that goes hand in hand with in-memory computing (and which may well be suboptimal in off-the-shelf commercial software). Attendees will benefit from lessons learned in implementation projects and be better able to realistically evaluate the use of modern in-memory technologies.