Today I finished the MOOC at open.sap.com about “In-memory Data Management”. For me, as a not too technical guy, this was very enlightening.
For one thing, when talking about HANA there is always mention of the compression of the database, and thus of less HANA hardware being needed. I can understand that once you have HANA it is no longer necessary to have aggregates, but I still didn’t understand the amount of compression or the way it was achieved. Today I learned that it is done by storing the data in a columnar way with a sorted dictionary, similar to how dimensions work for a SAP BW InfoCube, only a bit more efficient (and those are not necessarily sorted). Compression is in fact a very important factor for performance anyway, since it reduces the number of operations on main memory (that is the in-memory part).
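The idea of a sorted dictionary can be sketched in a few lines of Python. This is only an illustration of the general technique, not HANA’s actual internals: each distinct value is stored once in a sorted dictionary, and the column itself becomes a list of small integer codes.

```python
# Minimal sketch of dictionary compression for one column
# (illustrative only, not SAP HANA's real storage format).

def dictionary_encode(values):
    """Store a column as a sorted dictionary plus small integer codes."""
    dictionary = sorted(set(values))                 # each distinct value once, sorted
    index = {v: i for i, v in enumerate(dictionary)}
    codes = [index[v] for v in values]               # one small integer per row
    return dictionary, codes

def dictionary_decode(dictionary, codes):
    """Reconstruct the original column from dictionary and codes."""
    return [dictionary[c] for c in codes]

cities = ["Berlin", "Amsterdam", "Berlin", "Berlin", "Utrecht", "Amsterdam"]
dictionary, codes = dictionary_encode(cities)
print(dictionary)  # ['Amsterdam', 'Berlin', 'Utrecht']
print(codes)       # [1, 0, 1, 1, 2, 0]
assert dictionary_decode(dictionary, codes) == cities
```

With many repeated values (typical for business data such as country codes or statuses), storing a few bits per code instead of the full value is where the large compression factors come from.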
One of the other things I found interesting is the way the general use (read/write) of databases was analysed, and how, based on that analysis, it was decided to store the data in main memory differently (a columnar data layout as opposed to a row data layout). There are drawbacks to this method (reconstructing a row is expensive), but experience shows this will in general be the better way.
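The trade-off between the two layouts can be sketched with a tiny example table (names and values are made up for illustration): an analytical scan only needs to touch one column, while reconstructing a full row has to visit every column.

```python
# Sketch of a columnar layout versus row layout
# (illustrative only, not how HANA stores data internally).

rows = [
    ("Anna", "NL", 34),
    ("Bram", "NL", 28),
    ("Cem",  "DE", 41),
]

# Columnar layout: one contiguous list per attribute.
columns = {
    "name":    [r[0] for r in rows],
    "country": [r[1] for r in rows],
    "age":     [r[2] for r in rows],
}

# An analytical query (e.g. average age) scans just one column...
avg_age = sum(columns["age"]) / len(columns["age"])

# ...while reconstructing a single row must gather from every column,
# which is the performance drawback mentioned above.
def reconstruct(i):
    return (columns["name"][i], columns["country"][i], columns["age"][i])

print(round(avg_age, 2))  # 34.33
print(reconstruct(2))     # ('Cem', 'DE', 41)
```

Since most database workloads turn out to be read-dominated, optimising for the column scan at the cost of slower row reconstruction is, on balance, the better deal.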
The most important thing I learned today, though, is that HANA is not a mythical magic box. It remains a thing built by people, and thus will in some cases also be disappointing.
In conclusion: HANA is an impressive piece of technology, and I think it will change the way we look at databases in the near future.