No one can accuse Scott M Fulton, author of a recent article entitled "SAP's HANA: Accelerating Your Apps by 6 Orders of Magnitude", of not having a sense of humour; his own profile states that "many of the world's recycled goods and paper products, from packing containers to park benches, are made from Scott's manuscripts from the 1990s." His latest article takes this gag to its logical extreme; it isn't so much that it is destined to be recycled - more that it consists almost entirely of recycled SAP quotes. Which is a shame, because there is an interesting debate about how best to exploit the latest advances in computing hardware - so that organizations can "do more with their data" - that Fulton's article fails to examine critically. Or even at all.
To do what Fulton doesn't, we need to introduce some physics. But stay with me, I'll go slowly!
All modern computing systems are based on the "von Neumann" or "stored instruction" computing model. Instructions - and the data on which they operate - are stored in persistent storage, and both follow the same path to the processor (the component where the computation is actually performed): off the storage; across an interconnect or "bus"; into memory; back out across the bus; and, finally, into the registers on the processor.
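To make the idea concrete, here is a minimal Python sketch of a toy stored-program machine - purely illustrative, not any real processor's design. The point is simply that the "program" and the operands live in the same memory, and the processor loop fetches both over the same path.

```python
# A toy stored-program machine -- illustrative only, not any real processor's design.
memory = {
    # the "instructions": a tiny program stored as data
    "program": [("LOAD", "x"), ("ADD", "y"), ("STORE", "z")],
    # the "data": operands stored alongside the program
    "x": 2, "y": 3, "z": None,
}

def run(memory):
    accumulator = 0
    for op, operand in memory["program"]:   # fetch each instruction from memory
        if op == "LOAD":
            accumulator = memory[operand]   # fetch data over the same path
        elif op == "ADD":
            accumulator += memory[operand]
        elif op == "STORE":
            memory[operand] = accumulator
    return memory

print(run(memory)["z"])  # 5 -- swap the "program" entry and the same machine does something else
```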
The genius of the von Neumann architecture is its flexibility; the "stored instructions" are software - and we can change the functionality of the machine just by replacing or changing the software. Once upon a time, computers were hard-wired - and computer scientists had to set switches and insert patch leads to make them do different things between "runs". This is not a programming model that plays well outside secret code-cracking facilities and atomic weapon research labs stuffed full of highly-trained scientists and engineers. By contrast, the "stored-program" model pioneered by von Neumann and friends gave birth to the whole modern ICT industry in which we are all employed today. And this flexibility is arguably nowhere more important than in data warehousing and analytics where, by definition, the questions that we ask - and hence the queries that we run against the database – evolve and change continuously.
There is a catch, however. Whilst processor speeds have increased by a factor of five million over the course of the last 30 years, disk access speeds have increased only by a factor of five. Persistent storage has, historically at least, consisted of magnetized spinning platters – and making them spin faster whilst ensuring that we can still read the data encoded on them accurately turns out to be much harder than the semiconductor physics that has driven the improvements in processor performance and memory latency.
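Taking those two round figures at face value (they are the claims above, not measurements), the back-of-envelope arithmetic shows just how far apart the two curves have drifted:

```python
# Back-of-envelope only, using the round figures quoted above -- assumptions, not measurements.
processor_speedup = 5_000_000   # claimed improvement in processor speed over ~30 years
disk_speedup      = 5           # claimed improvement in disk access speed over the same period

relative_gap = processor_speedup / disk_speedup
print(f"Processors have pulled ahead of disk by a factor of {relative_gap:,.0f}")
# -> 1,000,000 -- roughly the "6 orders of magnitude" of the article's title
```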
The consequence of all of this is that when a von Neumann processor is asked to perform relatively simple calculations on large volumes of data, the processors often sit idle, waiting for the data that they need to continue working. Which is why, of course, Massively Parallel Processing (MPP) systems like Teradata systems spread those large volumes of data across multiple disks attached to multiple computers ("nodes"). If you engineer an MPP system well enough to remove the other bottlenecks, then the "I/O bandwidth" (the rate at which you can get data off the storage) available to each computing node is a function of the number of disk spindles attached to it; and in this way you can make sure that there are sufficient disk spindles available to "saturate" the computing node so that it can get the data it needs just as fast as the bus connecting the storage, memory and processor - and the physical separation between them - will allow.
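As a rough, hypothetical sizing sketch - every figure below is an illustrative assumption, not a Teradata specification - you can estimate how many spindles a node needs before the disks, rather than the processors, set the pace:

```python
# Illustrative sizing sketch -- every figure here is an assumption, not a vendor specification.
node_scan_rate_mb_s = 2_000   # assumed rate at which one node can consume data (MB/s)
spindle_rate_mb_s   = 100     # assumed sustained sequential read rate of one spindle (MB/s)

spindles_to_saturate = node_scan_rate_mb_s / spindle_rate_mb_s
print(f"Spindles needed to keep one node busy: {spindles_to_saturate:.0f}")
# With fewer spindles than this, the node waits on I/O; with more, the disks wait on the node.
```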
But what if you could just bust the storage bottleneck? Existing database management systems do, of course, exploit memory by "caching" frequently accessed data in memory – but if you could just take the persistent storage completely out of the equation by storing all the data in memory, then you could potentially improve performance by several orders of magnitude where there is a "cache miss". This is the promise of in-memory database technology – and SAP's new HANA product is the poster-child for in-memory database technology.
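The scale of the opportunity is easiest to see with the classic effective-access-time calculation; the latencies below are order-of-magnitude assumptions for illustration, not HANA measurements:

```python
# Effective access time = hit_ratio * memory_latency + miss_ratio * disk_latency.
# Latencies are order-of-magnitude assumptions, for illustration only.
memory_latency_s = 100e-9    # ~100 nanoseconds per memory access
disk_latency_s   = 5e-3      # ~5 milliseconds per random disk access

for hit_ratio in (0.90, 0.99, 1.00):
    effective = hit_ratio * memory_latency_s + (1 - hit_ratio) * disk_latency_s
    print(f"cache hit ratio {hit_ratio:.0%}: effective access ~{effective * 1e6:,.1f} µs")
# Even a 1% miss rate leaves the average access ~500x slower than pure memory --
# which is why taking disk out of the read path altogether is so attractive.
```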
Weighing the in-memory technology trade-offs
Engineering is, of course, fundamentally about trade-offs – and there are a couple of important trade-offs that have to be taken into consideration where in-memory technology is concerned.
The first issue we have to consider is persistence. Memory is not persistent; if there is a system failure, the data in memory are lost. So if we are going to build an "in-memory database", we also have to keep an additional copy of the data on persistent storage; and if we want to be able to recover data quickly after a system failure, then the redundant data will need to be stored on an expensive, high-performance, high-availability storage sub-system. Not only that, but we cannot ever really consider that data are "stored" until they are written to the persistent storage layer, so in-memory architectures don't by themselves do anything to improve load performance, which has traditionally been a very significant issue for organizations that have deployed SAP's flagship Data Warehouse solution, "Business Warehouse" (BW), as most of the industry still refers to it. (We'll discuss whether BW should actually be considered a Data Warehouse solution in part 3 of this post.)
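A minimal sketch of why the persistent copy re-enters the picture - this is the generic write-ahead-logging pattern, not SAP's actual implementation - is that a write to an in-memory store can only be acknowledged once it has also been made durable:

```python
# Generic write-ahead-logging sketch -- illustrative only, not SAP's (or anyone's) implementation.
import json

class InMemoryStore:
    def __init__(self, log_path="store.log"):
        self.data = {}                  # the "in-memory database"
        self.log = open(log_path, "a")  # the persistent copy on disk

    def put(self, key, value):
        # The write is not really "stored" until it reaches persistent storage...
        self.log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.log.flush()                # ...so we pay a disk write before acknowledging
        self.data[key] = value          # only then update the in-memory copy

store = InMemoryStore()
store.put("order-42", {"qty": 3})
# Reads run at memory speed, but every durable write still touches the storage layer --
# which is why an in-memory architecture does not, by itself, speed up loading.
```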
The second issue that we have to consider is the nature of the problem that we are trying to solve. Remember that the "von Neumann bottleneck" is acute where we are performing relatively simple calculations on large volumes of data (an "I/O bound" workload) – but if the rate-limiting factor is not the time it takes us to access the data, but rather the complexity of the calculations that we then perform on them (a "CPU-bound" workload), then storing the data in-memory to reduce "access latency" doesn't help. And since data warehousing is increasingly about sophisticated, predictive analytics, rather than just basic reporting, data warehouse workloads are becoming increasingly computationally intensive.
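A crude way to see the distinction - the row counts, scan rates and per-row costs below are all invented for illustration - is to compare the time spent fetching the data with the time spent computing on it:

```python
# Crude illustration of I/O-bound vs CPU-bound -- all numbers are assumptions.
rows            = 1_000_000_000          # rows to process
bytes_per_row   = 100
scan_rate_b_s   = 10_000_000_000         # assumed 10 GB/s delivered to the processor
cpu_ops_simple  = 10                     # per-row cost of a basic filter-and-sum report
cpu_ops_complex = 100_000                # per-row cost of scoring a predictive model
cpu_rate_ops_s  = 10_000_000_000         # assumed 10 billion simple operations/s per node

io_time          = rows * bytes_per_row / scan_rate_b_s
cpu_time_simple  = rows * cpu_ops_simple / cpu_rate_ops_s
cpu_time_complex = rows * cpu_ops_complex / cpu_rate_ops_s

print(f"basic report   : I/O {io_time:.0f}s vs CPU {cpu_time_simple:.0f}s  -> I/O-bound")
print(f"predictive run : I/O {io_time:.0f}s vs CPU {cpu_time_complex:.0f}s -> CPU-bound")
# For the second workload, eliminating access latency barely changes the elapsed time.
```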
The third issue that we need to consider is how to scale the computing system. There are essentially two ways that we can add compute power: we can "scale-up" by adding more CPUs to the system that share the same memory-space and storage (a "multi-processor" architecture); or we can "scale-out", by adding more CPUs that don't share computing resources (a "shared nothing multi-computer" architecture). Because sharing computing resources results in contention for those resources, scale-out architectures scale far better than scale-up architectures - particularly for workloads, like complex queries running over large data sets, that are not localized and cannot easily be confined to execute in a discrete partition. For this reason, just about all of the data warehouse platform vendors have adopted some variation of the multi-computer architecture.
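To make "shared nothing" concrete, here is a toy sketch - illustrative only, not any product's actual distribution scheme - of the basic idea: rows are hash-distributed across nodes that each own their own slice of the data, so adding nodes adds CPU, memory and I/O in proportion.

```python
# Toy shared-nothing sketch -- illustrative only, not any product's distribution scheme.
from collections import defaultdict

NODES = 4   # each node owns its own CPU, memory and disk; nothing is shared

def node_for(key, nodes=NODES):
    return hash(key) % nodes          # each row lives on exactly one node

# Distribute rows: every node owns -- and scans -- only its own slice of the data.
partitions = defaultdict(list)
for customer_id, amount in [(101, 20.0), (102, 35.5), (103, 12.0), (104, 99.9)]:
    partitions[node_for(customer_id)].append((customer_id, amount))

# Each node aggregates its slice independently; only small partial results are combined.
partials = [sum(amount for _, amount in rows) for rows in partitions.values()]
print(sum(partials))
```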
Moving intermediate result-sets around the multiple computing units of a shared nothing system requires us to ship data across a network, introducing a new potential bottleneck. In Teradata systems, this challenge is addressed by carefully managing – and minimising – the amount of inter-process communication and by avoiding the requirement to perform all sort/merge processing on a "co-ordinator" node. But since networks run slower than memory, if the designer of a parallel computing system doesn't go to the (significant) lengths that Teradata's original designers did to avoid unnecessary data shipping, the performance of complex workloads on a scale-out, multi-computer architecture is ultimately limited by the performance of the interconnect. In this scenario, storing the data in memory does not remove the bottleneck - it just moves it to another part of the computing system.
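A rough back-of-envelope - the bandwidth figures are assumptions chosen purely for illustration - shows why an unplanned redistribution of a large intermediate result set can swamp whatever was gained by holding the data in memory:

```python
# Back-of-envelope only -- the bandwidth figures are illustrative assumptions.
intermediate_result_gb = 500    # intermediate result set that must be redistributed
memory_bw_gb_s         = 50     # assumed memory bandwidth available to one node
network_bw_gb_s        = 1      # assumed effective per-node interconnect bandwidth

time_in_memory  = intermediate_result_gb / memory_bw_gb_s
time_on_network = intermediate_result_gb / network_bw_gb_s

print(f"reading it from memory        : ~{time_in_memory:.0f}s")
print(f"shipping it across the network: ~{time_on_network:.0f}s")
# The bottleneck hasn't gone away; it has simply moved to the interconnect.
```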
And because CPUs are getting faster more quickly than memory is getting faster, a fourth technology challenge looms just over the horizon for in-memory database technology, as a rapidly widening gap between the performance of processors and the performance of memory opens up. Those commentators that claim that "memory is the new disk" should perhaps be more careful what they wish for.
There is an even more immediate hurdle to the widespread deployment of in-memory database technology for analytics: cost. More on that issue in the next installment of this post.
And in part 3, we will discuss whether HANA enables SAP to address the micro and macro data redundancy issues that plague many BW deployments.
Director of Platform & Solutions Marketing