Shakuntala Devi is a mathematical genius with phenomenal memorisation and calculating skills. She can extract the 23rd root of a 201-digit number mentally in a matter of seconds. She once multiplied two 13-digit numbers picked at random by a computer, producing the answer in 28 seconds. What is interesting is that dictating the 26-digit answer took longer than the mental calculation itself. Sound familiar? Input/output (I/O) performance of hard disks generally lags behind processor performance!
I was fortunate to witness Shakuntala Devi's stage performance when she visited my school in India, where I grew up. As a teenage lad I watched in awe as she reeled off answers to difficult mathematical questions with sub-second responses. What inspired me the most was the ease with which she would dish out answers to date-related questions! She would spontaneously name the day of the week for any date thrown at her, in any century! Or, given a day of the week, a month and a year, she would list the dates that fell on that day. “How could she memorise all this?” I wondered. I have occasionally forgotten the birthdays of my dear ones and paid penalties for it! I thought to myself, “it is simply not possible to memorise all of this; there must be some method behind it”!
This set me on a trail of my own part-time research on calendars over the following three years. My little calendar research led me to recognise a pattern of date occurrences and to identify a method by which one could easily answer random date questions, taking into account solar cycles and the aberration of time. I even built a portable ‘Universal Calendar’, an electrical gadget (see the surface view with partially filled tables in the figure below) made from scrap materials, and applied for a patent for it in India. The calendar works as a ready reckoner and can be thought of as a three-way relational table join! However, the patent was never granted, which was probably attributable to my poor English language skills (English is my tertiary language), my techno-legal patent specification writing skills and my type-writing skills (or lack thereof!), all of which I was learning on the job on this little project of mine! Nevertheless, I figured out that it was all less to do with memorisation and more to do with developing a method of recognising, computing and presenting results efficiently.
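The details of my ‘Universal Calendar’ method are not reproduced here, but for readers curious how a day-of-week question can be answered by method rather than memory, a well-known technique in the same spirit is Zeller's congruence. A minimal sketch in Python (this is the standard published formula, not my patented method):

```python
def day_of_week(year, month, day):
    """Zeller's congruence for the Gregorian calendar.

    Returns the weekday name for a given date, computed purely
    arithmetically - no lookup tables, no memorisation.
    """
    if month < 3:           # January and February are treated as
        month += 12         # months 13 and 14 of the previous year
        year -= 1
    k = year % 100          # year within the century
    j = year // 100         # zero-based century
    h = (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    # Zeller's result h: 0 = Saturday, 1 = Sunday, ..., 6 = Friday
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

print(day_of_week(2011, 1, 1))   # prints "Saturday"
```

A mental calculator internalises exactly this kind of modular arithmetic, which is why the answers can come out in sub-second time.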
Fast forward to 2011, and we see a frenzy of activity around in-memory databases and analytics. Industry analysts expect in-memory computing to be the future for analytics, transactions and cloud applications. Emotions abound about HANA revolutionising how SAP manages data and how billions of rows of data can be processed in far less time. In-memory processing is not new: years ago, when I learnt assembler programming, I successfully implemented my first ‘Bubble Sort’ for transport routes within the limited memory capacity of a mainframe system.
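That assembler code is long gone, but the classic bubble sort it implemented is simple to sketch. In Python rather than assembler, and for illustration only:

```python
def bubble_sort(routes):
    """Classic in-place bubble sort: repeatedly swap adjacent
    out-of-order elements until a full pass makes no swaps."""
    items = list(routes)             # work on a copy
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the tail is already sorted
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:              # early exit once sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # prints [1, 2, 4, 5, 8]
```

Its appeal on a memory-constrained mainframe was precisely that it sorts in place, needing no memory beyond the data itself.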
With the explosion in the availability of enterprise, online and other sources of unstructured data, the question arises: how much memory is enough to store and process all this ever-expanding data in-memory? The answer may lead to the conclusion that in-memory computing only partially solves data warehousing needs, and that intelligent management of data is far more important. The speed of in-memory processing is one thing; cache coherency and atomicity are another when it comes to distributed computing and data warehousing that require enforcement of the ACID properties. Also, getting answers from a data warehouse to questions you have never asked before requires a more intelligent approach to data management than simply being able to locate the answer predictably in memory. All this means there is a need for intelligent data management with optimal investment, rather than simply dumping data in memory for faster processing.
Teradata’s new generation of platforms for Active Enterprise Data Warehousing is the latest innovation, combining solid state drive (SSD) and hard disk drive (HDD) technology with the industry’s only intelligent virtual storage solution, Teradata Virtual Storage (TVS), which automatically migrates data between drive types to achieve optimum performance. Teradata is the first to market and deliver virtual storage functionality, which continuously and automatically places the most frequently used “hot” data on the fastest solid state storage and the least used “cold” data on the slowest storage, without user or administrator intervention. Usage of data changes naturally as it ages, creating dramatic changes in data temperature. Any business that depends on “100% hot” data applications, ones that require continuous high-speed sense-and-respond processing capabilities, is a good candidate for the unprecedented analytic performance and truly operational business intelligence (BI) that the Teradata® Active Enterprise Data Warehouse platforms provide.
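TVS’s internals are proprietary, so as an illustration only, here is a toy sketch of the general idea of temperature-based data placement: track how often each block is accessed, keep frequently touched blocks on the fast tier, and periodically decay the counts so that ageing data cools down and migrates back to the slow tier. All names and thresholds here are hypothetical; this is not Teradata’s algorithm.

```python
from collections import defaultdict

class TieredStore:
    """Toy hot/cold placement policy (hypothetical sketch, not TVS).

    Blocks whose recent access count meets a threshold are 'hot'
    and placed on SSD; everything else stays on HDD.
    """
    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.access_counts = defaultdict(int)
        self.tier = {}                      # block_id -> "ssd" | "hdd"

    def record_access(self, block_id):
        self.access_counts[block_id] += 1
        self._rebalance(block_id)

    def _rebalance(self, block_id):
        hot = self.access_counts[block_id] >= self.hot_threshold
        self.tier[block_id] = "ssd" if hot else "hdd"

    def age(self):
        """Decay counts periodically so unused data cools down."""
        for block_id in list(self.access_counts):
            self.access_counts[block_id] //= 2
            self._rebalance(block_id)
```

For example, a block accessed three times is promoted to SSD, while after a couple of ageing cycles with no further accesses its count decays and it is demoted back to HDD, mirroring how data temperature drops as data ages.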
In summary, while I am not a prodigy like Shakuntala Devi, my little journey of researching the ‘Universal Calendar’ led me to appreciate the limits of memorisation! I see a parallel with the in-memory databases currently in vogue in data warehousing and analytics, and I hope that data warehousing professionals take notice of the unique differentiation that Teradata brings!
Sundara Raman