“The report of my death was an exaggeration.” – Mark Twain, 1897
Yes, I have been caught up in the same wave as many of us who live and breathe data daily. I research and explore the ever-growing list of new tools and technologies that support “Big Data”. I have been Splunked, Mongoed, Oozied, Spark’d and otherwise Mahouted to the nth degree! I have also heard that the database is all but dead, with new technologies banging at the gate to take over. What I failed to notice, though, was the huge advances Teradata has been making in its hardware and software packages on the relational database side of the equation.
My first experience with Teradata was back in the late 1990s, when I got access to a 5200 WorldMark platform. At the time this was a massive machine, with four Xeon PIII 500 MHz processors and a rating of 4.5 T-Perf per node. I am not sure how many nodes we had, but I am sure it was more than 10. What impressed me most, after moving off Red Brick, was that I could submit complex queries that compared about 10,000 rows against an overall base of 20 million, run some analytics, and have results back in seconds to minutes.
Today, the Teradata Active Enterprise Data Warehouse platform is the 6750H model, which pairs 12-core 2.7 GHz Xeon processors with combinations of solid-state and hard disk drives for a T-Perf rating of 240 per node. In rough terms, one node of the current generation is more powerful than 53 of the 5200 nodes I first used about 18 years ago (240 ÷ 4.5 ≈ 53). Of course there have been many more advancements than just the CPU and disk, including InfiniBand interconnects, large memory, and data compression cards in some models.
Needless to say, there is significant power in these new cabinets, and the ability to do so much more in a much smaller footprint is quite appealing to the many customers trying to reduce their data centre footprints. But as powerful as the hardware is, it takes the Teradata RDBMS software to make it really shine!
In the late 1990s, DBMS operations were fairly basic by today’s standards. Sure, you could do counts, averages and groupings, but more advanced techniques such as windowed OLAP functions and standard deviations were simply not feasible. I look back on those days the same way I look back at travel by horse-drawn buggy! There have been countless advances in natively supported SQL operations, and enhancement after enhancement to make them run in a high-performance environment. Teradata 15.0 brings dozens of new features supporting the key pillars of performance, ease of use, enterprise fit and quality. I have chosen just a few that I find extremely exciting!
JSON Support
The Internet of Things (IoT) is speaking, and its predominant language is JSON (JavaScript Object Notation). JSON is a semi-structured format that delivers data as documents whose structure is defined through name/value pairs. This allows the structure to be applied at query time rather than when you load the database, removing the rigid constraints that come with fully structured data and allowing more agile change. Teradata has implemented native JSON storage and the ability to query directly on this data, applying the structure on demand.
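To make the “structure on demand” idea concrete, here is a minimal sketch in plain Python (not Teradata’s JSON syntax): two documents with different fields sit side by side, and the fields we care about are named only at query time. The field names and values are invented for illustration.

```python
import json

# Two sensor readings from an IoT feed; the second document carries a
# field the first lacks -- no up-front schema change is needed.
docs = [
    '{"sensor": "t-101", "temp_c": 21.5}',
    '{"sensor": "t-102", "temp_c": 19.0, "humidity": 0.55}',
]

# Structure on demand: the shape is applied when we read, not when we load.
readings = [json.loads(d) for d in docs]

# A "query" that names only the fields it needs at query time.
warm = [r["sensor"] for r in readings if r["temp_c"] > 20]
print(warm)  # ['t-101']
```

The point is that the extra `humidity` field costs nothing until some query asks for it, which is exactly the agility the post describes.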
Query Grid
With Teradata 15.0 the foundations of Query Grid are being established, and it will continue to grow over the next couple of years. Simply put, Query Grid enables you to access data on other physical data stores, query it natively in place, and return the results to be processed together with data from other systems. Before Query Grid, Teradata introduced SQL-H on both the Teradata and Aster platforms. This let users access data stored in Hadoop and bring it seamlessly into the system it was called from. There were limitations: all the data coming back from Hadoop would land in Teradata spool, and this could be huge! Now the same query passes through the optimizer, which tells the Hadoop system the filtering conditions so that only the required rows come back to Teradata.
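The improvement described here is essentially predicate pushdown. The toy sketch below is not Teradata code; the names and data are invented purely to show why shipping the filter to the remote system moves far fewer rows than pulling everything into spool first.

```python
# Toy remote data set standing in for a Hadoop table.
hadoop_rows = [
    {"id": i, "region": "APAC" if i % 2 else "EMEA"} for i in range(1000)
]

def pull_then_filter(rows, pred):
    # SQL-H style: every remote row is transferred into spool, then filtered.
    spool = list(rows)
    return [r for r in spool if pred(r)], len(spool)  # all 1000 rows shipped

def pushdown(rows, pred):
    # Query Grid style: the filter runs remotely, so only qualifying rows
    # cross the wire.
    shipped = [r for r in rows if pred(r)]
    return shipped, len(shipped)

pred = lambda r: r["region"] == "APAC"
result_old, moved_old = pull_then_filter(hadoop_rows, pred)
result_new, moved_new = pushdown(hadoop_rows, pred)
print(moved_old, moved_new)  # 1000 500
```

Both paths return the same answer; the difference is the 1000-row versus 500-row transfer, which is the whole argument for letting the optimizer pass the filtering conditions through.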
Scripting and Language Support
Have you ever wanted to run a Python, Ruby or Perl script across data in Teradata without extracting it first? Well, now you can. The table operator has been extended to support these languages natively, provided their libraries are installed. This gives developers and analysts direct access to the parallel processing of the Teradata environment, letting them process data in place.
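As a rough sketch of what the script side of such a call can look like, here is a small Python program written to a simple line-oriented contract: one input row per line on stdin, one output row per line on stdout. The tab delimiter, column layout and conversion are all assumptions for illustration, not a documented Teradata default.

```python
import sys

def transform(line):
    # Assumed row layout: sensor id and a Celsius temperature, tab-delimited.
    sensor, temp = line.rstrip("\n").split("\t")
    # Convert Celsius to Fahrenheit before handing the row back.
    return f"{sensor}\t{float(temp) * 9 / 5 + 32:.1f}"

def main(stdin=sys.stdin, stdout=sys.stdout):
    # Each line read becomes one input row; each line written, one output row.
    for line in stdin:
        stdout.write(transform(line) + "\n")

if __name__ == "__main__":
    main()
```

Because the database invokes one copy of the script per unit of parallelism and feeds each its own slice of rows, a stateless stdin-to-stdout filter like this scales with the system without the script knowing anything about the parallelism.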
So I have to agree with Mr. Twain and say that the reports of the death of the relational database are an exaggeration. Not only is the Teradata RDBMS alive and well, it continues to innovate and advance. The relational structure continues to play its cornerstone role in the Unified Data Architecture, supporting all kinds of data with expanded features and functions for converting that data into information and knowledge.
John Berg is the lead Principal Consulting Architect for Teradata Australia/New Zealand, guiding market-leading companies in the region as they advance their lead over the competition. John’s experience spans industry verticals including hospitality, banking, retail, e-commerce and government.