Big Data Analytics

Why Should Big Data Matter to You?

Posted on: September 15th, 2015 by Marc Clark


With all the attention given to big data, it is no surprise that more companies feel pressure to explore the possibilities for themselves. The challenge for many has been the high barriers to entry. Put simply, big data has cost big bucks. Maybe even more perplexing has been uncertainty about just what big data might deliver for a given company. How do you know if big data matters to your business?

The advent of cloud-based data warehouse and analytics systems can eliminate much of that uncertainty. For the first time, it is possible to explore the value proposition of big data without the danger of drowning the business in the costs and expertise needed to get big data infrastructure up and running.


Subscription-based models replace the need to purchase expensive hardware and software with the possibility of a one-stop-shopping experience where everything—from data integration and modeling tools to security, maintenance and support—is available as a service. Best of all, the cloud makes it feasible to evaluate big data regardless of whether your infrastructure is large and well-established with a robust data warehouse, or virtually nonexistent and dependent on numerous Excel worksheets for analysis.

Relying on a cloud analytics solution to get you started lets your company test use cases, find what works best, and grow at its own pace.

Why Big Data May Matter

Without the risk and commitment of building out your own big data infrastructure, your organization is free to explore the more fundamental question of how your data can influence your business. To figure out if big data analytics matters to you, ask yourself and your company a few questions:

  • Are you able to take advantage of the data available to you in tangible ways that impact your business?
  • Can you get answers quickly to questions about your business?
  • Is your current data environment well integrated, or a convoluted and expensive headache?

For many organizations, the answer to one or more of these questions is almost certainly a sore point. This is where cloud analytics offers alternatives, giving you the opportunity to galvanize operations around data instead of treating data and your day-to-day business as two separate things. The ultimate promise of big data is not one massive insight that changes everything. The goal is to create a ceaseless conveyor belt of insights that impact decisions, strategies, and practices up, down, and across the operational matrix.

The Agile Philosophy for Cloud Analytics

We use the word agile a lot, and cloud analytics embraces that philosophy in important new ways. In the past, companies invested a lot of time, effort, and money in building infrastructure to integrate their data and create models, only to find themselves trapped in an environment that doesn’t suit their requirements.

Cloud analytics provides a significant new path. It's a manageable approach that enables companies to get to the important questions without bogging down in technology, and to really figure out what value is lurking in their data and what its impact might be.

To learn more, download our free Enterprise Analytics in the Cloud eBook.

Big Data Success Starts With Empowerment: Learn Why and How

Posted on: September 1st, 2015 by Chris Twogood


As my colleague Bill Franks recently pointed out on his blog, there is often the perception that being data-driven is all about technology. While technology is indeed important, being data-driven actually spans a lot of different areas, including people, big data processes, access, a data-driven culture and more. In order to be successful with big data and analytics, companies need to fundamentally embed it into their DNA.

To be blunt, that level of commitment simply must stem from the top rungs of any organization. This was evident when Teradata recently surveyed 316 senior data and IT executives: the commitment to big data was far more apparent at companies where CEOs personally focus on big data initiatives, with over half of those respondents citing big data as the single most important way to gain a competitive advantage.

Indeed, industries with the most competitive environments are the ones leading the analytics push. These companies simply must find improvements, even if the needle is only being moved in the single digits with regards to things like operational costs and revenue.

Those improvements don’t happen without proper leadership, especially since a data-driven focus impacts just about all facets of the business -- from experimentation to decision-making to rewarding employees. Employees must have access to big data, feel empowered with regards to applying it and be confident in their data-driven decisions.

In organizations where being data-driven isn’t embedded in the DNA, someone may make a decision and attempt to leverage a little data. But if they don’t feel empowered by the data’s prospects and aren’t confident in the data, they will spend a lot of cycles seeking validation. A lot of time will be spent simply ensuring they have the right data and accurate data, that they are actually making the right decision based on it, and that they will be backed up once that decision is made.

There is a lot of nuance to being data-driven, of course. While all data has value, there are many levels to that value – the challenge generally lies in recognizing that value and extracting it. Our survey confirmed, for instance, just how hot location data is right now, as organizations work to understand the navigation of their customers in order to deliver relevant communication.

Other applications of data, according to the survey, include the creation of new business models, the discovery of new product offers, and the monetization of data to external companies. But that’s just the tip of the iceberg. Healthcare, for example, is an up-and-coming industry with regards to data usage. An example is better understanding the path to surgery -- breaking down the four or five steps most important to achieving a better patient outcome.

But whether you’re working in a hospital or a hot startup, and working to carve out more market share or improve outcomes for patients, the fundamentals we’ve been discussing here remain the same. Users must be empowered and confident in order to truly be data-driven -- and they’re not going to feel that way unless those at the top are leading the way.


Pluralism and Secularity In a Big Data Ecosystem

Posted on: August 25th, 2015 by Guest Blogger


Solutions around today's analytic ecosystem are too often technically driven, without focusing on business value. The buzzwords tend to gloss over the realities of implementation and cost of ownership. I challenge you to view your analytic architecture using pluralism and secularity. Without such a view of this world, your resume will fill out nicely but your business value will suffer.

In my previous role, prior to joining Teradata, I was given the task of trying to move "all" of our organization’s BI data to Hadoop. I will share my approach, and how best-in-class solutions come naturally when pluralism and secularity are used to support a business-first environment.

Big data has exposed some great insights into what we can, should, and need to do with our data. However, this space is filled with radical opinions and the pressure to "draw a line in the sand" between time-proven methodologies and what we know as "big data." Some may view these spaces moving in opposite directions; however, these spaces will collide. The question is not "if" but "when." What are we doing now to prepare for this inevitability? Hadapt seems to be moving in the right direction in terms of leadership between the two spaces.

Relational Databases
I found many of the data sets in relational databases to be lacking in structure, highly transient, and loosely coupled. Data scientists needed to have quick access to data sets to perform their hypothesis testing.

Continuously requesting that IT rerun their ETL processes was highly inefficient. A data scientist once asked me, "Why can't we just dump the data in a Linux mount for exploration?" Schema-on-write was too restrictive, as the data scientists could not predefine the attributes of a data set before ingestion. As the data sets became more complex and unstructured, the ETL processes became exponentially more complicated and performance suffered.

I also found during this exercise that my traditional BI analysts struggled to formulate questions about the data. One of the reasons was that the business did not know what questions to ask. This is a common challenge in the big data ecosystem. We are used to knowing our data and being able to come up with incredible questions about it. The BI analyst's world has been disrupted: as Ilya Katsov has noted in one of his blog posts, they now need to ask "What insights/answers do I have about my data?"

The product owner of Hadoop was convinced that the entire dataset should be hosted on Amazon Web Services (S3), which would allow our analytics (via Elastic MapReduce) to perform at incredible speeds. However, due to various ISO guidelines, the data sets had to be encrypted at rest and in transit, which degraded performance by approximately 30 percent.

Without an access path model, logical model, or unified model, business users and data scientists were left with little appetite for unified analytics. Data scientists were left to their own guidelines for integrated/federated/governed/liberated post-discovery analytical sets.

Communication with the rest of the organization became an unattainable goal. The models that came out of discovery were not federated across the organization, as there was a disconnect between the data scientists, data architects, Hadoop engineers, and data stewards -- who all spoke different languages. Data scientists were creating amazing predictive models while, at the same time, data stewards were looking for tools to help them provide predictive insight into the SAME DATA.

Using NoSQL for a specific question on a dataset required a new collection set. Maintaining and governing the numerous collections became a burden. There had to be a better way to answer many questions without the number of collections instantiated growing linearly with them. The answer may be within access path modeling.

Another challenge I faced was when users wanted a graphical representation of the data and the embedded relationships or lack thereof. Are they asking for a data model? The users would immediately say no, since they read in a blog somewhere that data modeling is not required using NoSQL technology.

At the end of this entire implementation I found myself needing to integrate these various platforms for the sake of providing a business-first solution. Maybe the line in the sand isn't a business-first approach? Those who embrace pluralism (a condition or system in which two or more states, groups, principles, sources of authority, etc., coexist) and secularity (not being devoted to a specific technology or data 'religion') within their analytic ecosystem can truly deliver a business-first solution approach while avoiding the proverbial "silver bullet" architecture.

In my coming post, I will share some of the practices for access path modeling within Big Data and how it supports pluralism and secularity within a business-first analytic ecosystem.

Sunile Manjee

Sunile Manjee is a Product Manager in Teradata’s Architecture and Modeling Solutions team. Big Data solutions are his specialty, along with the architecture to support a unified data vision. He has over 12 years of IT experience as a Big Data architect, DW architect, application architect, IT team lead, and 3gl/4gl programmer.

Optimization in Data Modeling 1 – Primary Index Selection

Posted on: July 14th, 2015 by Guest Blogger


In my last blog I spoke about the decisions that must be made when transforming an Industry Data Model (iDM) from Logical Data Model (LDM) to an implementable Physical Data Model (PDM). However, being able to generate DDL (Data Definition Language) that will run on a Teradata platform is not enough – you also want it to perform well. While it is possible to generate DDL almost immediately from a Teradata iDM, each customer’s needs mandate that existing structures be reviewed against data and access demographics, so that optimal performance can be achieved.

Having detailed data and access path demographics during PDM design is critical to achieving great performance immediately, otherwise it’s simply guesswork. Alas, these are almost never available at the beginning of an installation, but that doesn’t mean you can’t make “excellent guesses.”

The single most influential factor in achieving PDM performance is proper Primary Index (PI) selection for warehouse tables. Data modelers focus on entity/table Primary Keys (PKs), since the PK is what defines uniqueness at the row level. Because of this, a lot of physical modelers tend to implement the PK as a Unique Primary Index (UPI) on each table by default. But one of the keys to Teradata’s great performance is that it uses the PI to physically distribute data within a table across the entire platform to optimize parallelism. Each processor gets a piece of the table based on the PI, so rows from different tables with the same PI value are co-resident and do not need to be moved when two tables are joined.

In a Third Normal Form (3NF) model no two entities (outside of super/subtypes and rare exceptions) will have the same PK, so if chosen as a PI, it stands to reason that no two tables share a PI and every table join will require data from at least one table to be moved before a join can be completed – not a solid performance decision to say the least.

The iDMs have preselected PIs largely based on identifiers common across subject areas (e.g., Party Id), so that all information regarding that ID will be co-resident and joins will be AMP-local. These non-unique PIs (NUPIs) are a great starting point for your PDM, but again they need to be evaluated against customer data and access plans to ensure that both performance and reasonably even data distribution are achieved.

Even data distribution across the Teradata platform is important, since skewed data can contribute both to poor performance and to space allocation problems (run out of space on one AMP, run out of space on all). However, it can be overemphasized to the detriment of performance.

Say, for example, a table has a PI of PRODUCT_ID, and there are a disproportionate number of rows for several Products, causing skewed distribution. Altering the PI to the table PK instead will provide perfectly even distribution, but remember: when joining to that table, if all elements of the PK are not available, then the rows of the table will need to be redistributed, most likely by PRODUCT_ID.

This puts them back under the AMP where they were in the skewed scenario. This time, instead of a “rest state” skew, the rows will skew during redistribution, and this will happen every time the table is joined to – not a solid performance decision. Optimum performance can therefore be achieved with sub-optimum distribution.
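To make the distribution trade-off concrete, here is a minimal Python sketch (an illustration of hash distribution in general, not Teradata internals) that spreads hypothetical sales rows across a set of AMPs twice: once using a skewed PRODUCT_ID as the PI and once using the unique PK. The table shape, column names, and AMP count are made up for the example.

```python
import hashlib
from collections import Counter

AMPS = 8  # hypothetical number of AMPs (parallel units)

def amp_for(value, amps=AMPS):
    """Map a primary index value to an AMP the way a hash-distributed
    system would: hash the value, then take it modulo the AMP count."""
    digest = hashlib.md5(str(value).encode()).hexdigest()
    return int(digest, 16) % amps

# Hypothetical sales rows: (sale_id, product_id); one hot product dominates.
rows = [(i, "P-001" if i % 10 < 7 else f"P-{i % 50:03d}") for i in range(10_000)]

def skew(counts):
    """Skew factor: rows on the fullest AMP divided by the average per AMP."""
    avg = sum(counts.values()) / AMPS
    return max(counts.values()) / avg

by_product = Counter(amp_for(pid) for _, pid in rows)  # PI = PRODUCT_ID (NUPI)
by_pk = Counter(amp_for(sid) for sid, _ in rows)       # PI = PK (UPI)

print("skew with PI = PRODUCT_ID:", round(skew(by_product), 2))  # heavily skewed
print("skew with PI = PK:        ", round(skew(by_pk), 2))       # close to 1.0
```

The PK gives near-perfect distribution at rest, but as described above, every join that only supplies PRODUCT_ID would redistribute the rows by that value and recreate the same skew at query time.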

iDM tables relating two common identifiers will usually have one of the IDs pre-selected as a NUPI. In some installations the access demographics will show that the other ID may be the better choice. If so, change it! Or it may leave you with no clear choice, in which case picking one is almost assuredly better than changing the PI to a composite index consisting of both IDs, as this will only result in a table that is no longer co-resident with any table indexed by either of the IDs alone.

There are many other factors that contribute to achieving optimal performance of your physical model, but they all pale in comparison to a well-chosen PI. In my next blog we’ll look at some more of these and discuss when and how best to implement them.


Jake Kurdsjuk is Product Manager for the Teradata Communications Industry Data Model, purchased by more than one hundred Communications Service Providers worldwide. Jake has been with Teradata since 2001 and has 25 years of experience working with Teradata within the Communications Industry, as a programmer, DBA, Data Architect and Modeler.


In advance of the upcoming webinar Achieving Pervasive Analytics through Data & Analytic Centricity, Dan Woods, CTO and editor of CITO Research, sat down with Clarke Patterson, senior director, Product Marketing, Cloudera, and Chris Twogood, vice president of Product and Services Marketing, Teradata, to discuss some of the ideas and concepts that will be shared in more detail on May 14, 2015.


Having been briefed by Cloudera and Teradata on Pervasive Analytics and Data & Analytic Centricity, I have to say it’s refreshing to hear vendors talk about WHY and HOW big data is important in a constructive way, rather than offering platitudes and jumping straight into the technical details of the WHAT, which is so often the case.

Let me start by asking you both, in your own words, to describe Pervasive Analytics and Data & Analytic Centricity, and to explain why this is an important concept for enterprises to understand.


During eras of global economic shifts, there is always a key resource discovered that becomes the spark of transformation for the organizations that can effectively harness it. Today, that resource is unquestionably ‘data’. Forward-looking companies realize that to be successful, they must leverage analytics in order to provide value to their customers and shareholders. In some cases they must package data in a way that adds value and informs employees, or their customers, by deploying analytics into decision-making processes everywhere. This idea is referred to as pervasive analytics.

I would point to the success that Teradata’s customers have had over the past decades in making analytics pervasive throughout their enterprises. The spectrum in which these customers have gained value is comprehensive: from business intelligence reporting and executive dashboards, to advanced analytics, to enabling front-line decision makers, to embedding analytics into key operational processes. And while those opportunities remain, the explosion of new data types and the breadth of new analytic capabilities are leading successful companies to recognize the need to evolve the way they think about data management and processes in order to harness the value of all their data.


I couldn’t agree more. It’s interesting, now that we’re several years into the era of big data, to see how different companies have approached this opportunity, which really boils down to two approaches. Some companies have asked what they can do with the newer technology that has emerged, while others define a strategic vision for the role of data and analytics in supporting their business objectives and then map the technology to that strategy. The former, which we refer to as an application-centric approach, can deliver some benefits, but it typically runs out of steam as agility slows and new costs and complexities emerge. The latter is proving to create substantially more competitive advantage, as organizations put data and analytics – not a new piece of technology – at the center of their operations. Ultimately, companies that take a data and analytic centric approach conclude that multiple technologies are required; their acumen in applying the right tool to the right job naturally progresses, and the usual traps and pitfalls are avoided.


Would you elaborate on what is meant by “companies need to evolve the way they think about data management?”


Pre “big data,” there was a single approach to data integration, whereby data is made to look the same, or normalized, in some sort of persistence layer such as a database, and only then can value be created. The idea is that by absorbing the costs of data integration up front, the cost of extracting insights decreases. We call this approach “tightly coupled.” It is still an extremely valuable methodology, but it is no longer sufficient as the sole approach to managing all data in the enterprise.

Post “big data,” using the same tightly coupled approach to integration undermines the value of newer data sets that have unknown or under-appreciated value. Here, new methodologies to “loosely couple,” or not couple at all, are essential to cost-effectively manage and integrate the data. These distinctions are incredibly helpful in understanding the value of Big Data, where best to think about investments, and in highlighting the challenges that remain a fundamental hindrance to most enterprises.
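As a loose illustration of that distinction (a minimal Python sketch with made-up source records, not any particular vendor's tooling): the tightly coupled approach pays the cost of conforming every source to one schema before the data lands, while the loosely coupled approach lands data as-is and maps only the fields a given question needs at read time.

```python
# Two hypothetical sources that describe the same customers differently.
crm_rows = [{"cust_id": 1, "full_name": "Ann Lee", "region": "NA"}]
web_rows = [{"uid": "u-1", "name": "Ann Lee", "geo": {"region": "NA"}, "clicks": 42}]

# Tightly coupled: conform everything to one schema up front (cost paid at load).
def conform(row, source):
    if source == "crm":
        return {"customer": row["full_name"], "region": row["region"]}
    return {"customer": row["name"], "region": row["geo"]["region"]}

warehouse = [conform(r, "crm") for r in crm_rows] + [conform(r, "web") for r in web_rows]

# Loosely coupled: land the raw rows untouched and map only what a
# particular question needs, at the moment it is asked (cost paid at read).
landing = {"crm": crm_rows, "web": web_rows}

def regions_of_interest():
    for row in landing["crm"]:
        yield row["region"]
    for row in landing["web"]:
        yield row["geo"]["region"]

print(warehouse)
print(list(regions_of_interest()))
```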

But regardless of how the data is most appropriately managed, the most important thing is to ensure that organizations retain the ability to connect-the-dots for all their data, in order to draw correlations between multiple subject areas and sources and foster peak agility.


I’d also note that leading companies are evolving the way they approach analytics. We can analyze any kind of data now - numerical, text, audio, video - and we are now able to discover insights in this complex data. Further, new forms of procedural analytics have emerged in the era of big data, such as graph, time-series, machine learning, and text analytics.

This allows us to expand our understanding of the problems at hand. Key business imperatives like churn reduction, fraud detection, increasing sales and marketing effectiveness, and operational efficiency are not new, and they have been skillfully addressed by data-driven businesses with tightly coupled methods and SQL-based analytics – that’s not going away. But when organizations harness newer forms of data that add to the picture, along with new complementary analytic techniques, they realize better churn and fraud models, greater sales and marketing effectiveness, and more efficient business operations.

To learn more, please join the Achieving Pervasive Analytics through Data & Analytic Centricity webinar on Thursday, May 14, from 10 to 11:00 a.m. PT.


High Level Data Analytics Graph (Healthcare Example)


Michael Porter, in an excellent article in the November 2014 issue of the Harvard Business Review[1], points out that smart connected products are broadening competitive boundaries to encompass related products that meet a broader underlying need. Porter elaborates that the boundary shift is not only from the functionality of discrete products to cross-functionality of product systems, but in many cases expanding to a system of systems such as a smart home or smart city.

So what does all this mean from a data perspective? In that same article, Porter notes that companies seeking leadership need to invest in capturing, coordinating, and analyzing more extensive data across multiple products and systems (including external information). The key takeaway is that the movement toward gaining competitive advantage by searching for cross-functional or cross-system insights from data is only going to accelerate, not slow down. Exploiting the cross-functional or cross-system centrality of data better than anyone else will remain critical to achieving a sustainable competitive advantage.

Understandably, as technology changes, the mechanisms and architecture used to exploit this cross-system centrality of data will evolve. Current technology trends point to a need for a data & analytic-centric approach that leverages the right tool for the right job and orchestrates these technologies to mask complexity for the end users; while also managing complexity for IT in a hybrid environment. (See this article published in Teradata Magazine.)

As businesses embrace the data & analytic-centric approach, the following types of questions will need to be addressed: How can business and IT decide on when to combine which data and to what degree? What should be the degree of data integration (tight, loose, non-coupled)? Where should the data reside and what is the best data modeling approach (full, partial, need based)? What type of analytics should be applied on what data?

Of course, to properly address these questions, an architecture assessment is called for. But for the sake of going beyond the obvious, one exploratory data point in addressing such questions could be to measure and analyze the cross-functional/cross-system centrality of data.

By treating data and analytics as a network of interconnected nodes in Gephi[2], the connectedness between data and analytics can be measured and visualized for such exploration. We can examine a statistical metric called Degree Centrality[3], which is calculated based on how well an analytic node is connected.
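As a rough illustration of the metric itself (a tiny, made-up analytics network, not the healthcare graph from the animation), the Python sketch below uses networkx to compute degree centrality for each node; the analytic and subject-area names are hypothetical.

```python
import networkx as nx

# Hypothetical network: analytics (left) connected to the data subject
# areas they draw on (right). An edge means "this analytic uses this data."
edges = [
    ("Path to Surgery", "Claims"),
    ("Path to Surgery", "Encounters"),
    ("Path to Surgery", "Provider"),
    ("Readmission Risk", "Claims"),
    ("Readmission Risk", "Encounters"),
    ("Readmission Risk", "Pharmacy"),
    ("Drug Affinity", "Pharmacy"),
    ("Drug Affinity", "Claims"),
]

graph = nx.Graph()
graph.add_edges_from(edges)

# Degree centrality: the number of links incident on a node, normalized
# by the number of other nodes in the graph.
for node, score in sorted(nx.degree_centrality(graph).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:20s} {score:.2f}")
```

A node such as Claims that connects to many analytics scores highest, which is exactly the kind of cross-functional centrality the post suggests measuring before deciding where data should live and how tightly it should be coupled.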

The high level sample data analytics graph demonstrates the cross-functional Degree Centrality of analytics from an Industry specific perspective (Healthcare). It also amplifies, from an industry perspective, the need for organizations to build an analytical ecosystem that can easily harness this cross-functional Degree Centrality of data analytics. (Learn more about Teradata’s Unified Data Architecture.)

In the second part of this blog post series we will walk through a zoomed-in view of the graph, analyze the Degree Centrality measurements for sample analytics, and draw some high-level data architecture implications.


[1] Michael E. Porter and James E. Heppelmann, “How Smart, Connected Products Are Transforming Competition,” Harvard Business Review, November 2014.

[2] Gephi is a tool to explore and understand graphs. It is a complementary tool to traditional statistics.

[3] Degree centrality is defined as the number of links incident upon a node (i.e., the number of ties that a node has).


Ojustwin Naik (MBA, JD) is a Director with 15 years of experience in the planning, development, and delivery of analytics. He has experience across multiple industries and is passionate about nurturing a culture of innovation based on clarity, context, and collaboration.

Big Data Analytics: Will Better Company Culture Trump A Strong ROI?

Posted on: February 17th, 2015 by Data Analytics Staff


(This post discusses the results [1] of Forrester Consulting’s research examining the economic impact and ROI of an online retailer that implemented a Teradata analytics solution. The new integrated platform runs big data analytics and replaces the retailer’s third party analytics solution.)

What is more beneficial? The quantifiable pay out of a big data solution...or the resulting improvements in corporate culture like encouraging innovation and increased productivity?

In this case, the big data solution is the Teradata Aster Discovery Platform. Previously, the retailer relied upon a third-party web solution for analytics – which was inefficient, difficult to manage and not at all scalable. And with its limited IT support staff and ever-exploding business requirements, the online business needed an easy-to-manage big data analytics solution able to handle its compiled customer data.

The platform has the ability to analyze and manage unstructured data, plus data visualization tools to help illuminate key business insights and improve marketing efficiency. And, it’s easy on labor costs. Because of the platform’s ready-to-use functionality, acquiring data, performing analysis and presenting output can be done by a wide variety of IT skill sets and resources. The organization does not need a full team of expensive data scientists and engineers to manipulate and use data.

Does it pay out? Forrester confirmed the retailer’s increases in new customer conversions, overall sales, savings from IT and end user productivity...all resulting in a direct impact to net bottom line profits.

“For us, it has been relatively easy to monetize and justify our investment in Aster Discovery Platform; the changes that have resulted from the product have offered us much increase in revenue.”

~Director of data engineering, online retailer

The cultural intangibles? The retailer estimates that 20% of its total employees (both IT and business) have seen a direct benefit: a gradual increase in their productivity from Year 1 to Year 3, due to how quickly business insights can be generated and business practices optimized.

Performance throughout the organization improved dramatically. With the Aster Discovery Platform, the online retailer avoids multi-step, non-scalable procedures to run analytics and instead can just type a simple query. The organization’s planning process has become tighter. Better forecasts and predictions using predictive insights allow the organization more efficiency within the product life cycle, delivering noticeable impact across a variety of measures like incremental sales, customer retention and customer satisfaction.

“We have a lot of test cases and product changes that we have been able to make internally as a result of the analytics that is taking place on the platform.”

~Director of data engineering, online retailer

The Teradata Aster Discovery Platform is the industry’s first next-generation, integrated discovery platform. Designed to provide high-impact insights, it makes performing powerful and complex analyses across big data sets easy and accessible.

[1] The Total Economic Impact™ Of Teradata Aster Discovery Platform: Cost Savings And Business Benefits Enabled By Implementing Teradata’s Aster Discovery Platform. October 2014. © 2014, Forrester Research, Inc.

Learn more about Teradata’s big data analytics solutions.

Big Data Apps Solve Industry Big Data Issues

Posted on: February 13th, 2015 by Data Analytics Staff


Leverage the power of big data to solve stubborn industry issues faster than your competition

Big data solutions – easier, faster and simpler – are today’s best means of securing an advantage in 2015. If there were a way to quickly and easily leverage big data to address the nagging issues in your industry – like shopping cart abandonment for retail or churn for wireless carriers – wouldn’t that be appealing?

What if the leader in big data analytics told you that insights into the problems in your industry and organization could be in your hands and operational within a matter of weeks...and with far less complexity than big data solutions available just 6 months or a year ago?

Teradata has developed a collection of purpose-built analytical apps that address opportunities surrounding things like behavioral analytics, customer churn, marketing attribution and text analytics. These apps are built using the Teradata Aster AppCenter and were intentionally developed to solve pressing big data business challenges.

Industries covered include consumer financial, communications and cable,  healthcare (payer, provider and pharmaceutical), manufacturing, retail, travel and hospitality, and entertainment and gaming.  Review the following to see if your organization’s issues are covered.

  • Consumer Financial – Fraud Networks and Customer Paths.
  • Communications – Network Finder, Paths to Churn and Customer Satisfaction.
  • Cable – Customer Behavioral Segmentation and Customer Paths.
  • Healthcare – Paths to Surgery, Admission Diagnosis Procedure Paths, HL7 Parser, Patient Affinity & Length of Stay, Patient Compare, Impact Analysis and Drug Prescription Affinity Analysis.
  • Retail – Paths to Purchase, Attribution (multi-channel), Shopping Cart Abandonment, Checkout Flow Analysis, Website Flow Analysis, Customer Product Review Analysis and Market Basket & Product Recommendations.
  • Travel & Hospitality – Customer Review Text Analysis, Website Conversion Paths, Diminishing Loyalty, Customer Review Sentiment Analysis.
  • Entertainment & Gaming – Companion Matcher, Diminishing Playing Time, Network Finder and Paths to Churn.

More and more, organizations – and in particular business users and senior management – understand the value of, and opportunities nested in, big data. But across the enterprise, managers struggle with a number of hurdles, like large upfront investments, labor demands (both resource time and specialized skills), and a perceived glacial movement toward real insights and operational analytics.

Now the biggest hurdles are removed. Big data apps tackle the investment, time and skills gaps. Configured for your organization by Teradata professional services, the apps enable quick access to self-service analytics and discovery across the organization. Big data apps allow for lower upfront investment and faster time to value – a matter of weeks, not months or years. How? Industry-accepted best practices, analytic logic, schemas, visualization options and interfaces are all captured in the pre-built templates.

Enter the world of big data in a big way. Tackle your biggest issues easily. Realize value faster. Let the excitement of discovery with big data help analytics infiltrate your organization. Momentum is a powerful driver in instilling a culture of innovation.

Learn more about big data and big data solutions.

LA kicks off the 2014 Teradata User Group Season

Posted on: April 22nd, 2014 by Guest Blogger


By Rob Armstrong,  Director, Teradata Labs Customer Briefing Team

After presenting for years at the Teradata User Group meetings, it was refreshing to see some changes in this roadshow. While I had my usual spot on the agenda to present Teradata’s latest database release (15.0), we had some hot new topics, including Cloud and Hadoop; more business-level folks were there; more companies were researching Teradata’s technology (vs. just current users); and there was a hands-on workshop the following day for the more technically inclined, walking through real-world Unified Data Architecture™ (UDA) use cases from a Teradata customer. While LA tends to be a smaller venue than most, the room was packed and we had 40% more attendees compared with last year.

I would be remiss if I did not give a big thanks to the partner sponsors of the user group meeting. In LA we had Hortonworks and Dot Hill as our gold and silver sponsors. I took a few minutes to chat with them and found out about some interesting upcoming items. Most notably, Lisa Sensmeier from Hortonworks talked to me about Hadoop Summit, which is coming up in San Jose June 3-5. Jim Jonez, from Dot Hill, gave me the latest on their newest “Ultra Large” disk technology, where they’ll have 48 1 TB drives in a single 2U rack. It is not in the Teradata lineup yet, but we are certainly intrigued for the proper use case.

Now, I’d like to take a few minutes to toot my own horn about the Teradata Database 15.0 presentation that had some very exciting elements to help change the way users get to and analyze all of their data.  You may have seen the recent news releases, but if not, here is a quick recap:

  • Teradata 15.0 continues our Unified Data Architecture with the new Teradata QueryGrid. This is the new environment to define and access data from Teradata on other data servers such as Apache Hadoop (Hortonworks), the Teradata Aster Discovery Platform, Oracle, and others, and it lays the foundation for an extension to even more foreign data servers. 15.0 simplifies the whole definition and usage, and adds bi-directional access and predicate pushdown. In a related session, Cesar Rojas provided some good recent examples of customers taking advantage of the entire UDA ecosystem, where data from all of the Teradata offerings was used together to generate new actions.
  • The other big news in 15.0 is the inclusion of the JSON data type. This allows customers to store JSON documents directly in a column and then apply “schema on read” for much greater flexibility with greatly reduced IT resources. As a JSON document changes, no table or database changes are necessary to absorb the new content.
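As a rough sketch of the schema-on-read idea (plain Python rather than Teradata SQL, with hypothetical field names), the snippet below keeps whole JSON documents in a single field and extracts attributes only at read time; when a new attribute such as loyalty_tier appears in later documents, no table or schema change is needed and queries simply start asking for it.

```python
import json

# Each "row" keeps an id plus the raw JSON document, untouched at load time.
raw_rows = [
    (1, '{"name": "Ann", "city": "Dayton"}'),
    (2, '{"name": "Raj", "city": "Austin", "loyalty_tier": "gold"}'),
    (3, '{"name": "Mei", "city": "Tokyo",  "loyalty_tier": "silver"}'),
]

def read_attr(doc, path, default=None):
    """Apply the schema at read time: walk a dotted path into the document."""
    node = json.loads(doc)
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

# Older rows simply return None for an attribute they never had.
for row_id, doc in raw_rows:
    print(row_id, read_attr(doc, "name"), read_attr(doc, "loyalty_tier"))
```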

Keep your eyes and ears open for the next Teradata User Group event coming your way, or better yet, just go to the webpage to see where the bus stops next and to register. The TUGs are free of charge. Perhaps we’ll cross paths as I make the circuit? Until then, ‘Keep Calm and Analyze On’ (as the cool kids say).

 Since joining Teradata in 1987, Rob Armstrong has worked in all areas of the data warehousing arena.  He has gone from writing and supporting the database code to implementing and managing systems at some of Teradata’s largest and most innovative customers.  Currently Rob provides sales and marketing support by traveling the globe and evangelizing the Teradata solutions.

Change and “Ah-Ha Moments”

Posted on: March 31st, 2014 by Data Analytics Staff


This is the first in a series of articles discussing the inherent nature of change and some useful suggestions for helping operationalize those “ah-ha moments."

Nobody has ever said that change is easy. It is a journey full of obstacles. But those obstacles are not insurmountable, and with the right planning and communication many of them can be cleared away, leaving a more defined path for change to follow.

So why is it that we often see failures that could have been avoided if obvious changes had been addressed before the problem occurred? The data was analyzed, and yet nobody acted on the insights. Why does the organization fail to do what I call operationalizing the ah-ha moment? Was it a conscious decision?

From the outside looking in it is easy to criticize organizations for not implementing obvious changes. But from the inside, there are many issues that cripple the efforts of change, and it usually boils down to time, people, process, technology or financial challenges.  

Companies make significant investments in business intelligence capabilities because they realize that, hidden within the vast amounts of information they generate on a daily basis, there are jewels to be found that can provide valuable insights for the entire organization. For example, with today's analytic platforms, business analysts in the marketing department have access to sophisticated tools that can mine information and uncover reasons for the high rate of churn occurring in their customer base. They might do this by analyzing all interactions and conversations taking place across the enterprise and the channels where customers engage the company. Using this data, analysts then begin to see various paths and patterns emerging from these interactions that ultimately lead to customer churn.
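As a rough sketch of the kind of path analysis described here (hypothetical interaction data, not any specific Teradata feature), the Python below orders each customer's events by time and counts the most common paths that end in churn.

```python
from collections import Counter

# Hypothetical interaction log: (customer_id, timestamp, event)
events = [
    (1, 1, "bill_shock"), (1, 2, "support_call"), (1, 3, "churn"),
    (2, 1, "support_call"), (2, 2, "bill_shock"), (2, 3, "support_call"), (2, 4, "churn"),
    (3, 1, "upgrade"), (3, 2, "support_call"), (3, 3, "churn"),
    (4, 1, "bill_shock"), (4, 2, "support_call"), (4, 3, "renewal"),
]

# Order each customer's events by time into a per-customer sequence.
paths = {}
for cust, ts, event in sorted(events, key=lambda e: (e[0], e[1])):
    paths.setdefault(cust, []).append(event)

# Count the last three steps leading into churn, for customers who churned.
paths_to_churn = Counter(
    tuple(seq[-3:]) for seq in paths.values() if seq and seq[-1] == "churn"
)

for path, count in paths_to_churn.most_common():
    print(" -> ".join(path), count)
```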

These analysts have just discovered the leading causes of churn within their organization and are at the apex of the ah-ha moment. They now have the insights to stop the mass exodus of valuable customers and positively impact the bottom line. It seems obvious these insights would be acted upon and operationalized immediately, but that may not be the case. Perhaps the recently discovered patterns leading to customer churn touch so many internal systems, processes and organizations that getting organizational buy-in to the necessary changes gets mired down in an endless series of internal meetings.

So what can be done given these realities? Here’s a quick list of tips that will help you enable change in your organization:

  • Someone needs to own the change and then lead rather than letting change lead him or her.
  • Make sure the reasons for change are well documented including measurable impacts and benefits for the organization.
  • When building a change management plan, identify the obstacles in the organization and make sure to build a mitigation plan for each.
  • Communicate the needed changes through several channels.
  • Be clear when communicating change. Rumors can quickly derail or stall well thought out and planned change efforts.
  • When implementing changes make sure that the change is ready to be implemented and is fully tested.
  • Communicate the impact of the changes that have been deployed.  
  • Have enthusiastic people on the team and train them to be agents of change.
  • Establish credibility by building a proven track record that will give management the confidence that the team has the skills, creativity and discipline to implement these complex changes. 

Once implemented, monitor the changes closely and anticipate that some will require further refinement. Remember that operationalizing the ah-ha moment is a journey – a journey that can bring many valuable and rewarding benefits along the way.

So, what’s your experience with operationalizing your "ah-ha moment"?