Monthly Archives: October 2017

Don’t Rely on Witchcraft: Question the Status Quo of Customer Analytics

October 31, 2017


Late 9th century, somewhere in Europe, a fictitious story:

Life is hard these days. The Vikings’ attacks have always been brutal, but this time it was worse. Your body is peppered with wounds from their battle axes and spears; it aches, and the wound fever is rising, draining the energy out of your cells.

But there is help – you can count on the village healer’s support. The wise old woman follows the secret old healing traditions passed on through generations of her family. She places a crow feather on your forehead. The air is filled with heavy scents of burning sandalwood and thyme. Rhythmic witch spells – in what sounds to you like a foreign language – create a mystic and slightly frightening atmosphere. Despite this unspecific feeling of anxiety, you put yourself in her hands and let things happen.

The above short scene could well be part of the plot of a typical medieval movie. Certainly (and very thankfully) real life has changed tremendously for the better since medieval days. The descendants of the Vikings are very peace-loving European neighbors, everyday life hardly ever involves anything like the above brutality, and modern medicine relies on solid science and proven treatments for the root causes of disease.

Nevertheless, every now and then – especially when thinking about telco customer retention – some strange and diffuse feeling of uneasiness reminds me of the healer. An illustration …

Early 21st century, somewhere in Europe, a fictitious story:

It is hard to imagine what life without mobile technology would look like. Mobile services have become essential to you. You have managed to live with the white spots in mobile reception, and dropped calls have almost become part of the normal service. But lately your mobile has become simply “unusable”. Your calls get interrupted several times along your typical route of travel, your battery dies at lunchtime, and your friend pays only half the price for what seems to be a very similar service.

But there is help – you can count on your operator’s support line. The customer service agent offers a wide range of tariffs with amazingly innovative names – although they seem to comprise well-known services. Right away in the call, the agent offers you a 6-month free trial period for an additional virus scanner “to show our gratitude for your service” and to apologize for the “inconvenience you might have experienced”. Every now and then – typically when you ask a question – the procedure gets interrupted by repetitive music and friendly computer voices asking you to stay on the line as you are “important to us as a customer”. It is hard to follow the discussion as it is filled with acronyms, many of which you do not really understand. An atmosphere of unspecific suspicion arises, but by the end of the call you agree to sign up for a new 24-month contract.

Although the above analogy is surely of a provocative nature, I nevertheless ask myself why other industries and disciplines (like medicine, science or sometimes even government) put huge effort into identifying the right (big) data and exploiting it through innovative analytics. Examples come from a wide variety of disciplines, from DNA sequencing and analysis, to optimizing complex cities through smart IoT analytics, to real-time streaming analytics of dozens of petabytes of data from the Large Hadron Collider at CERN.

But what is current practice in telco? Even leading telcos very often still rely on nearly the same kind of data sets and the same traditional analytics they already applied ten or more years ago. Like the witch in the short story above, lots of telco data scientists (have to) rely on “inherited” models and analytics. They (have to) feed their models primarily with data like sociodemographic profiles, calling and top-up behavior, extrapolated NPS scores or “needs-based” segment information. Yes, the more innovative data scientists experiment with adding data points like social media or web data. I do not intend to sound cynical, but using the above analogy, does it really make a difference if the witch adds a spider leg to the treatment?

You disagree? Congratulations, you are perhaps one of the very few who have managed to overcome the current status quo. But considering the almost countless discussions I have had with telco customer management executives, retention managers and data scientists, I have to acknowledge that more or less all of them openly share my observations.

Do telcos really understand the majority of the individuals in their customer base well enough to design truly customer-individual activity plans? The answer to the question – with some very few exceptions – is a very clear NO! At best, most telcos have the right analytics in place to get along ok with perhaps the “best” 10-15% of their customer base. And even for those, they perhaps “know” what to do, but not exactly why – not to speak of any insight into how the situation could be improved other than gradually.

Obviously, there are reasons why the situation is as it is. They range from technological challenges to organizational barriers, budget restrictions, legal boundaries, missing tools, resources and knowledge, and many more. But it is certainly worth overcoming them. In mature telco markets like Western Europe, an average telco spends roughly 20% of its overall operational expenditure (opex) on customer management. Without going into any kind of financial detail and leaving all side effects aside – e.g. a substantial reduction of network roll-out costs (Arias 2015) – this number alone indicates the huge potential of a more fundamental readjustment of telco customer analytics.

From witchcraft to modern treatment

How would a doctor nowadays act differently compared to the medieval witch? First of all, she would carry out a very thorough medical examination. For example, she would examine the wounds carefully, perhaps test all vital functions and check the patient for hidden injuries. She would obviously rely on her deep medical knowledge and combine all the different “data points” to come to a very individual, well-balanced, cohesive and fact-based diagnosis. She would obviously also ask the patient (sometimes very specific) questions that are relevant to the matter and incorporate the answers into her diagnosis. Nevertheless, she would treat his feedback with care, as patients sometimes make quite misleading statements. For example, patients in the middle stages of hypothermia become very happy and incredibly energetic as the body instinctively makes one final effort to warm itself up; shortly afterwards they become unconscious and eventually die. The doctor would also try to rule out certain diseases to improve her diagnosis.

Only then would she define a specialized treatment, including the best combination of medication – all based on medical evidence. She would take possible side effects for this individual patient into account and evaluate how different medications influence each other. Then she would apply individual treatments: she would clean and dress the wounds using – if needed – antiseptic ointment, prepare a saline infusion with the right individual mix of drugs to stabilize the patient, and treat him with the right dose of painkillers to ease the actual discomfort while minimizing side effects.

How does this all relate to customer management and customer analytics?

If our aim is to take customer management to a completely new level of effectiveness, we have to follow a similar, much more structured approach and act in a more scientific style. Obviously, it is beyond the scope of this blog post to detail the desired approach in all its facets, but let me outline the high-level structure.

  1. Individual examination
    We start with the customer, not with the data we have or the analytics we can apply. This does not sound like a new statement; it is more or less common sense. We are bombarded with marketing slogans like “customer first”, “customer centric” and many more. Unfortunately, when it comes to customer management and analytics, very few telcos really follow this common sense. For me, the currently existing churn models are perhaps the clearest proof: individual service experience and value for money (in the absence of real product differentiation: price) have for years been by far the most important reasons for churn for a very significant portion of customers (Dass & Jain 2011). Nevertheless, most telco churn models are not able to provide a truly comprehensive measure of the customer’s service/network experience, as they do not even incorporate service/network experience data. No, I am not talking about simple dropped-call counters derived from call detail records (CDRs) or about the (often categorized and extrapolated) feedback from customer experience surveys. These pieces of information are simply not detailed enough to understand anything like the individual service experience. We can prove through analytics that for some customers a certain set of specific network failures is the primary reason to churn, whereas other customers tolerate the same set. We know, for example, that the sequence of failures, the type of usage at the time of failure and the failure density are important (and not so much the isolated number of failures). In order to analyze this kind of churn driver, you need to source very detailed network data (e.g. SS7), integrate it (e.g. across different bearers) and keep all information which indicates any kind of service deterioration.
    As a side remark on direct customer feedback: such feedback is a double-edged sword anyway. First of all, you can almost never obtain it in any kind of relevant detail for a significant portion of your customer base. This makes it per se tricky to use. Secondly, customer feedback is hard to compare. Even an “8” or a “9” on a simple and standardized NPS scale will mean different things to different customers. Such feedback – and even more so free-text feedback – has to be filtered and sometimes even ignored – just remember the hypothermia example above.
    Individual customer price pressure is another example of a weak area in customer analytics. Very few telcos today understand the price pressure for each of their customers individually. They are not able to answer basic questions like “Which tariff from my portfolio or in the market is, price-wise, the best for a particular customer with his individual usage?”, “What maximum price premium is this particular customer prepared to pay?” or “Which competitor tariffs would allow the customer to save more than that maximum price premium?” But how can a telco ever understand a customer’s price-driven churn potential if it does not know these basic facts? The data and analytics needed to answer these kinds of questions are available and industry-proven (a minimal illustration follows after this list). There is very little reason not to apply them and generate actionable insight on a totally new level.
  2. Cure the root cause, not the symptoms (alone)
    Even today, many customer managers think (only) of campaigns and offers when it comes to retaining customers. Campaigns and offers are not necessarily a bad thing, but honestly, we all know that giving some goodie to the customer or talking him into a new contract will not solve the churn challenge fundamentally. There is much more a customer manager has to incorporate into his thinking than campaigns. Let us – just for illustration purposes – return to the initial example. The customer is experiencing severe network problems. The free trial period for a virus scanner will at best make him feel some appreciation, but it will not in any way change his propensity to churn. It simply does not cure the root cause. Even worse, if the additional virus scanner does not provide any additional value, the effect might well be the opposite. Sticking a plaster on a wound makes it disappear for a while, but only thoroughly cleaning the wound and applying antiseptic treatment eliminates the root cause.
    It is also fundamental that the diagnosis described above rules out churn reasons that do not apply. Treating non-existent churn reasons is at best a waste of resources and money; at worst it annoys the customer and potentially even drives churn. In reality, I have seldom seen a churn management process that specifically looked at ruling out churn reasons.
  3. Develop an individual treatment plan
    Looking at our “real lives” we intrinsically know that an isolated action will seldom create a lasting relationship. The sense of trust and appreciation comes from the sum of experiences. If on average these experiences are relevant and positive for the customer, he will even forgive single negative experiences. Astoundingly enough, research shows that after a couple of positive events, a well-handled negative event can even reduce churn by 67% (Kolsky 2015). This train of thought has a significant impact on how we manage customer relationships and how we build analytics models (e.g. churn models).
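
To make the price-pressure questions from point 1 concrete, here is a minimal sketch in Python. The tariff structures, prices, usage figures and the helper function monthly_cost are hypothetical assumptions for illustration, not any operator’s actual portfolio or pricing method:

```python
# Minimal illustration with hypothetical tariffs and usage figures: for one
# customer, find the cheapest tariff in the own portfolio, the cheapest
# competitor tariff, and the resulting price premium the customer is asked
# to tolerate.

def monthly_cost(tariff, usage):
    """Cost of a usage profile under a simple bundle-plus-overage tariff."""
    extra_min = max(0, usage["minutes"] - tariff["incl_minutes"])
    extra_gb = max(0.0, usage["gb"] - tariff["incl_gb"])
    return (tariff["base_fee"]
            + extra_min * tariff["price_per_extra_min"]
            + extra_gb * tariff["price_per_extra_gb"])

own_portfolio = [
    {"name": "Smart S", "base_fee": 15.0, "incl_minutes": 100, "incl_gb": 2,
     "price_per_extra_min": 0.09, "price_per_extra_gb": 5.0},
    {"name": "Smart L", "base_fee": 30.0, "incl_minutes": 1000, "incl_gb": 10,
     "price_per_extra_min": 0.00, "price_per_extra_gb": 3.0},
]
competitor_offers = [
    {"name": "Rival Flat", "base_fee": 22.0, "incl_minutes": 1000, "incl_gb": 8,
     "price_per_extra_min": 0.00, "price_per_extra_gb": 4.0},
]

usage = {"minutes": 420, "gb": 6.5}   # this customer's individual usage
current_cost = 38.0                   # what the customer pays today

best_own = min(own_portfolio, key=lambda t: monthly_cost(t, usage))
best_rival = min(competitor_offers, key=lambda t: monthly_cost(t, usage))
own_price = monthly_cost(best_own, usage)
rival_price = monthly_cost(best_rival, usage)

print(f"Best own tariff: {best_own['name']} at {own_price:.2f} per month")
print(f"Best competitor tariff: {best_rival['name']} at {rival_price:.2f}")
print(f"Overpayment within own portfolio: {current_cost - own_price:.2f}")
print(f"Maximum price premium vs. market: {own_price - rival_price:.2f}")
```

Real tariff logic is far richer (discounts, bundles, family plans), but even this toy version shows that the maximum price premium is a per-customer quantity that can be computed, stored and fed into churn analytics.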

Consequently, a customer management strategy which builds purely on a next best activity (NBA) or, even worse, a next best offer (NBO) approach is too short-sighted. We need to plan the best sequence of activities for each individual customer. I even prefer to use the term “event” in place of the term “activity”, as the latter might suggest restricting our thinking to company-initiated events. On the contrary, we also need to consider how to optimize events which are initiated by the customer. Let’s say you are a high-value customer who recently changed bank accounts and therefore unfortunately missed paying the last monthly bills. Just imagine the negative loyalty effect your operator would avoid by individualizing the collection and dunning process. We have to plan the optimal, individual customer journey for all our customers. For obvious reasons, this planning has to happen on a rolling basis, which creates some complexity. But in this context, too, industry-proven methodologies and technologies exist. They “just” need to be applied (Teradata 2017).

The above also has implications for analytics. Currently, most (retention) analytics have far-reaching limitations: they are static in the sense that they look at a “status” at a certain point in time – now or in the future. They do not analyze any sequence of events and their interdependencies. They provide no direct explanations for why events occur and do not incorporate customer interaction in its entirety (for more information: Hassouna 2015). But as described above, this is not how (customer) relationships work. Analyzing customer-related event paths and predicting future events provides a totally different view of how each customer relationship develops. Furthermore, this view creates highly actionable insight, as almost every event can be influenced. You can plan and – based on prescriptive analytics – change the future event path for each customer relationship. For me, this is the essence of customer relationship management.
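
As a minimal illustration of what sequence-aware analytics can look like, the Python sketch below estimates how likely churn is given the immediately preceding customer event. The event log and event names are entirely made up, and a real model would use far richer sequences, but it shows why ordered event paths carry information that isolated counters do not:

```python
from collections import Counter, defaultdict

# Hypothetical, heavily simplified per-customer event paths, ordered in time.
# In practice these would be assembled from network, billing and care data.
event_paths = {
    "cust_001": ["dropped_call", "dropped_call", "care_call", "churn"],
    "cust_002": ["dropped_call", "care_call", "goodwill_credit", "renewal"],
    "cust_003": ["bill_shock", "care_call", "churn"],
    "cust_004": ["dropped_call", "renewal"],
}

# Count which event follows which: a first-order view of the event paths.
transitions = defaultdict(Counter)
for path in event_paths.values():
    for current_evt, next_evt in zip(path, path[1:]):
        transitions[current_evt][next_evt] += 1

# Conditional churn probability given the preceding event: a crude but
# sequence-aware signal that isolated event counters cannot provide.
for evt, followers in sorted(transitions.items()):
    p_churn = followers.get("churn", 0) / sum(followers.values())
    print(f"P(churn | previous event = {evt}): {p_churn:.2f}")
```

A production setup would replace this with proper sequence models and prescriptive logic, but the principle is the same: the order and context of events carry the actionable signal.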

Do not get me wrong. From burning sandalwood and witch spells to modern treatment, medicine has come a long way – and something similar is obviously true for customer management. But as Oscar De La Hoya rightly said: “There is always space for improvement, no matter how long you’ve been in the business”. And in order to innovate, you have to question the status quo. I hope this blog post has provoked some fresh thinking and can contribute to developing telco customer management a noteworthy step further.

In case you disagree with some (or all) of the above or want to discuss, please reach out to me on LinkedIn or Twitter.

List of references:

Further sources of information:

Stefan Schwarz, Director of Industry Consulting for Teradata, talks about how telcos can adapt to new market dynamics and OTT competition. Schwarz maps out the necessary steps telcos need to take to maintain relevance for their customers.

Is Unstructured Data a “Trick or Treat” for your Organization?

October 31, 2017


Anyone who works in data analytics and is excited that it’s Halloween could appreciate a sign Teradata had displayed near the press room at the PARTNERS event last week in Anaheim, California. The copy proclaimed “The original big data company,” and the image displayed a candy store, with each toffee, lollipop and taffy flavor neatly sorted into its own little jar. Ever since confectionery stores popped up at the turn of the last century, shop owners have had a perfect visual for just how popular each type of candy was. It was easy to tell just how valuable each kind of candy was to the store, because its sales were visualized in real time based on how full or empty each jar was.

Getting real-time, visualized analytics is nowhere near as easy today as it was for a candy shop owner 100 years ago, and the problem is unstructured data. In fact, it remains one of the last great impediments to analytics.

Today’s data is more like that giant pillowcase full of a random array of candies that children amass trick-or-treating. The initial goal was just to collect as much of it as possible. But later on, organizations realized it takes a long time to sort through it all and, even after that arduous process, some of it doesn’t even hold value for the organization.

The companies that are currently able to leverage large amounts of data and turn it into insights — even predictions — are using mostly structured or semi-structured data, fueled by AI and machine learning crunching through this information in real time. As Oliver Ratzesberger said at last week’s “The Sentient Enterprise” breakfast, AI needs a foundation. Companies can’t just hope that whatever they’ve thrown into a data lake is going to turn into insights. And AI is proving extremely useful in helping businesses blend their many disparate data silos to gain a better understanding of the complex data ecosystem that now inhabits every major company. They have thousands of those candy jars and need to understand and instantly analyze what that information is doing to their business. But what about all the unstructured data sitting around that’s too time-consuming to sift through?

Eventually, as neural networks and deep learning progress, the processes will be in place to leverage fully unstructured data too. And in the interim, processes like semantic smoothing make sense of that data, including speech. The problem was clear from some of the presentations at PARTNERS that mentioned chatbots or digital personal assistants gone awry.

So while it may seem like a hindrance to fast, inexpensive insights right now, businesses are already dreaming up how machine learning and advanced analytics will turn unstructured data from what seems like a trick into a treat.

For more on how Teradata helps its customers apply machine learning to structured, unstructured and semi-structured data, check out our Teradata Analytics Platform.

Lessons from the Sentient Enterprise: To Scale Your Analytics, “Merchandise” the Insights

October 30, 2017


“Lessons from the Sentient Enterprise” is a series of posts timed around the publication of “The Sentient Enterprise”, a new book on big data analytics by Oliver Ratzesberger and Mohan Sawhney. Each post in the series highlights a major theme covered in the book and at executive workshops being held in conjunction with its upcoming release by Wiley.

One of the bedrock principles Mohan Sawhney and I put forth in “The Sentient Enterprise” is that more data is only as good as your ability to keep up and leverage it for insight. It’s a sentiment shared by many of the top analytics leaders we interviewed for the book. As Jacek Becla, a former data executive at Stanford University’s prestigious SLAC National Accelerator Laboratory and current Teradata vice president of technology and innovation, told us, analytics don’t progress unless there’s a “symbiotic relationship between capacity and skills.”

Capacity, unfortunately, can easily outpace our skills in managing it. In fact, our book focuses on several “pain points” of data drift, duplication and error — side-effects of poorly governed capacity that can leave people swimming in oceans of data, without much insight to be found. These problems get more critical as you try to scale the operation.

A ‘Forcing Function’ for Agility

Dell Vice President for Enterprise Services Jennifer Felch and her colleagues learned this first-hand as they worked to aggregate global manufacturing data into one master environment for reporting and analytics. “Scaling is the forcing function for standardizing and becoming as efficient and accurate as possible with your data,” she told us. As we describe in the book, Dell’s solution involved setting up “virtual data marts” — more than two dozen specialized data labs that access, but do not corrupt, the master environment.

The virtual data mart is a feature of the Agile Data Platform, the first of five stages in the Sentient Enterprise journey. That’s where we “decompose” data into architectures that preserve its most granular form, so data’s more malleable and adaptable to various business needs across the organization. The next couple of stages — the Behavioral Data Platform and the Collaborative Ideation Platform — are where we build capacity and set up a social-media-style “LinkedIn for Analytics” environment for business users to share insights from this newly agile data.

But sharing insights is not the same as prioritizing them. And here’s where I’d like to emphasize a concept from our book that’s surprisingly simple, yet still underutilized in most businesses today: The key is to not just socialize data insights among business users, but to “merchandise” them!

‘Merchandising’ the Value of Data

What do I mean by “merchandising” analytic insights? Think of how we shop on Amazon, eBay or any other major e-commerce site. We search, we promote, we recommend, we follow. All that activity is tracked by analytics, such as eBay’s “Customer DNA” database — which we examine in the book — that can follow patterns of browsing, bidding and other indicators of value amid some 800 million concurrent auction listings. Over time, analytics running underneath learn what’s important in order to tailor searches and increase the relevance of product recommendations.

In the Sentient Enterprise, we’re essentially doing the same thing with data and analytics. We’re applying the same form of merchandising to the analytics network within an enterprise — promoting and recommending questions, people, and answers that a data scientist or business user might be interested in based on previous queries and activity.
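
Mechanically, this kind of merchandising can start with something as simple as item-to-item co-occurrence over the analytics community’s query logs. The Python sketch below is a heavily simplified, hypothetical illustration of that idea, not the actual mechanism described in the book; the analyst names and data set names are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical query log: which data sets each analyst has touched recently.
query_log = {
    "analyst_a": {"orders", "returns", "customer_master"},
    "analyst_b": {"orders", "clickstream", "customer_master"},
    "analyst_c": {"returns", "warranty_claims"},
}

# Count how often pairs of data sets are used together ("bought together").
co_use = Counter()
for datasets in query_log.values():
    for a, b in combinations(sorted(datasets), 2):
        co_use[(a, b)] += 1

def recommend(dataset, top_n=3):
    """Recommend the data sets most often used alongside the given one."""
    scores = Counter()
    for (a, b), count in co_use.items():
        if a == dataset:
            scores[b] += count
        elif b == dataset:
            scores[a] += count
    return scores.most_common(top_n)

# An analyst querying "orders" is nudged toward what peers combined with it.
print(recommend("orders"))
```

A real implementation would weight recency, roles and outcomes, but the shape of the problem is the familiar recommender one.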

Particularly at scale, there really is no other way to go about it. That’s because — as the book explains — we’re carrying forward the merchandising process beyond just data insights, and applying it to the valuation of entire prepackaged workflows (in the Stage 4 Analytical Application Platform) and self-decisioning algorithms (in the Stage 5 Autonomous Decisioning Platform). I’m covering a lot of ground here, which is why I invite you to learn more about the Sentient Enterprise through online resources and, of course, the book itself!

I’m hoping it’s already clear, however, that scaling requires an absolute commitment to rethink old habits — such as extract, transform, load (ETL) and centralized metadata — and embrace new, scalable ways of listening to data and positioning algorithms for “wisdom-of-the-crowd” insights. That’s because, as we’re fond of saying, humans don’t scale the way data does — and a hundred or even a thousand analysts will remain outgunned without some way to automatically “merchandise” insights from the huge volumes of lightning-fast data streams coming at them.


Oliver Ratzesberger is an accomplished practitioner, thought leader and popular speaker on using data and analytics to create competitive advantage – known as his Sentient Enterprise vision.

Oliver Ratzesberger is executive vice president and chief product officer for Teradata, reporting to Vic Lund. Oliver leads Teradata’s world-class research and development organization and provides strategic direction for all research and development related to Teradata Database, integrated data warehousing, big data analytics, and associated solutions.

Making the most of your time

October 24, 2017


Over the past few years there has been an explosion of data sources and data types that need to be brought into a company’s analytics. One can argue that sensor data, and in particular the time series aspect of sensor data, is probably the richest and most useful of these. The big question is: how can a company effectively bring the time series data being generated into its other analytics in order to drive profitable actions?

Again, it is about integration

As has long been the case, data needs to be integrated to derive the most value. Companies can certainly capture time series data into a raw data file and use programmers to run sophisticated algorithms and get some insights, but then what? If I see that an engine is overheating, what should I do about it? What parts are going to be affected? Is this an anomaly or part of a larger pattern in my fleet? Is this happening everywhere or only in certain parts of the country? These questions need data, and if that data is not readily and easily integrated, then the answers may be too hard to find. Delayed action can be costly. Time series data needs to be thought of like any other data in your ecosystem — data which needs to be managed, accessible and performant for end users to capitalize on the opportunities.

Welcome to time series in Teradata

Recognizing that the internet of things was going to present just such challenges, Teradata began working to incorporate time series data into the Teradata Database, and now, with Release 16.20 (available in Q4 2017), it is ready for customer use. Time series data can now be loaded and managed inside the Teradata Database like any other data type. There have been significant optimizations to allow for easy integration with other tables, faster access to targeted time periods and a host of time-aware aggregate functions to facilitate a wide array of analytics.

What is time series data?

Simply put, time series data is generated in a multitude of ways, but the end result is a time stamp, some identifier of what created the data, and the observation or measurement. An example would be a car sensor that tracks oil temperature every second. In this case, we get one row for every second that indicates the time, sensor ID and temperature reading. Every minute we’d get 60 rows of sensor data. There could be other data elements, like the vehicle identification number of the car that is sending the data, or even a multitude of sensor readings in JSON or CSV format. The key is that this data has the time stamp.

A second aspect of time series data is that it can be regular – a reading every second without any missing data – or irregular, meaning the frequency is not consistent and there can be missing measurements. For example, instead of getting a temperature reading every second, we only get a reading when the temperature exceeds a certain threshold.

The last aspect to discuss is that a time series can be bounded or unbounded. Perhaps we have a sensor that continually sends a message from now until the end of life of the device. Other series may be bounded by a defined period of time (e.g., I ran a test for 20 minutes) or a logical overlay of boundaries (e.g., a car trip bounded by start and stop times).
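
To make the regular versus irregular distinction concrete, here is a small Python (pandas) sketch with made-up oil-temperature readings; the sensor name and threshold are illustrative assumptions only:

```python
import numpy as np
import pandas as pd

# Hypothetical oil-temperature sensor. A regular series has one reading per
# second with nothing missing; an irregular one only keeps a reading when a
# threshold is crossed.
rng = np.random.default_rng(0)
seconds = pd.date_range("2017-10-23 12:00:00", periods=60, freq="S")
regular = pd.DataFrame(
    {"sensor_id": "oil_temp_01",
     "temp_c": 90 + rng.normal(0, 1, size=60).cumsum() * 0.1},
    index=seconds,
)

# Irregular: only the seconds where the temperature exceeded 90.2 C remain.
irregular = regular[regular["temp_c"] > 90.2]

print(len(regular), "regular rows;", len(irregular), "irregular rows")
```

The irregular series is perfectly valid data; it simply cannot be aggregated or joined naively against a regular one without time-aware functions, which is where the capabilities described below come in.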

You can quickly see that time series data can become complicated to manage and hard to align in analytics if you simply store the data in files rather than in a mature and optimized database.

Time series in the Teradata Database

Teradata has integrated time series data by introducing a new type of storage structure, the Primary Time Index (PTI). This is very similar to our Primary Index and Partitioned Primary Index concepts. With PTI, time series data can be bucketed such that data from the same sensor/device is kept together in time intervals for faster analytics. The PTI can either be just a timestamp or include other attributes like the sensor ID.
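
The bucketing idea itself is easy to picture. The following Python sketch is a conceptual illustration only, not Teradata’s internal storage format or SQL syntax:

```python
from collections import defaultdict
from datetime import datetime

# Conceptual illustration only, not Teradata's internal format or syntax:
# group rows by (sensor_id, hourly time bucket) so that readings from the
# same device and interval land together, which is what makes time-windowed
# queries cheap.
rows = [
    (datetime(2017, 10, 23, 12, 0, 5), "oil_temp_01", 90.4),
    (datetime(2017, 10, 23, 12, 40, 9), "oil_temp_01", 91.2),
    (datetime(2017, 10, 23, 13, 2, 1), "oil_temp_01", 92.0),
    (datetime(2017, 10, 23, 12, 0, 7), "oil_temp_02", 88.9),
]

buckets = defaultdict(list)
for ts, sensor_id, reading in rows:
    bucket = (sensor_id, ts.replace(minute=0, second=0, microsecond=0))
    buckets[bucket].append((ts, reading))

for key, readings in sorted(buckets.items()):
    print(key, "->", len(readings), "readings")
```

In Teradata the equivalent grouping is declared on the table itself via the PTI, so queries over a given device and time window only need to touch the relevant buckets.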

Once the data is inside the database, queries get all of the database’s scalability, manageability and optimization capabilities. Time series is just another data construct to be leveraged.

For analytics, Teradata has optimized the aggregate functions that work on time constraints. This means you can easily take two different time series tables and align them against each other. For example, one table may be capturing sensor data every two minutes and the other every five minutes. Using the aggregate functions, you can run analytics that compute 15-minute averages and correlate the two sets of data.
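
As an illustration of that alignment step, here is a small pandas sketch of the same idea, using two hypothetical sensors reporting every two and every five minutes; it mirrors the concept rather than the Teradata SQL itself:

```python
import numpy as np
import pandas as pd

# Hypothetical sensors: oil temperature reported every 2 minutes, engine RPM
# every 5 minutes. Roll both up to a common 15-minute grain and correlate.
rng = np.random.default_rng(1)
oil_temp = pd.Series(
    90 + rng.normal(0, 1, 120).cumsum() * 0.05,
    index=pd.date_range("2017-10-23 08:00", periods=120, freq="2min"),
)
engine_rpm = pd.Series(
    2000 + rng.normal(0, 50, 48),
    index=pd.date_range("2017-10-23 08:00", periods=48, freq="5min"),
)

# Time-aware aggregation to a shared 15-minute grain aligns the two series.
aligned = pd.DataFrame({
    "oil_temp_avg": oil_temp.resample("15min").mean(),
    "engine_rpm_avg": engine_rpm.resample("15min").mean(),
}).dropna()

print(aligned.head())
print("correlation:", aligned["oil_temp_avg"].corr(aligned["engine_rpm_avg"]))
```

The point is the workflow rather than the library: with time-aware aggregate functions in the database, this alignment happens where the data lives instead of in exported files.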

But time is just the beginning….

Additional benefits come from integrating time series data with geospatial data to understand the changing nature of data over time and space. Then factor in your reference data, which may have temporal characteristics, so now you can understand which vendor supplied what part, at what time, and how that part was used over time in a vehicle. This can provide insights about normal versus abnormal usage or wear and tear. These insights lead to targeted and timely actions.

Operationalizing these new insights across departments is simplified, as the processes still get the scale, optimization and workload management that have been the core of the Teradata Database for decades.

To learn more….

I have only begun to scratch the surface of this new and exciting capability. To learn more about the Teradata Analytics Platform, I invite you to visit our website.

Gotta go, lost track of time, and I’m late for my next meeting …

Rob Armstrong

Starting with Teradata in 1987, Rob Armstrong has contributed to virtually every aspect of the data warehouse and analytical processing arenas. Rob’s work in the computer industry has been dedicated to data-driven business improvement and more effective business decisions and execution. His roles have encompassed the design, justification, implementation and evolution of enterprise data warehouses.

In his current role, Rob continues the Teradata tradition of integrating data and enabling end user access for true self-driven analysis and data-driven actions. Increasingly, he incorporates the world of non-traditional “big data” into the analytical process.  He also has expanded the technology environment beyond the on-premises data center to include the world of public and private clouds to create a total analytic ecosystem.

Rob earned a B.A. degree in Management Science with an emphasis in mathematics and relational theory at the University of California, San Diego. He resides and works in San Diego.


More cloud milestones for Teradata: Azure, AWS, IntelliCloud

October 23, 2017


Put more points on the leader board for Team Orange – because we’ve got more good news for Teradata in the cloud! Yet again Teradata has raised the bar when it comes to cloud data warehousing and cloud analytics.

How’s that? Well folks, we just published another set of improvements to our software in both the Azure and AWS Marketplaces – AND we’re innovating into overtime with Teradata IntelliCloud, our popular and secure managed cloud service for data and analytics. This explosion of cloudy orange awesomeness is truly impressive.

Here’s the deal: Less than a month after the previous big update, our latest volley contains the following highlights (note: more details at the bottom of this post):

Teradata in both Azure and AWS Marketplaces

Teradata in Azure Marketplace

  • Replaced local storage virtual machine “G5” with lower-cost local storage options L32S and L8S, which saves money
  • Added support for more storage at 30 TB with DS14_v2 and DS15_v2, virtual machines using network attached storage, which increases capacity

Teradata in AWS Marketplace

  • Added support for Teradata Database Fold / Unfold (see below)
  • Added information on customer keys for encryption

Teradata IntelliCloud

Let’s talk: Teradata IntelliSphere.

Having the ability to add Teradata IntelliSphere as an option when selecting a Teradata Database subscription on Azure or AWS Marketplace is a big, big change – for the better.

In the past, each ingredient of the Teradata software ecosystem (i.e., the Teradata Unified Data Architecture) had to be subscribed to – and paid for – separately. With IntelliSphere, however, entitlement to 10 different software capabilities is now rolled into ONE convenient subscription. Easy.

So, not only is it MUCH more convenient to pay for and get access to additional Teradata software, customers with an IntelliSphere bundle as part of their Teradata Database subscription now have no reason NOT to use any or all of the software that is part of the bundle – because it will have already been paid for!

Personally I suspect that experimentation with, and utilization of, IntelliSphere capabilities such as Teradata Data Lab (web page and brochure), Teradata Data Mover (brochure), and Teradata QueryGrid (web page and video) will rise rapidly. Read and learn more about IntelliSphere on the website page or in this blog post.

Teradata Database Fold / Unfold

The terms “Fold” and “Unfold” are nifty ways to describe the ability to expand and shrink the size of a Teradata system – and not just in the cloud, but across all Teradata deployment options: public cloud, private cloud, IntelliCloud, and on-premises.

Unfolding a system is scaling out by “unfolding” the compute resources, like how a Japanese hand fan unfolds to go from a narrow bundle to a semi-circle for full cooling effect. Likewise, Folding a system is the act of shrinking the compute resources or “folding” the fan back into its original compact state.

Our latest update enables Teradata Database systems on Azure and AWS Marketplaces using network attached storage (i.e., Premium in Azure and Amazon EBS in AWS) to Unfold twice. Think of it as going from 1X (initial state) to 2X (one Unfold) and then, if desired, to 4X (second Unfold) by doubling or quadrupling the number of compute instances. The reverse is also true: customers can revert from 4X to 2X and then back down again to 1X if desired. The whole thing is slick because it enables customers to crank up the compute volume when they need it, then dial it back down when they don’t. Now that’s cloudy.

IntelliCloud Free Trials

IntelliCloud Free Trials is a relatively new self-service web portal for exploring IntelliCloud Solutions through free hands-on demos.

The initial launch of IntelliCloud Free Trials showcases Teradata Database with Teradata Studio and Teradata Viewpoint. Within minutes, users get a ready-to-use environment including guided tours packaged with access to tooling, sample data, and video content. Maybe even better, all of this can be done without even having to talk to a single sentient being. It’s automated!

The duration of an IntelliCloud Free Trial is 10 calendar days. Users wishing to enhance their experience by connecting their own applications or by bringing their own data may do so by requesting a Proof of Concept (POC) directly through the portal (which is cool).

Visit this website page for screenshots and FAQs to learn more – or get started today at

Those are the highlights, my friends, but I’m only scratching the surface. Check out the links in this post and stay tuned for even more good stuff coming out soon – very, very soon. Progress never stops when you’re driving innovation!




Marketplace update details:

Teradata Database on Azure

  • Added support for Teradata IntelliSphere bundle subscription.
  • Added Enhanced IPE, also known as Adaptive Optimizer.
  • Replaced G5 local SSD VM with lower-cost and mostly equivalent L32S.
  • Added support for L8S for local SSD VM.
  • Added support for 30 TB with DS14v2 and DS15v2 for premium SSD VM.
  • Updated security group settings for Teradata Server Management.
  • Updated Premium Storage Deployment illustration.
  • Teradata Database on Azure release notes available here.

Teradata Database on AWS

  • Added support for Teradata Database Fold and Unfold.
  • Added support for Teradata IntelliSphere bundle subscription.
  • Added Enhanced IPE, also known as Adaptive Optimizer.
  • Added information on encryption.
  • Updated information for creating a system image.
  • Added how to configure Server Management to receive alerts.
  • Updated security group settings for Teradata Server Management.
  • Updated Database Deployment illustration.
  • Teradata Database on AWS release notes available here.

Brian Wood is director of cloud marketing at Teradata. He is a results-oriented technology marketing executive with 15+ years of digital, lead gen, sales / marketing operations & team leadership success. He has an MS in Engineering Management from Stanford University, a BS in Electrical Engineering from Cornell University, and served as an F-14 Radar Intercept Officer in the US Navy.

Simplicity out of Complexity: Announcing the Teradata Analytics Platform

October 23, 2017


What is the sense in transformative technology if only a select few can use it?

That has been a big conundrum for enterprises working with increasingly powerful analytics capabilities, from data science to machine learning and AI. As these technologies have matured, they have created a complex web of siloed data and analytics, some on premises, some in the cloud, some structured, some not. And that complex web of architecture and tools has also made it difficult for enterprises to scale analytics capabilities across the organization.

Yes, there are great open-source tools out there, like Python, R, Spark or TensorFlow, enabling analytics, machine learning and deep learning. And there are more enterprise-grade data and analytics tools on the market today than ever before. But using the latest technologies at scale, across a hybrid cloud environment, isn’t simple. It’s just not feasible for large enterprises — the businesses most likely to have a complex web of data — to continually knock on the door of their IT department or data scientists, asking them over and over again to custom-build analytics applications that integrate disparate data across disparate platforms. At Teradata, we realize that these siloed data sets — and disparate abilities within the enterprise to manage and interpret data — are at the crux of one of the biggest challenges enterprises have to overcome. That’s lost time no one can afford to waste. And it means that performing analytics and machine learning in all aspects of the business is a nonstarter.

This gap must be closed, and cutting-edge analytics capability must be simple and accessible across the organization. This is at the center of our Sentient Enterprise vision, which I codeveloped with Mohan Sawhney, a noted academic, author and management consultant. In our comprehensive book we outline how the “Sentient Enterprise” model gives businesses a guide to surviving the evolution of analytics and AI. With this announcement, we continue to drive toward the vision of the Sentient Enterprise through a series of Teradata technologies.

Last year, Teradata tackled the architecture problem, releasing Teradata Everywhere, the world’s most powerful analytics database, enabling massively parallel processing on multiple public clouds, managed clouds and on-premises environments. This gave companies a flexible data management layer that allowed them to focus on analytic applications.

Teradata Analytics Platform Delivers Superior Insights

Today, we’re announcing the next step — the Teradata Analytics Platform. With this platform, enterprises can keep their current analytics tools, write in the coding languages they prefer, and apply analytics to all their data quickly, regardless of location.

Our vision is clear. With the Teradata Analytics Platform, businesses can use the latest open-source technologies and their preferred analytics tools to perform both widespread and granular data analysis, from looking at customer purchasing trends down to the purchase path of a single individual. And this solution can be deployed anywhere, from the Teradata Cloud to public clouds, as well as on-premises. The Teradata Analytics Platform comprises the best analytic functions, the leading analytic engines, the industry’s preferred analytic tools and languages, and support for a wealth of data types. First, the Teradata Analytics Platform will integrate Teradata and Aster technology, allowing data scientists and analysts to execute a wide variety of advanced techniques to prepare and analyze data within a single workflow, at speed and at scale. In the near future, the Teradata Analytics Platform will include Spark and TensorFlow engines to provide quick and easy access to a full range of algorithms, including those for artificial intelligence and deep learning.

Teradata Analytics Platform further closes the loop on providing analytics power to everyone in a business through its integration with the Teradata AppCenter, which lets analysts share analytics applications and capabilities with their fellow employees through a web-based interface, enabling self-service access to all users within an enterprise.

As businesses and their data grow more complex, they have struggled to find an enterprise-grade solution that enables analytics with simplicity, agility and scale. This announcement fits into our vision of bringing about a shift in the market by breaking data analytics free of its location and coupling it with the latest open-source and proprietary technologies that empower analytics for everything.

For more, read the announcements from our PARTNERS 2017 conference.


Oliver Ratzesberger is an accomplished practitioner, thought leader and popular speaker on using data and analytics to create competitive advantage – known as his Sentient Enterprise vision.

Oliver Ratzesberger is executive vice president and chief product officer for Teradata, reporting to Vic Lund. Oliver leads Teradata’s world-class research and development organization and provides strategic direction for all research and development related to Teradata Database, integrated data warehousing, big data analytics, and associated solutions.

Just imagine: Analytics expertise on demand from Teradata

October 23, 2017


Imagine you have just been promoted and you are now responsible for delivering data and analytic solutions across your company. Further imagine these challenges as you step into this new job:

  • The business is frustrated that it takes so long to deliver solutions.
  • There is a proliferation of redundant and overlapping data resources, and the projects currently underway are only going to make this problem worse.
  • There is a lack of clarity on who is responsible for which aspects of data and analytics.
  • The latest analytical techniques are only applied in isolated pockets and are not exploited widely.
  • Attempts to deploy data coherently, as a shared enterprise resource, have struggled to deliver business results.
  • It’s difficult to find the skills and experience needed to develop and support the solutions.

Maybe you don’t have to imagine.

You want to address these problems, but how? There are so many questions:

  • How do you deliver solutions quickly while building to a coherent, shared resource at the same time? Is it possible?
  • How do you organize the teams and the work to deliver results efficiently, using the latest agile techniques? Do agile principles even apply to data and analytics?
  • How do you decide which technologies to use for which purpose? What works in real life?
  • What about data governance? How do you make sure the data is ready to meet business needs while preventing data governance from wandering aimlessly in Theory Land?
  • How do you accelerate development, leveraging work that has already been done by others?

Teradata’s Agile Analytics Factory service was designed to help you answer these questions and more. The Agile Analytics Factory is an end-to-end, ongoing partnership to plan, develop, deliver, and support enterprise data and analytics. The factory includes components built and refined over decades in partnership with top companies in every major industry. The components include:

  • Methods, such as roadmap development, agile solution development and sustainment, and business analytic processes.
  • Accelerators, including proven business use cases and industry data models.
  • Experience in all business and technical aspects of data and analytics.
  • Technology leveraging a combination of proprietary and open-source systems, working together to exploit the best capabilities of each.
  • Best practices, such as architecture and design guidance for common situations.
  • Relationships with third-party technology providers, including cooperative engineering optimization, to establish a comprehensive ecosystem.


Now imagine focusing your energy and creativity on supporting your company’s business strategy, armed with knowledge about analytic capabilities from basic, to advanced, to groundbreaking, without worrying about the mechanics of the program and every detail of the analytic ecosystem. The Teradata Agile Analytics Factory will provide everything you need to take in business needs — the raw material — and systematically transform them into insights, innovation and business outcomes.

Kevin Lewis is Consulting Director for Teradata’s Strategy and Governance practice, sharing best practices across all major industries. Kevin helps clients transform and modernize data and analytics programs, including organization, process, and architecture. The practice advocates strategies that deliver value quickly while simultaneously building a coherent ecosystem with each step.

Fast Track Business Outcomes from Artificial Intelligence with Proven Methods and Accelerators

October 23, 2017


Deep learning, chat bots, smart machines, natural language processing, neural networks — these buzzwords continue to create much excitement, hype and urgency among business leaders, as they deliberate on how and when to invest in artificial intelligence (AI) to transform the way their companies can leverage data to improve business outcomes and gain competitive advantage.  

Working with innovative, early adopters of enterprise AI, Teradata has helped clients realize significant and high-impact business outcomes, such as fraud detection, manufacturing performance optimization, risk modeling, precise recommendation engines and more.

Teradata’s clients are demonstrating that deep learning algorithms are significantly outperforming most rules-based and machine learning approaches, and, in other cases, deep learning is solving previously intractable problems. For example, working with Teradata:

  • Danske Bank saw a 50 percent increase in fraud detection while also achieving a 60 percent reduction in false positive rates, translating to financial outcomes that are highly significant and material to the business.
  • A mobile services provider is using deep learning and natural language processing techniques to apply 300-plus routine response types to manage customers’ common questions in two languages, automating routine queries at a much lower cost, so human agents can focus on complex requests and provide more personal customer attention.
  • A major shipping/logistics distributor now uses AI for image matching techniques that reduce costly lost package resolution time, saving the business $25 million a year.  
  • A government postal service organization now uses AI-driven image recognition and deep learning processes to improve the sorting of over 115 million parcels a year, resulting in valuable operational efficiencies that reduce time and radically lower cost.  

However, the reality for most large enterprises is that building AI solutions on their own that address their unique opportunities is not feasible. Challenges abound, from needing to establish an AI strategy that cohesively brings together processes and culture, to solving technical challenges such as working with a variety of open-source projects, specialized hardware and fragmented data sets, to operationalizing and supporting autonomous decisions. Furthermore, the shortage of AI talent simply makes it too difficult for one organization to pursue AI at an affordable, sustainable rate. Large enterprises need a partner who has done it before.

To help ensure successful AI engagements that enable the delivery of faster business results and lower implementation risk, Teradata introduces new services, including:

  • AI Strategy and Enablement Services — Teradata will review key enterprise capabilities and provide recommendations and next steps to successfully realize business value from AI.
  • AI Rapid Analytic Consulting Engagement — A service offering designed to help clients quickly demonstrate proof of value to gain buy-in from stakeholders.
  • AI Platform Build Services — A collaborative client engagement to build and deploy a deep learning platform while also integrating data sources, models and business processes. This includes implementing use cases through data science and engineering.
  • AI-as-a-Service — Teradata helps clients design and oversee mechanisms to optimize and improve existing business processes using AI. Teradata manages an iterative, stage-gate process for analytic models from development to hand over to operations.

Teradata is also introducing AI “accelerators” composed of best practices, code, IP and proven design patterns.

  • AnalyticOps Accelerator — Available now, this accelerator provides an end-to-end framework to facilitate the generation, validation, deployment and management of deep learning models at scale.
  • Financial Crimes Accelerator — Uses deep learning to detect patterns across retail banking products and channels such as credit card, debit card, online, branch banking, ATM, wire transfer and call centers. It continuously monitors and thwarts fraudulent schemes used by criminal actors to exploit the system, leading to quick time to value.

Upcoming AI events

Explore AI in more detail by virtually accessing or attending our informative sessions and panels at the Teradata PARTNERS Conference, from Oct. 22-26. Experts and practitioners will share specific use case scenarios, AI challenges and best practices, and how to identify or develop skills necessary to usher AI into your enterprise.

Also, join the O’Reilly webcast, “Bringing Artificial Intelligence to the Enterprise,” on Oct. 31, 2017, at 10:00 a.m. PST for a discussion of the current state of AI within large enterprises and a look at the trends that will shape their future architectures. You will learn:

  • How AI is currently driving business outcomes in a variety of industries.
  • The different options and considerations for enterprises to consume AI technology.
  • What engineering problems must be solved for AI to go mainstream within large enterprises.


Chad Meley is Vice President of Product Marketing at Teradata, responsible for Teradata’s Analytic Ecosystem, IoT and Artificial Intelligence solutions.

Chad understands trends in the analytics and big data space and leads a team of technology specialists who interpret the needs and expectations of customers while also working with Teradata Labs engineers, consulting teams and technology partners.  

Prior to joining Teradata, he led Electronic Arts’ Data Platform organization that supported game development, finance and marketing. Chad has held a variety of other leadership roles centered around data and analytics while at Dell and FedEx.  

Chad holds a B.A. in economics from the University of Texas, an MBA from Texas Tech University and performed postgraduate work at the University of Texas.  

His professional awards include Best Practice Award for Driving Business Results in Data Warehousing from The Data Warehouse Institute, Marketing Excellence Award from the Direct Marketing Association and Marketing Gold Award from Marketing Sherpa. He is a regular speaker at conferences, including Strata, DataWorks and Teradata Partners.  

Chad can be reached at or on Twitter at @chad_meley.

Teradata IntelliSphere — a unified software portfolio for a unified analytical ecosystem

October 23, 2017


Many things in life are simply better together — Laurel and Hardy, fish and chips and One Direction, for example. The idea that the whole is greater than the sum of its parts applies not only to food, boy bands and the best comedy duo of the 20th century — I’ll contend that it also applies to enterprise data integration software.  

The IT engineer or business analyst faces a plethora of choices of tools and technologies to help them ingest, access, deploy, and manage analytics and data in their environment. No one tool addresses all of the data integration use cases. What’s needed is a toolkit of capabilities that can be used together to develop solutions to drive value from large-scale, modern, complex analytical ecosystems. A toolkit is, of course, better than the sum of its parts — it provides ready access to a range of tools, enabling an agile approach to solving problems on the fly.

At Teradata, we recognize this need, so we’ve gathered together all of our software for orchestrating a modern analytical ecosystem into an easy-to-consume toolkit called Teradata IntelliSphere™. This software portfolio provides organizations with the foundational capabilities needed to support compelling integrated data solutions across diverse analytical ecosystems.

Many companies are adopting Teradata Everywhere™, which brings the Teradata Analytics Platform to multiple public clouds, managed cloud and on-premises environments. IntelliSphere complements Teradata Everywhere™, delivering the comprehensive set of capabilities required to ingest, access, deploy and manage a flexible analytical ecosystem.

With the IntelliSphere software suite, line of business users and IT organizations are able to integrate data across a broad ecosystem by taking advantage of four core capabilities:

  • Ingest: Using IntelliSphere, companies can easily capture and distribute high-volume data streams, with ready-to-run elastic architecture and quick access for business-critical analysis.
  • Access: Companies gain easy access to data stored anywhere, even in a hybrid cloud or heterogeneous technology environment.
  • Deploy: Deploy applications and analytic models for easy user access and enterprise collaboration.
  • Manage: Management software allows for ad hoc data movement, as well as ongoing monitoring and control via an operational interface.

The new offering simplifies the process of purchasing and licensing the existing ecosystem software products from Teradata. With IntelliSphere, customers are now able to purchase all of our existing analytical ecosystem products on a subscription basis under new licensing terms that allow them to be used in an unlimited fashion across all of the Teradata platforms they own.

The software bundle includes established ecosystem software such as:

  • Teradata Listener: For real time data streaming ingest into Teradata.
  • Teradata Data Mover: For high-performance data movement between Teradata platforms.
  • Teradata QueryGrid: For virtualizing queries across heterogeneous data management platforms such as Hadoop, Teradata Aster and other Teradata systems.
  • Teradata Unity: For providing high availability and data synchronization between Teradata database systems.
  • Teradata AppCenter: For deploying, executing and sharing analytical apps across an organization that accesses data in a broad range of Teradata and third-party data management solutions.
  • Teradata Hybrid Cloud Manager: For seamlessly transferring workloads from on-premises systems to public clouds, available by the end of 2017.
  • Teradata Data Lab: For creating exploratory data analysis sandboxes that can span hybrid cloud environments.
  • Teradata Data Stream Extension: For enabling backup and restore to and from a range of enterprise-grade, third-party backup solutions.
  • Teradata Multi-system Viewpoint: For managing and monitoring a heterogeneous set of data platforms.
  • Teradata Ecosystem Manager: For setting up workflows to manage key data transformation and loading tasks.
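
As a rough illustration of the Access capability, the sketch below submits a QueryGrid-style query that joins a local table to a table reached through a foreign server, using a generic ODBC connection from Python. The DSN, credentials, table names and the hadoop_cluster server name are hypothetical, and the remote-table syntax shown is indicative only; the QueryGrid documentation for your release is the authority on the exact syntax.

```python
import pyodbc

# Hypothetical DSN and credentials for a Teradata system with QueryGrid configured.
conn = pyodbc.connect("DSN=td_prod;UID=analyst;PWD=***")

# Join a local table to a remote table reached via a foreign server (names assumed).
sql = """
SELECT c.customer_id,
       COUNT(*) AS web_events
FROM   customer AS c
JOIN   weblog@hadoop_cluster AS w
  ON   c.customer_id = w.customer_id
GROUP BY c.customer_id
"""

for customer_id, web_events in conn.cursor().execute(sql):
    print(customer_id, web_events)
```

The point is less the syntax than the pattern: one SQL statement, answered across heterogeneous systems, without staging the remote data first.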

The software portfolio is offered via a one-year or three-year subscription, and is also available on an on-demand basis on the public cloud. Subscribers are automatically entitled to download and deploy new capabilities as they are added to IntelliSphere.

I believe the most successful vendors in the data management space must offer broad, cross-platform and scalable data integration solutions that empower companies to break down the data silos that exist within their IT environments. Organizations that are able to eliminate these data silos are solving problems faster, gaining new flexibility and realizing big benefits. These companies come to see an integrated analytical ecosystem as invaluable to their data management and analytics strategies and are investigating additional use cases around data exploration, machine learning and prescriptive analytics.

Our customers care more about business outcomes and less about the technologies that enable them. Teradata’s goal is to support the need for data-driven insights by providing seamless, timely and cost-effective access to analytics regardless of where the data resides. We’re able to help our customers achieve this by enabling new analytic solutions based on the broad ecosystem integration capabilities of Teradata IntelliSphere. 

All in all, I think our customers and IntelliSphere are better together.

Mark Cusack is VP Analytical Ecosystem at Teradata, and has product management responsibility for IntelliSphere. Mark joined Teradata almost three years ago as part of the acquisition of RainStor. Since joining Teradata he has held the roles of Chief Architect for Teradata-RainStor and Chief Architect for the IoT Analytics team. Mark is based in San Diego.

Advanced analytics for a new era

October 23, 2017


Analytics is serious business. I don’t care what buzzwords are being used in today’s stylized reinterpretation of the data business we are in; analytics has indeed come a long way, both in how it is done and in the growing number of companies doing it to stay competitive and grow. The thing is, in an odd sort of way, analytics is often done in tentative spurts of frenzy, and a sustained analytics practice that fully benefits the business is rarely the norm in companies today.

Challenges that impede a sane analytics practice

So, let’s first answer the question of what challenges companies face in creating this competitively differentiating analytics practice on a sustained basis. But before I get into this, a quick note: in the spirit of shameless confessionals, my blog posts are typically prolix, long-winded affairs. In the interest of judiciously using your spare time I will be brief; if and when you have the desire to engage in a longer, more relaxed conversation, reach out to me. Now, for the challenges:

First, businesses today generate vast gobs of data of all types, shapes and sizes, arriving at different times and at various speeds. Consequently, an analytics solution needs to be a repository for all of this diverse data. One cannot maintain bespoke storage solutions for each data type and confront an array of infrastructure that requires all kinds of physical and mental gymnastics just to get the data in one place.

Second, the diversity of data reflects the diversity of the business. For example, when customer data is collected through CRM systems, web servers, call center notes, and images, it means that customers engage with the business in the store, through online portals, via the call center, and on social media. To understand this multi-channel experience we need to analyze these diverse data with multiple techniques. Unfortunately, to do that, businesses have been forced to buy multiple analytic solutions, each with a unique set of algorithmic implementations and none easily interacting with the others.

Third, let’s assume that the ability and purpose of doing advanced analytics are there and well established. Now comes the challenge of purchasing a solution that fits neatly within the budgetary constraints of the business. More than once I can recall a customer expressing an inability to transact business with a vendor, not because they lacked a desperate need for that vendor’s solution, but because the purchase would lock them into terms that restrict their flexibility in deploying it. For example, a customer may first want to kick the tires by purchasing a limited-time cloud subscription before committing more resources. This is only a fair ask. Once they are successful with a minimal-risk purchase, they can up the ante by buying something more substantial, depending on their needs. Analytic solutions that cannot fulfill this primal customer need will fast recede as a prickly memory of the past, regardless of how good or versatile they may be.

Teradata Analytics Platform

Now that I have outlined the challenges, how about providing a rational fix for them? Fortunately, and not too coincidentally, this is a pleasant enough occasion for me to introduce the Teradata Analytics Platform. At a high level, the Teradata Analytics Platform is an advanced analytics solution, available in multiple deployment form factors, that ingests diverse data types at scale and lets native analytic engines apply multi-genre analytic functions to deliver and operationalize critical business insights. There are six core capabilities that are likely to provide a unique and significant competitive edge to customers of this solution. They are:

  • Evolution of the Teradata Database to include multi-data-type storage capabilities (e.g., time series, text) and pre-built analytics functions (e.g., sentiment extraction, time series modeling).
  • The Aster Analytics engine with over 180 pre-built advanced analytics functions that span multiple genres such as Graph, Statistics, Text and Sentiment, Machine and Deep Learning, and Data Preparation (see the illustrative function call after this list).
  • A customer-friendly set of deployment and pricing options (in-house, hybrid cloud, managed cloud, term and perpetual licensing) that ensures flexibility in accommodating changing customer preferences without impacting current investments.
  • A data-science-friendly analytic environment that includes a variety of development tools and languages (e.g., KNIME, R, Dataiku, Python).
  • An active and healthy integration with open-source analytic engines (e.g., Spark, TensorFlow) and storage solutions (e.g., Hadoop) that aims to provide a customized solution dovetailing with a customer’s current investments and desired ecosystem mix.
  • A highly performant solution where the insights delivery and operationalization are tightly coupled in the same environment without having to artificially separate them.
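
To give a feel for what multi-genre analytics in SQL looks like in practice, here is a small sketch that invokes a pre-built sentiment function in the SQL-MR style used by the Aster Analytics engine, again over a generic ODBC connection. The DSN, table and column names, and the exact function name and parameters are assumptions for illustration; consult the Aster Analytics function reference for the precise signatures in your release.

```python
import pyodbc

# Hypothetical DSN and credentials for a Teradata Analytics Platform system.
conn = pyodbc.connect("DSN=analytics_platform;UID=ds_user;PWD=***")

# SQL-MR style invocation of a pre-built sentiment function; the function name,
# argument names and output columns are assumptions for illustration.
sql = """
SELECT *
FROM   ExtractSentiment(
           ON product_reviews PARTITION BY ANY
           TEXT_COLUMN('review_text')
           ACCUMULATE('review_id')
       )
"""

for row in conn.cursor().execute(sql):
    print(tuple(row))
```

The same pattern, calling a pre-built function directly where the data lives, applies equally to the graph, time series and machine learning genres listed above.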

Addressing the Analytics Challenges

So, now that you know what the Teradata Analytics Platform is, it behooves me to close the loop and discuss briefly how it fixes the challenges I outlined earlier. Fortunately for me, this is an easy and delightful exercise. For one thing, the features above clearly speak to the solution’s capability to ingest and process data of all types (our first challenge). The fact that it comes with a mind-boggling array of analytic capabilities, not to mention the ability to leverage open-source analytic engines such as TensorFlow, clearly indicates that data scientists and other analytics professionals have the ease and flexibility to choose from a colorful palette of techniques to do their work effectively. And what’s more, they can do their work using the development tools and languages they are most comfortable with (our second challenge). Finally, given that the Teradata Analytics Platform was conceived with a “customer first” mentality – a hallmark of the Teradata way of doing business – it is available for deployment in ways that suit the customer’s unique business needs. Customers who prefer to analyze their data on the public cloud will have the option of buying a subscription to this solution. Alternatively, those who prefer an in-house implementation can have their choice fulfilled as well. Customers who choose one deployment option to begin with and later decide to change to something else won’t have to worry about a repriced solution, as the pricing unit (TCore) is the same across all deployments (our third challenge).

Teradata Analytics Platform, the smart choice

Clearly, and honestly, my conclusion is not likely to culminate in a dramatic denouement. Be that as it may, the logical choice is to opt for the Teradata Analytics Platform, which puts the power of analytics in the hands of the customer and delivers a unique purchasing experience that is quite revolutionary in the market.

Sri Raghavan is a Senior Global Product Marketing Manager at Teradata working in the big data area, with responsibility for the Aster Analytics solution and all ecosystem partner integrations with Aster. Sri has more than 20 years of experience in advanced analytics and has held various senior data science and analytics roles in Investment Banking, Finance, Healthcare and Pharmaceutical, Government, and Application Performance Management (APM) practices. He has two Master’s degrees, in Quantitative Economics and International Relations, from Temple University, PA, and completed his doctoral coursework in Business at the University of Wisconsin-Madison. Sri is passionate about reading fiction and playing music and often likes to infuse his professional work with references to classic rock lyrics, AC/DC excluded.