Monthly Archives: September 2017

The age of automation. Where does creativity fit in?

September 28, 2017


In an increasingly data-driven world, we see the automation of jobs in almost every industry. This is a daunting prospect for some, but evidence shows that rather than taking over our jobs, new technology is creating new opportunities: Smart humans are working alongside smart machines in collaboration. But where does this leave creativity? Is it possible that machines will replace human imagination? How do data science, artificial intelligence, and machine learning fit in with the world of art? These are key questions to investigate as we adapt our company cultures to align with digitalisation and data-driven economies. To do so, let’s explore examples of how creative industries are utilising data and analytics to deliver business outcomes:

The music industry

The first pop song to be created using artificial intelligence (AI) was produced with Flow Machines, software developed by Sony CSL. Working towards the release of its first entirely AI-composed album, Sony’s technology analyses a database of lead sheets from a range of genres and generates melodies from the patterns it finds. It doesn’t take long when listening to the AI-produced track, “Daddy’s Car”, to identify hints of machine learning, and there is still plenty of room for improvement in terms of producing ‘human’ music. That being said, machines can learn extremely quickly, and development is continuous. The augmentative element in this case is clear: A human composer is needed to turn the melodies into songs.
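Sony has not published the internals of Flow Machines, but the underlying idea, learning melodic patterns from a corpus and then generating new sequences, can be illustrated with a toy Markov-chain sketch in Python (purely illustrative and nothing like Sony’s actual system; the note corpus below is invented):

```python
import random
from collections import defaultdict

# Toy corpus of melodies, each a list of note names (invented for illustration).
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["G", "E", "D", "C", "D", "E", "C"],
]

# Learn first-order transitions: which notes tend to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Generate a new melody by sampling the learned transitions."""
    melody = [start]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:
            break
        melody.append(random.choice(candidates))
    return melody

print(generate())  # e.g. ['C', 'E', 'G', 'E', 'D', 'C', 'D', 'E']
```

A real system works with far richer representations (harmony, rhythm, style), which is exactly why a human composer is still needed to turn the output into a song.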

The television industry

Can you imagine being able to predict the perfect television series? Well, Netflix did just that. Its recent series are important success stories for data and analytics. For example, “Orange is the New Black” and “House of Cards” were commissioned after Netflix used data and analytics to find the ideal combination of elements for each to be successful. In these cases, the right choice of actors and director and the best possible combination of genre elements gave Netflix the confidence to make a $100 million investment. This emphasises how machine learning can take the extra leap, handling a level of complexity that humans cannot: By using advanced data science techniques, Netflix could identify over 76,000 genre types to describe user tastes, a process that would have been extremely lengthy, if not impossible, when carried out by humans alone.

The marketing industry

A creative function by tradition, marketing is ultimately about storytelling. More and more, marketers are leveraging data to drive outcomes and to understand customer decision-making more effectively. Businesses can combine the digital footprint customers leave behind online with analytics to learn more deeply about the behaviours and future intentions of those customers. Global bank HSBC used behavioural science to help customers reach their financial goals, developing a ‘nudge’ app whose automated messaging saved customers £800,000 between Christmas and New Year’s alone. At the end of the day, data makes us better at decision-making and helps us make sense of an increasingly complex world of customer interactions and transactions. It’s critical that marketers do this to remain competitive. However, it’s important to remember that human creativity is not to be disregarded: It sits alongside automation.

The transformation of customer data

Data gets more complex as technology advances, capturing not only customers’ transactions, but also their interactions and observations. As analysts start to delve into the data, they no longer extract only tens of variables from data, but potentially tens of thousands of variables in an attempt to understand customer behaviour at any given time. This data enables marketers to plan and form meaningful interactions.

It is essential that marketers follow the Google strategy of ‘be there, be useful, be quick’ to capture moments of true value. To see how organisations are combining analytics with automated, real-time decisioning, look to the advertising agencies building billboards that run visual analytics over video footage and deliver customised advertisements based on the analytically identified make and model of the car driving past.
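A minimal sketch of that decisioning loop might look like the following, where classify_vehicle() is a hypothetical stand-in for a trained computer-vision model and the ad catalogue is invented:

```python
# Hypothetical real-time decisioning loop for a smart billboard.

AD_CATALOGUE = {
    ("BMW", "3 Series"): "luxury_upgrade_ad",
    ("Ford", "Fiesta"): "family_hatchback_ad",
}
DEFAULT_AD = "brand_awareness_ad"

def classify_vehicle(frame):
    """Stand-in for a trained vision model returning a (make, model) tuple."""
    return ("BMW", "3 Series")  # placeholder result for illustration

def choose_advert(frame):
    """Pick the advert to display for the vehicle detected in this frame."""
    make_model = classify_vehicle(frame)
    return AD_CATALOGUE.get(make_model, DEFAULT_AD)

print(choose_advert(frame=None))  # -> "luxury_upgrade_ad"
```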

Having the right content in real time is not enough: Location is also essential in understanding customer decision-making for future marketing teams. Studies reveal that geo-targeted mobile offers, based on customer environment and proximity to retailers, can deliver conversion rates twice as effective. If delivered to a commuter on a particularly busy route, conversion rates are higher still.
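The proximity trigger behind such offers is simple to sketch; one common approach is a haversine distance check against a retailer geofence (the coordinates and radius below are made up):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

STORE = (51.5145, -0.1444)   # hypothetical retailer location
GEOFENCE_KM = 0.5            # trigger offers within 500 metres

def should_send_offer(customer_lat, customer_lon):
    """Return True if the customer is inside the retailer's geofence."""
    return haversine_km(customer_lat, customer_lon, *STORE) <= GEOFENCE_KM

print(should_send_offer(51.5150, -0.1440))  # True: the customer is nearby
```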

Businesses require deep insight and need analysis of data to be delivered in real time to create moments of real value that improve a customer journey. To achieve this, organisations are coming to realise that they must adopt automation. If they fail to, marketers will simply not be able to make the massive number of decisions necessary in the marketing industry today.

Art versus science?

Creative industries, by adopting the right combination of art and science, can get to know their customers better and remain competitive: This is done through optimising creative skillsets belonging to humans, together with the use of data and analytics, to drive outcomes.

The face of marketing in a digitalised world is constantly transforming as machine technology advances. As demonstrated by the examples in this blog post, the creative industry is using analytics and machine learning to develop existing products and services, as well as create brand new ones. Humans will still drive narrative and innovation: Data science will support heavily, providing intelligent automation at scale.


Yasmeen Ahmad – Director of CEUKIR, Think Big Analytics, a Teradata company

Yasmeen is a strategic business leader in the area of data and analytics consulting, named as one of the top 50 leaders and influencers for driving commercial value from data in 2017 by Information Age.

Leading the Business Analytic Consulting Practice at Teradata, Yasmeen is focused on working with global clients across industries to determine how data driven decisioning can be embedded into strategic initiatives. This includes helping organisations create actionable insights to drive business outcomes that lead to benefits valued in the multi-millions.

Yasmeen is responsible for leading more than 60 consultants across Central Europe, UK&I and Russia in delivering analytic services and solutions for competitive advantage through the use of new or untapped sources of data, alongside advanced analytical and data science techniques.

Yasmeen also holds a PhD in Data Management, Mining and Visualization, carried out at the Wellcome Trust Centre for Gene Regulation & Expression. Her work is published in several international journals and was recognised by the Sir Tim Hunt Prize for Cell Biology. Yasmeen has written regularly for Forbes and is a speaker at international conferences and events.

That (Amster)damn utilities data…

September 27, 2017


European Utility Week returns to Amsterdam in early October – a great place for Teradata, given the local affection for the colour orange and how well it fits our branding!

As we prepare to take the stage, and present our vision of asset and customer data in Amsterdam once more, I have been reflecting on where the industry is with data today. Just four years ago I began my mission at Teradata: taking data to the heart of the utilities agenda.

As with all technology-based phenomena, you can loosely map the journey of utilities and data against the standard hype cycle. In Amsterdam four years ago, there was hype aplenty – the stands were plastered with talk of ‘big data’ in particular. Unfortunately, with the exception of ourselves and a handful of others, behind much of this branding lay hollow PowerPoint slideware, or poor quality, single point solutions.

Whilst many have monetised their data in the intervening years, working with those of us who can deliver on that promise, the trough of disillusionment claimed many others. That trough peaked (or whatever the opposite would be… maybe bottomed out?) around mid 2016. By the time we got to European Utility Week in Barcelona last year, conversations around data were scarce. Yet those conversations were much better informed and targeted – a sure sign of a maturing market.

Quantum leap

The other quantum leap I observed during 2016 was how data was being placed at the heart of key programmes, front and centre in the modern utility. This directly contrasted with the standalone project strategies of the past.

Digitization, the use of sensor data, the Internet of Things, industry innovation; these are all driven by data today, and the link is well understood as a result of this shift. Just look at how more mature customers such as Enedis are using data in these areas. Use of data and analytics is no longer a side-show capability – this is becoming part of something much greater in the modern utility.

What to look out for in Amsterdam

Which brings us to today, and more importantly the 2017 event in Amsterdam. Aside from taking the time to speak to us about the power of integrated data and analytics at scale, what else should you look out for at the event?

  • The number of times data and analytics are mentioned and/or clearly relevant in everything you see and discuss at the event, even when the words “data” and “analytics” are not used specifically. Data and analytics now underpin all areas of transformation in the modern utility. It’s all about data.
  • We are not all the same. Teradata and others that work with data co-exist within bespoke analytical ecosystems to enable utilities to drive revenues and reduce cost. Anyone you speak to in Amsterdam should be able to clearly articulate this and the role they play.
  • Some things never change! As I said four years ago, speak to those of us who have already delivered tangible benefits with data in utilities and other industries. We are already highly mature in the deployment of data integration and analytics at scale.

So, all aboard for Amsterdam… and for those of you that attended four years ago, I promise I have upgraded from that slightly dodgy, pin-stripe brown suit! Come listen to and speak with us. Together we can keep up the momentum. Together we can do great things in utilities with data. See you in Amsterdam!


Iain Stewart – International Practice Partner, Utilities and Smart Cities at Think Big Analytics, A Teradata Company

Iain provides a bridge for utilities companies – helping them understand the art of the possible in data and data analytics, and how that can deliver value to their business. He lives in Edinburgh with his wife and daughter and is an avid supporter of Edinburgh Rugby, The Dogs Trust charity, good food, wine and whisky.

Open Source AI is in the Same Place Big Data Was 10 Years Ago

September 26, 2017

Pop quiz: Who invented the Hoverboard?

Of course the original concept was first documented in the “Back to the Future” movie trilogy, but as for the wheeled device that popped up all over malls and city streets in 2015, the answer is fairly complicated. In fact, despite being the No. 1 selling toy in that year’s holiday season, it’s difficult to name even a single company that manufactured the product. And there’s a reason why: open-source development.

The toy was in fact invented in Shenzhen, China’s engineering and manufacturing hub, which accounts for nearly 1 million jobs and a disproportionate amount of China’s GDP. After work, many of the city’s engineers post their ideas on message boards, sketching out for each other new concepts they’re working on. They work together to improve the product, then go back to their respective companies and each manufacture it. At the product’s peak, there were more than 1,200 hoverboard suppliers in Shenzhen.

While this level of collaboration can certainly be profitable in the short term, it also presents challenges. A similar scenario faces many businesses looking to get into AI and deep learning. As the industry currently stands, there are two options. There are limited off-the-shelf products for companies seeking to buy and integrate deep learning models and applications for their business — IBM’s Watson being the most recognizable. The alternative is open-source technologies such as Google’s TensorFlow and Facebook’s PyTorch. The closed-source option can be expensive and complicated, while the open-source options, offering developers a rich, collaborative online network and tools to flesh out their deep-learning models, lack enterprise support. Neither Google nor Facebook is in the business of providing enterprise support for TensorFlow or PyTorch.
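To give a sense of why these frameworks are so attractive to developers in the first place, here is a minimal TensorFlow/Keras sketch of a small classifier trained on synthetic data (illustrative only; real enterprise models and data pipelines are far more involved):

```python
import numpy as np
import tensorflow as tf

# Synthetic data standing in for real enterprise features and labels.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

# A small feed-forward network: this is the part open source makes easy.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Serving, monitoring and SLAs are not covered by the framework itself;
# that gap is where enterprise support (or a vendor partner) comes in.
```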

When all those hoverboards started having battery issues and catching on fire, who was there to call? No one. For deep learning, the results of lack of support aren’t quite so dangerous, but for enterprises, the analogy holds. With the exception of Google Cloud customers, any enterprise using TensorFlow has to do what the rest of the product’s users do: post to the message board and hope somebody answers — no support number to call and no one to come onsite and work collaboratively with your own data engineers and scientists to solve a problem or build an application. This is not a feasible solution for companies that need to quickly address issues at production scale.

There is a third option: the homegrown approach, where companies hire people with the skills to write the very complex code deep learning requires in-house. But the homegrown option has three distinct problems. One, there is a limited pool of people with these skills, and many are snatched up for high six-figure salaries by companies like Google itself. Two, it’s hard to get everyday enterprise developers, who mostly work in Java, trained to work proficiently in newer programming languages like Python. And three, as soon as someone gets great at building deep learning models and AI applications using Python and other contemporary tools, they become very marketable and may move to another company — the brain drain can be very difficult to overcome.

So, how do companies fill the gap between the lack of support for open source and the difficulty of building in-house? The best option is to leverage the power of vendor relationships. This model has proven itself with big data, where the open-source framework Hadoop has been adopted nearly universally in some capacity. Companies like Cloudera, Teradata and Hortonworks have worked to engineer new tools on top of the framework, so its users can focus on their market expertise and leave the support and SLAs to those vendors. As a result, big data has gone gangbusters at the corporate level, in spite of the shortage of data scientists in the workforce.

For companies seeking to gain a competitive advantage in the field of deep learning, it’s time to take the same approach. By seeking out vendors that can work closely with internal teams to spin up deep-learning projects, companies can avoid waiting months on message boards or spending money on very expensive hires for answers to their enterprise-level AI questions.


Atif Kureishy – VP, Global Emerging Practices | AI & Deep Learning at Think Big Analytics, a business outcome-led global analytics consultancy.

Based in San Diego, Atif specializes in enabling clients across all major industry verticals, through strategic partnerships to deliver complex analytical solutions built on machine and deep learning. His teams are trusted advisors to the world’s most innovative companies to develop next-generation capabilities for strategic data-driven outcomes in areas of artificial intelligence, deep learning & data science.

Atif has more than 18 years in strategic and technology consulting working with senior executive clients. During this time, he has both written extensively and advised organizations on numerous topics, ranging from improving the digital customer experience and multi-national data analytics programs for smarter cities to cyber network defense for critical infrastructure protection, financial crime analytics for tracking illicit funds flow, and the use of smart data to drive operational efficiencies in energy and natural resources.

Is failure good for your data scientists?

September 25, 2017


If you’ve heard of data science (if you haven’t, where have you been and how did you find this blog?), you’ve probably heard of “fail fast”. The fail fast mentality is based on the notion that if an activity isn’t going to work, you should find out as quickly as possible, and stop doing it.

As the size, complexity and number of new data sources continues to increase, there is a corresponding increase in the value of discovery analytics. Discovery analytics is the method by which we uncover patterns in data and develop new use cases that lead to business value.

It is easy to see how discovery activities lead to a fail fast method. However, how can we learn from these failures, and how can we proceed without experiencing the same failures time and again?

Good failure, bad failure

There are two different types of failure possible in a data science project: good failures and bad failures. Good failures are a necessary part of the discovery process, and an important step in finding value in data. Bad failures, on the other hand, are those that could have been avoided, and are basically a waste of everybody’s time. Examples of the causes of bad failure include:

  • Poor specification – this is not specific to data science and applies to any project that isn’t specified properly in terms of expected results and appropriate timelines.
  • Inappropriate projects for a data science methodology – it has become increasingly common to call all analytics data science. If a project can be solved using a standard data warehouse and business intelligence method, then you should probably just do that.
  • Poor expectation management – many data science projects suffer from this. It is important to ensure stakeholders are aware what can and cannot be expected from the results.
  • Data supply – a vital first step in any analytics project is to ensure that the necessary data feeds are available and accessible.

Let’s talk about publication bias. This phenomenon occurs in the publication of scientific papers, where it is usual to only publish studies that produce positive results. What is far less common is to publish a paper that highlights the amount of work you did in order to fail to produce anything of any worth! The problem is that this leads to teams making the same mistakes, or proceeding down the same creative cul-de-sacs as so many before them. Because of publication bias, we do not learn from each other’s mistakes.

Exactly that situation can occur in a data science team. Unless a true collaborative environment exists for discovery and predictive model development, the same failures will be made over and over again by different members of the team.

Move out of the cul-de-sac

In order to benefit from the fail fast approach, data science teams need to adopt a best practice method of sharing results, methodologies and discovery work – especially when their work is considered a failure. This can be done in many ways, but some of the more effective include regular discussion – similar to agile methodology’s stand-up meetings – and using appropriate software to aid the process.

Software tools exist to facilitate collaboration, issue tracking, continuous documentation, source control and versioning of programme code, as well as task tracking. These tools create a lineage of activities that is permanent and searchable.
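Even something as lightweight as appending a structured record for every experiment, failures included, gives a team that searchable lineage. A minimal sketch in Python (the fields and file name are just one possible convention, not any specific tool’s format):

```python
import json
from datetime import datetime, timezone

LOG_FILE = "experiment_log.jsonl"  # one JSON record per line, easy to search

def log_experiment(name, hypothesis, features, outcome, notes=""):
    """Append a record of a discovery experiment, including the failures."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "name": name,
        "hypothesis": hypothesis,
        "features": features,
        "outcome": outcome,   # e.g. "good failure", "bad failure", "promising"
        "notes": notes,
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

log_experiment(
    name="churn_v3_weather_features",
    hypothesis="Local weather data improves churn prediction",
    features=["rainfall_7d", "temp_delta"],
    outcome="good failure",
    notes="No lift over baseline; data feed was fine, idea exhausted.",
)
```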

If you want to hear more on this subject, why not come to see my presentation ‘My data scientists are failures’ at the Teradata PARTNERS conference in Anaheim this October.

Find out more about the PARTNERS conference.


Christopher Hillman is a Principal Data Scientist in the International Advanced Analytics team at Teradata, based in London. He has over 20 years’ experience working with analytics across many industries including Retail, Finance, Telecoms and Manufacturing. Chris is involved in the pre-sale and start-up activities of analytics projects, helping customers to gain value from and understand Advanced Analytics and Machine Learning. He has spoken on data science and analytics at Teradata events such as Universe and Partners, as well as industry events such as Strata, Hadoop World, Flink Forward and the IEEE Big Data conference. Chris is also studying part-time for a PhD in Data Science at the University of Dundee, applying big data analytics to the data produced from experimentation into the human proteome.

Teradata Database 16.10 Now on Azure and AWS Marketplaces

September 25, 2017

Good news! We’ve just published important updates for both Azure and AWS Marketplaces.

This is the first public cloud update in which the Teradata team has aligned solution launch conventions across both Azure and AWS for simplicity.

There are many feature updates. Highlights pertaining to both Azure and AWS Marketplaces:

  • Added support for Teradata Database 16.10; Teradata Database 15.10 continues to be supported. Note that only the Sparse Maps portion of the MAPS or Multiple Hash Maps feature is currently available in the public cloud.
  • Added support for Teradata QueryGrid 2.n, replacing Teradata QueryGrid 1.n. QueryGrid Manager is now available with zero software cost. QueryGrid connectors must be ordered separately.
  • Added manual resizing capability (i.e., scale up/down) for a Teradata Database node.
  • Added support for Teradata Server Management in the Developer Tier.
  • Added the ability to enable Teradata Intelligent Memory (TIM) when deploying a Teradata ecosystem for the Advanced and Enterprise tiers.

——

Azure Marketplace-specific updates:

  • Increased Azure node limit support to 64 Nodes (with 33-64 nodes under controlled deployment).
  • Changed the 5TB storage configurations from 5 x 1023 GiB to 10 x 512 GiB for DS14_v2 and DS15_v2 VM sizes with premium storage.


See Teradata product listings on Azure Marketplace

See the latest information about Teradata software on Azure Marketplace

See the latest Teradata Database on Azure Getting Started Guide

——

AWS Marketplace-specific updates:

  • Added ability to enable Teradata Intelligent Memory when launching a Teradata ecosystem or launching components separately for the Advanced and Enterprise tiers.
  • Changed the port for PUT from 8080 to 8443.
  • Removed the ability to create a new VPC when launching a Teradata Ecosystem.
  • Added ability to enter an existing placement group or configure separate placement groups for Teradata Database, Teradata Data Stream Controller, and Teradata Data Mover when launching a Teradata ecosystem.
  • Added support for using Teradata Access Module for AWS to export data from and import data to S3 storage.
  • Added Server Management to the Test/Dev Ecosystem CloudFormation Template.


See Teradata product listings on AWS Marketplace

See the latest information about Teradata software on AWS Marketplace

See the latest Teradata Database on AWS Getting Started Guide

——

The updated Data Stream Utility (DSU) capabilities on AWS enable some nifty backup and disaster recovery (DR) scenarios, including S3 with multiple buckets across multiple regions. DSU can now restore a save set from an S3 region or bucket other than the one used for the original backup.

Here’s the scenario: users may now configure multiple S3 buckets across more than one AWS region via the command line or the BAR portlet. This lets users set up a DR arrangement with geographic separation between systems, such as:

  • Base system backs up to AWS S3 in region X
  • AWS S3 can automatically replicate stored data between regions X and Y
  • Secondary system in second AWS region Y can have data loaded
  • In the event of a regional disaster, the secondary system can be brought online and customer can resume operations

In other words, this feature enables a user to:
1) perform a backup to S3 in one region,
2) allow Amazon to replicate it to another S3 region automatically, and then
3) restore the save set to a different Teradata system.

Pretty cool!
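The cross-region copy in step 2 is handled by AWS itself rather than by DSU or the BAR portlet. As a rough sketch of what enabling that replication can look like with boto3 (the bucket names, ARNs and IAM role below are placeholders):

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-backup-bucket-region-x"                       # placeholder
DEST_BUCKET_ARN = "arn:aws:s3:::my-backup-bucket-region-y"        # placeholder
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-replication-role"   # placeholder

# Cross-region replication requires versioning on the source bucket
# (and on the destination bucket in region Y as well).
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET, VersioningConfiguration={"Status": "Enabled"}
)

# Replicate every new object written to the source bucket into region Y.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "dr-replication",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": DEST_BUCKET_ARN},
        }],
    },
)
```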

——

Read more about Teradata software on the Azure and AWS Marketplaces using the links above.

The 9 steps every business analyst should take

September 21, 2017


Some time ago, Mikael Bisgaard-Bohr, vice president of business development at Teradata International, talked about working in “the new world of data and analytics.” In a world where the biggest companies are transforming their business using data, a new position has arisen: the business analyst.

What exactly is a business analyst?

The International Institute of Business Analysis (IIBA) defines business analysis as:

“The practice of enabling change in an organizational context, by defining needs and recommending solutions that deliver value to stakeholders”

Business analysts (BAs) certainly enable change. A good BA will help define not only the business requirements, but also the level of support needed to realise the necessary solution.

However, a business analyst’s key attribute is communication. Business analysts listen to the business to understand which areas it wants to improve and where it needs support. A business analyst can then distil complex requirements into a brief for the technical team to design or build a solution.

BAs bring together both halves of the story, explaining a business problem to a technical team. Equally, they can lay out a technical solution in business terms for a nontechnical audience.

The first rule for a business analyst

Communication is everything. For a BA to succeed, verbal and written communication skills have to be extremely strong. A BA could explain a fundamental system change to the user community over email in the morning, liaise with the technical team on new business requirements via phone conference at lunch and present new functionality in person to the board of directors by the afternoon.

Regardless of the medium, a BA has to communicate clearly, in a concise and easily digestible way.

What does a business analyst do?

A BA manages requirements, but what does that mean? Regardless of the development methodology of choice — waterfall, Agile, RAD, or any other method you care to mention — a BA should shepherd a business through some version of the following nine steps:

  1. Understand the business problem or opportunity.
  2. Gather information: What is the status quo, and what needs to change?
  3. Present findings on the current and envisioned state of the organisation or system.
  4. Translate information into manageable chunks or requirements.
  5. Gain consensus on how to move towards the desired goal.
  6. Prioritise requirements based on the business benefit they will deliver and the effort needed to deliver them.
  7. Support a development team with knowledge about the business and its processes.
  8. Act as ‘subject zero’ for a variety of assessment phases, testing whether what is being created delivers on what the business wants.
  9. Assist with training and knowledge transfer, ensuring the business is left with a strong understanding of how any new functionality works.

If a BA delivers on all nine of those points, then it’s job done — happy customers.

Ultimately, the role of the BA is to be the custodian of information. They communicate information to the appropriate parties, in the correct format, and support the transformation of systems and processes using the information they gather.

In future posts, we’ll take a deeper look at each stage of the nine-step lifecycle of a business analyst, delve into a few detailed examples, and draw out some handy tips to help you become an exceptional BA.

As Clive Humby once said, “Data is the new oil”. The business analyst will teach businesses how to drill.



Niall Adamson is a digital business analyst at Think Big Analytics, a Teradata Company. He thrives in the consultative space between technical delivery and business need. With a skill set honed at a global consultancy, he has experience working on a number of large engagements, offering business analysis, stakeholder management and business change expertise.

Within data and analytics, the “If you build it, they will come” mentality is finally dead

September 20, 2017


How to align data and analytics to achieve business outcomes

Over the past few months I’ve been engaged across industries, helping organisations understand what their businesses are trying to achieve from big data and analytics. I’m glad to report that the focus is no longer on the technology first; at last, belief in the old adage “if you build it, they will come” (pinched from the 1989 classic film Field of Dreams) has all but disappeared.

The truth is that any data and analytics project must begin by identifying analytic use cases before drilling down to the key business questions that the analytics could answer.

“Any data and analytics project must begin by identifying analytic use cases”

I recently worked with a government customer to lead a series of workshops identifying what the organisation expected to achieve from analytics before creating an environment or loading any data whatsoever. On the other side of the coin, when I engaged with one of our retail customers, they had invested resources in loading their data onto a big data platform in addition to their data warehouse, and as a consequence were struggling to understand the full potential of that data.

Having to work in the opposite direction (much like the retailer I just mentioned) is a viable alternative, and can often be the right thing to do when you’re looking to get more from the systems you already have available. When it comes to a new system or technology however, beginning the process by looking at use cases is definitely the ideal.

To achieve real benefits you have to map analytic use cases based on the available data, along similar lines to the following diagram:

[Diagram: analytic use cases mapped against the data sources available to support them]

The mapping was achieved with Teradata’s Retail Business Value Framework, a collection of hundreds of proven analytic use cases developed from engagements around the world.

Even from one simple mapping exercise, you can clearly see where additional analysis could be conducted, and where additional data sourcing projects should be planned for the future.

I am now working with a banking customer to take one specific use case (drivers of customer experience), break it down and refine it in order to identify the key events required to support customer journey analysis (e.g., website interactions, customer contacts).

A brief introduction to events (and their uses)

“Integrating events from different sources provides additional context, which helps to understand customer behaviour and develop meaningful customer journeys.”

Integrating events from different sources provides additional context, which helps to understand customer behaviour and develop meaningful customer journeys. Events occur across different channels, systems and devices. These channels are often developed independently of each other and serve different functional purposes. As a result, different types of data will be collected when events are processed or interactions initiated.

Events are a subset of the varieties of data that can be ingested into a data lake (such as the fantastic data lake offering from Think Big Analytics). Events are records that describe an interaction, such as one between a bank and its customer, at a moment in time. The event lake standardises the event data and organises it in a way that is meaningful for a wide range of users and use cases, creating a set of dimensions and metrics that are relevant for each event type.
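Conceptually, a standardised event record might look something like this (the field names are illustrative, not the actual event lake schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One customer interaction, normalised across channels."""
    event_type: str              # e.g. "web_page_view", "call_centre_contact"
    event_time: datetime
    customer_id: str
    channel: str                 # web, mobile app, branch, call centre...
    dimensions: dict = field(default_factory=dict)  # context: device, page, agent
    metrics: dict = field(default_factory=dict)     # numbers: duration, amount

# Two events from different systems, reduced to one comparable shape.
events = [
    Event("web_page_view", datetime(2017, 9, 20, 10, 5), "C123", "web",
          {"page": "/mortgages"}, {"dwell_seconds": 42}),
    Event("call_centre_contact", datetime(2017, 9, 21, 9, 30), "C123", "phone",
          {"reason": "mortgage enquiry"}, {"duration_seconds": 310}),
]
```

Ordering the records by customer and time then gives a simple backbone for customer journey analysis.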

If you build it, will they come?

Over the next few months I’ll be working with many more customers across industries and countries, helping them align their future plans for data and analytics to their desired business outcomes.

It is fair to say that, at least in my corner of the world, the old ‘technology first’ mentality has become a thing of the past. Technology will continue to be necessary to deliver the business outcomes you’re after, but it no longer leads the charge.

Looking to get the most from data and analytics in your company? Take a look at how Think Big Analytics can use a data lake to bring tangible benefits to your business: SHOW ME


Monica Woolmer

As an Industry/Business Consultant, Monica’s role is to help organisations become data-driven: to use her diverse experience across multiple industries to understand clients’ businesses, articulate industry vision and trends, and identify opportunities to leverage analytics to achieve high-impact business outcomes.

From an Industry perspective, Monica primarily supports Retail and Public Sector. From a Business Consulting perspective, her area of focus is customer analytics. In addition, she works across industries to help our customers identify and prioritise the business outcomes they want to achieve from analytics.

With over 30 years of IT experience, Monica has been leading data management and data analysis implementations for 20 years. Prior to joining Teradata, Monica was the Managing Partner of Formation Data Pty Ltd, a specialty data management, data warehousing and analytics consultancy in Australia.

Who owns the customer experience in the digital age?

September 19, 2017


From online browsing to email offers, from in-app shopping carts to in-store purchases, tracking the customer experience is a complex marriage of myriad touch points, occurring across endless platforms with countless key performance indicators nested in each. For executives to really wrap their heads around all these collective data points, there needs to be clear direction over who owns the customer experience.

And not a moment too soon: 52 percent of customers have switched providers in the last year based on poor customer service, representing a total of $1.6 billion in transactions.

Just as journalists and investigators are told to “follow the money,” those on the hunt for answers about their customers need to adopt a new phrase: Follow the data.

However, that task isn’t as easy as it may sound. While it may seem obvious that the chief marketing officer — whose task is to know what the customer wants to do before they do it and plan accordingly — should own the customer experience, this position often doesn’t have the access or ability to interpret all the data their company is collecting on the customer. Typically, it’s the chief technology officer who has the access and ability to parse an enterprise’s customer experience information to understand what keeps some buyers coming back for more while others are hitting “unsubscribe.”

Businesses need to take one of two approaches to ensure that customer experience data resides in the right hands. In the first approach, the CMO role transforms from its historically touchy-feely remit into a hands-on-the-data, quasi-technical position. Finding someone with this blend is a challenge, and, as such, the CMO needs to work closely with the CTO on all projects so they can blend their knowledge of digital channels, programmatic and demand-side platforms, and big data analytics to understand the modern-day customer journey.

Alternately, companies can roll these two positions into one newer one — the chief data officer. Two years ago, Gartner projected that 50 percent of all companies would have a CDO by 2017, and the forecast is proving its worth. This position is not about determining the ideal experience for the customer, the way CMOs used to focus on experience-oriented marketing. This is about the concrete experience customers are having. With a deep knowledge of the data points that make up the customer journey, the CDO could be the buck-stops-here owner of CX.

Regardless of the path companies choose, they must pick one and ensure there is a defined point person for the customer experience. Wrangling big data and understanding a buyer’s journey in light of these KPIs is a decent challenge now, but it’s only going to grow more complex as technology progresses. Digital disruption is going to alter the buyer journey in the near future in ways more complex than omni-channel monitoring currently presents. Will companies have to rethink customer engagement in light of the internet of things’ influence on an increasingly self-service economy? Will augmented reality differentiate a competitor, driving away customers that are seeking an interactive sales experience? While these seem like challenges for another day, they represent organizational risks for enterprises that aren’t prepared to answer the question: Who owns your customer experience?

Read more about related solutions or watch a video case study of how Procter & Gamble optimizes customer interactions for over 4.8 billion consumers in 180 countries, across more than 65 brands.


John Timmerman has spent the last 23 years with Teradata and has seen all sides of the Teradata enterprise, across most industries and geographies, through his work in sales, business development, sales support, product management and marketing. For the last 12 years, his focus has been in the areas of CRM, Customer Interaction Management and Inbound Marketing.

Accelerate your career in big data and analytics

September 18, 2017

Five reasons to join Think Big Analytics and see your career in analytics skyrocket

Data science continues to grow in popularity. With the global boom in data creation, plus the most successful organisations working to get the most from their information with big data strategies, the brightest professionals are flocking to build a career in data and analytics.

However, the industry is still relatively young, so many of those joining the ranks are early in their careers. While the entry-level cupboard is well stocked, demand is outstripping supply when it comes to experienced individuals – people who can lead a project from start to finish, or jump in at different stages to provide the expertise that resolves problems. Just as in any other industry, finding experienced people with the right skill set, a fit for the organisational culture and commercial acumen is not always easy.

Think Big Analytics offers variety and huge personal development opportunities by putting consultants and experts at the cutting edge of big data development. What’s more, we’re looking for people right now – from data engineers to data scientists and project managers.

Here are just five reasons why joining us could fast track your big data career:

  • Innovation – Think Big Analytics offers unparalleled opportunity to embrace innovation by working on ground-breaking projects, using emerging tools and novel methodologies. We work across industries: from banking security and fraud detection to e-commerce platform development and predictive analytics that enable real-time responsiveness for betting. The possibilities are as endless as they are exciting.
  • Creative focus – In-house data experts often have a narrow focus, working on the same thing for a long time. At Think Big Analytics, we value teams and welcome new ideas. Novel solutions are often born of collaborative teamwork with our customers. Our experts work on challenging projects and develop ground-breaking big data solutions to drive innovation today, and in the future.
  • Find the tool that fits the job – Unlike most of our competitors, we are vendor and tool agnostic. That means not only do our teams work on varied projects across different verticals, they also get to test emerging technologies to see how they can be used to solve complex problems. The solution you create for a retailer will vary greatly from the one you create for a telecoms provider.
  • Skills development – We invest in our employees to ensure they develop their skills. Whether you want to upskill your domain or tech knowledge or become more customer-focused, we can help.
  • Rapid growth and career progression – As a relatively young company, there are countless opportunities to grow and progress. We’re moving extremely quickly, and we’d love you to join us for the ride.

Do you have what it takes?

If you understand business problems and have the ability to translate data into information and business outcomes, you might be just what we’re looking for. Apart from your technical abilities, you should ideally be comfortable talking business and presenting new ideas to senior stakeholders.

Big data and analytics isn’t just a career for the future; it’s a promising field with room for incredible growth. We believe there will be a continued hiring demand for big data-related positions across industries as data and analytics adoption stops being the choice of the front-runners and becomes a necessity for the business to survive.

Think Big Analytics are looking for tomorrow’s analytics stars right now. If you want to become part of the team at one of the biggest businesses in big data, you need to get in touch.

JOIN US.

This post originally appeared on the Think Big Analytics website. You can find the original here.

Teradata Cares – Mumbai

September 15, 2017

“Education is the most powerful weapon which you can use to change the world.”

Education is one of the most important ingredients to becoming all that you can be.  How open you are to learning will help determine your path in life.

As part of the CSR initiative to improve child education, Teradata Mumbai, with the help of the Seva Sahyog team, sponsored a Science Lab for students of Shri Jayeshwar Vidyamandir, located in the village of Dengyachi. The school’s students and teachers were very excited about the learning the lab provides, and the Teradata team was thrilled to help the community.

The Teradata Cares Mumbai team wishes all the students of Shri Jayeshwar Vidyamandir School the best for their future, and we hope that the Science Lab helps these students find a passion for learning.

Sincerely,

Teradata Cares Mumbai team

 
