
 

(Part 2 of a post illustrating how marketers are creatively leveraging big data to secure competitive advantages.)

Big data leveraged into insights has a strong likelihood of distinguishing organizations from their competitors. Because the movement is still in its infancy, few big data insights to date have been turned into marketing advantages - so early entrants into big data marketing have a distinct advantage. Consider the following big data marketing examples for a view into how other early adopter enterprises have sought advantage from big data:

1. Next Generation Customer Retargeting

As big data analytics become more sophisticated, marketers will find better ways to retarget customers. Imagine, for example, retargeting based on items that are viewed online but not clicked on. This and other tactics will provide more customizable methods than the retargeting in use today.
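A minimal sketch of that idea, assuming a hypothetical clickstream extract with user_id, product_id and clicked columns (none of these names come from the post):

```python
import pandas as pd

# Hypothetical clickstream extract: one row per product impression,
# with a flag recording whether the shopper clicked through.
events = pd.DataFrame({
    "user_id":    [101, 101, 102, 102, 103],
    "product_id": ["A", "B", "A", "C", "B"],
    "clicked":    [True, False, False, True, False],
})

# Items a shopper saw but never clicked are candidates for retargeting.
retarget = events[~events["clicked"]]
print(retarget[["user_id", "product_id"]])
```

Every row here is a view, so the viewed-but-not-clicked audience is simply the unclicked rows; a production version would join in recency and frequency before feeding an ad platform.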

2. Use Heat Map Technology to Track In-Store Customer Preferences

Use on-premises camera systems with heat map technology to view in-store customer traffic – just as websites use technology to register online activity. This offline traffic information can be compared with online data to tell retailers how products perform online versus offline, so they can adjust marketing programs accordingly.

3. Leverage Geospatial Data to Communicate with Customers

Use geospatial data to prepare targeted offers AND drive online customers to store locations. Wireless carriers have increased revenue per user with targeted marketing campaigns that combine offline and online marketing efforts.

4. Analyze Social Media to Increase Revenue

Use social network analysis to identify and engage influential customers. Wireless carriers have found that by implementing social analysis they can increase the share of revenue influenced by their top 10 percent of customers – from 35 percent to an impressive 80 percent.
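One simple form of social network analysis is centrality scoring over an interaction graph. This sketch uses PageRank as the influence measure; the graph and names are hypothetical illustrations, not details from the carrier example:

```python
import networkx as nx

# Hypothetical interaction graph: an edge means one subscriber
# frequently calls or messages another.
g = nx.Graph()
g.add_edges_from([
    ("ann", "bob"), ("ann", "carol"), ("ann", "dave"),
    ("bob", "carol"), ("eve", "dave"),
])

# PageRank is one common centrality measure for ranking influence;
# the best-connected subscribers score highest.
scores = nx.pagerank(g)
top_influencers = sorted(scores, key=scores.get, reverse=True)[:2]
print(top_influencers)  # ['ann', 'dave']
```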

5. Focus On Conversions

Marketers should talk in the language of conversions and place their focus there. “What lead source has the highest conversion rate?” “What type of content inspires the strongest brand advocates?” “Which channels host the highest rate of conversions?” Use big data to inform and drive all aspects of conversions.
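That last question reduces to a simple aggregation once lead and conversion events are joined. A minimal sketch, assuming a hypothetical lead table with channel and converted columns:

```python
import pandas as pd

# Hypothetical lead records: source channel and whether the lead converted.
leads = pd.DataFrame({
    "channel":   ["email", "email", "search", "social", "search", "social"],
    "converted": [1, 0, 1, 0, 1, 1],
})

# The mean of a 0/1 flag per channel is that channel's conversion rate.
rates = leads.groupby("channel")["converted"].mean().sort_values(ascending=False)
print(rates)  # search 1.0, email 0.5, social 0.5
```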

Look at 6 Ways Big Data Marketing Helps Companies Be Competitive: Part 1 for more examples of leveraging big data marketing.

 

6 Ways Big Data Marketing Helps Companies Be Competitive: Part 1

Posted on: February 24th, 2015 by Chris Twogood No Comments

 

Big data – business-changing data – is giving marketers new ways to be innovative and step ahead of competitors. A creative strategy or advertising campaign only scratches the surface of the mechanisms available today to drive revenue. Effective CMOs must appreciate the power of new and diverse data sources and demand that marketing directors interpret and use statistical business and customer insights to create smart strategies and quality predictive analysis.

Understanding some of the more clever big data marketing examples helps to illustrate how marketers should be thinking analytically and creatively with non-traditional data. Consider the following big data marketing examples:

1. Measure Social Media Impact

Companies can measure the impact of social media with custom analytics solutions or social network analysis.

2. Identify Your Brand Evangelists

Identify alpha influencers and use these individuals in active marketing campaigns. Find alpha influencers not just through traditional transactions (recent purchases, customer service calls) but also through social media.

3. Translate Big Data Insights into Actionable Marketing Tactics

Translate big data insights into actionable marketing tactics with teams of different disciplines. The most successful are teams that work fast and are highly iterative – business, IT, and analytics specialists rapidly review real-world findings, recalibrate analyses, adjust assumptions, and then test outcomes.

4. Create Customer Buying Projections

Use historical behavioral data for a defined target as an indicator of behavior toward a different category of product offering. For example, test payment history or upgrade likelihood for a utility service as indicators of behavior toward an entertainment offering or emerging credit offering. Test your way into success.
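A minimal propensity-model sketch of that idea. The features, labels and library choice (scikit-learn) are illustrative assumptions; the post does not prescribe a technique:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: behavior on an existing utility service
# (on-time payment rate, upgrades per year) as predictors of whether the
# customer later adopted an entertainment offering (1 = adopted).
X = [[0.95, 2], [0.60, 0], [0.88, 1], [0.40, 0], [0.99, 3], [0.55, 1]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a prospect who pays on time and upgraded twice last year.
print(model.predict_proba([[0.92, 2]])[0][1])  # probability of adoption
```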

5. Understand the True Value of Different Marketing Channels

Combine sales data from traditional media and social-media sites to create a model that highlights the impact of traditional media versus activity reflected on social media (like call center interactions). Bad customer experiences can be more powerful sales drivers than traditional media activity, which means spending to improve customer service can drive more revenue than funding advertising.
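One minimal way to build such a model is a regression of weekly sales on both drivers. All numbers below are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly data: traditional media spend, count of negative
# social/call-center interactions, and sales for the week.
media_spend = np.array([10, 12, 8, 15, 9, 11])
bad_experiences = np.array([5, 2, 9, 1, 7, 4])
sales = np.array([100, 120, 70, 140, 85, 105])

X = np.column_stack([media_spend, bad_experiences])
model = LinearRegression().fit(X, sales)

# The fitted coefficients suggest the relative impact of each driver.
print(dict(zip(["media", "bad_experiences"], model.coef_)))
```

A real marketing-mix model would control for seasonality, lag effects and price, but even this toy version shows how the two channels can be compared within one equation.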

6. Pinpoint Sales Opportunities by Zip Code

Rather than overloading sales reps with reams of data and complex models to interpret, create powerful sales tools with simple, visual interfaces that pinpoint new-customer potential by zip code. It’s a proven tactic for increasing sales.
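The roll-up behind such a tool can be a simple aggregation. Here is a minimal sketch using a hypothetical prospect file with a modeled spend potential (the visual layer would sit on top of this):

```python
import pandas as pd

# Hypothetical prospect file: one row per prospect, with zip code and a
# modeled spend potential from an upstream scoring step.
prospects = pd.DataFrame({
    "zip":       ["10001", "10001", "60614", "60614", "94103"],
    "potential": [500, 300, 900, 400, 700],
})

# One row per zip code: how many prospects, and how much total potential.
by_zip = prospects.groupby("zip")["potential"].agg(["count", "sum"])
print(by_zip.sort_values("sum", ascending=False))
```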

Look for Part 2 for more clever examples of big data marketing. In the meantime, see other big data examples.

 

Data-Driven Design: Smart Modeling in the Fast Lane

Posted on: February 24th, 2015 by Guest Blogger 2 Comments

 

In this blog, I would like to discuss a different way of modeling data, regardless of the method used – Third Normal Form, Dimensional, or Analytical datasets. This new way of data modeling will cut down development cycles by avoiding rework, keep teams agile, and produce higher quality solutions. It is a discipline that treats both requirements and data as input into the design.

A lot of organizations have struggled to get the data model correct, especially for applications, and that struggle has a big impact on every phase of the system development lifecycle. Generally, we elicit requirements first, with the IT team and business users together creating a business requirements document (BRD).

Business users explain business rules and how source data should be transformed into something they can use and understand. We then create a data model using the BRD and produce technical requirements documentation, which is then used to develop the code. Sometimes it takes us over 9 months before we start looking at the source data. This delay in engaging the data almost always causes rework, since the design was based only on requirements. The other extreme is a design based only on data.

We have always based the design solely on requirements or solely on data, but hardly ever on both. We should give the business users what they want and yet be mindful of the realities of the data.

It has been almost impossible to employ both methods, for different reasons – such as the traditional waterfall method, where BDUF (Big Design Up Front) is produced without ever looking at the data. In other cases we do work with data, but that data was created for a proof of concept or for testing, which is far from the realities of production data. To do this correctly, we need JIT (Just in Time), good-enough requirements, and then we must get into the data quickly and mold our design based on both the requirements and the data.

The idea is to get into the data quickly and validate the business rules and assumptions made by business users. Data-driven design is about engaging the data early. It is more than data profiling, as data-driven design inspects and adapts in context of the target design. As we model our design, we immediately begin loading data into it, often by day one or two of the sprint. That is the key.

Early in the sprint, data-driven design marries the perspective of the source data to the perspective of the business requirements to identify gaps, transformation needs, quality issues, and opportunities to expand our design. End users generally know about the day to day business but are not aware of the data.

The data-driven design concept can be used whether an organization practices a waterfall or an agile methodology. It obviously fits very nicely with agile methodologies and Scrum principles such as inspect and adapt: we inspect the data and adapt the design accordingly. Using DDD (data-driven design) we can test the coverage and fit of the target schema from the analytical user's perspective. By encouraging the design and testing of the target schema with real data in quick, iterative cycles, the development team can ensure that the target schema has been thoroughly reviewed, tested and approved by end users before the project build begins.

Case Study: While working with a mega-retailer on one of our projects, I was decomposing business questions. We were working in the promotions and discounts subject area and had two metrics: Promotion Sales Amount and Commercial Sales Amount. Any item sold as part of a promotion counts toward Promotion Sales, and any item sold at the regular price counts toward Commercial Sales. Please note that Discount Amount and Promotion Sales Amount are two very different metrics. While decomposing, the business user explained that the discount amount would be evenly proportioned across the line items within a transaction (header).

[Data-driven design diagram]

For example, let's say there is a promotion where if you buy 3 bottles of wine you get 2 bottles free. In this case, according to the business user, the discount amount would be evenly proportioned across the 5 line items - thus indicating that these 5 line items are on promotion, and we can count their sales toward Promotion Sales Amount.

This wasn’t the case when the team validated the scenario against the data. We discovered that the discount amount was present only for the “get” items and not for the “buy” items. In our example, a discount amount was recorded for the 2 free bottles (get) but not for the 3 purchased bottles (buy). This makes it hard to count the 3 “buy” items toward Promotion Sales Amount, since there was no way to know whether the customer bought just 3 items or 5 without examining all the records – and the records ran into the millions every day.

What if the customer bought 6 bottles of wine, so that ideally 5 lines are on promotion and the 6th line (diagram above) is commercial, or regular, sales? Looking at the source data, there was no way of knowing which transaction lines were part of the promotion and which weren't.
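A minimal sketch of that gap, using made-up line items (the amounts and field names are illustrative, not the retailer's actual feed):

```python
# Lines as the business user described them: the discount evenly
# proportioned across all five promotion lines (buy 3, get 2 free).
assumed = [{"amount": 10.0, "discount": 4.0} for _ in range(5)]

# What the source data actually showed: a discount only on "get" lines.
actual = (
    [{"amount": 10.0, "discount": 0.0} for _ in range(3)]     # buy lines
    + [{"amount": 10.0, "discount": 10.0} for _ in range(2)]  # get lines
)

def promotion_sales(lines):
    # Count a line toward Promotion Sales only if it carries a discount.
    return sum(l["amount"] for l in lines if l["discount"] > 0)

print(promotion_sales(assumed))  # 50.0 -- all five lines are flagged
print(promotion_sales(actual))   # 20.0 -- the "buy" lines are invisible
```

Under the data reality, the same rule the business user described understates Promotion Sales by the entire value of the “buy” lines.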

After this discovery, we had to let the business users know about the inaccuracy in calculating Promotion Sales Amount. Proactively, we designed a new fact to accommodate the reality of the data. The team also discovered more complicated scenarios that the business users hadn't thought of.

In the example above, the “buy” and “get” items were the same: wine. We then found a scenario where a customer bought a six-pack of beer and got a glass free, which added further complexity. After validating the business rules against the source data, we had to request additional “buy” and “get” list data to properly calculate Promotion Sales Amount.

Imagine finding out nine months into the project that you need additional source data to satisfy the business requirements. Think about the change requests for the data model, development, testing and so on. With DDD, we found this out within days and adapted to the “data realities” within the same week. The team also discovered that the person at the POS system could either pick up one wine bottle and multiply it by 7, or “beep” each bottle one by one. This inconsistency makes a big difference: one record versus seven records in the source feed.
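A minimal sketch of how that inconsistency can be normalized before modeling; the record layout is a made-up illustration:

```python
# The POS feed can carry the same basket as one line with quantity 7 or
# as seven quantity-1 lines. Exploding multi-quantity lines gives every
# downstream rule a single, consistent grain.
raw_lines = [
    {"item": "wine", "qty": 7, "amount": 70.0},  # rung up once, times 7
    {"item": "wine", "qty": 1, "amount": 10.0},  # beeped individually
]

def explode(lines):
    for line in lines:
        unit_amount = line["amount"] / line["qty"]
        for _ in range(line["qty"]):
            yield {"item": line["item"], "qty": 1, "amount": unit_amount}

print(list(explode(raw_lines)))  # eight quantity-1 records
```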

There were other discoveries we made along the way as we got into the data and designed the target schema while keeping the reality of the data in mind. We were also able to ensure that the source system had data available at the grain the business users required.


Sachin Grover leads the Teradata Agile group within Teradata. He has been with Teradata for 5 years, has worked on the development of Solution Modeling Building Blocks, and has helped define best practices for semantic data models on Teradata. He has over 10 years of experience in the IT industry as a BI / DW architect, modeler, designer, analyst, developer and tester.

Selecting a Big Data Solution: 5 Questions to Ask

Posted on: February 18th, 2015 by Chris Twogood No Comments

 

For years now, certain enterprises such as big-box retailers, online pioneers and consumer credit innovators have been successfully leveraging big data – to the point where these early adopter organizations can outperform competitors 2-to-1. They gain insights across their world – from their view of customers, to customer interactions, to their perspective on the category.

With such a disparity in performance between the big data literate and the big data phobic confirmed by the top consulting firms, how can there still be a lack of momentum in moving toward the big data light? Experts advise almost unanimously that big data must be the “next big move” for enterprises that want to stay competitive and get ahead.

The big data terrain is still foreign and intimidating. Assembled here are 5 things to consider as you approach implementing a big data solution. They have been tailored to give you an eye for identifying the most competitive costs, shortest time to value and biggest results. Familiarize yourself with these concepts. Make them your questions to ask providers.

1. How will this big data solution handle the rush of data today and tomorrow?

Big data will race toward you with staggering velocity, in great variety and at extreme volume. With regard to high velocity, ensure your ability to implement real-time processing or ad hoc queries. Handling high volume is a matter of the right hardware and infrastructure. Accommodating variety is more complicated and requires subject matter expertise. Consider both the acquisition of big data and the big data processes for getting the data into usable shape. Experts can leverage variety into a big success, but it can also be an opportunity for big failure.

2. What is the total cost of the big data solution?

Total costs include the initial implementation charges for hardware and software, plus the cost of maintenance and support for the second year. Add in the necessary labor costs...for data scientists, IT resources and analysts. Consider the manpower necessary to achieve the desired ROI for years one and two.

3. Is the estimated time to value acceptable?

Extracting rapid value from big data is not easy today. Businesses are challenged to find, hire and retain big data analytic professionals who can handle the implementation and management.

Big Data solutions should be easy to implement and reduce time to value. The Teradata Aster Discovery Platform handles multi-structured data stores and offers 100+ pre-built analytics to quickly build big data apps. Included are visual functions for big data analytics & discovery.

4. What direct and indirect benefits should you expect from a big data solution?

Your organization should expect insights into increasing prospect conversions, reducing churn, upselling, improving customer experiences and increasing marketing efficiency – all resulting in tangible benefits like increased revenue, efficiency or loyalty. Work with the big data solution provider to set realistic objectives, like a lift in net profit margin for Year 1, Year 2, etc.

Enterprises should also discuss and expect increases in IT and end user productivity. Organizations have documented (with independent research firms) that as many as 20% of employees (IT and business) directly benefit from increased productivity when insights can be quickly generated and implemented.

5. Are next-generation shortcuts or implementation aids available?

In your initial review of big data solutions and providers, compare offerings to determine whether options like pre-built functions, pre-built applications or industry-knowledgeable professional services are readily available and affordable. Search for means of significantly reducing the time to value and the ongoing labor costs, and of increasing the magnitude of your return on investment.

Considering these factors will help ensure the fast and enduring success of your big data initiative so you can quickly take control of your organization’s competitiveness – in the era offering the biggest competitive growth opportunity in the last decade.

Get more insights into big data solutions.

What is Big Data?

Posted on: February 12th, 2015 by Chris Twogood No Comments

 

What is Big Data? It’s not as simple as saying social media posts are big data or sensor data is big data. And it’s not sufficient to say big data is just a lot of data.

Beyond the idea of large volumes of data...or a greater scope of data...big data refers to data sets that exceed the boundaries and sizes of normal processing capabilities. They force “non-traditional” processing and require new forms of integration.

Without a new method of integration, more efficient processing and new analytics capabilities, it’s not possible to uncover the large hidden values from these large datasets that are diverse, complex, and of a massive scale.

So, understanding the answer to the question, "What is big data?" involves more than being able to identify a data type. Understanding the movement includes knowing the characteristics and origins of the data, its volume, its velocity and all the accommodations made to properly leverage it. Understanding the value of big data means being able to see how it can deliver an insightful, aggressive positioning for your organization.

As far as the characteristics of the data, consider three different formats:

1. Structured data (or traditional data) gets its name because it resides in a fixed field within a record or file. Structured data has the advantage of being easily entered, stored, queried and analyzed. Previously, because of costs and performance limitations, relational databases were the only way to effectively manage data. Anything that couldn't fit into a tightly organized structure couldn’t be used.

2. Unstructured data usually refers to information that is not stored in a relational database or is not organized in a pre-defined manner. Unstructured data files are typically text-heavy, but may contain data such as dates, numbers, and multimedia content. Examples include e-mail messages, call center transcripts, forums, blogs, videos and social network postings.

3. Semi-structured data is a cross between the two. With semi-structured data, tags or other types of markers identify certain elements, but the data itself doesn’t have a rigid structure. A text document, for example, can include metadata giving the author's name and creation date, while the bulk of the document remains unstructured text. Emails have author, recipient, and time fields attached to otherwise unstructured content. Semi-structured data is information that doesn't reside in a relational database but does have some organizational properties that make it easier to analyze.
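A side-by-side sketch of the three formats, using made-up records:

```python
import json

# Structured: fixed fields, ready for a relational table row.
structured_row = ("CUST-042", "1985-07-14", 129.95)  # id, birth date, balance

# Semi-structured: tagged elements (author, created) wrapped around
# free-form content, as in the text-document example above.
semi_structured = json.loads(
    '{"author": "J. Smith", "created": "2015-02-12",'
    ' "body": "Thanks for the quick turnaround on my order!"}'
)

# Unstructured: the body text itself, with no predefined fields.
unstructured = semi_structured["body"]

print(structured_row)
print(semi_structured["author"], semi_structured["created"])
print(unstructured)
```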

In real world or practical terms, review these examples where organizations have leveraged value from unstructured data and used it to give them an advantage over competitors:

You receive an e-mail containing an offer for a turnkey personal computer. You were exploring computers on that manufacturer’s web site just a few hours prior.

As you shop for homes on the web, you are served typical commute times to and from work for the homes you review. Drive times are determined by GPS signals from millions of drivers.

So, returning to the question “What is big data?”: today, companies derive value from diverse data sources using the latest advanced analytic innovations. Big data analytics deduces previously inaccessible insights to inform decisions that can be more advantageous and more tailored. These more enlightened actions may radically change how management views its business – and can therefore open the door to new competitive strategies.

Get more Big Data Insights.

Lots of Big Data Talk, Little Big Data Action

Posted on: February 11th, 2015 by Manan Goel No Comments

 

 Apps Are One Solution To Big Data Complexity

Offering big data apps is a great way for the analytics industry to put its muscle where its mouth is. Organizations face great hurdles in trying to benefit from the opportunities of big data.  Extracting rapid value from big data remains challenging.

Limited skill sets and complexity make it challenging for analytic professionals to rapidly and consistently derive actionable insights that can be easily operationalized. To ease companies into realizing bankable big data benefits, Teradata has developed a collection of big data apps – pre-built templates that act as time-saving shortcuts to value. Teradata is taking the lead in offering advanced analytic apps, powered by the Teradata Aster AppCenter, that deliver sophisticated results from big data analytics.

The big data apps from Teradata are industry-tailored analytical templates that address business challenges specific to each category. Purpose-built apps for retail address path to purchase and shopping cart abandonment. Apps for healthcare map paths to surgery and drug prescription affinity. Financial apps tackle omni-channel customer experiences and fraud. The industries covered include consumer financial, entertainment and gaming, healthcare, manufacturing, retail, communications, and travel and hospitality.

Big data apps are pre-built templates that can be further configured, with help from Teradata professional services, to address specific customer needs or goals. Organizations have found that specialized big data skills like Python, R, Java and MapReduce take time to acquire and require highly specialized manpower. Apps, by contrast, deliver fast time to value with self-service analytics: purpose-built apps can be quickly deployed and configured or customized with minimal effort to deliver swift analytic value.

For app distribution, consumption and custom app development, the AppCenter makes big data analytics secure, scalable and repeatable by providing common services to build, deploy and consume apps.

With the apps and related solutions like AppCenter from Teradata, analytic professionals spend less time preparing data and more time doing discovery and iteration to find new insights and value.

Get more big data insights now!

 

 

Teradata Aster AppCenter: Reduce the Chasm of Data Science

Posted on: February 11th, 2015 by John Thuma No Comments

 

Data scientists are doing amazing things with data and analytics. The data surface area is exploding, with new data sources being invented and exploited almost daily. The Internet of Things is being realized – it is no longer just theory, it is in practice. Tools and technology are making it easier for data scientists to develop solutions that impact organizations. Rapid-fire methods for predicting churn, providing a personalized next best offer or predicting part failures are just some of the new insights being developed across a variety of industries.

But challenges remain. Data science has a language and technique all its own. Strange terms like machine learning, Naïve Bayes, and support vector machines are creeping into our organizations. These topics can be very difficult to understand if you are not trained in them or have not spent time perfecting them.

There is a chasm between business and data science. Closing this gap and operationalizing big data analytics is paramount to the success of all data science efforts. We must democratize big data discovery and enable anyone to participate. The Teradata Aster AppCenter is a big step forward in bridging the gap between data science and the rest of us: it makes big data analytics consumable by the masses.

Over the past two years I have personally worked on projects with organizations spanning various vertical industries.  I have engaged with hundreds of people across retail, insurance, government, pharmaceuticals, manufacturing, and others.  The one question that they all ask is: “John, I have people that can develop solutions with Aster; how do I integrate these solutions into my organization?  How can other people use these insights?”  Great questions!

I didn’t have an easy answer, but now I do. The Teradata Aster AppCenter provides a simple-to-use, point-and-click web interface for consuming big data insights. It wraps all the complexity and great work that data scientists do in a simple interface that anyone can use. It allows business people to have a conversation with their data like never before. Data scientists love it because it gives them a tool to showcase their solutions and their hard work.

Just the other day I deployed my first application in the Teradata Aster AppCenter. I had never built one before, nor did I have any training or a phone-a-friend option. I didn't want training, either, because I am a technology skeptic: technology has to be easy to use. So I put it to the test, and here is what I found.

The interface is intuitive, and I had a simple application deployed in 20 minutes. Another 20 minutes went by, and I had three visualization options embedded in my app. I then constructed a custom user interface with drop-down menus to make the application more flexible and interactive. In that hour I built an application that anyone can use without writing a single line of code or being a technical unicorn. I was blown away by the simplicity and power. I am now able to deploy Teradata Aster solutions in minutes and publish them to the masses. The Teradata Aster AppCenter reduces the chasm between data science and the rest of us.

In conclusion, the Teradata Aster AppCenter passed my tests. Please don't take my word for it – try it out. We also have an abundance of videos, training materials, and templates on the way to guide your experience. I am really looking forward to seeing new solutions developed and watching the evolution of the platform. The Teradata Aster AppCenter gives data science a voice and a platform for next generation analytic consumption.

 

Are you a business person or executive involved in a data warehouse project where the term “normalization” keeps coming up but you have no idea what they (the technical IT folks) mean? You have heard them talk about “third” normal form and wonder if it is some new health fad or yoga position.

In my prior blog “Modeling the Data” I talked about how data integration is necessary to address many of your business priorities and that one of the first steps in data integration is to organize your data into tables. A “data model” is a graphical representation of that organization which serves as a communication tool between and within the business and IT as shown below.

Data Model Example – Accounts and Individuals Reflect Business Rules

Normalization

So now we get to normalization. Normalization is the process that one goes through to decide in which table a type of data belongs. Let’s take a simple example. I have two tables – one contains loan account information and another contains information about individuals who may be customers (see above Figure). I have a data type called “birth date.” During the normalization process I will ask “What does this data type describe?” Does it describe the account or does it describe an individual? This answer is simple – it describes individuals. You may think that this is a piece of cake. Well, not so fast. Which table is the best fit for the data type “birth date” may be obvious to us, but many times the “best table fit” for a type of data may not be so obvious and hence you need definitions for those data types.

One example of an ambiguous data type is “balance.” Does this “balance” describe a point in time for an account? Or does it describe the sum of the balances for a group of accounts at a point in time? Maybe it should be “average balance over a time period.” Maybe it is high balance or low balance or a limit at a point in time. Maybe it is the cleared balance or a ledger balance. Maybe it is a summation of all the deposit balances held by one person at a point in time. A data model is not complete unless all its components (tables and columns) have definitions.

The normalization process can get more involved when we talk about first, second and third normal forms (and sometimes fourth and fifth). Using the birth date example, if the type of data (e.g. birth date) describes the complete meaning of the table then it is third normal form. In the above data model example, if I put birth date into the INDIVIDUAL ACCOUNT table then that would not be in third normal form because the birth date describes only part of the meaning of that table – the individual part. In this case it would be in only second normal form. By putting birth date into the INDIVIDUAL table it is in third normal form because it describes the complete meaning of the table. In most cases we take a model to third normal form but not fourth or fifth.
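A minimal sketch of that placement decision, expressed as record structures rather than database DDL (the field names beyond birth date are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Individual:
    individual_id: int
    name: str
    birth_date: date  # describes the whole Individual -- third normal form

@dataclass
class Account:
    account_id: int
    balance: float    # a point-in-time balance; describes the Account

@dataclass
class IndividualAccount:
    # Associative record relating individuals to accounts. Putting
    # birth_date here would describe only the individual "part" of the
    # record -- second normal form, not third.
    individual_id: int
    account_id: int

alice = Individual(1, "Alice", date(1985, 7, 14))
loan = Account(100, 2500.00)
link = IndividualAccount(alice.individual_id, loan.account_id)
print(link)
```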

Why Normalize Your Data?

Why is it important to normalize your data? There are two basic reasons. (1) The first is to eliminate redundancy. When you bring your data together from different sources you will inevitably have duplicate data values for the same data type across the source systems. For example, the same person may have their name spelled differently on a loan account versus a deposit account. That person does not have two names; the name just needs to be represented once, with one value, in the right place in the integrated database. (2) The second reason is to make sure that the data is organized into tables in a way that reflects the business rules – our example of birth date describing the individual and not the account. Putting data where it logically belongs will make it easier and more cost effective to maintain over the long term.

In Summary

So the next time someone brings up the concept of normalization, think about the buckets of data you have in the enterprise and how you need to bring them all together so you can answer those tough business questions. When you do bring the data together, you need to eliminate redundancy and organize it in a logical way that makes sense to the business, so that your efforts and design will last over the long term. Normalization is one of the processes that gets you there.


Nancy Kalthoff is the product manager and original data architect for the Teradata financial services data model (FSDM), for which she received a patent. She has been in IT for over 30 years and with Teradata for over 20 years, concentrating on data architecture, business analysis, and data modeling.

6 Big Data Examples From Big Global Brands

Posted on: January 29th, 2015 by Chris Twogood No Comments

 

Where are we with big data? Has it moved from theory into practice? Are global organizations really using big data today to make significant changes in their organizations? Are there working solutions in the market that derive value from the analysis of non-traditional data?

Let’s discuss six big data examples that illustrate how big global brands are realizing big gains from their use of these new data sources.

Get a better understanding of how major corporations like wireless carriers, healthcare providers, tech players and hardware manufacturers are supporting a culture of innovation with these examples of new capabilities built on nontraditional big data.

A major U.S. wireless carrier is a perfect example of a big data innovator. As part of their big data solution, the carrier now has a repeatable process for uncovering insights that solve issues management wants addressed.

Consider this big data example: recently the carrier analyzed voice conversations between customers and call reps (previously a difficult source of unstructured data to handle). The organization needed to discover why customers were calling into service centers after making online payments. Using big data analytics, the internal team uncovered an answer that wasn’t available by other means – it learned that customers needed reassurance from a call rep that their service would not be interrupted due to late payment. With this valuable insight, the carrier understood that it needed to confirm at payment time that service would continue uninterrupted. The result was an immediate drop in calls to customer service.

The same carrier “listens to customers” in real time and in a way never possible before big data analytics. By integrating and analyzing big data examples like social posts and call verbatims, the company has greatly enriched its individual customer portraits. Now it gains a significant advantage over competitors by quickly targeting communications or marketing efforts based on customer sentiment instead of marketing guesses about customer groups.

Another big data example lies within a solution used by premier global wireless carrier Vodafone Netherlands. With its new capabilities, the phone company has the advantage of being able to integrate large amounts of customer-based nontraditional data – such as social posts and web history. It can now generate a higher level of insight from its greatly expanded variety of data sources. The result is a more sophisticated view of customers. These new insights allow marketing to deliver more relevant offers to realize two significant benefits: stronger marketing success and the ability to distinguish its premium brand from lower-priced competitors. Learn about other ways international carriers benefit from big data analytics.

Wellmark Blue Cross/Blue Shield also uses big data to find new answers. The healthcare provider had troubling inbound call volume with a distinctive pattern: six percent of its members accounted for 50 percent of its inbound calls. With its new big data solution, the insurer can leverage text from call logs and integrate it with transaction data. This establishes a repeatable process for untangling odd call volume phenomena like this one, and others. Learn how Wellmark Blue Cross/Blue Shield benefits from big data analytics.

A global leader in consumer transaction technologies, NCR, has effectively leveraged diverse data sources to fundamentally modernize its business capabilities. Big data has supplied near limitless opportunities for the hardware and software giant. For example, it can now predict when certain devices in the field will fail. This opportunity is large considering the organization services tens of millions of devices around the world. The company receives telematics data from devices in the field and performs predictive analytics to determine the health of the technologies it manufactures and/or services. The benefit? It can send digital instructions remotely to connected devices to address issues – or it can send technicians with the correct parts and tools, to the right device, at the right time. Downtime at customer sites can be planned or even prevented. Learn how NCR uses big data analytics.

Dell uses social media data to double closure rates and drive business results. With a sophisticated big data solution, it enhances its view of customers and prospects in real time by tying traditional transaction records to things like social media user names and email addresses. With this information, the computer maker can make personalized promotional offers. Big data analytics produces insights based on this newly expanded view of the customer that improve propensity models, which, in turn, make future campaigns more successful. Prior to this novel use of social data, marketing could identify names for only about 33 percent of those who responded to promotional programs. Now, the manufacturer accurately identifies 66 percent of respondents. Learn about ways that Dell drives business using big data analytics.

Learn more about Teradata’s Big Data Analytics Solutions.

Business Highlights in Big Data History

Posted on: January 22nd, 2015 by Chris Twogood No Comments

 

If you’re relatively new to Big Data, you might find this snapshot of the last 20 years of big data history helpful. Hopefully, you can build your understanding and figure out where you reside in the journey of Big Data development, adoption and optimization.

Gentlemen, Start Your Spreadsheets (1995) The world wide web explodes, and business intelligence data begins piling up – in Excel documents.

Data Storage and BI (1996) The influx of huge quantities of information brought about a new challenge. Digital storage quickly became more cost-effective for storing data than paper – and BI platforms began to emerge.

Houston, We have A Problem (1997) The term Big Data was used for the first time when researchers at NASA wrote an article identifying that the rise of data was becoming an issue for current computer systems.

Yes, Big Data was first considered a problem.

Ask Nicely (1998) By the time enough data could be stored, IT departments were responsible for 80% of business intelligence access. At this time, "predictive analysis" forecasting was also starting to change how organizations do business.

A Lotta Data (2000) The quantification of new information creation begins being studied on an annual basis. In 2000, 1.5 exabytes of unique information are documented for the year.

Control Freaks (2001) Papers were being written about controlling the big data problem. To describe it, researchers had to define it, and they did so with the three V’s...data volume, velocity and variety...as coined by Doug Laney, now a Gartner analyst. Work begins on capabilities like language processing, predictive modeling and data-gathering.

It Was A BIG Year (2003) The amount of digital information created by computers and other data systems in 2003 blows past the amount of information created in all of human (or big data) history prior to 2003.

Problem Child Becomes Prodigy (2005) Web 2.0 companies are assessed by their database management abilities. The issue becomes a given, or core competency, and Big Data begins to emerge as an opportunity. Apache Hadoop, soon to become a foundation of government big data efforts, is created.

Not Your Dad’s Oracle (2005) Alternatives (to Oracle) emerge that are more focused on end-user usability. Big Data solutions that work the way people work (collaboratively, on the fly and in real time) become the gold standard.

Taming the Big Data Explosion (2006) A solution to handle the explosion of big data from the web becomes more prevalent...Hadoop. Free to download, use, enhance and improve...like Java in the 90s. Hadoop is a 100% open source way of storing and processing data that enables distributed parallel processing across huge data sets.

Can I Interest You In A Flood? (2008) The BIG part of big data starts to show itself. The number of devices connected to the Internet exceeds the world’s population.

Real questions asked at the time: by 2015, will the internet be 500x larger than it was then? Will IP traffic reach one zettabyte?

How Big is Big? (2008) The term “Big Data” begins catching on among techies. Wired magazine mentions the “data deluge.” “Petabyte” age is coined...too technical to be understood...it doesn’t matter...as it is soon replaced by bigger measures like exabytes, zettabytes and yottabytes.

No They Didn’t (2008) Yes, they said it. Big Data computing is perhaps the biggest innovation in computing in the last decade. We have only begun to see its potential.

Business Intelligence became a top priority for CIOs in 2009.

BI this....BI that (2010) Recognition and use of Business Intelligence (BI) becomes common as 35% of rank-and-file enterprises begin to readily employ “pervasive” business intelligence. Look at best-in-class organizations, and you find an adoption rate of 67% – and it’s moving to self-service.

Moving On Up (2011) Business Intelligence matures, with trends emerging in cloud computing, data visualization and predictive analytics; big data is on the horizon.

Big Government and Big Data (2012) The Obama administration announces the Big Data Research and Development Initiative – 84 separate programs. The National Science Foundation publishes “Core Techniques and Technologies for Advancing Big Data Science & Engineering.”

Even Better Than a Rewards Program (2013) (Big) Data as “a real business asset used to gain competitive advantage in the market” becomes accepted. The widespread drive to understand and make use of big data – to remain relevant – is well underway.

Want to leverage big data analytics for better and more efficient business? Learn more about Teradata’s big data solutions.