Monthly Archives: March 2017

Making Smart City projects smarter with Smart Organisations

Wednesday March 29th, 2017

Smart City

Smart City is a future vision of a city that is fully optimised and highly efficient in managing its resources (e.g. energy, mobility, buildings) and quality of life (e.g. social, health, environment), and that is innovative in how it operates (e.g. education, economy, research). Smart City projects aim to alleviate traffic congestion, prevent water pollution, minimise the impact of severe weather, reduce CO₂ emissions, improve energy efficiency in buildings and advance social well-being.

Here is an example of how a Smart City project manages energy efficiency. Accurate weather predictions can help optimise heating and cooling via smart thermostats and pricing plans. They also influence maintenance and operation schedules and help plan for disruptions, rather than merely reacting to severe events. Imagine managing solar power, heat pumps, huge transformers, photovoltaic arrays and all those wonderful underground wires in a new city.

What makes a Smart City smart?

What was previously a ‘dumb’ thermostat or other industrial control system is now embedded with sensors that continuously transmit the performance status of a piece of equipment, instrument or machine (a ‘thing’ in the terminology of the Internet of Things, or IoT). By continuously monitoring streams of sensor data, and by applying analytics across these many sources in meaningful ways, it becomes possible to use telecommunication networks to remotely communicate with ‘things’ and alter their state to repair or prevent malfunctions. This, in essence, is the underlying smartness.
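The monitor-and-correct loop described above can be sketched in a few lines. This is only an illustration: the device IDs, temperature band and command names are invented for the example, not taken from any real Smart City platform.

```python
# Minimal sketch: watch a stream of thermostat readings and issue a
# corrective command whenever a reading drifts outside its safe band.
# Thresholds and command names are illustrative only.

def monitor(readings, low=18.0, high=24.0):
    """Yield a remote command for every out-of-band reading."""
    for device_id, temp_c in readings:
        if temp_c < low:
            yield (device_id, "RAISE_SETPOINT")
        elif temp_c > high:
            yield (device_id, "LOWER_SETPOINT")

# A toy stream of (device, temperature) readings
stream = [("t-01", 21.5), ("t-02", 17.2), ("t-03", 26.8)]
commands = list(monitor(stream))
print(commands)  # [('t-02', 'RAISE_SETPOINT'), ('t-03', 'LOWER_SETPOINT')]
```

In a real deployment the corrective commands would of course travel back over the telecommunication network to the ‘thing’ itself, closing the loop.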

Smart Organisations

It is evident from Smart City projects that exploiting the smartness of ‘things’ cannot be achieved by the city government alone; it needs the involvement of citizens and partners, and collaboration among participating departments and organisations. Increasingly, industry boundaries are expanding beyond individual organisations and systems in order to exploit related external information that can be coordinated and optimised to make homes, buildings and cities smart.

Fortunately, digital technology, instrumented by information and communication technologies, has been an enabling factor in organisational change and innovation, and there is now evidence of its impact on industrial value chains.

The term “smart organisation” was coined by the European Commission’s research programme for organisations that are knowledge-driven, internetworked and dynamically adaptive to new organisational forms and practices: learning organisations that are agile in their ability to create and exploit the opportunities offered by the digital age.

Are you interested in joining the ranks of smart organisations?

Redefining Michael Porter’s Value Chain for the digital age

Being a smart organisation involves more than the capability of setting up and exploiting a digital infrastructure. Digital is not just about omni-channel marketing campaigns; it is an enabler and part of the end-to-end integration of the organisation’s business processes to achieve strategic capabilities. Remember that an organisation’s capabilities are not confined to those it owns: they are strongly influenced by resources outside the organisation, which are an integral part of the chain from product or service design, through production and marketing, to the use of the product or service by consumers.

One executive of a multinational organisation operating in Australasia admitted that while the company has had a digital portal (i.e. an online presence) for a long time, it has not been successful in translating this capability into online commerce, due to a lack of seamless integration between its value chain partners.

“This new product data is valuable by itself, yet its value increases exponentially when it is integrated with other data, such as service histories, inventory locations, commodity prices, and traffic patterns…” (Dr. Michael Porter).

[Image: Sundara Raman - Smart Cities 1]

Organisation Value Redefined

Today, individuals and organisations understand value as something different from value in its traditional sense (e.g. returns on investments in tangible assets: labour, plant and machinery). For instance, Boston Consulting Group (BCG) studied organisational performance and found a direct relationship between production volume and declining cost, which it called the experience curve. BCG’s premise is that in any market segment of an industry, price levels tend to be similar for similar products; therefore, what makes one organisation more profitable than another must be the level of its costs. The components that contribute to declining costs are learning (i.e. efficiencies from learning to do the job better), specialisation (i.e. division of labour) and scale (i.e. reduced capital expenditure per unit as volume grows).
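The experience curve is usually stated as unit cost falling by a fixed percentage each time cumulative volume doubles. A quick worked example, with entirely made-up figures, shows how steeply costs decline on a 20% curve:

```python
import math

# Illustrative experience-curve calculation (the figures are invented):
# assume unit cost falls 20% every time cumulative volume doubles.
def unit_cost(first_unit_cost, cumulative_volume, cost_reduction=0.20):
    # Exponent such that cost(2n) = (1 - cost_reduction) * cost(n)
    exponent = math.log2(1 - cost_reduction)  # about -0.322 for a 20% curve
    return first_unit_cost * cumulative_volume ** exponent

print(round(unit_cost(100.0, 1), 2))  # 100.0 -- cost of the first unit
print(round(unit_cost(100.0, 2), 2))  # 80.0  -- one doubling: 20% cheaper
print(round(unit_cost(100.0, 4), 2))  # 64.0  -- two doublings
```

The same arithmetic underlies the charts BCG used to argue that market share, by driving cumulative volume, drives cost advantage.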

[Image: Sundara Raman - Smart Cities 2]

Value in a networked economy grows with the number of intermediation opportunities (e.g. relationships). “Smart” resources – e.g. information, analytics, software, knowledge, brands and innovation capability – contribute increasingly to value creation in today’s economy.

The experience curve in the networked economy arises from employing machine learning to drive efficiencies in decision making. Learning and knowledge creation come from gaining new insights through continual discovery analytics.

[Image: Sundara Raman - Smart Cities 3]

How can organisations become smarter?

Most organisations are not designed to be stable; they evolve, much like their biological analogues. In a digital economy the law of survival of the fittest applies to organisations just as it does in the biological domain.

Smart organisations are committed to building collaborative partnerships with a clear focus on customer advocacy, recognising that surpassing customer expectations is a key success factor. A smart organisation identifies and exploits new opportunities to optimise, and adapts to process change by leveraging the power of digital, innovation, collaborative intelligence and knowledge gained through discovery analytics. It survives and prospers in the digital economy because it can respond positively and adequately to change and uncertainty. Read more here.

Would you like to join other successful organisations that have become smarter by using discovery analytics? Click here to find out more.

Or perhaps learn how to make the smart grid smarter in your quest to get started on the next Smart City project! Click here for more information.

Or maybe simply get inspired by the artistry that makes analytics come to life with the Art of Analytics. Find out more here.

Service Recovery that Deepens Relationship & Brand Loyalty

Wednesday March 22nd, 2017

Optimising a customer’s revenue contribution depends heavily on a company’s ability to deepen and effectively maintain loyalty, along with emotional attachment to its brand. There has been plenty of rhetoric around Customer Experience Management as the strategy to achieve this competitive edge. Yet the fact remains that the vast majority of such initiatives either concentrate solely on cross-sell/up-sell marketing or are Voice-of-Customer (VOC) service surveys. These are laudable efforts, but they are unlikely to result in sustainable differentiation when they are not part of a coordinated customer-level dialogue.

What has been proven to deliver a superior business outcome is the ability to engage with “One Voice” when communicating with customers, especially after negative experiences. For example, only a handful of customers would do more business with a company when their complaints remain unresolved. Therefore, at a minimum, these customers should be excluded from promotional marketing until after satisfactory resolution. Ideally a system should be in place to automatically replace a cross-sell message with a service recovery one for the affected customers.
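The suppression-and-replace rule described above is simple to express in code. This is only a sketch of the idea: the customer records, field names and message labels are hypothetical, not from any real marketing system.

```python
# Sketch of the "one voice" rule: customers with an unresolved complaint
# get a service-recovery message instead of a cross-sell offer.
# Record structure and message names are illustrative only.

def next_message(customer):
    """Choose the next outbound message for one customer."""
    if customer.get("open_complaint"):
        return "service_recovery"
    return "cross_sell_offer"

customers = [
    {"id": 1, "open_complaint": True},   # complaint still unresolved
    {"id": 2, "open_complaint": False},  # no outstanding issues
]
print([next_message(c) for c in customers])
# ['service_recovery', 'cross_sell_offer']
```

The hard part in practice is not this decision logic but the data integration that keeps the `open_complaint` flag accurate across every channel in near real time.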

According to BCG[1], regardless of the channel they started in, most consumers will seek human assistance (usually via telephone) if they do not get their problem resolved. There is already a degree of annoyance at the start of such service calls, especially if customers have to retell the background from the beginning. Ideally the service rep should already know the service failures and breakpoints from earlier interactions. The key capability differentiator here is data integration for just-in-time analytics specific to each customer’s context. Better still is to avoid the negative experience in the first place – i.e. to develop the capability to detect and predict the servicing and quality issues that erode customer satisfaction.

A company that proactively contacts customers only to sell will very likely condition more and more recipients to switch off and disengage from its communications. A different approach is needed to succeed with a Customer Experience Management strategy. To inculcate a customer-centric mindset and systematically deliver bespoke servicing across the entire customer base, an organisation will need to align its processes and introduce new performance metrics (e.g. customer share of wallet) to drive the right content through its automated communication management capabilities. A company that successfully folds service recovery into its broader marketing dialogue will be well positioned to take advantage of a sales opportunity as and when it emerges for each customer.



[1] BCG, “Digital Technologies Raise the Stakes in Customer Service”, May 2016

Here’s some data. Now amaze me, data scientist!

Wednesday March 15th, 2017

— Or why discovering insight is often inevitable.

Sure, give me your data, and there is a good chance I can “wow you”. Why am I so confident? Because I am an astronomer by trade, and I believe that the path to discovery in astronomy, and discovery in data science, share some fundamental underlying principles.

Major discoveries in astronomy (and many other branches of science) often occur when a previously unexplored area of observational parameter space is opened up by new instruments, or new ways of analysing data.

What do I mean by “observational parameter space”?

Let me give you an example: when Galileo first turned a telescope to the sky, he saw the universe in a way it had never been seen before, and in doing so he made arguably some of the most amazing and important discoveries in the history of humankind.


This is what it means to open new areas of observational parameter space – to move beyond the current limitations of data quality, precision, or type of information that is available. That is, to gain visibility to things that were previously invisible. The power of bringing new data, improved data or new analysis to bear for the purpose of discovery is manifest in the history of astronomy.

There are countless stories, too numerous to list here, of major unexpected discoveries that came about simply through recording new types of data (for example the discovery of Gamma ray bursts), analysing data in new ways (for example the discovery of pulsars), or by combining different types of data for the first time (for example the discovery of quasars).

In the world of data science, the situation is no different. When an organisation or government department records new types of data, or enables the combination of different types of data, or significantly improves the accuracy and reliability of existing data, there is a very high probability that new insights will be uncovered. This is simply the result of gaining visibility to things that were previously invisible. Astronomers are so confident in this path to discovery, that it often drives the design and construction of new telescopes.

However, data is a necessary but not sufficient condition for insight discovery: there are other key ingredients.

Discovery happens when data meets the prepared mind: there is no magic algorithm that will sift through the data and provide all the useful insights on a plate. Ultimately, there is no substitute for a deep knowledge of the business problems, and deep knowledge of the data.

An excellent example of precisely this point in the annals of astronomy is the Nobel Prize winning discovery of the cosmic microwave background – the fossil light left over from the big bang. The two researchers who discovered this fossil light thought it was just an annoying source of noise that was hindering their research, and they tried their hardest to avoid seeing it. It was a nearby research group that was able to interpret the “annoying noise in the data” as the smoking gun of the big bang, which ultimately led to the Nobel Prize winning discovery.

The moral of the story: developing intuition, understanding the business, and understanding the data, are of utmost importance. Without it, you may miss that “Nobel Prize winning” insight, no matter how ground-breaking your data!

Mastering colours in your data visualisations

Wednesday March 8th, 2017

I’ll be the first to admit that I am terrible at colours, be it the selection of paint for a room or the colours of an Excel chart. I simply chose the ones I liked without much regard for anyone else. It’s natural for me to think “well, I know what I’m talking about or looking at”. What I often forget is “what will the end consumer of this think?”. We all know that data visualisation brings data science into a consumable format for end users and dramatically helps us humans interpret information more easily and quickly.

Take the following table for example:

[Image: Ben Davis_Visualisations 1]

If we wanted to compare Domestic with International sales and find the peaks and troughs in the data, we would need to read each line and work it out in our heads. Although not an arduous task, it still takes time to interpret and calculate.

But if we visualise that same data we can quickly see this information:

[Image: Ben Davis_Visualisations 2]

Hence visualising data generally makes it easier to interpret the results. Of course, not all data can be visualised well: large datasets with many plot points often end up looking crazy.
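Even a crude chart shows the effect. Here is a toy illustration (with invented monthly figures) that renders two sales series as text bar charts, so the peaks and troughs jump out without any mental arithmetic:

```python
# Toy illustration of why a picture beats a table: draw each value as a
# bar of block characters scaled to the series peak. Figures are invented.

def bars(values, width=20):
    """Scale each value to a bar of at most `width` block characters."""
    peak = max(values)
    return ["█" * round(v / peak * width) for v in values]

domestic = [120, 135, 180, 90, 150]
international = [60, 95, 70, 140, 110]

months = "Jan Feb Mar Apr May".split()
for month, d, i in zip(months, bars(domestic), bars(international)):
    print(f"{month}  dom  {d:<20}  intl {i}")
```

Run it and the domestic peak in March and the international peak in April are visible at a glance, which is exactly the point being made about visual formats.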

— “Maps were some of the first ways that the human race looked at data in a visual format.” —

But an often-overlooked component of data visualisation is colour. Get it right and the results will speak for themselves, and your work will be well received by the business. Get it wrong and it can lead to confusion and misinterpretation. Heaven forbid that all our hard work in data wrangling, sorting and analysing comes to nothing just because we chose the wrong colour for a data value.

So why is colour so hard to get right? The answers are quite simple:

Cultural interpretation of colour – If you see a red light you stop; stop signs are red; warning sirens are generally red. As a result, you generally accept red as a colour of danger. But this isn’t necessarily so in other cultures: in China, red means prosperity and luck. So think twice about colouring negative values red if your end users are Chinese.

Colours are hard to tell apart – How often have you been stuck trying to find a different shade of blue, brown or a pastel colour? Representing different values in similar colours often leads to confusion, forcing the consumer to refer to a legend to work out which value matches the colour they are looking at. Worse still, those who are colour blind may interpret a result entirely differently, as the colours are so close to each other that telling them apart becomes impossible.

Apart from employing a visual designer to ensure your data visualisations are top notch, there are two very simple rules to keep in mind. Of course there are numerous other rules, and volumes of theses have been written on this very topic, but the two simple rules below are a start if, like me, you lack colour skills at the design stage.

1) Colouring sequential data
Sequential data progresses from low to high or from high to low, so you should use gradient colours to represent the change in value. Once again there is a fine line here: you want the colours to be distinct enough to show the gradient, but not so distinct that they suggest dramatic changes in value. Stick to colours from the same colour group.
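One simple way to stay within the same colour group is to interpolate between a light and a dark shade of a single hue. The sketch below does this with plain RGB blending; the endpoint colours (a light and a dark blue) are arbitrary choices for the example:

```python
# Sketch: build a sequential palette by blending between a light and a
# dark shade of one hue. Endpoint colours are arbitrary example values.

def gradient(start_rgb, end_rgb, steps):
    """Return `steps` hex colours blending start -> end."""
    out = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at the start colour, 1.0 at the end
        rgb = [round(s + (e - s) * t) for s, e in zip(start_rgb, end_rgb)]
        out.append("#{:02x}{:02x}{:02x}".format(*rgb))
    return out

# Light blue -> dark blue: low values pale, high values saturated
palette = gradient((198, 219, 239), (8, 48, 107), 5)
print(palette)
```

Because every step shares the same hue, the eye reads the palette as one continuous scale rather than five unrelated categories.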

2) Colouring qualitative data
The opposite of sequential data is qualitative data, which represents categories that are distinctly different from the other categories on the screen. They need to be seen as totally different from one another, so this is where you apply contrasting colours to highlight the differences – for example, green against blue.

Always keep the end consumer of your visualisation in mind. You may know your data inside out and therefore understand it, but until someone unfamiliar with your work looks at it, you’ll only be designing from your own perspective.