
Over-Generalisation of Success: Implications for System Implementations

Is a very successful actor also very successful at choosing the best financial product?

One of the greatest problems of the business world today is the over-generalisation of success into contexts where that success is simply not transferable. We then compound that problem by stubbornly refusing to believe we have over-generalised, and try to force the model where it just doesn’t fit.

Let’s look at a salient and virulent example: process-based automation. Process-based automation is the concept of (i) understanding the process, (ii) converting it into a workflow and then (iii) designing and developing machine-based automation (e.g. a software system) that ensures the process is undertaken quickly, consistently and verifiably. This is fantastic for manufacturing. Programming software to automate and accelerate highly repeated processes has delivered enormous productivity benefits. Indeed, major software vendors like SAP grew up serving chemical companies and other manufacturers, automating materials management, production planning and payroll: i.e. highly repeatable processes.

When we move into other contexts we move further along a continuum from “frequently repeated processes”, like manufacturing, towards “goal driven” activities, like an air crash investigation. It is very hard to automate an air crash investigation, because your initial findings heavily influence your subsequent activity. You might spend days, weeks, months or even years simply looking for the “black box”. Most business contexts fall somewhere between these two extremes.

And here’s the rub. Most software implementations today consist of the following:

  1. Send the business analysts out to capture the processes (possibly to the BPMN standard)
  2. Undertake business process improvement (BPI, possibly using lean six sigma)
  3. Convert this into user requirements and functional specifications, then configure the software, test and go live

Regardless of context, we expect the kinds of results achieved when this approach was successfully deployed in manufacturing. But it doesn’t seem to work quite so spectacularly in office environments, especially where the work is investigatory rather than process oriented.

There is also an issue with how we implement IT projects. We rarely capture what happens when the current-state process goes off track, or when things that are infrequent, irregular or ad-hoc occur. What happens when the form is not filled in correctly or the payment is rejected? What happens at the end of the financial year or each budget cycle? What happens when an unscheduled event occurs, like a strike, an earthquake, a tsunami or a hurricane/cyclone? What happens when the board or a government regulator asks a question about something? Very rarely are these events or processes captured and, as a result, they are very often missed in the requirements for the new system.
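To make that gap concrete, here is a minimal, hypothetical Python sketch (all names invented for illustration): the happy path captured in the workshops is automated, and everything else, including the very events listed above, surfaces as an unhandled error.

    # Hypothetical happy-path automation, as typically captured in a
    # requirements workshop. Only the mapped process is handled.
    def process_claim(form):
        if not form.get("fields_complete"):
            # Legacy staff had an informal fix-up step here; it was never
            # captured in the workshops, so the new system can only reject.
            raise ValueError("unmapped path: incomplete form")
        if form.get("payment_rejected"):
            raise ValueError("unmapped path: rejected payment")
        return "processed"  # the only path the BPMN diagram showed

    print(process_claim({"fields_complete": True}))  # happy path works
    # End-of-year runs, regulator queries, strikes etc. were never mapped,
    # so post-go-live they all land in the ValueError bucket.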

Even worse, the legacy systems we are planning to decommission have often evolved to handle these unusual, irregularly repeated events. But our initial business analysis will likely view these hard-learnt modifications as inefficiencies to be six-sigma-ed into oblivion. Our shiny new system, cleansed of inefficiencies, is then totally unprepared for the stream of irregularities that comes down the pipe post-go-live. It is then, months after go-live, that we discover why the organization used to run that seemingly inefficient process that prepared for that irregular event (or, even more likely, we remain naive and just blame the new system).

This may not hurt if your business context has large quantities of highly repeated processes that benefit from the automation. However, you are likely to get no outcome, or even a negative one, if it doesn’t. Yet we pretend that every business context is like manufacturing. We just implement ERPs and CRMs ad nauseam everywhere: blue collar, white collar, private, public, manufacturing, service, profit, not-for-profit. We are over-generalising, especially in our implementation process.

We can also see this in the public sector with its current focus on customer service. Inspired by the customer-focus efforts of private sector customer-facing organisations like telecommunications companies, retailers and insurers, the public sector has become obsessed with improving “customer satisfaction”… as if all it does is provide services to individuals. But a key function of government is regulatory. Does anyone ask convicted felons if they are satisfied with the service they received from the courts? How about the citizen who has just paid a speeding or parking fine? What about the one who has just been deemed unable to provide medical or construction services to the market and has been de-certified? Unlikely to be satisfied customers there.

These “services” are provided to the wider public, not to an individual “consumer”. Focusing purely on “customer service” is an over-generalisation of a model that was successful in another context. There is certainly always room for improvement in the way all (especially public sector) organisations treat the citizens they deal with. But to believe that they should become myopically centred upon the “customer” is a misreading of their raison d’être. Many government organisations exist for the general public service, not simply to service individual members of the public.

Perhaps a more important function for your IT systems is supporting decision making, ad-hoc investigation, what-if scenario modelling or evidentiary record keeping. Perhaps process optimisation and customer focus are only minor components of the activities of your business.

So, before launching headlong into that multi-million dollar CRM or ERP implementation, check that your organization actually fits the context in which those solutions have proven successful in the past. And if you do need to implement a big process-based system, ensure you capture all of the processes, including the irregular, unusual, intermittent and error-correcting ones… business users don’t normally recall these in a workshop without significant prompting.

https://www.linkedin.com/pulse/over-generalisation-success-implications-system-jeffrey-popova-clark/


Neural Nets that don’t stop learning

Over the weekend, I was reminiscing over my 1990 copy of Rumelhart and McClelland’s seminal work on Parallel Distributed Processing (it’s about using backpropagation to teach neural nets). It reminded me that most modern efforts have missed a key point about the efficacy of using neural nets in artificial intelligence.

Unlike artificial neural nets, us biological neural nets are not taught everything during a learning phase and then released unto the world as fully taught algorithms. Instead, we never stop learning. This is enormously useful for a number of reasons, but also enormously dangerous for others.

Consider the driverless car that was incorrectly parking in vacant disabled parking spaces. Engineers had to keep telling the car it had made a mistake until eventually the AI learnt that vacant parks bearing the relevant symbols are not for parking (presumably unless the car itself contains a handicapped passenger). The same neural net has to learn that those same symbols are irrelevant during regular driving and only of relevance when undertaking parking manoeuvres.

Us humans have a major advantage: we don’t have to keep all potential contexts in our heads simultaneously, because we can hold the current context in short-term memory. Short-term memory is simply recently learned material. If we are driving past lots of parked cars and searching for vacant parks, we have recent memories of driving slowly over the past minute or so and seeing lots of parked cars. This recently learned material is invaluable in determining context.

When an AI neural net is no longer in learning mode, it must have sufficient knowledge in its net to decide a course of action in all potential contexts… parking, driving, recharging/refuelling, loading, unloading etc. It’s like trying to determine a story from a photo instead of a video. So why don’t we just let our neural nets continue to learn after we feel they are performing sufficiently well at the task at hand? The AI could then take advantage of recently learned, context-relevant information, which should simplify the AI’s task during operation… just like it does for us humans (see this coverage of Google DeepMind’s recent work: https://www.technologyreview.com/s/602615/what-happens-when-you-give-an-ai-a-working-memory/).
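As a toy illustration of the idea (not any particular vendor’s approach), here is a minimal Python sketch of a single logistic unit that keeps taking small gradient steps on each observation after “deployment”, so recent experience biases current behaviour: a crude stand-in for short-term memory.

    import numpy as np

    rng = np.random.default_rng(0)
    w, b, lr = np.zeros(2), 0.0, 0.1

    def predict(x):
        # probability output of a single logistic unit
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    def online_update(x, y):
        # one SGD step on the log-loss; called for every observation,
        # even after "deployment", so learning never stops
        global b
        grad = predict(x) - y
        w[:] -= lr * grad * x
        b -= lr * grad

    # A stream of recent "parking context" observations shifts the net:
    for _ in range(50):
        online_update(rng.normal(size=2) + np.array([2.0, 0.0]), 1.0)
    print(predict(np.array([2.0, 0.0])))  # now biased toward the recent context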

This sounds good until we realise that neural nets prevented from continuing to learn in the field are more predictable. Imagine a driverless car accident in which the neural net decided to crash the car, killing its passenger, rather than plough through a zebra-crossing full of school children. The car’s manufacturer can take an identically trained neural net and test it under the same conditions experienced during the accident. The responses in the test will be the same as those produced by the crashed car’s neural net. However, if we have a net which continues to learn in the field, it becomes almost immediately unique and unreproducible. We are unable to reproduce the conditions and state of the AI during the accident. In fact, the decision processes of the AI become as unpredictable as those of us human neural nets.

A machine controlled by a continually learning AI will not necessarily perform as expected. The total impact of all of the uncontrolled experiences on the neural net is essentially unknown. Effectively this is the case for all humans. We trust human neural nets to be airline pilots, but the odd one may decide to deliberately fly the plane into the ground. Will we be able to accept similar uncertainty in the performance of our machines? And yet, failure to accept this uncertainty may be the key reason why our neural nets are being held back from performing general intelligence tasks.


Prediction: Do we need perfection or just to be better than the rest?

Prediction is a difficult art. Some questions involve random variables that simply can’t be predicted. What will the spot price of oil be at close on June 20, 2017? You may be a fabulous forecaster who takes into account historical trends, published production figures, geopolitical risks etc, and be more informed than anyone on the planet on the topic, but the likelihood of precisely hitting the spot price at close is very, very low. This means that although you may be, on average, more accurate than a less capable forecaster, you will nevertheless more than likely get the final answer wrong. Is this just as useless as someone who is just guessing?

So, are perfect forecasts really the gold standard we need to aim for? Or, like the metaphorical “running away from the bear” meme, do we just need to be better than “the other guy” to gain competitive advantage? The answer is yes: you just need to be better at predicting than your competitors. You don’t need to achieve the impossible, i.e. perfect accuracy in your predictions. Many people simply abandon any effort to get better at prediction once they realise that perfection is unattainable. This is a big mistake… getting better at prediction is both worthwhile and eminently doable.

There is no advantage in predicting things that are perfectly predictable and no-one can predict the totally unpredictable. The competitive advantage lives in the middle. Being better than everyone else at forecasting hard to predict things gives you an edge even though you are unlikely ever to get the answer perfectly right.

In fact, as the diagram above shows, there is no competitive advantage in either “totally unpredictable” or “fully predictable” events. No-one is going to get rich predicting the time of the next lunar eclipse anymore: equations and data exist that make forecasting eclipse events to the second quite mundane. Similarly, no-one can predict the next meteor strike (yet), so we are all as inaccurate as each other and no better than pure guesswork regarding when and where the next one will strike. But in between these two extremes there’s plenty of money to be made.

In the above chart the actuals are the orange dots and the blue line is a typical forecast. The typical forecast (blue line) even got the answer perfectly right in period 5, hitting the actual figure of 33 precisely. But the Superforecast (orange line) is almost twice as accurate as the typical forecast, despite never getting the precise answer correct in any one period. A decision maker armed with the Superforecast is going to be in a much better position than someone armed with the typical forecast.
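The chart’s data isn’t reproduced here, but an illustrative Python sketch (numbers invented, not the chart’s) shows how a forecast can score an exact hit yet still be roughly half as accurate overall, measured by mean absolute error:

    import numpy as np

    actuals = np.array([30, 25, 36, 28, 33, 40, 31])
    typical = np.array([22, 31, 28, 36, 33, 30, 39])  # exact hit in period 5
    superf  = np.array([27, 28, 32, 31, 36, 36, 35])  # never exactly right

    mae = lambda f: np.abs(f - actuals).mean()
    print(mae(typical), mae(superf))  # ~6.9 vs ~3.4: superf ~2x as accurate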

So the key is to be as accurate as possible, and more accurate than your competitors, when it comes to predicting market demand, geopolitical outcomes, crop yields, productivity yields etc. Although still unable to predict perfectly, being better than everyone else yields significant competitive advantage when deciding whether to invest your capital, divest that business, acquire that supplier etc. So how do you get better at predicting the future? Well, that’s where a combination of Big Data and Superforecasting comes in.

Big Data is the opportunistic use of data, both internal to your organisation and available from third parties, crunched with modern technology to make better predictions about what is likely to happen. Superforecasting is the practical application of techniques born of cognitive science (commonly mislabelled Behavioural Economics) that overcome humans’ natural cognitive biases and lack of statistical/probabilistic thinking to improve forecasting across any expertise domain. Between the two, any organisation can significantly improve its forecasting capability and reap the benefits of clearing away more of the mists of time than its competitors.

The key is not giving up simply because perfect prediction is impossible.

Do you know which activities in your organisation could seriously improve their performance through better predictive accuracy, and thereby significantly impact the bottom line?

https://www.linkedin.com/pulse/prediction-do-we-need-perfection-just-better-than-popova-clark/


Traits of a Superforecaster… Hang on, that’s a top Data Scientist

I have been a follower of Philip Tetlock, Daniel Kahneman, Richard Nisbett, Thomas Gilovich and others for over 20 years, and devoured Tetlock’s recent Superforecasting book on its release. Tetlock led a team of forecasters using a combination of crowdsourcing, cognitive bias training, performance feedback and other techniques to outperform a suite of other teams at forecasting geopolitical events in a controlled multi-year forecasting tournament. The tournament was run by IARPA (Intelligence Advanced Research Projects Activity), the intelligence community’s (FBI, CIA, NSA etc) equivalent of DARPA, as part of its Analysis and Anticipatory Intelligence streams.

IARPA invited a range of teams from multiple elite US universities to forecast hundreds of important geopolitical questions, to gauge their performance against professional intelligence analysts’ forecasts (the professionals had access to classified information; the teams did not). In the first year Tetlock’s team beat all their competitors handsomely and exceeded not only IARPA’s accuracy goals for that year but also IARPA’s initial goals for the second and third years of the tournament. The next year Tetlock’s team improved again, vastly outperforming all the new goals (and competitors), such that IARPA decided to cancel the tournament and bring Tetlock’s team in to do all future IARPA forecasting. In many cases Tetlock’s team of Superforecasters, with access only to publicly available information, significantly outperformed the intelligence analysts with access to classified information.
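Forecasts in these tournaments were scored with Brier scores: the mean squared error between probabilistic forecasts and what actually happened (0 is perfect; lower is better). A minimal Python sketch of the simplified binary form, with invented numbers, shows why calibrated, hedged forecasts beat bold but overconfident ones:

    import numpy as np

    def brier(forecasts, outcomes):
        # simplified binary Brier score: mean squared error of probabilities
        return np.mean((np.asarray(forecasts) - np.asarray(outcomes)) ** 2)

    outcomes      = [1, 0, 1, 1, 0]            # what actually happened
    cautious      = [0.8, 0.2, 0.7, 0.9, 0.3]  # calibrated, hedged forecasts
    overconfident = [1.0, 0.0, 0.0, 1.0, 1.0]  # bold, sometimes dead wrong
    print(brier(cautious, outcomes))       # 0.054
    print(brier(overconfident, outcomes))  # 0.400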

But what makes Tetlock’s team of Superforecasters so good at predicting the future? Tetlock, being an academic, took the opportunity to identify what habits and characteristics his team of forecasting geniuses have that others don’t. Here are the attributes he found:

  • Cautious – they predicted outcomes with less personal certainty than others, they always kept “on the other hand” in mind
  • Humble – they didn’t claim that they knew everything or that they fully understood all aspects of a problem and would readily change their mind when new evidence came in
  • Nondeterministic – just because something has happened they don’t ascribe a “Monday-morning quarterback” explanation for its occurrence. They keep in mind that events may have turned out differently for potentially unknown reasons.
  • Actively open-minded – they’re always testing their own beliefs and see them as hypotheses to be tested, not dogma to be protected
  • Naturally Curious with a need-for-cognition – love solving problems and adding new ideas and facts to their own knowledge
  • Reflective, introspective and self-critical – they are constantly re-evaluating their own performance and trying to uncover and correct the errors they themselves made
  • Numerate – tend to do back-of-the-envelope calculations, comfortable with numbers
  • Pragmatic (vs Big Idea) – not wedded to any one ideology or worldview, preferring reality and facts over opinions and untested theories
  • Analytical (capable of seeing multiple perspectives) – break problems up into logical parts and consider many different views of a problem
  • Dragonfly-eyed (value multiple perspectives) – value the inputs of viewpoints that are new and different to their own and can handle assessing differing theories at the same time.
  • Probabilistic – see events as likely or unlikely (as opposed to will or won’t happen) and are comfortable with uncertainty
  • Thoughtful updaters – willing to cautiously adjust their previous assessments as new information comes in
  • Good intuitive psychologists (aware of and able to compensate for common biases) – they understand the various cognitive biases us humans have when thinking about problems and put in the effort to overtly overcome the shortfalls the biases cause
  • Personal Improvement mindset – always trying to learn more and get better at whatever they are doing
  • Grit – simply don’t give up until they feel they can’t improve their assessment any further

When I reviewed this list of personal attributes and habits, it struck me just how similar these Superforecaster attributes were to those ascribed to truly excellent data scientists. A quick review of articles (e.g. Harvard Business Review, The Data Warehouse Institute, InformationWeek etc) about the differences between good Data Scientists and great ones turned up this aggregated list (a ✔ indicates an overlap with Superforecasters; ✗ indicates an attribute unique to excellent Data Scientists):

  • Humble (before the data) ✔
  • Open-minded (will change their mind with new evidence) ✔
  • Self-critical ✔
  • Analytical (breaks problems down, does quick back-of-the-envelope calcs) ✔
  • Persistence/grit ✔
  • Comfort with uncertainty/able to hold multiple theories at once ✔
  • Cognitive/Innate Need-for-understanding ✔
  • Pragmatic (as opposed to theoretical) ✔
  • Interested in constantly improving ✔
  • Creative (able to generate many hypotheses) ✗
  • Has innate understanding of probability/statistical concepts (conditional probability, large numbers etc) ✔
  • Business Understanding ✗
  • Understands Databases and scripting/coding ✗ 

This also works in the other direction, with only a couple of Superforecaster-specific attributes not covered by the Great Data Scientist attributes list. Tetlock has studied his Superforecasters with academic rigour, whereas the Data Scientist list is likely to be more opinion and untested hypotheses (so I suspect the Superforecasters won’t like this article :-). Nevertheless, one cannot help but be impressed by the significant overlap of the two lists.

Is it possible that the big successes delivered by Data Science to date have been due not primarily to massive data-crunching capability and the ubiquity of IoT and social media data collection, but primarily to applying the personal habits and attributes of Superforecaster-like Data Scientists to business problems? If so, we could get a lot of business value out of applying Superforecaster-like people and approaches to many business problems, especially those without much data available.

Do you know of any applications where a Superforecaster approach might help your organisation?

List of articles:

https://hbr.org/2013/01/the-great-data-scientist-in-fo

http://www.informationweek.com/big-data/14-traits-of-the-best-data-scientists/d/d-id/1326993?image_number=1

https://infocus.emc.com/william_schmarzo/traits-that-differentiate-successful-data-scientists/

http://www.cio.com/article/2377108/big-data/4-qualities-to-look-for-in-a-data-scientist.html

http://www.boozallen.com/insights/2015/12/data-science-field-guide-second-edition

https://upside.tdwi.org/articles/2016/06/13/five-characteristics-good-data-scientist.aspx

Note that I had to edit the aggregated list because some articles were comparing Data Scientists to general IT people or general executives, as opposed to less effective Data Scientists. This led some articles to concentrate on the technical capabilities required of every Data Scientist, as opposed to the personal characteristics and habits that result in excellence.


Neural Nets: Use with caution

There has been significant hype surrounding artificial intelligence and neural nets recently, and with good reason. AI is superb at learning to respond appropriately to a set of inputs, like a stream of video or audio data, and at handling complexity beyond the capacity of us mere humans. Recent efforts in natural language processing and driverless vehicles have been nothing short of astounding.

As the tools of data mining have morphed into big data, the capabilities coming from these world-leading AI projects are being incorporated into the toolkits available to the corporate data analyst. Tools such as R, Oracle Data Mining, Weka, Orange and RapidMiner, as well as libraries for languages like Python, make neural nets readily available for a vast range of analytic endeavours.

But caution is advised. The great results of these major AI projects have come with a few little-noticed advantages. Firstly, the humans have already determined that there is indeed a signal in the data they are trying to model. When the video stream comes in showing that the parking sign says “Handicapped Only”, the AI neural net quickly learns that parking the driverless vehicle there is an error. But the fact that the parking sign is verifiably there in the data is known by the modelling Data Science team a priori. A neural net will find the signal that we knew was there. Another advantage is the sheer quantity of data. Teams working for Google, Apple and the like use billions and even trillions of data records to train their neural nets. This makes up for the relatively high number of “degrees of freedom” inherent in their neural net models.

However, in many real world situations we are looking for signals in the data that may or may not be there. For instance, if we are trying to predict the future performance of one of our regional sales locations based on attributes of the centre and its surrounding catchment, we may or may not have sufficient information in the data to detect a signal. Perhaps a natural disaster will impact the performance, or there will be industrial action, or a terrorist act or… The key is that the data we possess may or may not be sufficient to make a reliable prediction. If it is sufficient, that’s great, but, if it isn’t, we need to know that fact.

Unlike more traditional predictive techniques (like regression, survival analysis and decision trees), neural net models are difficult for us mere humans to interpret. But the powerful data tools I mentioned above will let our corporate data science team casually throw a neural net model at the data. What’s more, they will invariably produce seemingly more accurate results than the traditional models. The old “this model makes no logical sense” check is not available for neural nets, leaving our data science team at high risk of modelling simple noise. This is particularly a problem when we re-use the same holdout sets while trialling hundreds of slightly differently configured neural net models.
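A minimal sketch of the failure mode (using scikit-learn on synthetic data): by construction there is no signal linking the inputs to the labels, yet a flexible net still “learns” the training set, and only genuinely fresh data reveals it has modelled pure noise.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)          # pure noise
    X_new, y_new = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)  # fresh noise

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, y)
    print(net.score(X, y))          # near 1.0: the net memorised the noise
    print(net.score(X_new, y_new))  # ~0.5, i.e. chance, on fresh data
    # Re-using one holdout set while trialling hundreds of configurations
    # repeats the same trap at the model-selection level.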

So who cares? Essentially, using un-interpretable but powerful neural net models may make you feel like you are more accurately predicting the future. But in reality you may simply be capturing the noise in your input data. You may waste lots of time chasing ghosts or, worse, deploy a model into operation which performs little better than chance in the real world.

Have you seen an example of a Data Science team chasing ghosts?

https://www.linkedin.com/pulse/neural-nets-use-caution-jeffrey-popova-clark/


Beware of the just-so “Use Case” stories

Us humans love a story that has a nice beginning, some events in the middle and a positive end. And the people who are trying to sell us something know this all too well. These are known as “just so” stories: everything you need to know about the outcome is assumed to be contained in the story. Here’s an example:

“Company X was losing customers at an ever increasing rate. The CIO and COO got together and decided to create a Data Lake using [place your technology/software here] to analyse all of the various disparate data sources in a single repository. They also hired a Data Scientist who used the technology to identify a cohort of at-risk customers. A churn prevention program was developed to target the at-risk group and Company X recovered its lost ground and began to grow its customer base again.” 

The implication is that “if your business has customer churn issues then you should consider talking to [place your vendor here].”   

The problem is that the reality of the above use case is probably a lot messier than the just-so story leads us to believe. This is the far more likely scenario:

“The CIO of Company X hired a Data Scientist, who managed to obtain some IT resources from the data warehousing team to germinate an unstructured data repository. The Data Scientist, as a side project, poured data from multiple systems into the new repository. The Data Scientist’s official project was developing some scorecards and dashboards for the COO, based on some updated efficiency KPIs. As a result of building the dashboards, the Data Scientist became aware that churn was high and, on a hunch, decided to hit the new repository to see if there were any unique characteristics of the churning customers. The Data Scientist happened to get hold of an unused license of [vendor product here] to do the analysis. It turned out that there were indeed unique characteristics of churning customers, and the Data Scientist proudly told the CIO and COO about it… etc.”

Firstly, the CIO’s reason for hiring the Data Scientist was not deliberately to tackle customer churn. Also, it was the Data Scientist who managed to, almost covertly, divert IT resources to create the embryonic Data Lake. So investment in a Data Lake was not a deliberate decision by executives trying to solve a particular business problem. The Data Scientist could easily have looked into the unique characteristics of the churning customers and found nothing, simply because there was nothing there to see. In fact, had the Data Scientist been developing some financial reports for the CFO, instead of a dashboard for the COO, the customer churn problem may not have been noticed at all. The just-so story could just as easily have turned into an accounts payable fraud detection story instead. Most of all, the technology used to determine the customer characteristics could have been any product capable of the type of analysis the Data Scientist decided to try, and was only utilised because of the unused license.

So you, as the reader of the Use Case and potential buyer of the vendor/software/technology, need to keep in mind the possibility that the story has been given to you in a “just so” form. This is particularly important because companies that start a Data Lake promising to “increase sales by x%” or “decrease customer churn by y%” will be sorely disappointed. Despite the impression given by just-so Use Case stories, Data Science is more akin to diamond mining than constructing a bridge. You need to add data capability to help uncover the gems that are no doubt there… but how big, how many and what type of diamonds? No-one will know until you’ve uncovered them, a long time after the decision was made to invest.

Postscript: I’m aware there is a Gartner report which fundamentally disagrees with the above (i.e. 90% of failures are due to uncertain use cases). However, I submit the analysts may have fallen for the trap described in this article, looking at results after the fact. Were the 10% successes really designed specifically for the issue that eventually rendered them a success? Or were they just the lucky 10% who found something? More likely, the 10% figure itself comes from a Woozle Effect.


Users: Transaction Generators vs Information Consumers

When designing enterprise-level information technology solutions, a new dichotomy is becoming increasingly apparent: transaction generators vs information consumers. When thinking in terms of single applications, we have traditionally thought in terms of business functionality: e.g. an accounts receivable function needs to be supported by an accounts receivable system. It has been standard to rely on the individual application’s reporting module to satisfy the analytics-requiring portion of the user base. For more esoteric requirements, we have started to rely on extracting data into data warehouses and, more recently, data lakes, to mash up data from the various functions and with 3rd-party data (like data streams).

However, it is becoming clearer that there is indeed a set of users who do very little transaction generation and really don’t need to be users of our transaction systems at all. This includes executives, analysts, consultants, auditors, data scientists and so on: information consumers. This class of users can be better served by obtaining almost all of their requirements from a comprehensive data lake. Indeed, such a design is preferable, as the lake can be a single source for all of their analysis, removing the need to get access to, and learn how to use, all of the various transaction systems which generate the source data. The transaction generation systems can then concentrate on capturing transactions efficiently and effectively, freed from needing to accommodate the needs of these fundamentally different types of users.

Aggregating all of the required analytics data into a single enterprise repository has many synergistic benefits. In a single repository, data from multiple functions can be seen in whole-of-enterprise, and even whole-of-supply-chain and whole-of-market, contexts. User familiarity with their analytics interface depends not on the content of the data but on the analytics task: visualisation for building interactive dashboards (e.g. Tableau, Qlik, PowerBI), continuous monitoring for alerting and compliance (e.g. ACL, Logstash, Norkom), data discovery for investigations (e.g. SAS Analytics, Spotfire), reporting (e.g. Cognos, BusinessObjects), search (e.g. Elastic), data mining (e.g. Oracle Data Mining, EnterpriseMiner) and statistical analytics (e.g. SAS, R).

But doesn’t this approach violate the concept of a single point of truth (SPOT)? No. The transaction system might indeed be the SPOT, but our data repository can be a replica of the SPOT that differs in known ways (e.g. it is a copy as at midnight last night, or as at 30 minutes ago). For the vast majority of information analysis needs, this level of “fresh enough” is perfectly fit for purpose.
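A minimal, hypothetical sketch of that “fresh enough” idea (names invented for illustration): the lake records when each table was last synced from its source-of-truth system, and each analytics use case declares how much staleness it can tolerate.

    from datetime import datetime, timedelta, timezone

    def fit_for_purpose(last_synced, max_staleness):
        # fresh enough if the replica's known lag is within tolerance
        return datetime.now(timezone.utc) - last_synced <= max_staleness

    nightly_copy = datetime.now(timezone.utc) - timedelta(hours=9)
    print(fit_for_purpose(nightly_copy, timedelta(days=1)))      # True: monthly reporting
    print(fit_for_purpose(nightly_copy, timedelta(minutes=30)))  # False: needs the SPOT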

Functional systems have only the most rudimentary analytics capability (likely just some basic reporting), which is only a small fraction of the analytics capability of value to information consumers. Executives and other information consumers have been short-changed by the rudimentary data analytic tools provided by functionally focussed transaction systems up to now.

Modern enterprise solution architects need to split the data analytics functions from the transaction systems: free the data and then deploy the analytics power of Big Data.  


KPIs ain’t everything

Previous articles in this series about the pragmatic use of KPIs in organisations are available at the links following this article. This particular article looks at KPIs as part of an overall framework of organisational control.

KPIs are part of a whole
KPIs, especially when widely published and/or used to calculate personal bonuses, are an amazingly powerful set of behaviour modifiers. As explored in previous articles, KPIs can set and/or communicate direction to staff and other stakeholders, encourage hard work and innovation, and provide feedback on the success or failure of past efforts. Viewed from this angle, the potential consequences of setting KPIs in ignorance of the many other influencers of behaviour in an organisation become obvious.

Management uses a great range of levers to influence and shape the behaviour of the individual parts of the enterprise, including:

  • Corporate Policy
  • Risk Mitigation Plans
  • (Approved) Budget
  • Process Manuals
  • Organisational Structure
  • Computer System (automated) Controls
  • Contracts/Agreements with Partners, Suppliers, Customers and Regulators
  • Project Mgt Plans
  • Position Descriptions
  • Vision/Mission/Strategy Statements
  • Water Cooler chats
  • Public and media proclamations
  • Executive or Board Orders/Decisions
  • C-Suite meetings/roadshows with staff
  • Reward and Recognition Programs
  • Review Report Recommendations
  • Training and Development Programs
  • etc

Staff, suppliers, partners and other stakeholders glean what is expected of them from the above, and use their understanding of these levers to guide their own actions and behaviour. The Performance KPI, when tied to personal bonus schemes, will often be seen by staff as the most important of these, as that is where the organisation is putting its hard cash.

KPIs are more important than what the CEO says
This is how some of Australia’s largest insurance organisations recently sent mixed messages to their staff: the Executive and Board emphatically stated that they wanted an ethical institution, but the KPI-driven personal reward systems were based entirely on profit-focussed measures. Decreasing costs (like minimising claim payouts) and increasing revenue (like making policy sales to any buyer) will become a very large focus for those measured and rewarded exclusively on profit. Essentially, staff thought: “I hear what they are saying, but if they really wanted that, they’d change my KPIs”.

It becomes easy to see that KPIs must be in close synchrony with the remaining behavioural signals being sent to staff. If you want behaviour different from the past, you must look at all behavioural controls and ensure they are sending a synchronised message of change, or else your staff will choose which of the conflicting messages to follow.

Not only must KPIs remain in synch with other influencers of behaviour, they must also remain in synch with each other. If the KPI system is rewarding one set of managers in one direction and another set in an entirely different direction, the two are likely to come into conflict. For instance, if some executives’ KPIs and bonuses are tied to completion of a capital plan, while another set of managers have KPIs about minimising budget expenditure, the two sets of managers will likely clash.

Don’t Ignore the Knitting
As with strategy generally, if KPIs concentrate exclusively on what needs to change, without including maintenance of core activity, it is likely that core activity will be sacrificed or, at best, neglected. Once we provide contingent remuneration, we are asking our staff to focus on the issues we are measuring and rewarding, which implicitly asks them not to focus on others. A KPI developer must keep this in mind when developing a portfolio of KPIs for a team or individual. This is another reason why some KPIs may not need to be “stretch”: they may act as a threshold (i.e. do not let this figure cross this level, e.g. do not allow employee churn to rise above 12% p.a.) which is easy to achieve but ensures that core activity is still being completed whilst the individual strives for excellence in other areas of continuous improvement.

The Takeaway: What to Do
As a takeaway, KPI developers must be cognisant of the many ways that behaviour is influenced within an organisation, and must keep in mind that the KPI and bonus are just one of these influencers. Do you have any examples of where KPIs were inconsistent with other indicators of corporate intent, leading to unintended dysfunction?

Previous KPI Series Articles
A Tale of Two Managers – a hypothetical where two managers react differently to a revolutionary idea. The story highlights the dangers of annual % increment KPIs to overall organisational performance.

The Parable of the Wet Driveways – an alternate reality where scientists need to figure out how to determine whether rain occurred at night when everyone’s asleep. The story warns of the dangers of confusing correlation and causation when measuring performance/outcomes.

Oils ain’t Oils & KPIs ain’t KPIs – the many reasons KPIs are currently used in practice and how easy it is to accidentally start using the ones you have in ways for which they were not originally designed.

KPIs ain’t KPIs: Part 2 – exploring the ways that humans interact with KPIs once they begin to be rewarded based on the results. It’s not always as expected/planned.

https://www.linkedin.com/pulse/kpis-aint-everything-jeffrey-popova-clark/


If Oils Ain’t Oils then KPIs ain’t KPIs either

There’s something you gotta get straight about KPIs!

KPIs are like fire: they are very powerful, but if you don’t use them carefully they will burn you and your organisation.    An honest look at the use of KPIs in practice leads to a number of pragmatic suggestions which can vastly improve the positive impact of these powerful creatures and avoid their many pitfalls.

What are they for again?

 KPIs are for a great many things:

  1. To provide an incentive to staff to strive a little more (employee incentive)
  2. To send a signal to staff about what goals the executive would like achieved (communicating staff direction)
  3. To allow executives to see if their staff need to be paid a bonus (remuneration calculation)
  4. To help align staff action to organisational goals (staff alignment) and to each other
  5. To efficiently monitor progress on how the business is progressing (monitor business performance)
  6. To quickly identify any issues that need addressing (problem identification) and how big those issues are (problem assessment)
  7. To allow comparison to benchmarks (either compared to the past, other parts of the organisation, other organisations or some gold standard) to determine remaining potential (benchmarking)
  8. To communicate to external stakeholders what the organisation intends to focus upon (stakeholder communication re intent)
  9. To tell external stakeholders how the business is performing (stakeholder communication re performance)
  10. To assess the current state of the environment that impacts on the performance of the organisation

The big problem is that we normally start unconsciously creating KPIs for one or two of these purposes. More often than not, though, our KPIs end up getting (ab)used for the other purposes as well.

For instance, KPIs that are focussed on aligning staff may be subtly different from the ones that are meant to stretch them. An “align staff” KPI target does not need to be achievable, just clear enough to help staff understand the direction management intends for them to progress. This could be an impossible target like “No imperfections in any final product”. A stretch KPI, however, may be very difficult to achieve but still possible: e.g. “no less than 6-sigma quality across the year”. If the intent of management is to convey a focus on product quality, then a “No imperfections in any final product” KPI target is very useful. Just make sure that bonuses are not tied to achieving it.

Some measures are very useful for monitoring the performance of a business but are not actually intended to change behaviour in any way. For instance, one may monitor employee churn rates to ensure that they are not increasing or decreasing. If they are decreasing, it could signal that employees are feeling job insecurity or that the staff pool is not valued in the external labour market. If they are increasing, it could be because of better job opportunities elsewhere, an ageing workforce that is increasingly retiring, or poor supervisory practices. Essentially these kinds of KPIs are a watch-and-see type: changes warrant attention, but no-one is trying to “hit a number”. Such KPIs tend to be underutilised because there’s no stretch aspect to them, so the confusion is that they can’t be KPIs if they can’t be used to incentivise or stretch a team or staff member. But such a KPI is useful because it helps monitor business performance and identify issues that require business action.

The Take Away

The key is to overtly label each KPI based on its raison d’être.  If a KPI is for business monitoring and not for remuneration…label it so.  Contact me for a useful KPI purpose assessment tool that can assist with this process.

Next week we’ll look at how KPIs tied to incentives can sometimes have unintended consequences.


A Tale of Two Managers

Once upon a time there were two managers, from different firms, who each ran a section expending $20M per year to produce its outputs. They both wore glasses, both spoke well at large gatherings and both were well liked by their respective senior executives. Not rising hot shots, but solid performers who would no doubt eventually earn their way into executive ranks in the not too distant future.

The Idea

One morning in the shower, whilst pondering ways to improve their respective areas, they both independently came to the same insight. The idea was awesome… revolutionary… absolutely fabulous: it would save their sections not just the 5% stretch efficiency target in their personal KPIs, but a whopping 25%. They could reduce the costs of their sections from $20M per year to just $15M per year without interrupting production and without any capital expenditure. They both could smell the executive suite leather… what an idea!

However, this is where our two managers began to differ. One, named White, is totally committed to benefiting the organisation, a total team player, whilst the other, Black, thinks more in personal terms. White immediately implements the idea and is absolutely overjoyed to discover that the plan is working perfectly: costs have dived 25% and the bottom line for the company is a full $5M per year better off, and will be for every subsequent year. White enjoys hearty congratulations from the executive and a full bonus for the year.

Black, however, decides to implement the idea but simultaneously makes other changes that decrease the efficiency of the section by $4M. This means that in the first year Black meets the stretch target of 5% by saving only $1M, but still gets the full bonus and the same hearty congratulations from the executive enjoyed by White.

End of Year One

At the end of the year White sits down with the boss to negotiate next year’s KPIs. “Wow, you smashed that 5% out of the park. 25%! Well done,” says White’s boss. “Thank you,” says White. “So, look, we can’t expect 25% every year, so I’m happy to leave the target as 5% again. That shouldn’t be a problem for a hotshot like you.” But White says, “Well, we are of course enjoying the 25% reduction again this year, you know. Another $5M in the bank!” But White’s boss says, “Can’t rest on your laurels though… continuous improvement and all that. Should be able to find another 5%, surely.” White walks out a little dubious, not confident at all that another 5% is really available after cutting 25% out of the expenses last year.

Black also sits down with the boss. “Well done. You not only beat the target of 2.5%, you even hit the stretch of 5%. That’s like $1M. Well done. I hope you enjoy the bonus. So do you think you can do it again? Another 5%?” Black says, “Sure, let’s see if we can’t do it again.” Black walks out knowing that next year’s bonus is in the bag: just remove some of the inefficiencies deliberately put in last year, and the initial saving idea will do the rest.

End of Year 2

White tries everything during Year 2 but just can’t make a further dent in the costs: $15M expenses again. Black, meanwhile, merely removes some of the deliberately planted inefficiencies and brings the costs down from $19M to $18M, beating the target again for the second year in a row.

White’s boss says, “Well… you couldn’t even hit the base target this year. A bit slack, White. Look, we all remember last year, that was great, but you’ve got to keep that up. And remember the CEO is retiring in a couple of months and the new CEO won’t remember your great year. You’ve got to at least hit the base target of a 2.5% reduction. OK?” White says, “But we’ve only spent $30M over two years now, when we used to spend $40M. We’ve saved $10M… which goes straight to the bottom line!” But the boss says, “Look, White. We can’t just have one-hit wonders around here. You’ve got to show us you’re able to get continuous improvement.” White leaves the meeting feeling a little shaken, determined to find another efficiency in the section.

Black, meanwhile, is having a great meeting. “Well done, Black, you did it again. Two years in a row hitting the stretch target. In fact, a bit more this time. Another bonus. From $20M to $19M to $18M. Surely you can’t do it again? Another 5%?” Black, starting to get cocky, says, “Look, we’ve hit 5% twice; let’s go for 5.5% this year as the stretch.” “Really?” says Black’s boss. “You really think you can do it?” Black says, “Continuous improvement shouldn’t only be for operations; management should get better as well.” “OK then, if you really think you can do it, 5.5%.”

End of Year 3

In Year 3 you can guess what happens. White makes no inroads on the $15M and Black shaves another $1M off the cost base. White has missed base targets two years in a row and Black has now hit stretch three years in a row. Black is a hero, with growing fame and talk of fast-track promotion, whilst White’s job is in jeopardy. White’s boss says, “The new CEO can’t understand why I keep a manager who misses base targets two years in a row. You’ve simply got to improve, White.”

End of Year 4

By the end of Year 4, White is sacked and Black is promoted, even though Black’s unit never once became as efficient as White’s. In fact, over the four years White’s unit outperformed Black’s, producing the same outputs for $60M (a saving of $20M), whilst Black’s unit used $70M (a saving of only $10M). Why is White sacked even though she has performed twice as well as the hero Black?
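The story’s arithmetic, spelled out in a few lines of Python (Black’s Year 4 figure of $16M is inferred from the $70M total):

    baseline = 20 * 4              # $80M had nothing changed
    white = sum([15, 15, 15, 15])  # $60M spent -> $20M saved
    black = sum([19, 18, 17, 16])  # $70M spent -> only $10M saved
    print(baseline - white, baseline - black)  # 20 10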

Indeed, the “hero” Black deliberately spent $10M of the company’s money on being inefficient, purely for personal gain (see the lighter red section of Black’s chart). Why are the rewards for the individual so totally out of whack with the benefits to the company?

This is a simple example, and the likelihood that there are sociopaths like Black deliberately manipulating performance to optimise their personal long-run gain at the expense of the overall business is probably low. But it does illustrate how seemingly corporate-aligned KPIs can lead to unintended, counter-productive decision making amongst those measured and rewarded by them.

 Have you seen KPIs encourage counter-productive behaviour? Let me know your examples.