
What is someone worth to your enterprise?

Working out what remuneration someone is worth is of enormous interest to employers and their people alike. But what are the key ways to determine appropriate compensation for the contribution of current and potential future contributors to an enterprise? I thought I’d jot down some of the main approaches to arriving at that elusive number:

  1. Market value approach: what do people in that type of role generally receive for performing that kind of work, or what does the market think that particular person is worth. Essentially, what is the market rate for that type of work or for the individual in question. This is a common approach for commodified positions in the private sector and for individuals who are well known in their industry.
  2. The marginal utility approach: if the person is undertaking their role to the standard expected, what extra revenue/profit/gross margin will the enterprise achieve over and above what it would achieve if the person were not in the role. This value can change over time with variations in market conditions and with the performance of other roles in the enterprise. An example is the remuneration of a football player: if the player is so good that the club is much more likely to win games and attract more fans to watch them, then they are worth a significant amount on a marginal utility basis.
  3. Equal share approach: if the surplus value being created by the enterprise is shared equitably amongst the people responsible for achieving it, then this is fair compensation for that contribution. So if the value add is $1M per annum over and above other input costs (including return on equity), and 10 staff helped achieve that value add, then everyone should receive $100K. This is more often used for bonus calculations, but is nonetheless part of an individual’s remuneration.
  4. Company value delta: what is the increased value of the enterprise in the market if the individual is in the role as opposed to if the individual is not in the role. Some CEOs are valued based on this concept, where the value of the company is increased because the market believes this CEO will improve the performance of the company or bring a loyal book of established clients. Therefore the CEO is worth a proportion of the net asset value delta of the company with him/her vs the value without him/her.
  5. Compensation for investment approach: how much would a person have to have invested in their education/certifications/experience to have been capable of undertaking the role and what is a fair compensation for that. For instance, some medical specialities require over 10 years of tertiary studies to achieve a level of education and expertise to be able to function autonomously. This level of investment leaves less career lifetime to earn and therefore requires higher compensation to make up for the level of prior investment made by the position holder/candidate.
  6. Optimise churn approach: what level of compensation is sufficient to deter an incumbent from choosing to go elsewhere. This can even be below market rate, as there is a cost to changing employers that incumbent staff may factor into any decision to quit and start with a new employer who may be paying the higher market rate.
  7. Attract the best approach: what remuneration is likely to attract, retain and motivate the best people who are more likely to contribute more to the enterprise’s success. This may be a “set the tone” approach which says that a high performing culture is the expectation and that the enterprise attracts and retains the best, and appreciates the added value the best bring to the enterprise.
  8. Lifetime value approach: as opposed to viewing a person in their current role or the value they are currently contributing, this approach looks at the value add of a person over the lifetime of their tenure with the enterprise. This looks at the person’s potential to add value both now and into the future. This approach decides what is the right remuneration to remit as a function of their expected lifetime value to the enterprise. Some individuals who are being groomed for future leadership positions may receive more remuneration than their current role or value might justify.
  9. Seniority approach: the key here is the current size of the budget, the current delegated authority, the current number of staff and the general seniority of the position held. This is common in the public service where marginal utility or market rates can’t be determined easily.
  10. Length of Experience approach: the key here is how long the person has held the position or has worked in the enterprise. The theory is that more experienced incumbents are more likely to contribute more than the less experienced. Once again this is common in the public service.
  11. Industrial Award approach: based on the pre-negotiated industrial award, pay what is specified in the award for an incumbent in a particular position. Depending on the power-relationship between the negotiating parties, award rates may be a little above market rates. Essentially a union is likely to try and capture more of the enterprise’s surplus for the workers out of the owner’s capital return. Unions do have a vested interest in maintaining the continued operation of major employers.
  12. Incentive approach: what level and structure of compensation will incentivise the person to perform at a high level, adding more value to the company than an unmotivated incumbent would, whilst encouraging them to stay with the company and continue contributing.
  13. The perceived-by-peers fairness approach: what would the majority of the person’s work colleagues believe is fair compensation for the work they do, the effort they put in, the sacrifices they make and the contribution they make to the success of the enterprise.
  14. The perceived-by-incumbent fairness approach: what would the person themselves feel is a reasonable compensation for the work they do, the effort they put in, the sacrifices they make and the contribution they make to the success of the enterprise. This can be impacted by the incumbent’s knowledge of the remuneration of others.
  15. Past achievement approach: how much better did the organisation do last year/period as a result of the contribution made by the incumbent. Once again, some CEOs claim responsibility for the increase in revenue/profit/gross margin over the previous year/period and claim that they are worth a proportion of that increase.
  16. Compensate for opportunity cost: how much could the person have earned if they were investing their time, expertise and effort elsewhere? This is commonly used by headhunters where they offer a potential recruit at least a bit more than their current remuneration to attract them to give up their current role.
  17. Minimum allowable approach: some enterprises will try to minimise remuneration to the lowest allowable by law. This is common when the labour market is a buyer’s market, with plenty of alternative labour available to replace any incumbents who leave, and when the expertise and skill required to undertake the role is very low.
  18. Hardship compensation approach: How much hardship, danger or sacrifice is required to undertake the role and what is a fair compensation for the sacrifices required to undertake the role. The role may be based in a remote location, or require unusual hours of attendance, or be particularly physically demanding or emotionally traumatising (e.g. mercenary work).
  19. Key contribution approach: a person may have an idea or piece of intellectual property upon which the unique differentiation of the business model relies. This person could have taken their IP elsewhere but chose to contribute that unique differentiator to the enterprise. In recognition of that unique and key contribution, and of its importance to the overall value or competitiveness of the enterprise, perhaps compensatory remuneration is justified.

In some circumstances, some of the above approaches amount to the same thing, and often the final number includes consideration of several of them. How many of these considerations does your enterprise use when determining compensation for current and potential contributors? Are there others not included on this list?

https://www.linkedin.com/pulse/what-someone-worth-your-enterprise-jeffrey-popova-clark/


IT Project Management Discipline isn’t working!

In 1994, the Standish Group dropped a bombshell on the rapidly growing IT industry by publishing a gob-smacking finding: only 16% of IT projects in the prior 12 months had been successful. They suggested that the reason was a lack of project management discipline for IT projects, akin to the discipline engineers use to build skyscrapers, cruise liners and bridges. Standish’s conclusion: “IT needs its own project management methodology and the skills and tools to deploy it!”

The impact was dramatic. A number of efforts began across the world to develop a suite of standards and methodologies that could help project managers and their stakeholders improve the chance of IT project success. Some (e.g. PRINCE2) were based on embryonic existing methodologies, whilst others were genuine efforts to develop new methodologies from the ground up.

Over the first few years of the Standish Group results, the new project methodologies were still maturing and adoption was slow. Most practitioners did not hear of PMBOK or PRINCE2 until some 4 or 5 years later, so widespread adoption of these methodologies (and therefore their potential impact) lagged their development. However, that initial report was 25 years ago now, and we have since developed globally adopted and widely practiced standards in (i) project management, (ii) program and portfolio management, (iii) business analysis, (iv) change management and (v) benefits realisation. Indeed, an entirely new multi-billion dollar education, training and certification industry has arisen to service this apparently pressing skills gap.

So, if Standish was right and it was methodology and project discipline that was the problem, then by now we should see a significant improvement in IT project success rates. Let’s take a look:

[Chart: Standish survey results over time showing the proportions of successful, challenged and failed projects]

Analysis: the first few Standish reports had changing definitions and sampling frames, which explains the initial fluctuations, particularly between the “challenged” and “failed” categories. Eventually, however, the rate of “failed” projects settled at around 20%, “challenged” at around 45% and “succeeded” at around 25%. What looked like improvements up to 2012 have since turned around and generally headed in the wrong direction for the last few years. Some have suggested the apparent improvements up to 2012 were actually due to the increased proportion of smaller projects in the survey (particularly post-GFC); smaller projects have always shown a higher rate of success throughout the entire period. Indeed, comparing 1996 with 2015 shows an increase of just 2 percentage points in projects successfully completed (27% to 29%).

A 2 percentage point improvement is scant justification for the enormous investment in training, standardisation, certification, discipline and management effort. The project management education industry is now a multi-billion dollar industry globally, but as far as we can tell from the above analysis, it is not contributing to improved IT project success rates. If that is the case, how is all of this investment and effort contributing to the economy beyond John Maynard Keynes’s hole-diggers?

Us humans do a lot of things because they sound right. If it has a good story (see Beware of the “just-so” Use Case Stories), that’s good enough for entire industries and academic disciplines to continue working away for years and even decades before it’s noticed that it is all based on nothing tangible. I’m afraid that the evidence is in:

Project Management discipline has not improved the success rate of Corporate IT projects!!

A common reaction is to just do things harder. The story that project discipline improves projects must be true, so the lack of empirical results is simply evidence of a lack of effort/discipline/application: if we just hired a more qualified/experienced/talented project manager… if we just documented user requirements more thoroughly… if we just applied more management effort toward realising the benefits in the business case! “The floggings will continue until morale improves.” No! The problem is that Standish’s conclusion, even though it sounded right at the time, has proven to be wrong, and there are other (much more important and prevalent) causes for such widespread IT project failure rates. So we must look more widely for clues as to why we still have such high project failure rates. I believe some clues can be found here (over-generalisation of success in different domains) and here (the planning fallacy).

Do you agree that project management methodologies have been oversold as a panacea for IT project failure rates?

https://www.linkedin.com/pulse/project-management-discipline-isnt-working-jeffrey-popova-clark/


Privacy for Corporations

Are Corporations People?

“Corporations are people, my friend,” said Mitt Romney in 2011 during his ultimately unsuccessful presidential campaign against Barack Obama. But we all know that he was not correct. Corporations (or any disembodied entity like companies, trusts, partnerships etc) cannot be embarrassed about an unexplained lump on an inconvenient body part, or feel the need to hide a secret love of Rick Astley tunes from their friend group, or, perhaps more importantly, need to suppress public knowledge of racial or cultural origins, of a current or prior disability, or of a personal religious belief for fear of vilification. Nor can a corporation have its liberty curtailed by spending time behind bars for breaking the law.

What is Privacy Protection For?

Indeed, privacy is primarily about these issues. Privacy helps protect minority individuals from persecution by ensuring that they are the only ones who can reveal their private information… to whom they desire, and if and when they so choose. The other purported benefits, such as protection from identity theft or a reduction in being hassled by telemarketing companies, are in fact primarily addressed by other legislation. Note that the right to ensure that data held about you is accurate (and therefore that decisions based on it are well informed) is related to privacy, but does not actually relate to the right to have that data restricted from distribution.

Fair Use vs Privacy

Fair use (not privacy) is the concept that it is a form of con job to ask for someone’s information for one purpose and then use it for another purpose that may be harmful to that person. The idea is that if the person had known the secondary purpose was a potential use, and that it might result in a negative outcome for them, then they must have been allowed to choose to restrict the provision of the information in the first place. But what if the secondary use is for regulatory compliance checking or criminal investigation? If the information collection is compulsory, then the individual could not have chosen to withhold it from the secondary use anyway. Secondary use, in such cases, is simply more efficient than compulsorily re-asking for the same information.

Privacy as a means to hide criminal activity

Privacy rights do not give an individual (or their agent) the right to restrict access to information on the argument that it may reveal the individual’s illegal activity and therefore result in a negative outcome for them. This is called obstruction of justice. So, if a regulator or police investigator is attempting to detect, prevent or discourage illegal activity, individuals do not have the right to prevent data about them being used for this purpose. This is doubly so for corporations, partnerships, companies, trusts etc. Firstly, privacy does not apply to these disembodied entities, as explained above. Secondly, these organisations are simply legal entities embodying publicly recognised and accepted associations between multiple individuals. These associations (e.g. a corporation) and the entity’s rights and privileges are bestowed by community licence. Their privacy is therefore anathema to the community’s ability to oversee whether that licence should continue to be granted.

Privacy should not be granted to corporations (only to the individuals inside them)!

Implications

This is particularly important for regulators, whether they regulate markets, industries, elections or parts of government. If they are conducting regulatory compliance assessment activity, they are looking for non-compliance with regulation. Mostly this concerns actors within a market or industry that are corporations or, at most, individuals acting in that market. None of this should be considered private information. So regulators, government agencies and 3rd party data holders should be able to share data about corporate activity without having to consider the corporation’s “privacy”. Even a sole trader’s data will only be of interest in so far as it relates to the sole trader’s activity in the market. Such activity needs to be transparent to regulators and so it, too, should not be subject to privacy.

Similarly corporations cannot claim commercial-in-confidence as regulators are not competing with them. Such data, of course, should not be shared by regulators with competitors nor shared publicly; but it can be safely used for regulatory compliance analytics work.

If the data required to assess regulatory compliance is inextricably inter-twined with an individual’s preferences for Rick Astley tunes, then we may have a problem.

So does your organisation separate information about individuals from that of disembodied entities (e.g. corporations) and treat these cohorts differently with regard to privacy legislation, or is it all treated in the same way?

https://www.linkedin.com/pulse/privacy-corporations-jeffrey-popova-clark/

This article is the second in a Regulatory Analytics series. The first, titled Auto-Compliance, is about the concept of Presumed Omniscience and the power this confers to make markets and other community interactions fairer and more productive (see https://www.linkedin.com/embeds/publishingEmbed.html?articleId=7326197899618471194 ).


Auto-Compliance: Regulatory Analytics

The world of big data has certainly revolutionized the areas of marketing, promotion and customer fulfillment (see Google and Amazon respectively) and has also had significant impact in finance and insurance. Government and regulators, however, really haven’t captured much of the benefit of big data to date. All that is about to change.

Auto-Compliance

The new concept of Auto-Compliance is the regulatory nirvana of all participants in a market complying with all regulations because it’s the simplest, most efficient, lowest risk and most profitable thing to do. It’s a state where all participants in a market evaluate the risk of non-compliance as too high to make pursuit of a non-compliant approach worthwhile. Sounds great, but how can this be achieved?

To illustrate, let’s imagine you are a manufacturer who can (i) choose to dispose of industrial waste compliantly at significant cost or (ii) choose instead to dispose of the waste non-compliantly at next to no cost. The temptation is there to decrease costs by disposing of waste non-compliantly. What’s worse is that if your competitors are all disposing of their waste non-compliantly, they can cut their costs and their prices and then put you out of business. You may be almost forced to violate the waste disposal regulations. The only (ok, major) disincentive is the risk of getting caught… known as compliance pressure. But if there’s little risk of getting caught, then not only will you not get caught, but neither will your competitors!!
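To make the economics concrete, here is a minimal sketch of my own (the figures and the simple detection-probability-times-penalty model are illustrative assumptions, not anything from the article):

# Hypothetical numbers only: expected one-period cost of each disposal strategy,
# where "compliance pressure" is modelled as detection_probability * penalty.

def expected_cost(disposal_cost, detection_probability=0.0, penalty=0.0):
    """Expected cost of a disposal strategy, including the risk of being caught."""
    return disposal_cost + detection_probability * penalty

compliant = expected_cost(disposal_cost=100_000)              # full cost, no penalty risk
non_compliant = expected_cost(disposal_cost=5_000,            # next to no disposal cost...
                              detection_probability=0.02,     # ...but a small chance of a bust
                              penalty=1_000_000)

print(compliant, non_compliant)   # 100000.0 vs 25000.0 -> cheating pays while detection is unlikely

# Raise the perceived chance of getting caught and the calculus flips.
print(expected_cost(5_000, detection_probability=0.15, penalty=1_000_000))   # 155000.0

The point of the sketch is simply that compliance pressure only works if participants believe the detection probability is meaningfully above zero.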

Many regulators only receive data and information provided to them by the participants in the market themselves. This means that regulators only know about things in detail based on data they’ve been provided (and what they find out about the individual entities they conduct expensive investigations upon). Most importantly, participants themselves know what data they have given a regulator, and also what they have not given. This puts participants in an ideal position of knowing what regulations they can safely violate and what regulations they cannot safely violate. Indeed, they also know that their competitors know this information as well. If the Regulator has no further sources of detailed information and/or any ability to undertake sophisticated analytics, then there is little compliance pressure. Participants are almost forced to violate the un-monitored regulations, just to stay competitive.

But Regulators can’t simply keep asking for more participant data. Onerous regulations such as these increase the regulatory impact on the industry, and eventually the cost of compliance overcomes the ability to turn a profit. Another impact is the artificial barrier to market entry such onerous regulation creates for startups. Regulators need to exert compliance pressure on all, without penalizing the compliant and the small with onerous regulatory burdens.

The response: Big Data & Analytics

Some regulators are now beginning to build sophisticated analytics capabilities and to look for opportunities to get data from other sources (e.g. other regulators, other government agencies or even 3rd party data providers). These Regulators can begin to use sophisticated big data and other analytical techniques to predict and detect non-compliance far more accurately and efficiently. A few key busts of non-compliant behavior with their shiny new Regulatory Analytics capability, and suddenly market participants can no longer be sure what their regulator knows and what it doesn’t know.
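As a purely illustrative sketch of what such a capability might look like (the file names, column names and the choice of an isolation forest are my assumptions, not a prescribed method), a regulator could join participant-reported figures with an independent 3rd party source and flag entities whose combined profile looks anomalous:

# Hypothetical regulatory-analytics sketch: compare what participants report
# with what an independent data source shows, and queue outliers for review.
import pandas as pd
from sklearn.ensemble import IsolationForest

filings = pd.read_csv("participant_filings.csv")              # data participants chose to provide
observed = pd.read_csv("third_party_waste_volumes.csv")       # data they did not provide

combined = filings.merge(observed, on="entity_id")
features = combined[["reported_waste_tonnes", "observed_waste_tonnes", "annual_revenue"]]

# Entities whose reported figures diverge from independently observed ones tend
# to surface as outliers, making targeted (and much cheaper) investigation possible.
model = IsolationForest(contamination=0.05, random_state=0).fit(features)
combined["anomaly_flag"] = model.predict(features)            # -1 = anomalous, 1 = looks normal
review_queue = combined[combined["anomaly_flag"] == -1]
print(review_queue[["entity_id", "reported_waste_tonnes", "observed_waste_tonnes"]])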

Indeed, if the regulator vaguely announces that the analytics capability is “continually improving” and that new sources of data are continually being acquired and deployed (without being too specific), participants can’t even be sure they can continue to violate regulations they previously violated safely. A participant may have gotten away with breaking a rule last week, but the Regulator’s analytics capability is improving and might catch them if they try it again next week.

Ubiquitous compliance

Interestingly, even the regulations that are not being actively monitored are effectively enforced, because participants don’t know which activities are being monitored or predicted. It becomes rational to simply comply with the entire regulatory regime. And, most importantly, all competitors are in the same boat… there’s no strong competitive pressure to cut corners to gain an advantage. Congratulations, the Big Data & Analytics Fortified Regulator has now achieved “Auto-Compliance“.

Generalized, cost-effective compliance of all participants with the entire portfolio of regulations is the dream of regulators the world over. With Auto-Compliance it becomes possible to oversee a level playing field where competitive advantage is achieved only through innovation and industriousness, not through covert regulatory avoidance. Indeed, many complex and onerous regulations are in place purely to encourage compliance with other, more fundamental regulation. Auto-Compliance allows the regulatory landscape to be simplified back to core compliance goals, saving significant overhead for both market participants and the regulators themselves.

If you’re a regulator and have started the journey toward Auto-Compliance (I know some of my clients have begun it), make a comment below. If you want to join this latest wave in regulation, let me know.

https://www.linkedin.com/pulse/auto-compliance-regulatory-analytics-jeffrey-popova-clark/


The Anti-Business Case

Business Cases are one of the most common business documents used today, and yet their use is commonly misunderstood. Often business cases are seen as simply “step 4” on the path to a completed project. Worst of all, they are often written by someone who is incentivised to have the project approved. Indeed, in some cases, the quality of the business case is even assessed based on whether the project was approved.

This kind of “need-to-get-it-approved” bias leads to an underestimation of costs and risks and an overestimation of benefits. But decision-making bodies such as Boards and Steering Committees often simply don’t have the time and/or expertise to delve into the detail sufficiently to detect this bias. They can ask some pointed questions, but essentially the cost, benefit and risk estimates in the business case are what the decision makers must use to make their decision. No wonder the Standish Group keeps finding so many IT projects failing to achieve project benefits on time and on budget (over 80% failure rates for enterprise-wide system implementations in a number of their annual surveys).

So what can we do to balance the ledger and ensure that we aren’t receiving an overly rose-coloured assessment of a potential project at the business case stage? Enter the Anti-Business Case. This is a business case written by the “opposition”, who are independently trying to prove to the committee that the project should not proceed. In many cases the cost of developing a business case is only 2-5% of the total cost of a project. Money well spent, if the benefit is a clear-eyed view of the real costs, benefits and risks before investing heavily in a new endeavour.

The concept is borrowed from the justice system, where a plaintiff and defendant provide opposing views before a judge balances the evidence and argument and makes a decision. Similarly, the parliamentary system tends to have an opposition that provides an alternate viewpoint to the voters. These institutions have stood the test of time and have proven their worth against the risks of groupthink and biased provision of information.

In practice, however, developing an anti-business case does require attention to some practicalities. There needs to be a coordinator who ensures the options being evaluated by the business case and the anti-business case are sufficiently similar for the comparison to be of value. There is no point in the anti-business case arguing against a “strawman”: something the business case is not recommending. It is also important that any factual information is available to both sides (equivalent to legal “discovery”). However, it is important that the two efforts develop their cost, risk and benefit estimates independently.

At the end of the business case process we should have a number of outcomes beyond a simple “Approve” decision:

  • The Sponsor asserts that they have researched the proposed project in sufficient detail using reliable approaches to accurately estimate/forecast the likely costs, risks, resource requirements and interdependencies. The Approvers accept the Sponsor’s assertion when approving the business case.
  • The Sponsor commits to deliver the benefits identified in the business case document and has determined that it is possible to do so within the documented resource, timing and cost allocations and that any risks can be mitigated to an acceptable level as outlined in the business case. The Sponsor commits to do so in the manner described in the business case, which the Sponsor asserts is the most feasible manner to achieve the benefits within the resource constraints.
  • By approving the business case, the Approvers (e.g. the Board or Steering Committee) accept the Sponsor’s assessment that the benefits are of value to the organisation and that they can be delivered within the resource, timing and cost constraints at an acceptable level of risk. The Approvers also agree that the manner of achieving the benefits outlined by the Sponsor in the business case is the most feasible approach.
  • The Sponsor asserts and the Approvers agree that the expected business benefits are sufficiently high and delivered in time to justify the expenditure of the resources required to achieve them. The Approvers obtain the right to have the Sponsor or some 3rd party demonstrate that the benefits documented in the business case have been realised in the timeframes required at the end of the project.
  • The Sponsor has asked for delegation of the resources, budgets and permissions required to undertake the project (or at least its next stage), and the Approvers have delegated those resources to the Sponsor.
  • The Approvers commit not to approve alternate uses of the delegated project resources in the future and will not approve future projects that presume this project will not deliver its benefits (unless this newly approved project is altered accordingly).
  • The Sponsor commits to use the delegated resources in the manner specified and for the achievement of the documented business benefits and not for other reasons or in other ways.
  • The Approvers agree that any portfolio interdependencies of this newly approved project have been identified, resourced appropriately and that the interdependencies and their timing are acceptable to the organisation and the project/program portfolio. From approval onward, the Approvers agree to treat this newly approved project as part of the relevant program and portfolio.
  • If there are any departures to organisational norms required by the project (e.g. relaxation of architecture standards, changes to policy), then the Sponsor commits to limit the departures to those documented. The Approvers indicate acceptance of the departures when approving the business case.
  • The Sponsor commits to communicate the expected activities, resource impacts, timings, deliverables, etc. of the project over the coming horizon to all stakeholders (at least at a high level). The detailed project plan will provide these in greater detail.
  • If the Approvers accept the timing proposed by the Sponsor, then they are affirming that they believe that the proposed project has sufficient priority to deploy the resources required in the timeframes outlined. Approvers may approve a business case but ask that timings be changed to fit within a portfolio prioritisation. If this is so, then the Sponsor must affirm that this change has no material impact to the achievability of the benefits.
  • If the Sponsor has come to the conclusion that the project is not advisable as a result of undertaking the analysis required to develop a business case, then the business case should still be developed to demonstrate to stakeholders the reasons why the project does not “stack up”. The Approvers are then affirming acceptance of the recommendation and agreement with the analysis and estimates in the business case. Obviously an anti-business case is not required in this scenario.

It is important that all stakeholders know what commitments they are making when submitting and/or approving a business case. It is not simply a “Go/No Go” decision. Given this range of commitments made by both the Sponsor and the Approvers, it becomes clear that a reliable set of unbiased assessments of likely costs, benefits, risks, interdependencies etc. is required to ensure that stakeholders can make those commitments in an informed manner. An Anti-Business Case may be appropriate in some circumstances to help stakeholders make approval decisions with confidence. Can you think of a time when your organisation should have appointed someone to undertake an Anti-Business Case?

https://www.linkedin.com/pulse/anti-business-case-jeffrey-popova-clark/


Over-Generalisation of Success: Implications for System Implementations

[Image: Is a very successful actor also very successful at choosing the best financial product? Courtesy of Capital One Financial Corporation]

One of the greatest problems of the business world today is the over-generalisation of success into contexts where that success is simply not transferable. We then compound the problem by stubbornly refusing to believe we have over-generalised and trying to force the model where it just doesn’t fit.

Let’s look at a salient and virulent example: process-based automation. Process-based automation is the concept of (i) understanding the process, (ii) converting it into a workflow and then (iii) designing and developing machine-based automation (e.g. a software system) that ensures the process is undertaken quickly, consistently and verifiably. This is fantastic for manufacturing. Programming software to automate and accelerate highly repeated processes has delivered enormous productivity benefits. Indeed, major software vendors like SAP grew up serving chemical companies and other manufacturers, automating materials management, production planning and payroll: i.e. highly repeatable processes.

When we move over into other contexts we start to move further along a continuum from “frequently repeated processes” like manufacturing through to “goal driven” activities like an air crash investigation. It is very hard to automate an air crash investigation, because your initial findings will highly influence your subsequent activity. You might spend several days, weeks, months or even years simply looking for the “black box”. Most business contexts fall somewhere in between these two extremes. 

And here’s the rub. Most software implementations today consist of the following: 1. send the business analysts out to capture the processes (possibly to the BPMN standard), 2. undertake a business process improvement (BPI, possibly using lean six sigma), 3. convert this into user requirements and functional specifications, then configure the software, test and go live. Regardless of context, we expect to get the kinds of results achieved when this approach was successfully deployed in manufacturing. But it doesn’t seem to work quite so spectacularly in office environments, especially where the work is investigatory rather than process oriented.

There is also an issue with how we implement IT projects. We don’t often capture what happens when the current-state process goes off track, or when things that are infrequent, irregular or ad-hoc occur. What happens when the form is not filled in correctly or the payment is rejected? What happens at the end of the year or the budget cycle? What happens when an unscheduled event occurs, like a strike, an earthquake, a tsunami or a hurricane/cyclone? What happens when the board or a government regulator asks a question about something? Very rarely are these events or processes captured and, as a result, they are very often missed in the requirements for the new system.

Even worse, the legacy systems we are planning to decommission have often evolved to handle these unusual, irregularly repeated events. But our initiating business analyses will likely view these hard-learnt modifications as inefficiencies to be six-sigma-ed into oblivion. Our shiny new system, cleansed of inefficiencies, is then totally unprepared for the stream of irregularities that comes down the pipe post-go-live. It is then, months after go-live, that we discover why the organization used to run that seemingly inefficient process that prepared for that irregular event (or, even more likely, we suffer in naivete and just blame the new system).

This may not hurt if your business context has large quantities of highly repeated processes that benefit from the automation. However, you are likely to get no outcome, or even a negative one, if it doesn’t. But we pretend that every business context is like manufacturing. We just implement ERPs and CRMs ad nauseam everywhere: blue collar, white collar, private, public, manufacturing, service, profit, not-for-profit. But we are overgeneralizing, especially in our implementation process.

We can also see this in the public sector with its current focus on customer service. Inspired by the customer focus efforts of private sector customer-facing organisations like telecommunications companies, retailers and insurers, the public sector has become obsessed with improving “customer satisfaction”… as if all it does is provide services to individuals. But a key function of government is regulatory. Does anyone ask the convicted felon if they are satisfied with the service they received from the courts? How about the citizen who has just paid their speeding or parking fine? What about the one who has just been deemed unable to provide medical or construction services to the market and has been de-certified? Unlikely to be satisfied customers there.

These “services” are to the wider public and not to an individual “consumer”. Focusing purely on “customer service” is an over-generalisation of a model that was successful in another context. There is certainly always room for improvement in the way all (especially public sector) organisations treat the citizens they deal with. But to believe that they should become myopically centered and focused upon the “customer” is a misreading of their raison d’être. Many government organisations are there for the general public service and not simply to service individual members of the public.

Perhaps a more important function for your IT systems is supporting decision making, ad-hoc investigation, what-if scenario modelling or possibly evidentiary record keeping. Perhaps process optimisation and customer focus are only minor components of the activities of your business.

So, before launching headlong into that multi-million dollar CRM or ERP implementation, check that your organization actually fits the context in which those solutions have proven successful in the past. And if you do need to implement a big process-based system, ensure you capture all of the processes, including the irregular, unusual, intermittent and error-correcting ones… business users don’t normally recall these in a workshop without significant prompting.

https://www.linkedin.com/pulse/over-generalisation-success-implications-system-jeffrey-popova-clark/


Neural Nets that don’t stop learning

Over the weekend, I was reminiscing over my 1990 copy of Rumelhart and McClelland’s seminal work on Parallel Distributed Processing (it’s about using backpropagation to teach Neural Nets). It reminded me that most modern efforts have missed a key point about the efficacy of using Neural Nets in artificial intelligence.

Unlike artificial neural nets, us biological neural nets are not taught everything during a learning phase and then released unto the world as a fully taught algorithm. Instead we never stop learning. This is enormously useful for a number of reasons, but is also enormously dangerous for others.

Consider the driverless car that was incorrectly parking in vacant disabled parking spaces. Engineers had to teach the car that it had made a mistake until eventually the AI learnt that vacant parks with the relevant symbols are not for parking (presumably unless the car itself contains a handicapped passenger). The same neural net has to learn that those same symbols are irrelevant during regular driving and only of relevance when undertaking parking maneuvers.

Us humans have a major advantage. We don’t have to keep all potential contexts in our heads simultaneously, because we can hold the current context in our short-term memory. Short-term memory is simply recently learned material. If we are driving past lots of parked cars and searching for a vacant park, we have recent memories of driving slowly over the past minute or so and seeing lots of parked cars. This recently learned material is invaluable in determining context.

When an AI neural net is no longer in learning mode, it must have sufficient knowledge in its net to decide a course of action in all potential contexts… parking, driving, recharging/refueling, loading, unloading etc. It’s like trying to determine a story from a photo instead of a video. So why don’t we just let our neural nets continue to learn after we feel they are performing sufficiently well at the task at hand? The AI could then take advantage of recently learned, context-relevant information, which should simplify the AI’s task during operation… just like it does for us humans (see this coverage of Google’s recent work on giving an AI a working memory: https://www.technologyreview.com/s/602615/what-happens-when-you-give-an-ai-a-working-memory/).

This sounds good until we realise that neural nets that are prevented from continuing to learn in the field are more predictable. Imagine a driverless car accident occurs where the neural net decided to crash the car, killing its passenger, rather than plough through a zebra-crossing full of school children. The car’s manufacturer can take an identically trained neural net and test it under the same conditions experienced during the accident. The responses in the test will be the same as those produced by the crashed car’s neural net. However, if we have a net which continues to learn in the field, it becomes almost immediately unique and unreplicable. We are unable to reproduce the conditions and state of the AI during the accident. In fact, the decision processes of the AI become as unpredictable as those of us human neural nets.
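A minimal sketch of this trade-off, entirely my own illustration, using scikit-learn’s MLPRegressor (a small neural net) as a stand-in for a deployed system and synthetic data throughout: two frozen copies remain testably identical, while two copies that keep learning on different experience streams immediately diverge.

# Frozen nets are reproducible; continually learning ones are not.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 5)), rng.normal(size=1000)

def deployed_net():
    """Train an identical copy of the 'shipped' network."""
    net = MLPRegressor(hidden_layer_sizes=(16,), random_state=42)
    net.partial_fit(X_train, y_train)   # partial_fit so the net can keep learning later
    return net

x_probe = rng.normal(size=(1, 5))       # the 'accident conditions' we want to replay

frozen_a, frozen_b = deployed_net(), deployed_net()
assert np.allclose(frozen_a.predict(x_probe), frozen_b.predict(x_probe))   # identical, replayable

online_a, online_b = deployed_net(), deployed_net()
online_a.partial_fit(rng.normal(size=(200, 5)), rng.normal(size=200))      # different in-field
online_b.partial_fit(rng.normal(size=(200, 5)), rng.normal(size=200))      # experience streams
print(online_a.predict(x_probe), online_b.predict(x_probe))                # diverge: no longer replicable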

A machine controlled by a continually learning AI will not necessarily perform as expected. The total impact of all of the uncontrolled experiences on the neural net is essentially unknown. Effectively this is the case for all humans. We trust human neural nets to be airline pilots, but the odd one may decide to deliberately fly the plane into the ground. Will we be able to accept similar uncertainty in the performance of our machines? And yet, failure to accept this uncertainty may be the key reason why our neural nets are being held back from performing general intelligence tasks.


Prediction: Do we need perfection or just to be better than the rest?

Prediction is a difficult art. Some questions involve random variables that simply can’t be predicted. What will the spot price of oil be at close on June 20, 2017? You may be a fabulous forecaster who takes into account historical trends, published production figures, geopolitical risks and so on, and be more informed than anyone on the planet on the topic, but the likelihood of precisely hitting the spot price at close is very, very low. This means that although you may be, on average, more accurate than a less capable forecaster, you will nevertheless more than likely get the final answer wrong. Is this just as useless as someone who is just guessing?

So, are perfect forecasts really the gold standard we need to aim for? Or, like the metaphorical “outrunning the bear” meme, do we just need to be better than “the other guy” to gain competitive advantage? The answer is yes: you just need to be better at predicting than your competitors. You don’t need to achieve the impossible, i.e. perfect accuracy in your predictions. Many people simply abandon any effort to get better at prediction once they realise that perfection is unattainable. This is a big mistake… getting better at prediction is both worthwhile and eminently doable.

There is no advantage in predicting things that are perfectly predictable, and no-one can predict the totally unpredictable. The competitive advantage lives in the middle. Being better than everyone else at forecasting hard-to-predict things gives you an edge even though you are unlikely ever to get the answer perfectly right.

In fact, as the diagram above shows, there is no competitive advantage in either “totally unpredictable” or “fully predictable” events. No-one is going to get rich predicting the time of the next lunar eclipse anymore: equations and data exist that make forecasting eclipse events to the second quite mundane. Similarly, no-one can predict the next meteor strike (yet), so we are all as inaccurate as each other and no better than pure guesswork regarding when and where the next one will strike. But in between these two extremes there’s plenty of money to be made.

In the above chart the actuals are the orange dots and the blue line is a typical forecast. The typical forecast (blue line) even got the answer perfectly right in period 5, hitting the actual number of 33 precisely. But the Superforecast (orange line) is almost twice as accurate as the typical forecast and yet never got the precise answer correct in any one period. A decision maker armed with the Superforecast is going to be in a much better position than someone armed with the Typical forecast.
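To make that concrete, here is a small illustration of my own (the series below is invented; only the period-5 actual of 33 comes from the chart description): measured by mean absolute error, a forecast that is never exactly right can still be nearly twice as accurate as one that scores an occasional bullseye.

# Hypothetical actuals and two forecasts, scored by mean absolute error (MAE).
import numpy as np

actuals   = np.array([28, 31, 25, 36, 33, 30, 27, 35])
typical   = np.array([24, 36, 21, 40, 33, 26, 32, 31])   # exactly right once (period 5)...
superfcst = np.array([30, 29, 27, 34, 35, 28, 29, 33])   # ...never exactly right

mae = lambda forecast: np.abs(forecast - actuals).mean()
print(mae(typical), mae(superfcst))   # 3.75 vs 2.0 -> the "always slightly wrong" forecast wins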

So the key is to be as accurate as possible, and more accurate than your competitors, when it comes to predicting market demand, geopolitical outcomes, crop yields, productivity yields and so on. Although still unable to predict perfectly, being better than everyone else yields significant competitive advantage when deciding whether to invest your capital, divest that business, acquire that supplier etc. So how do you get better at predicting the future? Well, that’s where a combination of Big Data and Superforecasting comes in.

Big Data is the opportunistic use of data, both internal to your organisation and available from 3rd parties, crunched with modern technology to make better predictions about what is likely to happen. Superforecasting is the practical application of techniques born of cognitive science (commonly misnamed as Behavioural Economics) that overcome humans’ natural cognitive biases and lack of statistical/probabilistic thinking to improve forecasting across any expertise domain. Between the two, any organisation can significantly improve its forecasting capability and reap the benefits of clearing away more of the mists of time than their competitors.

The key is not giving up simply because perfect prediction is impossible.

Do you know what activities in your organisation would seriously improve their performance based on efforts to improve their predictive accuracy and then significantly impact the bottom line?

https://www.linkedin.com/pulse/prediction-do-we-need-perfection-just-better-than-popova-clark/


Traits of a Superforecaster… Hang on, that’s a top Data Scientist

I have been a follower of Philip Tetlock, Dan Kahneman, Richard Nisbett, Thomas Gilovich and others for over 20 years and devoured Tetlock’s recent Superforecasting book on its release. Tetlock led a team of forecasters using a combination of crowdsourcing, cognitive bias training, performance feedback and other techniques to outperform a suite of other teams at forecasting geopolitical events in a controlled multi-year forecasting tournament. The tournament was run by IARPA (Intelligence Advanced Research Projects Activity), the US intelligence community’s equivalent of DARPA, serving agencies such as the FBI, CIA and NSA, as part of its Analysis and Anticipatory Intelligence streams.

IARPA invited a range of teams from multiple elite US universities to forecast hundreds of important geopolitical questions, to gauge their performance against professional intelligence analysts’ forecasts (the professionals had access to classified information; the teams did not). In the first year, Tetlock’s team beat all their competitors handsomely and also exceeded not only IARPA’s forecast accuracy goals for the first year but IARPA’s initial goals for the second and third years of the tournament. The next year Tetlock’s team improved again, vastly outperforming all the new goals (and competitors), such that IARPA decided to end the tournament and bring Tetlock’s team in to do all future IARPA forecasting. In many cases Tetlock’s team of Superforecasters, with access only to publicly available information, significantly outperformed the intelligence analysts with access to classified information.

But what makes Tetlock’s team of Superforecasters so good at predicting the future? Tetlock, being an academic, took the opportunity to identify what habits and characteristics his team of forecasting geniuses had that others don’t. Here are the attributes he found:

  • Cautious – they predicted outcomes with less personal certainty than others, they always kept “on the other hand” in mind
  • Humble – they didn’t claim that they knew everything or that they fully understood all aspects of a problem and would readily change their mind when new evidence came in
  • Nondeterministic – just because something has happened, they don’t ascribe a “Monday-morning quarterback” explanation for its occurrence. They keep in mind that events may have turned out differently for potentially unknown reasons.
  • Actively open-minded – they’re always testing their own beliefs and see them as hypotheses to be tested, not dogma to be protected
  • Naturally Curious with a need-for-cognition – love solving problems and adding new ideas and facts to their own knowledge
  • Reflective, introspective and self-critical – they are constantly re-evaluating their own performance and trying to uncover and correct the errors they themselves made
  • Numerate – tend to do back-of-the-envelope calculations, comfortable with numbers
  • Pragmatic (vs Big Idea) – not wedded to any one ideology or worldview, preferring reality and facts over opinions and untested theories
  • Analytical (capable of seeing multiple perspectives) – break problems up into logical parts and consider many different views of a problem
  • Dragonfly-eyed (value multiple perspectives) – value the inputs of viewpoints that are new and different to their own and can handle assessing differing theories at the same time.
  • Probabilistic – see events as likely or unlikely (as opposed to will or won’t happen) and are comfortable with uncertainty
  • Thoughtful updaters – willing to cautiously adjust their previous assessments as new information comes in
  • Good intuitive psychologists (aware of and able to compensate for common biases) – they understand the various cognitive biases us humans have when thinking about problems and put in the effort to overtly overcome the shortfalls the biases cause
  • Personal Improvement mindset – always trying to learn more and get better at whatever they are doing
  • Grit – simply don’t give up until they feel they can’t improve their assessment any further

When I reviewed this list of personal attributes and habits, it struck me just how similar these Superforecaster attributes are to those ascribed to truly excellent data scientists. A quick review of articles (e.g. Harvard Business Review, The Data Warehouse Institute, InformationWeek etc) about the differences between good Data Scientists and great ones turned up this aggregated list (a ✔ indicates an overlap with the Superforecaster attributes; a ✗ indicates an attribute unique to excellent Data Scientists):

  • Humble (before the data) ✔
  • Open-minded (will change their mind with new evidence) ✔
  • Self-critical ✔
  • Analytical (breaks problems down, does quick back-of-the-envelope calcs) ✔
  • Persistence/grit ✔
  • Comfort with uncertainty/able to hold multiple theories at once ✔
  • Cognitive/Innate Need-for-understanding ✔
  • Pragmatic (as opposed to theoretical) ✔
  • Interested in constantly improving ✔
  • Creative (able to generate many hypotheses) ✗
  • Has innate understanding of probability/statistical concepts (conditional probability, large numbers etc) ✔
  • Business Understanding ✗
  • Understands Databases and scripting/coding ✗ 

This also works in the other direction, with only a couple of Superforecaster-specific attributes that aren’t covered by the Great Data Scientist attributes list. Tetlock has studied his Superforecasters with academic rigour, whereas the Data Scientist list is likely to be more opinion and untested hypothesis (so I suspect the Superforecasters won’t like this article :-). Nevertheless, one cannot but be impressed by the significant overlap of the two lists.

Is it possible that the big successes delivered by Data Science to date have not been due primarily to massive data crunching capability and the ubiquity of IoT and social media data collection, but are instead primarily the result of applying the personal habits and attributes of Superforecaster-like Data Scientists to business problems? If so, this suggests we could get a lot of business value out of applying Superforecaster-like people and approaches to many business problems, especially those that may not have a lot of data available.

Do you know of any applications where a Superforecaster approach might help your organisation?

List of articles:

https://hbr.org/2013/01/the-great-data-scientist-in-fo

http://www.informationweek.com/big-data/14-traits-of-the-best-data-scientists/d/d-id/1326993?image_number=1

https://infocus.emc.com/william_schmarzo/traits-that-differentiate-successful-data-scientists/

http://www.cio.com/article/2377108/big-data/4-qualities-to-look-for-in-a-data-scientist.html

http://www.boozallen.com/insights/2015/12/data-science-field-guide-second-edition

https://upside.tdwi.org/articles/2016/06/13/five-characteristics-good-data-scientist.aspx

Note that I had to edit the aggregated list because some articles were comparing Data Scientists to general IT people, or to general executives, as opposed to less effective Data Scientists. This led to some articles concentrating on the core technical capabilities required by every Data Scientist rather than the personal characteristics and habits that result in excellence.


Neural Nets: Use with caution

There has been significant hype surrounding artificial intelligence and neural nets recently and with good reason. AI is superb at learning to respond appropriately to a set of inputs like a stream of video or audio data and handling complexity beyond the capacity of us mere humans. Recent efforts in natural language processing and driverless vehicles have been nothing short of astounding.

As the tools of data mining have morphed into big data, the capabilities coming from these globally leading AI projects are being incorporated into the toolkits available to the corporate data analyst. Many tools, such as R, Oracle Data Mining, Weka, Orange and RapidMiner, as well as libraries associated with languages like Python, make neural nets readily available for a vast range of analytic endeavours.

But caution is advised. The great results achieved by these major AI projects have come with a few little-noticed advantages. Firstly, the humans have already determined that there is indeed a signal in the data that they are trying to model. When the incoming video stream shows a parking sign that says “Handicapped Only”, the AI neural net quickly learns that parking the driverless vehicle there is an error. But the fact that the parking sign is verifiably there in the data is known by the modelling Data Science team a priori. A neural net will find the signal that we knew was there. Another advantage is the sheer quantity of data. Teams working for Google, Apple and the like use billions and even trillions of data records to train their neural nets. This makes up for the relatively high number of “degrees of freedom” inherent in their neural net models.

However, in many real world situations we are looking for signals in the data that may or may not be there. For instance, if we are trying to predict the future performance of one of our regional sales locations based on attributes of the centre and its surrounding catchment, we may or may not have sufficient information in the data to detect a signal. Perhaps a natural disaster will impact the performance, or there will be industrial action, or a terrorist act, or… The key is that the data we possess may or may not be sufficient to make a reliable prediction. If it is sufficient, that’s great, but if it isn’t, we need to know that fact.

Unlike more traditional predictive analytic techniques (like regression, survival analysis and decision trees), neural net models are difficult for us mere humans to interpret. But the powerful data tools I mentioned above will let our corporate data science team casually throw a neural net model at the data. What’s more, the results will invariably appear more accurate than those of the traditional models. The old “this model makes no logical sense” sanity check is not available for neural nets, which leaves our data science team at high risk of modelling simple noise. This is particularly a problem when we re-use the same holdout sets while trialling hundreds of slightly differently configured neural net models.
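Here is a small, hedged demonstration of that last point (synthetic data, with the model family and configuration grid chosen purely for illustration): even on pure noise, picking the best of many slightly different nets against the same re-used holdout set produces a flattering score that evaporates on genuinely fresh data.

# Pure-noise data: there is no signal for any model to find.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X, y = rng.normal(size=(400, 20)), rng.normal(size=400)
X_tr, y_tr = X[:200], y[:200]
X_ho, y_ho = X[200:300], y[200:300]          # the holdout set re-used for every configuration
X_new, y_new = X[300:], y[300:]              # fresh data the team never looked at

# Trial 40 "slightly differently configured" nets and pick the holdout winner.
configs = [(width, seed) for width in (5, 10, 20, 40) for seed in range(10)]
nets = [MLPRegressor(hidden_layer_sizes=(w,), max_iter=300, random_state=s).fit(X_tr, y_tr)
        for w, s in configs]
best = max(nets, key=lambda net: r2_score(y_ho, net.predict(X_ho)))

print("re-used holdout R^2:", r2_score(y_ho, best.predict(X_ho)))    # flattering, selection-biased
print("fresh data R^2:     ", r2_score(y_new, best.predict(X_new)))  # the apparent accuracy was noise

In real projects the gap is usually subtler, but the mechanism (tuning configuration choices against a holdout set you keep re-using) is the same.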

So who cares? Essentially, using uninterpretable but powerful neural net models may make you feel like you are predicting the future more accurately. In reality, the model may simply be capturing the noise in your input data. You may waste lots of time chasing ghosts or, worse, deploy a model into operation which performs little better than chance in the real world.

Have you seen an example of a Data Science team chasing ghosts?

https://www.linkedin.com/pulse/neural-nets-use-caution-jeffrey-popova-clark/