May 25, 2011

Likelihood of an Event

The biggest constraint in risk management, indeed the very reason the discipline exists, is our inability to foresee what will happen next. Where people have to manage the risk of an event, it is common to rely on subjective estimates of the likelihood that the event will happen. It is easy to criticise this approach, but the alternatives are limited.  An improvement over such a simplistic 'gut feel' approach is to exploit the observation that small-scale events occur far more frequently than similar events of a larger scale.  Earthquakes of low magnitude occur very frequently; killer earthquakes occur far less often. The relationship between the frequencies of the two kinds of events is described by a power law distribution.  If we keep track of the smaller-scale events, we will be able to predict, with a certain degree of confidence, the frequency of the larger-scale ones.
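
To make the idea concrete, here is a minimal sketch; the magnitudes and counts are invented, and an ordinary least-squares fit on the log-log scale stands in for whatever estimation method a real study would use:

    # Sketch: if event frequency follows a power law  N(m) ~ C * m**(-alpha),
    # then a straight-line fit of log(count) against log(magnitude) for the
    # small, frequently observed events lets us extrapolate to the big,
    # rarely observed ones.  All figures below are invented for illustration.
    import numpy as np

    magnitudes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])              # hypothetical event sizes
    counts_per_year = np.array([1200.0, 310.0, 75.0, 21.0, 5.0])  # observed frequencies

    slope, log_c = np.polyfit(np.log(magnitudes), np.log(counts_per_year), 1)
    alpha = -slope  # the power-law exponent

    # Extrapolate to a large-scale event we have rarely (or never) observed
    big = 8.0
    expected_per_year = np.exp(log_c) * big ** (-alpha)
    print(f"alpha = {alpha:.2f}; expected magnitude-{big:g} events per year = {expected_per_year:.4f}")

The confidence one can place in such an extrapolation depends heavily on how far beyond the observed range it reaches, which is exactly the hedge behind 'a certain degree of confidence'.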

May 24, 2011

Deciding That a Crisis is Upon Us

A key challenge that needs to be met in the face of a serious situation is determining whether we are in a crisis or not. When does the transition from non-crisis to crisis occur? Is it time to initiate the crisis management plan, or not yet?

The US military gives us a good model with its DEFCON status. It allows
a staged reaction to a crisis that may or may not be impending, letting
the military prepare without over-preparing.

As facts become known, and the understanding of the situation becomes more solid, the authorities are able to step up or step down preparations
and mobilisations for handling the crisis.

Business organisations would do well to think about a staged approach to
their crisis management plans.
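
As a purely illustrative sketch, in the DEFCON spirit but with level names and actions that are my own assumptions rather than anything taken from a standard, a staged plan can be as small as a handful of named readiness levels with explicit step-up and step-down rules:

    # Minimal sketch of staged crisis readiness levels (names and actions assumed).
    from enum import IntEnum

    class CrisisLevel(IntEnum):
        MONITORING = 1   # business as usual, watch the indicators
        HEIGHTENED = 2   # brief the crisis team, verify contact lists
        STANDBY    = 3   # pre-position resources, warm up the recovery site
        ACTIVATED  = 4   # invoke the crisis management plan

    def step_up(level: CrisisLevel) -> CrisisLevel:
        return CrisisLevel(min(level + 1, CrisisLevel.ACTIVATED))

    def step_down(level: CrisisLevel) -> CrisisLevel:
        return CrisisLevel(max(level - 1, CrisisLevel.MONITORING))

    # As the situation firms up, escalate one stage at a time rather than
    # jumping straight from "nothing is wrong" to "full crisis".
    level = CrisisLevel.MONITORING
    level = step_up(level)    # new facts suggest trouble -> HEIGHTENED
    level = step_down(level)  # situation clarified as benign -> MONITORING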

Some Regulatory Business Continuity Links

By no means a complete list...

Australian Prudential Regulatory Authority

Guidance Note GGN 222.1
Risk Assessment and Business Continuity Management
http://www.apra.gov.au/General/loader.cfm?url=/commonspot/security/getfile.cfm&PageID=8532

Guidance Note AGN 232.1
Risk Assessment and Business Continuity Management
http://www.apra.gov.au/Policy/loader.cfm?url=/commonspot/security/getfile.cfm&PageID=8529

Prudential Standard APS 232
Business Continuity Management
http://www.apra.gov.au/Policy/loader.cfm?url=/commonspot/security/getfile.cfm&PageID=8528

Prudential Standard GPS 222
Business Continuity Management
http://www.apra.gov.au/General/loader.cfm?url=/commonspot/security/getfile.cfm&PageID=8531


Commission of the European Communities (2005)
Green paper on a European programme for critical infrastructure protection, November
http://eur-lex.europa.eu/LexUriServ/site/en/com/2005/com2005_0576en01.pdf

De Nederlandsche Bank (2004)
Business Continuity Planning
http://www.dnb.nl/en/payments/bcp/index.jsp

Beyond the Risk Register

A few months ago, someone in a program management office noticed that a new employee had taken a master's degree course in risk management,
and asked, with a brief cackle: "Why would someone need a master's degree in risk management?"

It's a good question.

Risk management is, for many people in projects, one of the very basic things that anyone can do. It's not rocket science. To most people,
risk management is simply the risk register - often created because it
is a mandated part of the project management procedures - and not much
else.

And anyone can create a risk register. All you need is an Excel
spreadsheet with a template of the right headings, or a piece of risk
management software, and you can start populating it.

Even the risk management framework is simple enough: identify the
risks, give an estimate of the likelihood, determine consequences,
identify controls, estimate residual risk, identify who is
responsible, and then rank the risks for prioritisation.
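
To make the point concrete, here is a minimal sketch of such a register in code; the field names, the 1-to-5 scoring scales and the likelihood-times-consequence ranking are illustrative assumptions, not a prescription:

    # Sketch of a risk register: likelihood, consequence, controls, residual
    # estimates, an owner, and a simple likelihood x consequence ranking.
    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        likelihood: int                  # 1 (rare) .. 5 (almost certain) -- assumed scale
        consequence: int                 # 1 (negligible) .. 5 (catastrophic) -- assumed scale
        controls: list[str] = field(default_factory=list)
        residual_likelihood: int | None = None
        residual_consequence: int | None = None
        owner: str = "unassigned"

        def score(self) -> int:
            # Rank on the residual estimate where controls have been assessed,
            # otherwise on the raw estimate.
            lik = self.residual_likelihood or self.likelihood
            con = self.residual_consequence or self.consequence
            return lik * con

    register = [
        Risk("Key supplier fails to deliver", 3, 4, ["second-source agreement"], 2, 4, "procurement"),
        Risk("Data centre outage", 2, 5, ["tested failover site"], 2, 3, "IT operations"),
        Risk("Scope creep", 4, 3, owner="project manager"),
    ]

    # Prioritisation: highest residual score first
    for risk in sorted(register, key=Risk.score, reverse=True):
        print(f"{risk.score():>2}  {risk.description}  (owner: {risk.owner})")

That is the whole mechanism, which is precisely why it looks so easy to do.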

Brain surgery is equally simple: identify the area to be incised,
determine the likelihood of success, determine the risks, etc. People
know that not all surgeons are equally qualified to do brain surgery.
Even among brain surgeons, there is a qualitative difference in
experience and, consequently, in results.

Riding a bicycle is also simple, but everyone knows there is an order
of magnitude of difference between the performance of a rider at Tour
de France level and someone who rides for leisure.

But what about risk management? While anyone can come up with a risk
register, there can be a serious difference in the results.

Some areas where competence in risk analysis would produce a marked
difference in results:

* Risk identification - are we identifying the right risks? Are we
missing any? Are we putting in risks that aren't risks? Missing a
critical risk can prove catastrophic to a project.

* Risk likelihood - are our estimates any good? Is there available
data we should be using? Overestimating can prove costly.
Underestimating can prove disastrous.

* Risk consequences - how credible are our estimates of consequence?
How complete are they? An inept analysis of the consequences will mean
poor preparation for, and mitigation of, the consequences.

* Risk control - how realistic are the controls and mitigations we
have identified? How good is our decision-making on which controls to
implement? What is the impact of our controls?

* Risk prioritisation - are we using the right prioritisation approach?
(A small illustration follows below.)
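
On that last point, a small made-up illustration: a likelihood-consequence matrix and an expected-loss calculation can rank the same two risks in opposite order, and which ranking is right depends on context. The figures are invented.

    # Two invented risks, scored two ways: a 5x5 matrix (ordinal 1-5 scores
    # multiplied together) versus expected annual loss (probability x impact).
    risks = {
        "frequent small outages": {"probability": 0.9,  "loss": 10_000,    "matrix_score": 4 * 2},
        "rare warehouse fire":    {"probability": 0.02, "loss": 5_000_000, "matrix_score": 1 * 5},
    }

    by_matrix = sorted(risks, key=lambda r: risks[r]["matrix_score"], reverse=True)
    by_expected_loss = sorted(risks, key=lambda r: risks[r]["probability"] * risks[r]["loss"], reverse=True)

    print("Matrix ranking:       ", by_matrix)         # the small outages come first
    print("Expected-loss ranking:", by_expected_loss)  # the fire comes first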

May 20, 2011

Checklists

Checklists and questionnaires belong in the toolbox of risk professionals. A checklist works best when used by the risk professional while interviewing an information source, whom we’ll call an interviewee. 

The checklist becomes far less effective when simply handed over to the interviewee, because letting the interviewee work through it alone introduces new, undesirable dynamics:

  • First, the interviewee loses the chance to ask questions about the questions being asked.  They may misunderstand what is being asked without being aware of it.  In such a case, telling the interviewee that they should ‘feel free’ to ask if they have questions will not help much, because they are not even aware that they have misunderstood.
  • Second, the interviewee may not have as much interest as the interviewer in the process of gathering data.  In cases like this, you can expect that only the minimum amount of information will be written down in the checklist.
  • Third, the interviewee may not see the whole point of the interview, and why they must fill in the checklist. As in the second dynamic above, this results in incomplete information.
  • Fourth, a large number of checklists and forms are very badly designed, which can easily confuse an interviewee. Many forms ask for too many things. The interviewee may have the energy to fill in the first few entries, but a noticeable drop in energy, driven by a drop in interest, can often be seen after that.

A well-designed form goes a long way toward eliciting good information.  At the very least, the following should be addressed when designing questionnaires and checklists:

  • Who is going to use the contents of the checklist?
  • To what purpose are they going to use the contents?
  • Who is going to provide information to the checklists? (That is, who are the interviewees)
  • What kind of questions and prompts should the checklist contain in order to elicit the information required?
  • What kind of information does the current version of the checklist contain that is not needed?
  • In what ways can the questions and prompts be misunderstood?

It is vital that a checklist be tested on several interviewees before finalising its use.