Are we doomed to live in an oppressive safety culture?

Martin Parkinson raises an interesting question (in a comment on the previous post): what should the reaction be to an accident that was, a priori, an extremely low-probability event? He suggests that any attempt to reverse the counterproductive aspects of ‘health and safety culture’ is “doomed to failure”. After an accident, he argues, most people will say, “well, that tragic accident which just occurred could have been easily avoided – it would be unthinkable not to make the obvious small change which would avoid a repetition”. Further, he adds, “the accident might have been freakish and unforeseen, but if there is a seemingly easy and cheap ‘fix’, then no responsibly-minded person would fail to make that fix … there is no logical point at which you can say your environment has ‘just the right amount of apparent danger’”.

This is an issue that I probed in a recent essay, “Dangerous trees?”. See the section entitled “Fault trees, event trees and trees”.

Britain’s Health and Safety Executive declares risks of death of less than 1:1,000,000 to be “acceptable” – defined as “generally regarded as insignificant and adequately controlled”. But how should such risks be calculated? If one divides the number of people killed by trees in Britain every year by the population, the risk works out at about 1:10,000,000. Acceptable? Only until someone is killed.
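The back-of-the-envelope calculation can be sketched as follows. The figures used here (around six deaths a year from falling trees, a British population of roughly 60 million) are illustrative assumptions chosen to reproduce the 1:10,000,000 ratio quoted above, not official statistics:

```python
# Per-capita annual risk of death, as described above.
# Both figures are illustrative assumptions, not official statistics.
deaths_per_year = 6            # people killed by falling trees in Britain, per year
population = 60_000_000        # approximate population of Britain

risk = deaths_per_year / population
print(f"Annual risk of death: 1 in {population / deaths_per_year:,.0f}")
# → Annual risk of death: 1 in 10,000,000

# HSE treats an annual risk of death below 1 in 1,000,000 as "acceptable"
acceptable_threshold = 1 / 1_000_000
print(risk < acceptable_threshold)
# → True
```

On this arithmetic the risk sits an order of magnitude below the HSE threshold – which is precisely the point of the paragraph that follows: the number looks comfortably “acceptable” right up until a single death retrospectively rewrites it.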

After the event it is usually possible to identify the cause and the person(s) responsible. A risk worth taking becomes culpable negligence. Hindsight transforms an “acceptable” risk with a probability of 1:10,000,000 into one with a probability of 1:1.

The fear of becoming the legally-liable victim of such a transformation, assisted by no-win-no-fee lawyers, is perhaps the main driver of the excessive risk aversion that bans hanging flower baskets and forbids conkers without goggles. For most institutional risk managers, outside hedge funds, there are no rewards for taking risks, only costs for failure. For them, one accident is one too many. No set of circumstances for which they might be held responsible can be too safe.

Escape from the suffocating safety culture that such reasoning produces can be sought in a “blame-free” culture. After a low-probability “freakish” accident, emphasis should be placed not on establishing guilt and punishment, but on lessons to be learned. Judges, juries and the Health and Safety Executive have important roles to play. Reconstructed foresight, not 20/20 hindsight, should be the standard against which culpability for freakish accidents should be judged.

Thanks Martin for your highly pertinent comments.


2 pings

  1. Martin Parkinson says:

    Me again. Couple of quick comments.

    I’ll admit that your point – (that an important driver for overcaution is the fear of being sued) – had not occurred to me when I posted. This tendency – to underestimate the power of external and social pressures and to attribute behaviour primarily to individuals’ internal dispositions – is known to experimental psychologists as “the fundamental attribution error”. I’m fond of citing the FAE as an example of a universal cognitive bias, so I feel sheepish but amused when I make that same error myself!

    Even so, I still think that there are ‘internal’ factors which sometimes come into play – though perhaps these are less prominent in an organisational context. Blame is very popular (I think it might even be connected with the FAE). So is cheap helpfulness. Telling someone what to do, giving them an order, can sometimes be a power-play, a small attempt at domination and therefore likely to be resented and resisted. So how attractive to give orders in the context of safety, where your motives are unimpeachable! (An example: the repeated outbreaks of ‘helmet wars’ on cycling e-lists, often starting with something like “You should wear a helmet – you are really stupid if you don’t!”).

    So anyway, apprehension of being sued and blamed. How might one create a ‘low-blame culture’? How about a ‘no-fault compensation system’, as opposed to our adversarial tort-based system? There are a few no-fault systems in the world (notably in Canada and New Zealand), and interestingly a quick google of ‘no-fault compensation’ brought up many references from people arguing for such a system here in a medical context. This makes sense as far as I can see. Fear of being sued, as people become more litigious, forces clinical decision-making onto a more aggressively interventionist path which might not be to the ultimate benefit of the patient (because all interventions have side effects). Sometimes it really is better to *do nothing* – but often that’s a rather difficult thing to do.

  2. Martin Thomas says:

    In short, yes, it will continue to get worse until, by wisdom, riot or happy accident, the rational myopia of our safety approaches is exposed.

    Even then the close alliance of mindsets between legal, financial/actuarial and regulatory authorities will, no doubt, recruit another handy ally and carry on regardless.

    After all, the ‘vox pop’ have been calling the law an ass, with some deep cause, from long before I was born and nothing much has changed (even after we were joined at the hip with the continental legal structures of the EU/EC and Rome.)

    The root cause as I see it? Well, of course it’s the treatment of risk as deterministic, and the rush to capture life and living complexity by reductionism.
    Everything into a cause-effect chain… and the representation of the ebb & flow of civilisation and the universe by a series of ‘dots and dashes’ called events.

    This is the realm of the Newtonian empirical straitjacket and observational methods. And of pure science. A rock on which our enlightenment and civilisation is founded, for certain. But what, no really guys, what are you doing over here in the societal?

    If the search for truth and honesty were paramount, then the boundaries, the limitations, of these formalities of risk would be stated. But mostly they are not. There is a usual practitioner denial: ‘make of my results what you will, for I cannot be responsible for the workings of your head’. Allied to the lack of any alternative approach. Too often the effect is to blind, like a night moth at the lighted window. How can the audience be expected to avoid its weighty inevitability?

    Professionally (for a goodly part of my work involves risk management) it’s mostly more mess than clarity when risk crosses into the human domains. Still, the safety and risk orthodoxy will press on regardless.

    For brevity, a list of examples is needed…
    P1. If risks (whether emanating from hazards or not) cannot be uniquely defined, then don’t try to. Represent the fuzzy as ‘blobs of many’.
    P2. If the context of any situation is vital, represent it (or them), and not the risks that depend on that viewpoint.
    P3. Never attempt to decompose systemic risk, or conversely apply reductionist risk control techniques to systemic influences (Basel II capital adequacy practitioners please note: this will apply to all forms of portfolio risk management techniques too!)
    P4. If you’re expressing the systemic from the systematic in terms of probability of occurrence, then stop – you’re wrong – you don’t know enough about the system and its complexities to go on.
    P5. Cause-effect chains and trees can’t capture heavy interdependency or outcome-critical feedback. You will need a different form of risk model.
    P6. Also, in situations where the most important outcome relates to improved learning, adapted human response, or applied decision-making… then these are the success criteria of your intervention. You will need an approach to match.

    So what about the oppressive safety (risk) culture?
    P1. & P2. will result in spurious accuracy – the risk trees, mappings and listings represent only one of many, many alternatives, but they take on absolute truth, as this message is too inconvenient/embarrassing.
    P6. The practitioner (so too decision makers) rams an alien view of risk down everyone’s throat… nothing changes, even despite legislation etc. Instead of using people-style risk management. This is called losing the people (and the point).
    P4. & P5. Practitioners carry on regardless (a bit like the Chaos Theory quoted in a blog above) – lots of people still keep on riding the old way, even without a horse! Under situations where adaptive learning, innovation, alternative options from changed conditions, new effects through changes, where large dynamic modes of any sort can kick in… you have no business applying a static model; the word ‘bias’ has no meaning; in fact most normalised statistics & parameters will be invalid.
    P3. This is another form of ‘physician, heal thyself’.

    The oversimplified, reductionist and static risk construct at the heart of the safety analysis fits a ‘one best way’ style of linear thinking. It fits well with the legal and legislative/ authoritarian view abroad in our culture.

    Whereas we instinctive, thousand-generations-tall social predators called Homo sapiens couldn’t possibly have got here without it, could we?

    My single thought? Leave the scientific risk dross where it really doesn’t work and build a more general, people-friendly alternative… we won’t get rid of the safety culture, but at least we might manage to shame the safety mafias onto a less oppressive and more people-effective path.
