The ‘4D’s’ and how they can support meaningful process safety discussions.

Whether you’re organising your next hazard study or risk assessment session, having an experienced facilitator can be the difference between a simple review and a thorough, actionable analysis that significantly improves safety outcomes.

Posted

14.08.2024

Written by

Richard Bowen

A good facilitator will be an expert in process safety techniques and possess deep engineering knowledge. The best facilitators, however, also have the skill to ask the right questions and create an environment where open dialogue is welcomed and honest discussion can take place.

The ‘4D’s’, a technique that has its origins in the US Air Force and was later adapted by Human and Organisational Performance (HOP) expert Jeffrey Lyth, is a method for better understanding everyday work and providing context when failures occur. It can also be used as an excellent facilitation tool that is easily applied to many process safety techniques, as we will explore in this article.

One of the key principles of HOP is learning and adaptation, specifically collecting operational data to identify areas of potential learning and improvement. In a process safety context, this data is usually available in technical drawings, operating manuals and the like, but what about the everyday experiences of those who regularly engage with the systems, whether operators, engineers or technicians? These roles often have great insight into how the process operates in reality, and are sensitive to weak signals that may provide clues as to how healthy a system is. Such information can be vitally important operational data, particularly for older equipment that may have undergone modifications over its lifetime that haven’t been captured in updated operational and maintenance manuals. So how does a facilitator extract this often anecdotal intelligence? The 4D’s may provide a roadmap.

The 4D’s have traditionally been used to describe those tasks or conditions where the risk of failure from error is increased.

They can be posed as questions to an operator, and they stand for:

  • Is it Dumb – where something doesn’t make sense
  • Is it Dangerous – where the risk is either unacceptably high or where there aren’t sufficient controls in place
  • Is it Difficult – where maintaining normal operating conditions, or successfully completing a task is complicated and could be simplified
  • Is it Different – a task or condition that is unfamiliar and outside established procedures or normal performance parameters
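
For facilitators who want to capture responses to these prompts systematically during a session, here is a minimal sketch of how a 4D observation log might be modelled in Python. The prompt wording, field names and example entry are illustrative assumptions, not part of the 4D method itself.

```python
from dataclasses import dataclass

# The four prompts, phrased as a facilitator might pose them to an operator.
# The wording is illustrative, not a canonical form of the 4D's.
FOUR_DS = {
    "Dumb": "Is there anything about this task or equipment that doesn't make sense?",
    "Dangerous": "Which hazards do you feel aren't sufficiently controlled?",
    "Difficult": "What makes this task hard to complete, and how could it be simplified?",
    "Different": "Have you noticed anything unfamiliar or outside normal parameters?",
}

@dataclass
class Observation:
    """A single piece of anecdotal intelligence raised during a session."""
    category: str                  # one of the 4D's
    raised_by: str                 # role of the person raising it, e.g. "operator"
    detail: str                    # what was actually said
    follow_up: str = "unassigned"  # action arising from the discussion

def log_observation(register: list[Observation], category: str,
                    raised_by: str, detail: str) -> None:
    """Validate the category against the 4D's and add the observation."""
    if category not in FOUR_DS:
        raise ValueError(f"{category!r} is not one of the 4D's: {sorted(FOUR_DS)}")
    register.append(Observation(category, raised_by, detail))

# Example: a hypothetical 'Dumb' finding of the kind drawings and manuals won't capture.
register: list[Observation] = []
log_observation(register, "Dumb", "operator",
                "Level alarm setpoint no longer matches the modified vessel geometry")
```

Feeding a register like this into the session’s action tracking is one way to keep anecdotal findings visible alongside the formal analysis.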

Let’s look at them in closer detail.

Dumb

What seems Dumb to operators working within a system will depend on their collective experience. Tasks and equipment may have been designed with the right intent but, in practice, may not achieve the operational objective for which they were originally intended. Similarly, operating equipment and conditions may change so that the original design intent is no longer valid. In such situations, workers are excellent at adapting to work around things that don’t make sense to them. Although their intentions may be admirable, such as maintaining production throughput, workarounds are seldom risk assessed or proceduralised, meaning they may pose unforeseen risks to the worker or their colleagues. Understanding the issues workers perceive as ‘Dumb’ can reveal hidden risks that documentation and drawings won’t include.

The 2005 Texas City Refinery explosion is a key example of how workers were forced to adapt to outdated and overly complex start-up procedures and malfunctioning alarms. Informal workarounds that bypassed cumbersome safety systems became normalised, but these adaptations were not risk assessed and ultimately played a significant role in the overfilling of a raffinate splitter tower, resulting in a massive explosion that killed 15 workers and injured over 180 others. The disaster highlighted the dangers of ignoring worker concerns and relying on outdated procedures, emphasising the need for continuous review and assessment of process safety practices.

Dangerous

HOP guru Todd Conklin has said that “death hides in normal work”: risk may have been normalised, perhaps over a long period of time, or creeping change may have introduced unforeseen hazards. In other words, normal work can be highly variable. What a worker perceives to be ‘Dangerous’ is therefore a very important question to ask in any risk assessment session. The aim is not to uncover a list of hazards, but to understand which hazards workers care about: those they feel present a clear and present danger to themselves or their colleagues because they are not sufficiently controlled.

For years prior to the 2010 Upper Big Branch Mine disaster, workers at the mine had been dealing with pervasive safety issues, such as inadequate ventilation and poor dust control. These hazards were known to be Dangerous but had become accepted as part of the routine work environment of the mine, leaving the workers to manage the risks as best they could. These conditions culminated in a deadly build-up of methane and coal dust that exploded, killing 29 miners. Asking workers what they perceive as ‘Dangerous’ can reveal critical, often overlooked hazards that pose real and immediate threats to safety.

Difficult

When assessing systems, it’s important to differentiate between complex tasks and complicated tasks. Complex tasks are intercoupled with other systems, and that interdependency can be a point of weakness in the system’s reliability. In their work on High Reliability Organising, Weick and Sutcliffe studied complex systems and concluded that the best way to manage complexity is to resist the temptation to simplify, instead understanding the nuances and subtle cues in the operating environment and, where possible, building additional assurance around the points of weakness where complex systems interface. Complicated tasks, by contrast, are simply Difficult to complete successfully, and it’s these tasks that this ‘D’ is particularly interested in: understanding why the difficulty exists and how the task can be simplified. The workers responsible for completing Difficult tasks are best placed to contribute potential solutions, so consulting them during the risk assessment session can help to identify improvements.

The failure to distinguish between complex and complicated tasks was a key theme of the 2010 Deepwater Horizon oil spill. The disaster occurred due to failures of multiple interdependent systems, revealing weaknesses in how these interactions were managed. Additionally, critical tasks, such as interpreting well integrity tests, were complicated and difficult to perform accurately. Inadequate communication, training, and a focus on speed over thoroughness led to errors in completing these tasks that contributed directly to the blowout. The incident reminds us that while complex systems require careful management of interdependencies, simplifying complicated tasks through consultation with experienced workers is essential for improving safety outcomes.

Different

In his book ‘Normal Accidents’, Charles Perrow introduced the concept of ‘weak signals’ as precursors to failure. In simple terms, a weak signal is something Different to what is expected, that is, outside normal operating parameters. Seeing something Different can be an important indicator that something is not right in the system and, if ignored, could escalate into a significant problem. For example, prior to the 1986 launch of the Space Shuttle Challenger, the O-rings used in the solid rocket boosters had shown signs of failure at colder temperatures, and these deviations had been reported. The deviations from expected performance were not fully addressed, and the decision to proceed with the launch despite these ‘Different’ conditions ultimately contributed to the disaster. Identifying and addressing anything that seems Different can prevent problems from becoming critical, an important aspect of any hazard study.
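
To make the idea concrete, here is a minimal sketch, using invented parameter names and band limits, of how a reading might be flagged as ‘Different’ when it falls outside its normal operating band.

```python
# Minimal sketch: flag a reading as a 'Different' weak signal when it falls
# outside its expected operating band. The parameter and limits are invented.
EXPECTED_BAND = {"seal_temperature_C": (5.0, 40.0)}  # hypothetical parameter

def is_different(parameter: str, reading: float) -> bool:
    """Return True if the reading sits outside its normal operating band."""
    low, high = EXPECTED_BAND[parameter]
    return not (low <= reading <= high)

# An out-of-band reading is a weak signal worth raising, not proof of failure.
if is_different("seal_temperature_C", 2.0):
    print("Weak signal: seal_temperature_C outside normal operating band")
```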

Conclusion

Although the 4D approach has so far been used as a simple, practical HOP technique for safety conversations and incident investigations, it is readily transferable to any risk assessment or hazard study session as a way to look beyond the system elements and understand how the system performs in reality. The 4D’s consider issues arising in everyday work that may go undetected but could be indicators of potentially serious problems the assessment should address. Having your hazard study or risk assessment sessions led by a facilitator skilled in examining these hidden risks and hazards could significantly reduce the risks to your people and your business.
