
Food Safety Risk Assessments are “Data Hungry”

By David Hatch

Without data, risk scoring yields only a “perceived risk” score. An actionable risk assessment should be based on actual outcomes and experiences. For this, we need real-world data.

This past year, I was invited to participate in a risk assessment workshop led by a third-party consultant at a food safety event. During my 30+ year career, I have been through many different types of risk assessments across several industry segments. I have been a participant seeking to define and address risk at my own organization, as well as a consultant helping my clients perform their own risk assessments. Each time I experienced a risk assessment exercise, I learned something new, and this time was no different. The key learning for me in this case is encapsulated in the title of this blog: Food Safety Risk Assessments are “Data Hungry.”

What Does This Mean?

As we went through the workshop exercise, we explored the elements of risk. Specifically, risk is defined as a combination of three factors: Is something POSSIBLE, how PROBABLE is it to occur, and what is the potential SEVERITY if it were to occur? (A sketch of how these elements can combine into a score follows the list below.)

  • The first element is a yes-or-no question. Anything that can possibly happen should be included in the assessment.
  • The second element, probability, is measured on a scale. In our exercise, we assigned probability to a scale of 1–5 (least to most probable). A subset of probability is the expected frequency. This is a tricky one. If something has been occurring over time, then the frequency is known and can be easily factored into the probability scale. If it is a newly discovered issue, then “expected frequency” becomes an exercise in guesswork — one that must be refined over time. In our exercise, frequency was measured on a scale of 1–5 (least to most frequent).
  • For the third element, severity, we also used a 1–5 scale (least to most severe).
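To make the mechanics concrete, here is a minimal sketch in Python of how these three elements might combine into a single score. The field names and the multiplicative formula are my own assumptions; the workshop did not publish a specific formula, so treat this as one plausible convention rather than the actual method used.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One candidate risk, scored on the workshop's three elements."""
    name: str
    possible: bool    # element 1: can this happen at all?
    probability: int  # element 2: 1 (least) to 5 (most) probable
    frequency: int    # subset of probability: 1 (least) to 5 (most) frequent
    severity: int     # element 3: 1 (least) to 5 (most) severe

def risk_score(s: RiskScenario) -> int:
    """Combine the elements into one number; impossible events drop out."""
    if not s.possible:
        return 0
    return s.probability * s.frequency * s.severity

# A recurring, moderately likely, fairly severe issue scores 4 * 3 * 4 = 48
print(risk_score(RiskScenario("Allergen cross-contact", True, 4, 3, 4)))
```

Whether the elements are multiplied, summed, or weighted is itself a methodological choice; the point is only that each element is scored on the same 1–5 scale before being combined.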

The room then proceeded to use these elements and measurement techniques to assess risk across 10 different scenarios. These included descriptions of foodborne illness, food safety testing outcomes, discovery of allergens, labelling mishaps, chemical contamination, food fraud, supply chain disruptions, and other risks.

The risk assessment included a worksheet laid out as a table, where each scenario could be prioritized and scored according to the risk measurement elements (Figure 1).

Figure 1: Example Risk Scoring Table
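Since the actual worksheet contents were not published, here is a hypothetical reconstruction of such a table in Python. The scenario names echo the categories above, but every number is illustrative only; the sketch shows how scored rows can be ranked into a priority order.

```python
# Hypothetical worksheet rows: (scenario, probability, frequency, severity).
# All values are illustrative, not from the workshop.
worksheet = [
    ("Foodborne illness outbreak", 2, 2, 5),
    ("Labelling mishap",           4, 4, 2),
    ("Chemical contamination",     2, 3, 4),
    ("Supply chain disruption",    3, 3, 3),
]

def score(row):
    _, probability, frequency, severity = row
    return probability * frequency * severity  # same convention as above

# Rank the worksheet from highest to lowest computed risk, as in Figure 1
for priority, row in enumerate(sorted(worksheet, key=score, reverse=True), 1):
    print(f"{priority}. {row[0]} (score {score(row)})")
```

Note how foodborne illness, which most teams would instinctively place first, comes out last on computed score with these particular numbers: the same priority-versus-score mismatch the teams ran into.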

The room was divided into three teams, and each was asked to prioritize the various scenarios in order of highest to lowest risk. Each group completed this task, and here is where things got interesting — each team had different results!

As shown in the example table, a lower priority may yield a risk score above that of something that was originally considered a higher priority. Each team’s tables looked significantly different from the others. To be clear, these were not strangers performing the exercise with no knowledge of each other’s priorities. In fact, the three teams comprised the global food safety leadership of one company — yet each team seemed to have very different ideas on risk prioritization. This unexpected result caused some lively discussion; meanwhile, the consultant leading the exercise was the only one in the room who was not surprised at all by the results. Here’s why:

There was one more factor to consider — one that was on the minds of each team, but not openly expressed as a factor for prioritizing risk: The TYPE of risk.

The consultant then asked the room to describe what type of risk they were thinking about from the following four categories:

  • Public Health
  • Reputation
  • Regulatory
  • Business Operations

The room concluded that the type of risk had a significant impact on how the risk was originally prioritized. Each team had set out their prioritization criteria based on a preconceived risk category, and it turned out that each team’s selected category was different. Depending on which of the four risk types or objectives was dominant, a different prioritization and risk scoring resulted.
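As a sketch of this effect, assume each team scored through the lens of one dominant risk type. The per-type impact numbers below are entirely hypothetical, but they show how identical inputs produce different rankings depending on which lens dominates.

```python
# Per-scenario impact (1-5) on each risk type; values are hypothetical,
# purely to show how the chosen lens reorders the same scenarios.
scenarios = {
    "Foodborne illness outbreak": {"public_health": 5, "reputation": 5, "regulatory": 4, "business": 3},
    "Labelling mishap":           {"public_health": 3, "reputation": 2, "regulatory": 5, "business": 2},
    "Supply chain disruption":    {"public_health": 1, "reputation": 2, "regulatory": 1, "business": 5},
}

def rank_for(dominant_type: str) -> list[str]:
    """Rank scenarios as a team would if one risk type dominated its thinking."""
    return sorted(scenarios, key=lambda name: scenarios[name][dominant_type], reverse=True)

for lens in ("public_health", "regulatory", "business"):
    print(lens, "->", rank_for(lens))
# Each lens yields a different priority order from identical inputs,
# which is exactly the divergence the three teams produced.
```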

This is where the “data hungry” concept factors in. The final analysis revealed that a risk scoring exercise conducted in this manner is capable of yielding only a “perceived risk” score. While perception is a good start, an actionable risk assessment should be based on actual outcomes and experiences. The availability of real-world data, collected over time, has a dramatic impact on validating perceptions.

For example, the availability of pathogen testing diagnostic data, along with the observed probability and frequency of occurrences, would allow a risk assessment score to be based on a historical trend rather than on a perceived level of frequency and probability. The risk assessment exercise would be informed by the data, and a score of 1–5 could be applied with far more confidence.
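A minimal sketch of what that could look like, assuming access to historical test counts. The cut points below are placeholders; a real program would calibrate them against its own testing history.

```python
# Hypothetical mapping from historical pathogen-test data to the 1-5
# probability scale used in the worksheet.
def probability_from_history(positives: int, total_tests: int) -> int:
    """Convert an observed positive rate into a 1-5 probability score."""
    if total_tests == 0:
        return 3  # no history yet: fall back to a middle, "perceived" score
    rate = positives / total_tests
    thresholds = [0.001, 0.005, 0.02, 0.05]  # cut points between scores 1..5
    return 1 + sum(rate > t for t in thresholds)

# Example: 12 positives out of 1,000 tests is a 1.2% positive rate -> score 3
print(probability_from_history(12, 1000))
```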

Data, in the words of one of the participants, “removes the guesswork and assumptions” within a risk assessment. I learned that data is the necessary element to transform risk perception into risk knowledge. While it is useful to perform a risk assessment based on perceived scoring and prioritization, it is essential that a risk assessment be validated with real data.
