The results of a risk assessment are often summarised in terms of a few numbers, such as frequencies of accidents of a certain magnitude, the expected number of fatalities per year, etc. These quantities are risk measures – mappings of some attribute of future consequences onto a measurable scale. However, risk exists whether we assess it or not – independently of the scales we construct to measure it. Furthermore, a specific numerical value is conditional on the knowledge and understanding of those who conducted the assessment, and on the choices and assumptions they made; consequently, different risk assessments of the same case may not yield identical numbers.
The purpose of a risk assessment is to support risk-informed decisions. In this regard, a quantitative risk measure, if informative with respect to the question at hand, can be an invaluable tool for comparing alternative decision options. However, to have confidence in a decision, the decision maker must understand how the conclusions were reached. To facilitate this, we present a framework in which the link between risk (i.e. what we are exposed to) and risk measures (i.e. the numbers presented to us) can be understood in terms of the choices and assumptions made during a risk assessment. We also explain why probabilities by themselves are not sufficient to express all uncertainty, and why a qualitative ‘strength-of-knowledge’ evaluation is required to assess how well probability assignments are supported by the available knowledge. By evaluating strength-of-knowledge, together with our belief in deviations from assumptions and the sensitivity of the results to those assumptions, it is possible to expose the uncertainty that is not captured by the quantitative risk measures but can nevertheless influence decisions.
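As a minimal illustration – not the framework itself – the interplay between a quantitative risk measure, its underlying assumptions, and a qualitative strength-of-knowledge evaluation can be sketched as follows. All scenario frequencies, fatality counts, and the sensitivity threshold here are hypothetical, chosen only to show how the same case can yield different numbers under different assumptions:

```python
# Illustrative sketch: a risk measure (expected fatalities per year) is
# conditional on assumed scenario frequencies; a sensitivity check plus a
# qualitative strength-of-knowledge label exposes uncertainty that the
# number alone does not convey. All values are hypothetical.

def expected_fatalities(scenarios):
    """Risk measure: expected number of fatalities per year, summed over
    accident scenarios, each given as (frequency per year, fatalities)."""
    return sum(freq * fat for freq, fat in scenarios)

# Two assessments of the same case, differing only in the assumed
# frequency of the first accident scenario.
base = [(1e-4, 50), (1e-3, 5)]   # assessment A's assumptions
alt = [(5e-4, 50), (1e-3, 5)]    # assessment B's alternative assumption

r_base = expected_fatalities(base)   # 0.01 fatalities/year
r_alt = expected_fatalities(alt)     # 0.03 fatalities/year

# Sensitivity of the measure to the disputed assumption: a large relative
# change signals that the quantitative result alone should not drive the
# decision.
sensitivity = abs(r_alt - r_base) / r_base

# Qualitative judgement accompanying the number (threshold is arbitrary
# here; in practice this is an expert evaluation, not a formula).
strength_of_knowledge = "weak" if sensitivity > 0.5 else "strong"
```

The point of the sketch is only that the reported number (here `r_base`) and the strength-of-knowledge label travel together to the decision maker, rather than the number alone.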