Relative Risk or Risk Ratio (sometimes also called Hazard Ratio or Odds Ratio, though since the meaning of odds is quite different, especially in, for example, racing circles, those terms are best avoided) lies at the very heart of the dispute between epidemiology and real science. If X% of people exposed to a putative cause suffer a certain effect, and Y% of people not exposed to the cause (or, alternatively, of the general population) suffer the same effect, the RR is X/Y. If the effect is “bad”, then an RR greater than unity denotes a “bad” cause, while an RR less than unity suggests a beneficial one (and likewise if both are “good”). An RR of exactly unity suggests that there is no correlation. There are a number of problems in a simplistic application of RR. In particular:
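As a minimal sketch of the arithmetic (in Python, with invented counts, using the conventional normal approximation for the confidence interval of log RR), the calculation runs as follows:

```python
import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk X/Y with an approximate 95% confidence interval,
    computed on the log scale (the usual epidemiological approximation)."""
    x = exposed_cases / exposed_total      # risk in the exposed group
    y = unexposed_cases / unexposed_total  # risk in the unexposed group
    rr = x / y
    # Approximate standard error of log(RR)
    se = math.sqrt(1/exposed_cases - 1/exposed_total
                   + 1/unexposed_cases - 1/unexposed_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Invented example: 30 cases among 1,000 exposed vs 10 among 1,000 unexposed
rr, (lo, hi) = relative_risk(30, 1000, 10, 1000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 3.00, 95% CI (1.47, 6.10)
```

Note how wide the interval is even at an RR of 3: with counts this small the lower bound barely clears unity, which is one reason for treating modest RRs with suspicion.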

1. Even where there is no correlation, the RR is never exactly unity, since both X and Y are estimates of statistical variates, so the question arises of how much deviation from unity should be accepted as significant.

2. X and Y, while inherently unrelated, might be correlated through a third factor, or indeed many others (for example, age). Sometimes such confounding factors are known (or thought to be known), and attempts, sometimes dubious, are made to allow for them. Where they are not known they cannot, by definition, be compensated for.

3. Sometimes biases are inherent in the method of measurement employed.

4. Statistical results are often subjected to a chain of manipulations and selections which (whether designed to or not) can increase the deviation of the RR from unity.

5. Publication bias can give the impression of average RRs greater than 1.5 when there is no effect at all.
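Point 5 is easy to demonstrate by simulation. The sketch below (Python, with invented parameters) generates many small studies of a null effect, true RR exactly 1, and "publishes" only those whose naive confidence interval excludes unity on the high side; the published studies then average an RR well above 1.5:

```python
import math
import random
import statistics

def simulate_publication_bias(n_studies=2000, n=500, base_risk=0.02, seed=1):
    """Simulate n_studies small trials of a null effect (true RR = 1),
    keep only 'significant positive' results, and return their mean RR."""
    random.seed(seed)
    published = []
    for _ in range(n_studies):
        # Same true risk in both arms, so any apparent effect is pure chance
        a = sum(random.random() < base_risk for _ in range(n))  # exposed cases
        b = sum(random.random() < base_risk for _ in range(n))  # unexposed cases
        if a == 0 or b == 0:
            continue
        rr = a / b  # equal group sizes, so the denominators cancel
        se = math.sqrt(1/a - 1/n + 1/b - 1/n)
        lower = math.exp(math.log(rr) - 1.96 * se)
        if lower > 1:  # only "positive, significant" results get published
            published.append(rr)
    return statistics.mean(published)

print(f"mean published RR: {simulate_publication_bias():.2f}")
```

Because a study only clears the significance hurdle when its chance fluctuation is large relative to its standard error, the selected studies are all inflated, and the literature as a whole gives the impression of a substantial effect where none exists.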

For these reasons most scientists (including scientifically inclined epidemiologists) take a fairly rigorous view of RR values. In observational studies they will not normally accept an RR of less than 3 as significant, and never an RR of less than 2. Likewise, for a putative beneficial effect, they never accept an RR of greater than 0.5. Sometimes epidemiologists choose to dismiss such caution as an invention of destructive sceptics, but this is not the case. For example:

"In epidemiologic research, [increases in risk of less than 100 percent] are considered small and are usually difficult to interpret. Such increases may be due to chance, statistical bias, or the effects of confounding factors that are sometimes not evident." [Source: National Cancer Institute, Press Release, October 26, 1994.]

"As a general rule of thumb, we are looking for a relative risk of 3 or more before accepting a paper for publication." - Marcia Angell, editor of the New England Journal of Medicine

"My basic rule is if the relative risk isn't at least 3 or 4, forget it." - Robert Temple, director of drug evaluation at the Food and Drug Administration.

"An association is generally considered weak if the odds ratio [relative risk] is under 3.0 and particularly when it is under 2.0, as is the case in the relationship of ETS and lung cancer." - Dr. Kabat, IAQC epidemiologist

This strict view of RRs may be relaxed somewhat in special circumstances; for example, in a fully randomised double-blind trial (as opposed to an observational study) which produces a result with a high level of significance.
