The latest epidemiological study on passive smoking: what are the conclusions?

Chronologically, this is the most recent study, published in October 2010 by Brenner et al., into passive smoking and lung cancer. What were the numbers? The odds ratios (ORs), with their 95% confidence intervals, are set out below.

ETS Exposure                    OR    95% CI

At home (adult and/or child)    1.1   0.6-1.9
  In childhood                  1.0   0.6-1.8
  In adulthood                  1.0   0.5-2.0
At work                         1.2   0.7-2.1
  < 10 years                    1.3   0.8-2.2
  > 10 years                    1.2   0.7-2.0
At both home and work           1.2   0.7-2.1

It concluded: “Among never smokers in our population, we observed no association between either exposure to ETS at home or at the workplace and lung cancer risk (Table 2). In general, the effect estimates for ETS exposure were similar between the total population and only among never smokers.”

It then goes on to apologise: “ETS exposure was not found to significantly increase risk among never smokers in this study, however, several potential explanations are possible. ETS exposure either as a child or an adult in the home or the workplace has been evaluated in numerous studies [53]. The results, however, have been inconsistent as to the significance and magnitude of the effects among never smokers.

When estimates were pooled in a meta-analysis of 34 case-control studies of non-smokers, a pooled relative risk of 1.2 (95% CI 1.1-1.4) was observed, although only seven out of 34 studies reporting significantly elevated risk [6]. It was suggested that the inconsistency in the significance of findings across studies could be due to issues of sample size, measurement error, recall bias and confounding.”

None of these results is statistically significant, so the search goes on.
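A quick way to see this is to check each odds ratio's 95% confidence interval against the null value of 1.0; an estimate is only statistically significant if its interval excludes 1.0. The figures below are copied from the table above, and the check itself is just an illustration, not part of the study.

```python
# Check which ETS exposure estimates are statistically significant,
# i.e. whose 95% confidence interval excludes the null value 1.0.
# Odds ratios and CIs are taken from the table above (Brenner et al., 2010).
estimates = {
    "At home (adult and/or child)": (1.1, 0.6, 1.9),
    "In childhood":                 (1.0, 0.6, 1.8),
    "In adulthood":                 (1.0, 0.5, 2.0),
    "At work":                      (1.2, 0.7, 2.1),
    "At work < 10 years":           (1.3, 0.8, 2.2),
    "At work > 10 years":           (1.2, 0.7, 2.0),
    "At both home and work":        (1.2, 0.7, 2.1),
}

for label, (or_, lo, hi) in estimates.items():
    # The CI excludes 1.0 only if it lies entirely above or entirely below it.
    significant = lo > 1.0 or hi < 1.0
    print(f"{label}: OR {or_} (95% CI {lo}-{hi}) "
          f"-> {'significant' if significant else 'not significant'}")
```

Every interval in the table straddles 1.0, so every row prints “not significant”.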


4 Responses to The latest epidemiological study on passive smoking: what are the conclusions?

  1. Joanne says:

    I wish I understood the figures more. I seem to be reading that ‘second hand smoke’ has nothing to do with lung cancer.
    If that is correct, why is smoking banned everywhere?

  2. daveatherton says:

    Hi Joanne, thanks for dropping by; let me try and explain. Take this example of childhood exposure: the relative risk is 1.00. The odds ratio is 0.6-1.8. What it says is that the possible range of contracting lung cancer from passive smoking is 0.6 to 1.8, at a 95% confidence interval (CI), which is the maximum theoretical level of proof in science.

    However the average result is 1.0, the null hypothesis. I hope this helps.

    Childhood 1.0 0.6-1.8

    • BrianB says:

      Hi Dave – another blogger ‘coming out’ of F2C I see!

      Can I correct you on your interpretation of the statistics here, as you are confusing two different measures.

      The numbers that you quote (from this study) are Odds Ratios, (ORs) not Relative Risks (RRs). What you describe as ‘odds ratios’ are in fact the bounds of the 95% confidence intervals around each odds ratio. No RRs are quoted in the study (apart from the ‘pooled’ meta-analysis result – about which, more below).

      OR and RR are used variously in pretty much all such epi studies, and they are alternative forms of a measure that summarises a 2 x 2 contingency table. Whilst they are mathematically similar, they do differ in the formulae used, viz:

      In this study sample, if p = the proportion of lung cancers occurring in those exposed to ETS, and q is the proportion in those not exposed to ETS, then the following formulae apply:

      RR = p/q

      OR = p*(1-q)/(q*(1-p))
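      These two formulae can be put to work on a small worked example; the counts below are invented purely for the arithmetic and are not from the study.

```python
# Illustrative 2x2 contingency table (hypothetical counts, not from the study):
#                    lung cancer   no lung cancer   total
# exposed to ETS          30            970          1000
# not exposed             25            975          1000

exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 25, 1000

p = exposed_cases / exposed_total      # proportion with lung cancer, exposed
q = unexposed_cases / unexposed_total  # proportion with lung cancer, unexposed

rr = p / q                                   # relative risk: RR = p/q
odds_ratio = (p * (1 - q)) / (q * (1 - p))   # odds ratio: OR = p(1-q)/(q(1-p))

print(f"RR = {rr:.3f}, OR = {odds_ratio:.3f}")  # RR = 1.200, OR = 1.206
```

      With a rare outcome like this (3% vs 2.5%), the two measures nearly coincide, but the OR (1.206) sits slightly further from 1.0 than the RR (1.200).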

      When the outcome under study is rare, the OR and RR tend to converge to very similar values, but the OR will always sit further from 1.0 than the equivalent RR (larger, when the RR is above 1) – this is a mathematical inevitability.

      So, whenever someone quotes an OR of (say) 1.2, the likelihood is that the corresponding RR would be of the order of 1.1 – or even less. It should come as no surprise that anti-tobacco ‘researchers’ are happy to confuse their audience by giving the impression that OR and RR are the same. They are not.

      Mind you, I don’t believe for one minute that those involved in anti-tobacco research have a scooby-doo about the mathematics behind statistical methods, and how to interpret their outcomes, as they regularly misuse (abuse?) them, thus rendering the results invalid.

      And one of the biggest epi abuses of statistical methods is in the use of ‘meta’-analysis to derive ‘pooled’ RRs or ORs. These pooled results will always result in confidence intervals that are narrower than any of those applying in the original studies. Why? Because they now have a much bigger ‘pooled’ sample, so narrower CIs are inevitable – hence less likelihood that they will contain unity (1.0). But, unless the various ‘pooled’ samples are homogeneous (ie they can be assumed to have been drawn from the same population – or are mathematically adjusted to appear so), then the end result is completely invalid, and thus worthless.

      It’s like trying to make a strong chain out of weak links. You can’t!
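      The narrowing effect of pooling can be sketched with a standard fixed-effect (inverse-variance) calculation on the log-odds scale; the three studies below are invented for illustration, and each is individually non-significant.

```python
import math

# Fixed-effect inverse-variance pooling on the log-odds scale.
# Each input is (OR, CI lower bound, CI upper bound) -- invented figures.
studies = [(1.2, 0.7, 2.1), (1.1, 0.6, 1.9), (1.3, 0.8, 2.2)]

weights, log_ors = [], []
for or_, lo, hi in studies:
    # Recover the standard error of log(OR) from the 95% CI width.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)   # inverse-variance weight
    log_ors.append(math.log(or_))

pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))

print(f"Pooled OR = {pooled_or:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
# The pooled interval is narrower than any individual study's interval.
```

      The pooled interval is mechanically narrower than any of its inputs, which is exactly why a pooled estimate can look significant even when none of the underlying studies is – provided, as noted above, that the pooled samples are actually homogeneous.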

      Never take epidemiological statistics at face value. Most (and I mean by far the majority) of them are just plain wrong.

      I hope this helps to clarify things (although it may cause more confusion!).

      Oh, and Joanne is dead right in her interpretation. That’s exactly what this study shows!
