
The Hiroshima/Nagasaki Survivor Studies: Discrepancies Between Results and General Perception

by Bertrand R. Jordan
Genetics, Volume 204, Issue 4, 1 December 2016, Pages 1627–1629, https://doi.org/10.1534/genetics.116.195339

Commentary by Christopher Busby

In his recent article, Jordan (2016) addresses the public’s “unreasonable” fears of radiation. He claims that the lifespan study (LSS) of Japanese A-bomb survivors in Hiroshima and Nagasaki has given definitive information on the relation between exposure and genetic damage, expressed as cancer and as heritable effects in the offspring of those exposed. He presents the LSS as the gold standard in radiation epidemiology, and he is not alone in this (Kamiya et al. 2015). The LSS results are the basis of legal limits for exposure, and they are employed to dismiss evidence of health effects from Chernobyl (Yablokov et al. 2015), of Fukushima thyroid cancers (Tsuda et al. 2016), of child leukemias near nuclear sites (Kaatsch et al. 2008), and so on, on the grounds that such effects cannot be causal because the “dose is too low.” Jordan observes that, according to the LSS, one must receive a dose of 1 Sv (1000 mSv, about 500 times annual natural background) to incur a 42% excess risk of cancer, and that no increased frequencies of abnormalities or other genetic effects have been detected in the offspring of those exposed. Unfortunately there are some worrying problems with the epidemiological methods that were employed, specifically with the key issue of the choice, and then abandonment, of the control group.

The common understanding of the LSS is that groups of individuals with known doses are compared over their lifespans with zero-dose control groups who were not in the cities at the time of the bombings. Jordan explains:

The ABCC and later RERF assembled a lifespan study (LSS) cohort of 120,000 individuals [100,000 exposed at various known levels and 20,000 controls not in the city (NIC) at the time of the bombing].

What is not generally known is that the NIC controls were discarded in 1973 because they appeared to be “too healthy.” The 1973 ABCC report stated:

In order to ascertain the effects of radiation exposure it is necessary to compare the mortality experience of the population exposed to ionizing radiation with a comparison control population. For this purpose a group of people who were not present in the cities was included in the sample. . . .

The mortality experience of the NIC comparison group has been very favorable. . . [and] would have the effect of exaggerating the difference in mortality between the heavily exposed population and the control group. . . [(Moriyama and Kato 1973), pp. 6–7, ABCC LSS Report 7, 1973].

At that point, in 1973, the original control group was discarded in favor of the lowest dose group as the control, something which should never be done in the middle of an epidemiological study. The substitution of a new lowest-dose control group was followed by the use of mathematical regression methodology. This approach is questionable because of the assumptions listed below, many of which are now known to be wrong:

  • The concept of “absorbed dose” employed by the study is a legitimate measure of biological damage from internal exposures, i.e., internal exposures can be translated into a “dose” that carries the same biological hazard as an identical external exposure dose.
  • The dose–response relation is linear, or at least monotonic, a necessity for regression.
  • There was no fallout (fallout would have contaminated all of the exposed groups equally, controls included).
  • Acute exposures carry the same proportional hazard as chronic exposures.
  • The Japanese survivor population is representative of the general (Western) public.

These assumptions have been reviewed elsewhere (Busby 2013).

The use of the lowest dose group as control is now standard in all nuclear worker studies (Richardson et al. 2015), which, like the LSS, employ linear regression to establish risk factors. This is because, if the national population is employed as a control, the nuclear workers show a healthy worker effect (HWE): their relative risks for cancer are lower than those of the general public. But this does not make the lowest dose group a valid control unless it is also known that the dose response is linear or monotonic. Moreover, the true value of the HWE is unknown. The risk factor for cancer obtained from regression is the slope of the best straight line that can be fitted to the excess cancer risk in groups aggregated according to their external dose as measured by a film badge. The assumption is that the bigger the dose, the bigger the effect, though the data do not show this.
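To make the regression step concrete, here is a minimal sketch in Python using purely hypothetical dose-group numbers (none of these values come from the LSS or from any worker study). It fits a straight line to the excess relative risk (ERR) computed against the lowest dose group, which therefore contributes zero excess by definition, and then against a notional genuinely unexposed external control, to show how the choice of reference group sets the baseline of the fitted line.

```python
import numpy as np

# Illustrative dose-group data (hypothetical values, for exposition only):
# mean external dose per group (mSv) and observed cancer rate per 10,000.
dose = np.array([2.0, 50.0, 200.0, 1000.0])    # lowest-dose group first
rate = np.array([120.0, 125.0, 132.0, 160.0])  # observed rates per 10,000

# Convention criticised in the text: the lowest-dose group is the reference,
# so its excess relative risk (ERR) is zero by definition.
err_internal = rate / rate[0] - 1.0

# Alternative: reference a notional, genuinely unexposed external control rate
# (hypothetical value; in practice it would also need an HWE adjustment).
external_control_rate = 95.0
err_external = rate / external_control_rate - 1.0

# The published "risk factor" is essentially the slope of the fitted line.
slope_int, intercept_int = np.polyfit(dose, err_internal, 1)
slope_ext, intercept_ext = np.polyfit(dose, err_external, 1)

print("ERR vs lowest-dose group:", np.round(err_internal, 3))
print("ERR vs external control :", np.round(err_external, 3))
print("slopes (ERR per mSv)    :", round(slope_int, 5), round(slope_ext, 5))
# Any excess already present in the lowest-dose group is invisible in the
# first calculation by construction; against the external control it shows
# up as a large positive ERR at near-zero dose (the intercept of the line).
```

The slope of that line is what becomes the quoted risk factor; whatever excess risk already exists in the reference group cannot appear in it.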

Another problem is that nuclear workers are from a different social class than the national population and are fundamentally healthier (as are, e.g., physicians, optometrists, soldiers, and university lecturers). So their relative risk for cancer should be lower. But how much lower? The epidemiological method now used makes the (unfounded) assumption that the effect of radiation on the lowest dose group can be set at zero: it becomes the point (0,0) for the regression line. Two observations are relevant here. First, the lowest dose group (usually the one with the most individuals in it) consists of workers who mostly work on the contaminated sites (rather as the Hiroshima survivors lived on them), perhaps inhaling radioactive particles. So they should be compared with similar workers from a completely different industry with no radioactive contamination (or with the national population, adjusted for the HWE).

Second, there is some evidence about the real value of the HWE from data published by the UK National Radiological Protection Board on the relative risk of cancer in UK nuclear workers, stratified by length of time working in the nuclear industry (Muirhead et al. 1999). The level of healthiness (the HWE) shifted from ∼64% of the national rate at the start of employment to nearer 90% after 10 years; i.e., the HWE rapidly disappeared. This could itself be seen as an effect of exposure. Using 64% for the HWE results in a significant 30–40% excess risk in the lowest dose group of nuclear workers.
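As a rough check on that last figure, using only the percentages quoted above and nothing else, the arithmetic runs as follows:

```python
# Healthy worker effect (HWE) arithmetic, using only the Muirhead et al. (1999)
# figures quoted above: ~64% of the national cancer rate at the start of
# employment, rising to nearer 90% after 10 years in the industry.
rr_at_hire = 0.64      # taken here as the "true" HWE baseline for this workforce
rr_after_10y = 0.90    # relative risk (vs national rate) after a decade on site

# If 0.64 is the appropriate unexposed baseline, the later 0.90 implies an
# excess relative risk of roughly 0.90 / 0.64 - 1:
excess = rr_after_10y / rr_at_hire - 1.0
print(f"implied excess risk: {excess:.0%}")   # ~41%, i.e. of the order of 30-40%
```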

To return to the linear dose–response regression point: all the published data stratified by dose group define a dose response that is biphasic. It goes up at the lowest doses, then comes down, then goes gently up again at the high doses. There are plausible biological reasons for this (especially in the case of congenital effects, where the end point is seen only after birth and, above some dose level, prebirth viability is lost). Drawing a straight line through these data points gives the wrong answer to the question of risk: the risk factors at low, medium, and high dose are different. Thus it is no more epidemiologically valid to employ regression methods for nuclear workers than it is for the Hiroshima survivors.
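The consequence of forcing one straight line through a biphasic response can be illustrated with synthetic numbers, invented purely to have the shape described above (a rise at low dose, a fall, then a gentle rise at high dose):

```python
import numpy as np

# Synthetic, purely illustrative biphasic dose-response (ERR vs dose in mSv):
# rises steeply at low dose, falls back, then climbs gently at high dose.
dose = np.array([0.0, 5.0, 20.0, 100.0, 500.0, 1000.0])
err  = np.array([0.0, 0.30, 0.20, 0.10, 0.25, 0.45])

# Single straight-line fit, as used in the regression approach criticised above.
slope, intercept = np.polyfit(dose, err, 1)

# Risk "per mSv" implied by the line vs. the value actually seen at low dose.
print("fitted slope (ERR per mSv):", round(slope, 5))
print("line's prediction at 5 mSv:", round(slope * 5 + intercept, 3))
print("'observed' ERR at 5 mSv   :", err[1])
# The one-slope model averages the low-, medium- and high-dose behaviour,
# so it badly understates the low-dose risk while misstating the rest.
```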

The LSS populations, like the nuclear workers, lived on contaminated ground: the bombed towns remained contaminated for many years after the bombings. The contamination was a consequence of the black rain (Abdale et al. 2016). The updraft from the rising fireballs at Hiroshima and Nagasaki sucked in moist maritime air, which cooled with altitude and condensed on nanoparticles of the roughly 95% of the uranium that did not fission, created in the plasma. This produced black rain over an area that included all of the dose groups used for the LSS, whose doses were calculated from distance to the hypocenter. Uranium was later measured in the contaminated areas (Takada et al. 1983). The existence of any fallout was denied, and acute external doses were calculated from distance using data from experiments carried out in the Nevada desert. The last 20 years have seen changes in the understanding of the biological effects of radiation, including the realization that, for internal exposures to elements with a chemical affinity for DNA and to nanoparticles, the concept of absorbed dose is worthless (CERRIE 2004). Uranium has a high affinity for DNA, and a large number of studies have now shown effects that reveal large errors in the “dose”-based approach (Busby 2015). The European Union has recently funded research on this issue (Laurent et al. 2016).

The black rain contamination of Hiroshima and Nagasaki resulted in continuous chronic internal exposure of all the dose groups and controls through inhalation and ingestion of uranium particles. The only accurate way to establish the real effects is to employ a truly unexposed group and to abandon the regression methods. In 2008, Watanabe et al. compared age- and sex-specific cancer rates between 1971 and 1990, using the adjacent Okayama prefecture as a control (Watanabe et al. 2008). This period was chosen because cancer data prior to 1971 are insufficiently accurate. Significantly greater levels of cancer were found in all the exposed groups, including the LSS lowest-dose controls, compared with the Okayama control group, and also (to a lesser extent) compared with an all-Hiroshima control group. When compared with the Okayama group, the highest cancer effect per unit dose was seen in the 0- to 5-mSv group, the lowest-dose LSS group, where there was a 33% excess risk of all cancers in men at external doses estimated at 0–5 mSv. The authors write that “the contribution of residual radiation, ignored in LSS, is suggested to be fairly high.” This falsifies all the LSS epidemiology. Similar criticisms were made by Sawada (Sawada 2007; Abdale 2016), who examined immediate deterministic effects of radiation (epilation, diarrhea) reported from areas more than 5 km from the hypocenter, where black rain fell but where the prompt gamma doses were effectively zero.
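Putting the two headline figures side by side shows the scale of the discrepancy. This is only rough arithmetic on the numbers quoted above (the endpoints and populations differ, and 5 mSv is taken as a deliberately conservative upper bound for the low-dose group), not a formal risk-factor comparison:

```python
# Risk per unit dose implied by the two figures quoted above.
lss_excess, lss_dose_msv = 0.42, 1000.0   # ~42% excess at 1 Sv (LSS, per Jordan)
wat_excess, wat_dose_msv = 0.33, 5.0      # 33% excess at <=5 mSv (Watanabe et al. 2008)

lss_per_msv = lss_excess / lss_dose_msv
wat_per_msv = wat_excess / wat_dose_msv   # upper-bound dose, so this is conservative

print(f"LSS      : {lss_per_msv:.5f} excess risk per mSv")
print(f"Watanabe : {wat_per_msv:.5f} excess risk per mSv")
print(f"ratio    : ~{wat_per_msv / lss_per_msv:.0f}x")   # on the order of 150-fold
```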

Similar control-group errors in the LSS genetic studies were addressed long ago by de Bellefeuille, who questioned the sex-ratio results (de Bellefeuille 1961). The LSS researchers focused on the sex ratio, the number of boys born relative to the number of girls, a well-accepted measure of genetic damage (Scherb 2011). The direction of the effect depends on whether the mother (egg) or the father (sperm) is irradiated. The LSS geneticists reported no apparent genetic damage, but they analyzed results from families in which both parents were irradiated, so that the effects cancelled, and they employed the wrong controls. Use of the NIC controls gives a sex-ratio effect in the expected direction (Padmanabhan 2012). This issue is discussed in a recent review by Schmitz-Feuerhake et al. (2016) of heritable effects reported at very low doses of internal exposure. Results from Chernobyl studies clearly demonstrate that the current genetic risk factor is in error by ∼1000-fold and that the dose response is not linear: there are significant increases in major congenital malformations in the offspring of those exposed to internal doses <1 mSv (Schmitz-Feuerhake et al. 2016).
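A purely hypothetical numerical sketch of the cancellation argument (the shifts below are invented for illustration, not taken from any study): if paternal irradiation pushes the sex ratio one way and maternal irradiation pushes it the other, families in which both parents were exposed show almost no net shift.

```python
# Hypothetical illustration of the sex-ratio cancellation argument.
# Sex ratio expressed as boys per 100 girls; ~105 is a typical unexposed value.
baseline = 105.0
paternal_shift = +3.0   # invented shift when only the father is irradiated
maternal_shift = -3.0   # invented, opposite-direction shift for the mother

father_only = baseline + paternal_shift
mother_only = baseline + maternal_shift
both_parents = baseline + paternal_shift + maternal_shift  # shifts roughly cancel

print(f"father only: {father_only}, mother only: {mother_only}, both: {both_parents}")
# both_parents ~= baseline, so a study built on both-parents-exposed families
# sees "no effect" even though each single-parent exposure shifts the ratio.
```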

I suggest that this adherence to the LSS as a definitive answer to the public’s fears is the result of a scientific culture of acceptance that goes back a long way, and that few researchers have had the time or funding to forensically examine the many (often obscure) reports needed to open up the methodological black boxes. I submit that Jordan’s (and legislators’) belief in the validity of the Japanese A-bomb studies, no doubt innocently held, is unsafe, and that the health effects of low-level internal exposures to radioactivity should be reevaluated.
