The problem is, it doesn't show up in multiple datasets...
large or small. Or, more strictly, it's in some, and not in others, and is flatly contradicted in others still.
Which raises the obvious question - what makes you think it's a real effect at all? You seem to be expecting those who argue for a lack of effect to satisfy the burden of proof - surely Occam's razor, concepts like the need for hypotheses to be disprovable, etc. place the burden of proof on those saying there IS an effect?
One of the better and notably even-handed summaries I've come across is the US GAO report, produced to evaluate the implications of the differing models used by the two US regulatory bodies that would have been responsible for Yucca Mountain - the EPA and the NRC.
http://www.gao.gov/new.items/rc00152.pdf
"Epidemiological research has been part of the scientific basis for the linear
model and radiation standards. However, epidemiology may not soon fully
verify or disprove low-level radiation effects. Specific epidemiological
research correlating natural background levels in the United States and
around the world with cancer rates has been generally inconclusive,
showing mixed results. Much of this research has used methodologies that
have been widely considered too limited for the research to be influential in
setting radiation standards....
"...For example, historically DOE has funded over 40 epidemiological studies of radiation effects on workers at sites in the U.S. nuclear weapons complex. According to DOE, the results
have shown elevated cancer levels from chronic exposure at some sites, among the most highly exposed workers, although the results have been inconsistent and, looking complex-wide, DOE has not found a clear pattern of excess risk for any specific cancer type...."
(So, evidence at high dose, no consistent evidence at low dose)
"For this review, we hired a consultant, Dr. Thomas Gesell, Professor of
Health Physics, Idaho State University, a recognized expert in the field of
environmental radiation, to identify and summarize worldwide ecologic
and analytic studies of natural background radiation or radon. Through his
work, we found that many ecologic and analytic studies have been done in
the United States, Europe, Asia, and South America. Some focused mainly
on radon effects, and others focused more broadly on overall natural
background radiation effects. The results of such studies differ and are
inconclusive overall. Most showed no evidence of elevated cancer risk, but
a minority did show slightly elevated cancer risks. Taken together, the
studies may suggest that low-level radiation effects are either very small, or
nonexistent."
(so, more show no effect than suggest there is one)
"..With the help of our expert consultant, we examined 82 ecologic and
analytic studies of natural background radiation or radon, in the United
States and around the world. Of these studies, 45 were directly radon
Appendix IV
Overview of Epidemiological Research on
Low-Level Radiation Effects
Page 39 GAO/RCED-00-152 Radiation Standards
related. The studies examined a variety of different types of cancer, and
some examined cancer effects on children, while others examined genetic
effects. Results of the studies varied, and we did not independently assess
their quality. Some reported statistically significant results—elevated
cancer rates, no elevation in rates, or a negative correlation—and others
reported inconclusive results. (Some lacked basic information for
assessing their quality.) Of 67 radon-related cancer studies, 22 reported
results indicating a statistically significant correlation between natural
background radiation or radon and cancer rates, while 45 found no such
correlation (including 8 that found a negative correlation), and 4 were
inconclusive."
(so a few positive, most no correlation, and some showing a negative correlation).
Now, that may just be me, but that sounds awfully like what you'd see when there was, in fact, no effect, and studies were simply trying to extract a non-existent signal from background noise.
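Just to make that statistical point concrete - this is a toy simulation, not the GAO data, and the cohort sizes and baseline cancer rate are made-up numbers: if you run a pile of studies where the true effect is exactly zero, you still expect a scatter of "significant" positive and negative findings at roughly the false-positive rate, plus a large majority of null results, which is about the pattern described above.

```python
# Toy simulation only: the study count, cohort sizes and baseline rate below are
# made-up illustration numbers, not anything from the GAO report.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 82        # roughly the number of studies GAO looked at
n_per_group = 2000    # hypothetical cohort size per arm
baseline_rate = 0.05  # hypothetical background cancer incidence, identical in both arms

positive, negative, null = 0, 0, 0
for _ in range(n_studies):
    exposed = rng.binomial(n_per_group, baseline_rate)  # "exposed" arm, same true rate
    control = rng.binomial(n_per_group, baseline_rate)  # "unexposed" arm
    # Two-proportion comparison via a chi-squared test on the 2x2 table
    table = [[exposed, n_per_group - exposed],
             [control, n_per_group - control]]
    _, p_value, _, _ = stats.chi2_contingency(table)
    if p_value < 0.05:
        if exposed > control:
            positive += 1
        else:
            negative += 1
    else:
        null += 1

print(f"'significant' positive: {positive}, 'significant' negative: {negative}, null: {null}")
```

At a 0.05 significance threshold you'd expect roughly 5% of such null studies to come out "significant" in one direction or the other by chance alone; real epidemiological studies add confounders and multiple comparisons on top of that, so the scatter gets wider still.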
Wade Allison goes rather further - he references not only animal studies that show a clear cut-off, but also digs into some of the "classic" studies. He points out that among the Japanese bomb survivors, there is no increase in leukemia risk below 200 mSv, and no statistically significant increase in solid cancers below 100 mSv. For those whose exposure was tracked after Chernobyl (i.e. where longitudinal studies were done), the mortality rates almost perfectly track the sigmoid curve that shows up in the animal studies.
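For anyone unclear what's being contrasted there, here's a rough sketch of the two shapes - the parameter values (slope, midpoint, steepness) are arbitrary placeholders, not fitted to any real dataset. LNT says excess risk is proportional to dose right down to zero, while a sigmoid/threshold response stays near zero at low dose and only rises steeply past some point.

```python
# Illustrative shapes only - parameter values are arbitrary placeholders,
# not fitted to bomb-survivor, Chernobyl or animal data.
import math

def excess_risk_lnt(dose_msv, risk_per_msv=1e-4):
    # Linear no-threshold: excess risk proportional to dose, all the way down to zero.
    return risk_per_msv * dose_msv

def excess_risk_sigmoid(dose_msv, midpoint_msv=1000.0, steepness=0.005, max_risk=0.5):
    # Sigmoid (threshold-like) response: near zero at low dose, rising steeply around the midpoint.
    logistic = lambda d: max_risk / (1.0 + math.exp(-steepness * (d - midpoint_msv)))
    return logistic(dose_msv) - logistic(0.0)  # subtract the dose-zero value so excess risk starts at 0

for dose in (10, 100, 200, 500, 1000, 2000):
    print(f"{dose:>5} mSv   LNT: {excess_risk_lnt(dose):.4f}   sigmoid: {excess_risk_sigmoid(dose):.4f}")
```

The point of the comparison is that below roughly 100-200 mSv the two curves are hard to tell apart in noisy epidemiological data, which is consistent with the mixed results above.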
I'm just wondering what makes you so certain there's an effect to be claimed - certain enough that you can say "4000 deaths caused by Chernobyl". Especially when UNSCEAR themselves say:
"The Committee has decided not to use models to project absolute numbers of effects in populations exposed to low radiation doses from the Chernobyl accident, because of unacceptable uncertainties in the predictions. It should be stressed that the approach outlined in no way contradicts the application of the LNT model for the purposes of radiation protection, where a cautious approach is conventionally and consciously applied [F11, I37]."
Curiously, they are willing to make very specific statements on observations of non-human populations:
"For acute exposures, studies of the Chernobyl accident experience had confirmed
that significant effects on populations of non-human biota were unlikely at doses
below about 1 gray."
That's 1000 millisieverts: for beta and gamma exposure, which is primarily what's relevant here, the radiation weighting factor is 1, so 1 gray of absorbed dose corresponds to 1 sievert (1000 mSv) of equivalent dose.
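If it helps, the unit conversion being made there is just this (a minimal sketch; the function name is mine, the weighting factor is the standard value for beta and gamma):

```python
# Minimal sketch of the gray -> millisievert conversion used above.
# Equivalent dose (Sv) = absorbed dose (Gy) * radiation weighting factor w_R,
# and w_R = 1 for beta and gamma radiation.
def absorbed_gray_to_msv(dose_gy: float, w_r: float = 1.0) -> float:
    return dose_gy * w_r * 1000.0  # sievert -> millisievert

print(absorbed_gray_to_msv(1.0))   # 1 Gy of beta/gamma exposure -> 1000.0 mSv
```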