Million to one chances don't crop up 9 times out of 10?
Police are just going to have to revert to their original style of proving guilt, then:
"It was you what done it, wasn't it?"
A judge in a (sadly unnamed) British case has decided that Bayes' Theorem - a formula used in court to calculate the odds of whodunnit - shouldn't be used in criminal trials. Or at least, it shouldn't be relied upon as it has been in recent years: according to the judge, before any expert witness plugs data into the theorem to …
The other thing is that you are comparing the relative likelihood of a double cot death (in itself rare) to that of a double murder (also a rare event). So, as I understood it from reading Bad Science (by Ben Goldacre), you have to think in terms of which is more likely - double murder or double cot death - i.e. given that one of two very unlikely things has happened, what are the chances it was one or the other?
If I recall correctly the upshot was that it was twice as likely to be a double cot death as a double murder.
> you have to think in terms of which is more likely - double murder or double cot death
That sounds fishy.
This would mean your sample space is the space of "two dead people in a row". This doesn't sound well-defined. Those two dead people - are they grouped according to the same court case? Did they die in the same house? By the same murder weapon? Were they the same age? It doesn't really make sense.
The question is "what is the probability of double baby death by natural causes given that there is no other indication of foul play like baby skewered by a fork".
Your sample space is the set of families in which at least one baby died of apparent natural causes that look like cot death.
Turns out that in that set, a family has a high probability of seeing another cot death.
> That sounds fishy.
>This would mean your sample space is the space of "two dead people in a row". This doesn't sound well-defined.
*sigh* - yes, there is a bunch of stuff missing from my post because I a) didn't have all the details to hand, and b) assumed that intelligent people would go digging with the details I'd given to get the full picture*.
To clarify, head on over to badscience.net and search for "The Prosecutor's Phallusy" to get all the info. For what I believe is the main point in this, I'll quote from the original post:
"Two babies in one family have died. This in itself is very rare. Once this rare event has occurred, the jury needs to weigh up two competing explanations for the babies’ deaths: double SIDS or double murder. Under normal circumstances – before any babies have died – double SIDS is very unlikely, and so is double murder. But now that the rare event of two babies dying in one family has occurred, the two explanations – double murder or double SIDS – are suddenly both very likely. If we really wanted to play statistics, we would need to know which is relatively more rare, double SIDS or double murder."
I will point out that my initial statement of twice as likely was wrong - it appears to be between 4.5:1 and 9:1 in favour of it being SIDS.
* I've deliberately tried to be a bit vague as, I assume, Tim Worstall would have given the details if he could/felt it appropriate.
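As a rough sketch of the arithmetic behind that 4.5:1 to 9:1 figure (the base rates below are made-up placeholders chosen only to illustrate the relative-likelihood comparison, not figures from the actual case):

    # Illustrative only: compare the two competing explanations for the same
    # rare observation (two infant deaths in one family) using assumed priors.
    p_double_sids = 1 / 300_000        # assumed prior probability of double SIDS
    p_double_murder = 1 / 2_700_000    # assumed prior probability of double murder

    # Given that two babies in one family have died, and assuming these are the
    # only two explanations, the posterior odds are just the ratio of the priors.
    odds = p_double_sids / p_double_murder
    print(f"SIDS vs murder: {odds:.1f} to 1")   # -> 9.0 to 1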
An enlightening book, and also relevant due to the case the BBC is bringing up over that nurse convicted of killing elderly patients via insulin overdose, based primarily on the fact that he was the only member of staff on shift at all the times the 'suspicious' deaths occurred. The only factor in that 'suspicion' being that he was on shift at the time... a sort of self-fulfilling prophecy, made evident by the fact that a death was later ruled out as not being suspicious when they double-checked the logs and found he wasn't actually around. Sigh. Perhaps some defence lawyers out there should read it, so they know to recognise when statistics are being misused?
That case the BBC is raising is indeed madness. What is the world coming to when the default view is that someone must have committed some crime if we cannot otherwise explain a sequence of events?
There are already a couple of criminal offences (perjury and perverting the course of justice) on the books that are woefully under-applied when it comes to considering the care with which some experts have assisted the legal system. With expertise comes responsibility, especially in relation to understanding the true limits of one's own knowledge. If the scientific experts involved in the legal system can't be relied upon to remember that most important of scientific tenets, perhaps the thought of facing criminal charges might focus their minds somewhat.
For example, imagine you have performed no research on the exact question at hand (e.g. can the environmental / biological factors causing SIDS persist in the family home?). Imagine further that you have no peer reviewed work to back up a statistics-backed assertion that you're about to make and are not formally qualified as a statistician. How hard is it to stop, think a bit and say "I don't know."?
Similarly, if the Court and legal officials don't understand the scientific process, why are they allowed to accept and act on the word of a single expert witness? Have they never heard of scientific consensus?
"expert witness... "I don't know."
really?
i don't know, but i suspect not"
Which is exactly what the quake experts in Italy should have said when asked how likely a serious earthquake was instead of saying what they thought based on experience and statistics. We are dealing with 2 different beasties here, people who understand how things work, and the general populace. Oh yes, and I include in "general populace" experts who make comments outside their area of expertise.
Bayesian stats are misinterpreted for the double cot death vs double murder scenario.
Cot deaths are not stochastically independent. Bad heredity is bad heredity, so is bad environment and so are unfortunately bad parenting habits.
If you feed _ALL_ factors into Bayesian stats you will, quite correctly, see a very high probability of a second cot death. If you do not, the second death becomes very unlikely and the probability of double murder seemingly exceeds the probability of cot death.
Coming back to Bayesian stats, the judge in this case may have more clue than we think (or has read more than we think). Bayesian stats require very _CLEAN_ data which has not been contaminated by a deterministic bias. If your data is biased (no pun intended), your Bayes will be way off, because it will be revealing the bias in your data, which you may interpret wrongly.
My particular dislike of the use of DNA is that everyone is told that DNA is unique, except for identical twins (or clones). What people in court are not told is that they don't sequence the whole DNA (it can't be done, and has never been done).
I think that for the layman, they should say something like: everyone's credit card number is unique to them. We found the credit card number of the person who bought the gun, and that 3 of the 16 digits in the defendant's credit card number are the same, so therefore it was him.
>>"My particular dislike of the use of DNA is that everyone is told that DNA is unique, except for identical twins (or clones). What people in court are not told is that they don't sequence the whole DNA (it can't be done, and has never been done)."
Though if people are given probabilities of a random person matching as well as a defendant does, that does at least imply that there isn't uniqueness in the matching process.
And how are you sure what people are told in court?
Presumably a half-decent defence lawyer could get someone to go into details if they thought it would do some good?
>>"I think that for the layman, they should say something like; everyone's credit card number is unique to them. We found the credit card number of the person who bought the gun, and that 3 of the 16 digits in the defendants credit card number are the same, so therefore it was him."
But that would be highly misleading, since it's close to implying, if not actually implying, that the other 13 digits are different.
Even if someone said the *quite different*
"We found 3 digits from the purchaser's credit card number and the corresponding numbers on the defendant's card match them",
it would still be a fairly poor analogy, since the chance of a random card matching would be 1 in 1000.
Actually, I was the one being pedantic. There are many long repeating sections in DNA. You can only sequence a certain amount in one go, so what you do is sequence lots of bits and guess that you have covered the whole lot; but, and this is a big but, you can't know that you have done it all and put it all together, because you don't get enough overlap between the different bits to ensure that you've got it all, and in the right order.
I know it's bad form to quote Wikipedia, but where I am right now, I can't access much else: http://en.wikipedia.org/wiki/Human_Genome_Project
As to DNA being unique to a person, try telling that to Karen Keegan and Lydia Fairchild (see http://en.wikipedia.org/wiki/Lydia_Fairchild) who got into legal and medical problems since as Chimeras (see http://en.wikipedia.org/wiki/Chimera_%28genetics%29) they each have two separate sets of DNA (depending on which organ the DNA sample was taken from).
A colleague of mine used to work in academia doing stochastic calculus and was therefore quite good at stats and had a professor who was even more so. In one prominent cot death case, when this prof heard the reasoning cited in the article (that two or more cot deaths in one family were so unlikely as to be sufficient evidence of guilt), he wrote to the judge and the defence legal team laying out why this was a crazy perversion of logic and offering to act as an independent expert witness at a hoped for appeal/retrial.
Far as I know, neither the judge nor (more shockingly perhaps) the defence ever got back to him on it. That said, the conviction was eventually quashed.
This post has been deleted by its author
Agreed. I don't see any use of Bayes' Theorem in the two-deaths example as the author presented here. That was a straightforward case of the probability of an event and the probability of the event occurring twice, with the accompanying question of whether the two events are independent.
Possibly Bayes was used to compute the probability of the event in the first place - but if so, that's irrelevant to the case as described in the article.
Bayes' Theorem says that if you know the probability of B given A, and you know the probability of A and B on their own, you can compute the probability of A given B. It's pretty straightforward, and it's also irrelevant to the probability of a second "cot death" following a first one. Here the thesis proposed by the prosecution is that A and B are in fact two independent occurrences of the same event; thus P(A|B) = P(B|A) = P(A) = P(B). No need for Bayes at all.
If anything, in that case, it's the defense that should have brought up Bayes, after explaining independence and correlation and other basic concepts.
I haven't read the decision, but I suspect the real issue at hand is the abuse of Bayesian Inference, which is an aspect of the Bayesian interpretation of probability theory. Essentially it's a way to answer the question "how likely is this interpretation of the data to be true, based on our initial probability estimates and the data we've collected since?". That makes more sense, in the context of the first example, where the defense would want to challenge a Bayesian interpretation that misstated the actual posterior confidence.
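For anyone who wants the theorem stated above in concrete terms, here is a minimal sketch; the numbers are toy values assumed purely for illustration, not taken from any case:

    def posterior(p_b_given_a, p_a, p_b):
        # P(A|B) = P(B|A) * P(A) / P(B)
        return p_b_given_a * p_a / p_b

    # Assumed toy numbers: a hypothesis with prior 0.01, evidence seen 80% of
    # the time when the hypothesis holds and 5% of the time overall.
    print(posterior(0.8, 0.01, 0.05))   # -> 0.16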
I've always thought the X in a million chances of a DNA match was one of the dodgiest statistics around.
DNA is not randomly distributed. So to say "picked at random from the population there is a 1 in X chance of a match" is not the same as "the chance of a match with DNA picked at random from the town in which you and your ancestors have lived for over 1000 years is 1 in X".
"A DNA match to one in a million does not mean that it's a million to one against the bloke 'aving done it, m'lud. Rather, it means that in a population of 65 million that 65 people, based purely on the DNA, could have done it. Our DNA tests thus mean that we now have to go and exclude those other 65, or at least regard them as the prime pool of suspects, not convict our man in the dock purely on the basis that one in a million is beyond that reasonable doubt. Yes, these sorts of mistakes are made in the chain of reasoning."
Yes, they are.
THAT'S WHY WE USE BAYES' THEOREM. It's ONLY by applying Bayes' Theorem that you obtain paragraphs like the above.
Bayes' Theorem is not an "option", it is a necessity, otherwise travesties like Professor Sir Roy Meadow will happen constantly.
So what is the author arguing?
>>"THAT'S WHY WE USE BAYES' THEOREM. It's ONLY by applying Bayes' Theorem that you obtain paragraphs like the above."
No, it's perfectly simple to go from "1 in a million" to "there should be roughly 65 matching people in the UK population" by simple logic and extremely basic maths.
Bayes' theorem might be an expression of that, but the underlying logic would be there with or without any theorem, and for a court case, it would seem better to describe something simply in English than start chucking formulas around.
>>"Bayes' Theorem is not an "option", it is a necessity, otherwise travesties like Professor Sir Roy Meadow will happen constantly."
But surely the first problem there was an expert making the *medical* claim/assumption that cot deaths happen at random.
Given that assumption, if the assumption was wrong, wouldn't *any* maths be bound to give the wrong answer?
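The "roughly 65 matching people" arithmetic above really is this simple; the last line just adds the (strong) assumption that the DNA match is the only evidence and everyone in the population is an equally plausible suspect:

    population = 65_000_000          # rough UK population used in the article
    p_random_match = 1 / 1_000_000   # the quoted "1 in a million" match rate

    expected_matches = population * p_random_match        # about 65 people
    # If the match were literally the only evidence, the chance that the person
    # in the dock is the source is roughly 1 in 65, not 999,999 in a million.
    p_source_given_match_only = 1 / expected_matches
    print(expected_matches, round(p_source_given_match_only, 3))   # 65.0 0.015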
The allegation in the Norris case is slightly different, but still an interesting abuse of statistics. It seems that someone spotted a correlation between when his shifts were and when some of the people died. They then got a list of his shifts and looked for other deaths that might be considered suspicious at those times. Then, they claimed that it could not be a coincidence that all the suspicious deaths had occurred when he was around. This is, of course, not valid unless they had looked equally thoroughly for suspicious deaths at other times and found none.
From http://www.bbc.co.uk/news/uk-scotland-15127072 :
"The BBC has uncovered evidence of other similar cases of hypoglycaemia which occurred in the hospital where Norris worked but while he was off duty.
His lawyer, Jeremy Moore, believes there were serious flaws in the investigation and the convictions need to be quashed.
He said: "It seems that they trawled through hospital records looking for evidence of patients that might have died suspiciously but it seems they only cherry-picked those cases when Colin was on duty and ignored any others that might have occurred in the hospital."
There was definitely an issue with the stats in the SIDS case cited in Bad Science. However, it was nothing to do with Bayesian inference. In fact a prerequisite in Bayesian inference is that all observational evidence is independent. The 1 in 73 million probability came from a 1 in 8,543 probability taken from the occurrence in a population, squared. This is bog-standard probability theory.
Bayesian inference relies on a prior, which is the probability of a particular hypothesis without further evidence. This is then iteratively modified by the ratio of the probability of the evidence given the hypothesis to the probability of the evidence without the hypothesis (which actually gives a likelihood ratio, not a probability or 'odds').
I work with Bayesian inference and, applied correctly, it is astonishingly good at predicting likelihood. This ruling is a classic case of throwing the baby out with the bathwater.
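A quick sketch of why the squaring step above assumes independence, and how even modest dependence changes the answer. Only the 1 in 8,543 figure comes from the case; the relative risk of a second death is an assumed number for illustration:

    p_first = 1 / 8543                       # single-family SIDS rate quoted in the case
    p_if_independent = p_first ** 2          # about 1 in 73 million, as presented in court

    # Assume, purely for illustration, that a family with one SIDS death is
    # 10x more likely than baseline to suffer a second.
    relative_risk_of_second = 10
    p_if_dependent = p_first * (relative_risk_of_second * p_first)

    print(f"independent: about 1 in {1 / p_if_independent:,.0f}")   # 1 in 72,982,849
    print(f"dependent:   about 1 in {1 / p_if_dependent:,.0f}")     # 1 in 7,298,285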
This post has been deleted by its author
Sorry, are you related to amanfromMars, I didn't understand a word of what you said :-)
I always remember Bayes' theorem as the 'probability of an event B happening/not happening given that event A has happened/not happened, where A and B are dependent events'
What a lot of people do is attach meaning to statistical results where none exists, e.g. the probability of the numbers 1 2 3 4 5 6 being the winning numbers in the national lottery is the same as that of any other 6 numbers, but people attach meaning/probability to the sequence 1 2 3 4 5 6.
Perfectly rational statistics can also give the wrong impression. For example, saying 40% of all sick days are taken on a Monday or Friday gives the impression that there are excessive sick days taken on Monday and Friday, but Monday and Friday comprise 40% of the working week.
Having calculated a likelihood ratio, the scientist in R v T translated that likelihood ratio into an “expression of support”, using a standard scale:
Likelihood ratio in the range 1 to 10 = “weak support”
10 to 100 = “moderate support”
100 to 1,000 = “moderately strong support”
1,000 to 10,000 = “strong support”
10,000 to 1,000,000 = “very strong support”
>1,000,000 = “extremely strong support”
The judgment has caused a minor panic amongst “police” forensic scientists because they have been calculating likelihood ratios and translating those likelihood ratios into expressions of support even in cases where there is no objective data on which to base calculations.
For example, a scientist might guesstimate that the probability of observing “lots of blood” on a defendant’s clothing (as opposed to “small amounts of blood”) given that defendant is the attacker as 0.75. The probability of observing this finding if he was not the attacker, but merely came to the aid of the victim after attack, might be guesstimated as 0.25.
The process will be applied to different - hopefully, but not always, independent - findings (e.g. “lots of spattered blood”, “lots of blood on the cuffs”). A likelihood ratio is then calculated.
This LR is translated into a phrase using the table: “the scientific findings provide strong support for the view that Mr Defendant attacked Mr Victim rather than Mr Defendant having helped Mr Victim after the attack”.
The Court of Appeal judges criticised this process for its lack of transparency. Sometimes it's a scientifically rigorous approach that supports and documents an expert opinion. All too often it's pseudo-scientific claptrap.
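To make the process described above concrete, here is a minimal sketch using the two guesstimated probabilities from the example and the verbal scale quoted earlier; it simply shows how guesswork goes in and a confident-sounding label comes out:

    p_findings_if_attacker = 0.75   # guesstimated, as in the example above
    p_findings_if_helper = 0.25     # guesstimated, as in the example above
    lr = p_findings_if_attacker / p_findings_if_helper   # likelihood ratio = 3.0

    def support(lr):
        # the standard verbal scale quoted in the comment above
        if lr < 10:
            return "weak support"
        if lr < 100:
            return "moderate support"
        if lr < 1_000:
            return "moderately strong support"
        if lr < 10_000:
            return "strong support"
        if lr < 1_000_000:
            return "very strong support"
        return "extremely strong support"

    print(lr, support(lr))   # 3.0 weak support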
What is very worrying is the idea that you can convict someone based on probability. Improbable things *do* happen. That is why they are improbable, not impossible.
And considering how many people there are in the world, if it is improbable that an event happens to one person, it is much more probable that it happens to *a* person.
What if someone next to you gets struck by lightning, and you are convicted of assault because it's more likely that you assaulted them than them getting struck by lightning? Bad example, but justice? Pah.
>>"What is very worrying is the idea that you can convict something based on probability. Improbable things *do* happen. That is why they are improbable, not impossible."
Thanks for that lecture on probability. I'm sure we all needed it.
It seems you think that 'reasonable doubt' is wrong and courts should only convict based on absolute certainty.
Well, I guess that *would* save a lot of money in the judicial system.
Though on the other hand, when people start taking the law into their own hands, compared to the numbers of people wrongfully convicted, I wonder how many innocent people would get hurt in escalating vendettas, or as a result of wrongful accusations?
But all those examples, like the cot death, are from NOT using Bayesian reasoning.
Given that the chance of cot death isn't independent in two children with the same parental genes you need to use a Bayesian approach.
Using "proper" statistics is like saying that 1:4 people are chinese, my mum,dad and brother aren't - so I must be!
No, statistics would say there is a probability of 1 in 4 that you are chinese.
Similarly statistics might say that there is a 1 in 1 million chance that you will be killed in an airplane crash i.e. if you take 1 million flights you _will_ be killed in an airplane crash, however there is nothing in that statistic that say it won't on the first of those 1 million flights.
"i.e. if you take 1 million flights you _will_ be killed in an airplane crash"
An appropriate comment for an article about the abuse of statistics. In fact, if the risk is one in a million, the probability that you will be killed after a million flights is about 63%. You calculate this by raising the odds that you will NOT be killed on a particular flight (999999:1000000) to the power of the number of flights, and subtracting from one. (0.999999)^1000000 = 0.368.
It's easier to see it with a smaller number. If the odds of rolling a six on a dice is 1 in 6, the odds of rolling a six at least once in six rolls is 1 minus the odds of _not_ rolling a six, six times in a row - 1-(5/6)^6, or about 66%.
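The calculation described above, written out as a short sketch:

    def p_at_least_once(p_single, n):
        # chance of the event happening at least once in n independent trials
        return 1 - (1 - p_single) ** n

    print(p_at_least_once(1 / 1_000_000, 1_000_000))   # ~0.632, not certainty
    print(p_at_least_once(1 / 6, 6))                   # ~0.665 for six dice rolls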
This is an interesting article, but it is important not to mix up the use of Bayes' theorem with poor use of probability. The DNA 1 in a million example cited is simply a classic case of the prosecutor's fallacy, where the 1 in a million probability of seeing the DNA match evidence in an innocent person is wrongly assumed to be the same as the probability of innocence. Bayes' theorem actually helps avoid this kind of probabilistic fallacy. The problem with the R v T ruling is that it will have the impact of giving jurors less information than is actually available. So, instead of giving some useful probability information about the likelihood of a random match, experts will be reduced to vague statements like 'a random match is possible' or 'is unlikely'.
I was interviewed for an article in the Guardian about this case, see:
www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage
Here are some links to more detailed information about the issues raised:
About the RvT ruling and the probabilistic issues raised:
www.eecs.qmul.ac.uk/~norman/papers/likelihood_ratio.pdf
The draft proposal for a project to improve the current state of practice:
https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxiYXllc2xlZ2FsfGd4OjY4MzljYzNiYjNhYjI5MDA
A report describing common legal fallacies involving probability:
www.eecs.qmul.ac.uk/~norman/papers/fenton_neil_prob_fallacies_June2011web.pdf
A blog addressing all of these issues:
http://probabilityandlaw.blogspot.com/
Norman Fenton
Many cases besides high-profile miscarriages of justice point to the need to improve the accuracy and comprehension of evidence wherever possible and to avoid false reasoning. However, Prof. Fenton's approach has the potential to make matters worse than they already are. There is no guarantee that computer assisted Bayesian evaluation and presentation tools will provide a panacea for the warts and wrinkles of the criminal justice system.
In the paper he cites, outlining a proposal to use simple visual representations of Bayesian trees to convey the impact of forensic evidence, he claims that,
"... there should be no more need to explain [to the court] the Bayesian calculations in a complex argument than there should be any need to explain the thousands of circuit level calculations used by a calculator to compute a long division. Lay people do not need to understand how the calculator works in order to accept the results of the calculations as being correct to a sufficient level of accuracy. The same must eventually apply to the results of calculations from a [Binary Network] tool."
It would be uncharitable to suggest that he is set to undermine the principle of trial by one's peers and replace this with a computerised evidence evaluation machine, but it would be equally wrong to ignore the common law principles that should be at the basis of what goes on in court.
"A judge in a (sadly unnamed) British case has decided that Bayes' Theorem - a formula used in court to calculate the odds of whodunnit - shouldn't be used in criminal trials."
This makes me feel queasy.
I don't see how going back to fuddy-duddy reasoning and Bushian "gut feeling" is going to help.
Next: "A judge in has decided that Aristotelean logic - a formula used in court to decide whether a defendant belongs to a given set - shouldn't be used in criminal trials."
http://en.wikipedia.org/wiki/Prosecutor%27s_fallacy
It is precisely the failure to apply Bayes' theorem that is the most glaring abuse of statistical "reasoning" in most trials. But applying it requires lots of numbers, and if you estimate them it is far too easy to put a thumb on the scale, subconsciously or otherwise.
the point made by "xlq" is also a hot topic in statistical circles. One aspect of it is the Birthday paradox: if you have a large enough population, coincidences will happen.
A related problem does not have such a catchy name. If you do a whole lot of tests for statistically significant relationships among many variables that are actually not related, you will get some that test as significant at random. See recent articles about the problem of observational studies in health and medicine.
In criminal investigation, it is one thing to get a "five point match" on a fingerprint AFTER identifying a likely suspect, it is something else entirely to have a computer spit out a five-point match from a database of millions of people's prints.
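A rough sketch of the database-trawl point above; both figures are assumptions for illustration, not claims about real fingerprint systems:

    # Assume a test that falsely "matches" a random person 1 time in 100,000,
    # run against a database of 5 million prints.
    p_false_match = 1 / 100_000
    database_size = 5_000_000

    expected_false_hits = database_size * p_false_match        # 50 spurious matches
    p_at_least_one = 1 - (1 - p_false_match) ** database_size  # essentially 1.0
    print(expected_false_hits, round(p_at_least_one, 6))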
"A related problem does not have such a catchy name. If you do a whole lot of tests for statistically significant relationships among many variables that are actually not related, you will get some that test as significant at random. See recent articles about the problem of observational studies in health and medicine."
Or just see XKCD, the world's premier stick figure summary of interesting statistical trivia.
https://www.xkcd.com/882/
"the point made by "xlq" is also a hot topic in statistical circles. One aspect of it is the Birthday paradox: if you have a large enough population, coincidences will happen."."
The "Large Population" is 23 people by the way. Once you have 23 people the odds of two having the same birthday is over 50%.
Note that it is not a coincidence but a statistical requirement. A coincidence would involve matching birthdays with a designated person not matching everyone's birthday against that of everyone-else's. As you have more matches the probability goes up of having a match. Once you have a person 23 they have an over 50% chance of matching the birthday of one of persons 1-22 (none of whom share a birthday).
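The standard calculation behind the 23-people figure, for anyone who wants to check it:

    def p_shared_birthday(n, days=365):
        # probability that at least two of n people share a birthday
        p_all_distinct = 1.0
        for i in range(n):
            p_all_distinct *= (days - i) / days
        return 1 - p_all_distinct

    print(round(p_shared_birthday(22), 3))   # 0.476
    print(round(p_shared_birthday(23), 3))   # 0.507 - just past 50%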
Your DNA statistics are wrong. A DNA match of 1 in 1 million probability means that the particular DNA profile is present in 1 out of every 1,000 people in the general population, not one in every million. In DNA the probability quoted is the probability that the suspect, the sample, and a random person from the population will all three have the same DNA profile. To find the DNA odds you think they are saying, take the square root of the DNA expert's probability. So, for your example, a one in a million match means that there are about 65,000 people in the population with that DNA profile.
You know what Samuel Clemens said about statistics.
Mark Twain attributed it to Benjamin Disraeli, but no one really knows where it came from. Check out http://www.york.ac.uk/depts/maths/histstat/lies.htm
It's more than likely a very old saying, and I'd be surprised if it really did originate as recently as the late 19th century.
Anyway, the article's good and all, and having worked for the US Economics and Statistics Administration in the past I have a bit of an insight. In my opinion statistics probably shouldn't be considered evidential. Far too often prosecutors twist statistics to cover up for a lack of evidence. It's just how it is. And they get away with it far too often because defense attorneys are either too inept or simply too uneducated in statistical theorems to raise the inherent doubt about how sound the said theorem is. It's a shame really.
But really, what is the IT angle on this story? It really sounds like it should have been a sidebar in an Economics and Statistics or even Legal magazine/newspaper.
Sally Clark's wrongful conviction was not because Bayesian statistics had been used. To the contrary, it was because they hadn't; and because the use of the so-called prosecutor's fallacy which in part led to her conviction went unchallenged.
Neither is the article quite accurate in saying that the fallacy stemmed from the "[il]logic used by one eminent expert witness." Professor Sir Roy Meadow's presentation of the invalid multiplication of probabilities actually came from a government publication which he read out in court. After his earlier involvement in real cases of infanticide, Meadow may well have become somewhat biased, and quite likely he should have known better. But he seems to have been made into a scapegoat for a practice that is actually quite widespread.
http://www.bmj.com/content/324/7328/41.1.full.pdf
In 2005, the HoC Select Committee on Science and Technology looked into the use in court of expert witnesses, statistical probabilities, and the prosecutor's fallacy. I wrote to them to say that the publication used in Sally Clark's prosecution, which endorsed the invalid multiplication of probabilities, was still available from the Stationery Office, and suggested they might see fit to have it removed from circulation or at least ensure that it was properly amended. A curt reply informed me that this was not within their remit.
http://www.publications.parliament.uk/pa/cm200405/cmselect/cmsctech/96/9610.htm
I don't think that the adversarial system here in the UK works at all for scientific evidence. It's too easy for both sides to put forward 'experts' who disagree, and where does that leave a jury?
The legal system should be asking the scientific world how evidence is assessed. The scientific world would rightly say 'peer review and consensus'. No consensus, no conviction. That is an inquisitorial process, which the people involved in the legal system don't like one little bit.
The SIDS cases were appalling. A non-expert in statistics presented unchallenged 'facts' that were in fact horseshit. At no point was he required to show that he was qualified to do so. Why wasn't he charged with perjury? Why weren't the Court's officials charged with gross negligence???
This post has been deleted by its author
The more I have to do with the legal system, the more appalled I am by the poor standards that seem endemic in every part of it. That is sometimes due to the inadequacy of the individuals, but more often it is down to resources that are completely inadequate to deal with the complexity of the law.
The problem with legal aid is not that there are too many cases, or too many claimants; it is that the law costs so much, and it costs so much because it is often so complex and so poorly defined that the outcome of trials depends more on how much can be afforded than on the evidence.
This is a well known example of the misuse of probability, but well worth repeating, because this error is still made in all sorts of contexts, including both court cases and politicians making policy. A DNA match is, say, one in a million, and stated that way it is enough to make most people certain. However, try putting it another way: there are probably about 60 people in the UK that that DNA could belong to.
99% of serious sex offenders share a genetic trait and a previous criminal offence, so the two together are an excellent indicator of criminal propensity, and anybody with that genetic marker who commits that offence must be placed on the sex offenders register. That logic, or something very similar, has appeared in the supporting evidence for green papers. The fallacy is made obvious by two words: male and speeding.
Another all too common error: 80% of people doing X will go on to commit Y. The politician's or axe grinder's response is that X must be treated very seriously because Y is so horrendous, but that conclusion cannot be justified. Missing information: 90% of the general population will go on to commit Y. The correct deduction is the opposite - X is actually preventative.
These and other failures of logic are endemic in policy making where children are concerned. The issues are often very emotive, and the erroneous conclusions may support popular prejudice, so nobody thinks to question them. Important aspects of the recent Bailey Review lack a verifiable evidence + logic chain, and there is strong evidence that the unconsidered consequences are highly undesirable, but the government intends to implement it "in full". Our children deserve better.
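A quick check of the 80%/90% example above, using only the figures quoted there:

    p_y_given_x = 0.80     # 80% of people doing X go on to commit Y
    p_y_baseline = 0.90    # but 90% of the general population commit Y anyway

    relative_risk = p_y_given_x / p_y_baseline
    print(round(relative_risk, 2))   # 0.89 - X is associated with less Y, not more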