What can The Simpsons teach us about stats algorithms? Glad you asked...

When his class is asked to give an example of a paradox in The Simpsons, Bart offers: "You're damned if ya' do, and you're damned if ya' don't." The dictionary defines a paradox as an absurd or seemingly contradictory statement that might prove to be true, and when it comes to data a seemingly contradictory situation …

COMMENTS

This topic is closed for new posts.
  1. James 51

    Must read for government officials everywhere.

    1. Michael H.F. Wilkinson Silver badge
      Thumb Up

      Not just government officials, but many others in big data. Too often they assume that arbitrary aggregation will result in better statistics (e.g., because the standard error in the mean is reduced), whereas all too often piling up data from different sources in fact obscures certain effects.

      Well-written article again.

  2. Zog_but_not_the_first
    Thumb Up

    More, MORE!

    Need input!

    (Wot, no Number Five icon?)

    1. Bob Wheeler
      Joke

      Re: More, MORE!

      I am not a number, I'm a free man

      1. Evil Auditor Silver badge
        Trollface

        Re: More, MORE!

        "Free man" = real life trolls.

      2. earl grey
        Trollface

        Re: More, MORE!

        Well played, Number Six.

  3. foxyshadis

    So... what's the options, then?

    Deciding how to normalize this disparate data into something that can be combined into a single dataset is basically why Mr. Fancy Math gets paid to crunch numbers. How about expanding the article with that?

    1. Archimedes_Circle

      Re: So... what's the options, then?

      You could use multilevel modelling as a solution. Multilevel modelling is, in general, a solution for when subjects are clustered, such as when you want to explore the effect of spending on student performance. Typically you would run a regression; however, if you have multiple schools, the spending observations are not independent: they are correlated within each school, which typically inflates the Wald test statistics and produces spuriously small p-values.
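      The clustering effect described above can be sketched with plain numpy (synthetic data; a within-group "fixed effects" centring rather than a full multilevel model, so the school names and effect sizes here are invented for illustration):

      ```python
      import numpy as np

      rng = np.random.default_rng(42)

      # Synthetic data: 3 "schools", each a cluster of (spending, score) points.
      # Within each school, extra spending helps a little (slope +0.5), but
      # schools with higher baseline spending happen to have lower baseline scores.
      spend, score, school = [], [], []
      for i, (s0, y0) in enumerate([(10, 80), (20, 60), (30, 40)]):
          x = s0 + rng.normal(0, 1, 50)
          y = y0 + 0.5 * (x - s0) + rng.normal(0, 1, 50)
          spend.append(x); score.append(y); school += [i] * 50
      spend = np.concatenate(spend)
      score = np.concatenate(score)
      school = np.array(school)

      # Pooled (naive) regression ignores the clustering entirely:
      pooled_slope = np.polyfit(spend, score, 1)[0]

      # "Within" estimator: centre each school on its own means first,
      # so only the within-school variation drives the fit.
      xs = spend - np.array([spend[school == g].mean() for g in school])
      ys = score - np.array([score[school == g].mean() for g in school])
      within_slope = np.polyfit(xs, ys, 1)[0]

      print(pooled_slope)   # strongly negative: between-school differences dominate
      print(within_slope)   # close to +0.5: the within-school effect
      ```

      The pooled fit and the within-school fit disagree even on the sign of the effect, which is Simpson's paradox in regression form.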

  4. Erebus
    Headmaster

    Monty Hall, anyone?

    http://en.wikipedia.org/wiki/Monty_Hall_problem

  5. Androgynous Cupboard Silver badge

    Presumably there's a corollary - if you start with a big dataset and don't get the trend you want, selectively exclude some results until you do. Hooray for statistics!

    1. Michael H.F. Wilkinson Silver badge

      Or you plot the data or the residuals and check whether the linear trend you are assuming is actually evident.

    2. fch

      "prove causation" != "observe a [certain amount of] correlation"

      If even the author of the article makes that basic mistake, then there's nothing left to conclude but you're correct - the purpose of statistics is to prove someone's point / back up someone's claim, not to ... learn anything from the data.

      Go ... [massage] figure[s] !

      1. MrT

        Re: "prove causation" != "observe a [certain amount of] correlation"

        Bang goes the secret behind soooo many tabloid headlines...

        Still, while we're having fun with correlations, have a look at this collection... the one I like best is 'German passenger cars sold in the US' correlating with 'Suicides by crashing of motor vehicle' - "It's just not as good as everyone told me - goodbye cruel world!" - although the Divorce rate in Maine having anything to do with Per capita consumption of margarine (US) surely must have some mileage with the dairy industry...

        1. This post has been deleted by its author

          1. Anonymous Coward
            Joke

            Re: "prove causation" != "observe a [certain amount of] correlation"

            Hey, sucka! I saw that on the BBC a couple of days ago. My favourite was "Per capita consumption of cheese (US)" correlates with "Number of people who died by becoming tangled in their bedsheets" with a correlation of 0.95.

            But that is causation - it's well known that eating cheese at night causes bad dreams and so more tossing and turning and so more chance of getting caught up

        2. Florida1920
          Coat

          Re: "prove causation" != "observe a [certain amount of] correlation"

          although the Divorce rate in Maine having anything to do with Per capita consumption of margarine (US) surely must have some mileage with the dairy industry...

          It's an udder mystery.

          One with Milk Duds in the pocket.

      2. LionelB Silver badge

        Re: "prove causation" != "observe a [certain amount of] correlation"

        To be fair, I think the author of the article is quite aware that causation != correlation - hence the statement "Let’s take it as read that not only is there a correlation ... but that causation is also at work", which I read as him saying that causation has been established by some other (unstated) means.

        1. fch

          Re: "prove causation" != "observe a [certain amount of] correlation"

          I'm almost tempted to say "QED" ... as I've quoted without context to make my point more obvious. The mere mention of "cause" in the same context [ sentence ] as "statistical correlation" is a huge b*llsh*t honeypot. People have somehow got it into their heads that statistics prove causation, and the explicit mention (as you did) of first needing a testable theory with a claim of causation and a prediction of measurable changes - that's so conveniently left out.

          I do wonder who started down that slippery slope ... when I retire [ never ... ] I may do a PhD on the history of politics/economics to find out :-)

          1. LionelB Silver badge

            Re: "prove causation" != "observe a [certain amount of] correlation"

            "I do wonder who started down that slippery slope ... "

            Probably cavemen... X seems to be associated with Y, ergo X causes Y (or Y causes X, so e.g. being fat causes you to eat a lot).

            Anyway, there are statistical techniques for dealing with causation - they inevitably involve time precedence (as in cause precedes effect), and inevitably generate controversy as to their interpretation/usefulness. Surprisingly (or perhaps not), there is little consensus in philosophy/science/mathematics/statistics over what "causation" even means. Here's a neat quote from Clive Granger, economist, statistician, Nobel laureate and inventor of the "Granger causality" statistic, (from his Nobel acceptance address):

            "At that time, I had little idea that so many people had very fixed ideas about causation, but they did agree that my definition was not “true causation” in their eyes, it was only “Granger causation.” I would ask for a definition of true causation, but no one would reply. However, my definition was pragmatic and any applied researcher with two or more time series could apply it, so I got plenty of citations. Of course, many ridiculous papers appeared."

            1. Michael Wojcik Silver badge

              Re: "prove causation" != "observe a [certain amount of] correlation"

              Probably cavemen... X seems to be associated with Y, ergo X causes Y (or Y causes X, so e.g. being fat causes you to eat a lot).

              Ample psychological research[1] provides strong evidence that mistaking correlation for causation, along with related logical fallacies such as post hoc ergo propter hoc, are basic features of human cognition. Plenty of people have argued that these are useful survival traits[2] and so were selected by evolutionary processes, though that's just speculation.

              Certainly, given the apparently universal nature of the habit, it looks unlikely to be cultural.

              On a related note, McRaney's nice collection of short essays on various forms of fallacy, mental error, delusion, etc, from his website, have been collected into a book, and there is now a sequel. At least the first volume should be required reading for, well, everyone. (Even if you're already familiar with all the phenomena he describes, it's full of nice examples for demonstrating them to other people.)

              [1] Conducted in about as methodologically sound a fashion as is possible when you're dealing with human subjects, and we're not likely to have anything better than that anytime soon.

              [2] Usually with some variation of the "shortcut" argument: reasoning takes a lot of time and biological resources, and postpones decision-making, so the human mind opts for a quick heuristic decision that may be revised later (but often isn't).

  6. Cliff

    Tree ring plus measured?

    Is this what caused the climate change denial camp's recent offensive? Something to do with the way tree ring data and directly measured temperature data was used after the 1961 correlation problems? By stealth, did this paradox damage the reputation of climate science just by existing?

    1. T. F. M. Reader

      Re: Tree ring plus measured?

      I must have let my vigilance slip - not sure what denialist offensive you have in mind. I have not heard anything about tree rings for years.

      I recall reading the first and then the second paper on tree rings as a proxy for historic temperature measurements. I am too lazy to check, and my memory may be faulty after all these years, but if it isn't, the first sample consisted of 3 stumps and the second of 21 or so. Both samples were from basically the same place. I decided to discount any conclusions that could be drawn from either sample (or both - it may well be that the samples were similar enough that the Simpson paradox would not manifest itself) regarding the temperature history of the planet as a whole at that point.

  7. GumboKing
    Alert

    I disagree with the initial analysis of the first two graphs

    While I understand the paradox and agree the two sets should not be merged, the initial analysis of the first two graphs does not "prove that the act of spending more on advertising really is directly causing the product to sell better". It seems to show that an increase in spending is causing it to sell better up to a certain point, and past that point it will actually decrease.

    For Client A they should cap spending at 45 pounds and Client B should cap spending at 350.

    For Client A, two campaigns at 30 pounds should have generated around 600K in results, three 20 pound campaigns would have generated 750K whereas the 60 pound campaign only generated around 325K.

    As an imaginary Client, I would be pretty ticked off to learn I could have had much better results running 2 or 3 campaigns of lesser value instead of just one expensive, less effective campaign.

    P.S. (Paradox Sidenote) in Back to the Future II there is a Pair of Docs Paradox.

    1. Anonymous Coward
      Black Helicopters

      Re: I disagree with the initial analysis of the first two graphs

      Quite right, it assumes the existence of a cause; it doesn't prove it.

      There's also an assumption that to take an 'average' of 2-dimensional scatter data you can simply average the X and Y components separately. For those who want to try it for themselves, it's a very interesting and educational exercise to 'prove' the formula for averaging from first principles, then expand the solution into the 2- and N- dimensional cases. It's only when you take the same approach to second and higher order statistical methods (standard deviation, etc.) that it starts to go a bit wibbly. But if you've genuinely got multi-dimensional data (i.e. vectors, not independent values) you really should be having a crack at it.

      [Helicopter icon, because they churn out vectors like there's no tomorrow].

  8. John Tserkezis

    It's Freakonomics all over again.

    It happens all the time, all over the place. One example I cite frequently, is in the "Freakonomics" "what's in a name" chapter.

    There is a quite clear statistical relationship between a child's performance and their name. So how does a name affect one's school exam scores? Easy: it doesn't. It doesn't make a squat of difference. The difference is in genetics, upbringing, and whether the child was an accident or planned.

    The most likely answer is that well-educated parents, who have their kids later in life and by plan, will probably name their kid some sensible, ordinary, conservative name. On the other side of the street, the type of mother who names her kid "Temptress" is probably on the lower end of the socioeconomic scale, had her kid at 16, and was too busy lighting farts to worry about school exams. The name isn't the cause, it's the effect.

    Yet, even today, parents are increasingly naming their brats ever-exotic names in the at least subconscious effort that it's going to make a difference.

    Which brings me to my point (yeah, I know it took a while): it's easy to assume the average plob doesn't know anything about statistics, but the people who analyse databases come from the same group. They're at the same risk of mistaking cause and effect.

  9. kmac499

    Downhill all the way ??

    Sounds a lot like the way I was taught algebra "you can't add Apples and Oranges to get one answer, same goes for 'x' and 'y' "

    What would be even more worrying is if there were a reverse Simpson's, where two negative correlations sum to make a positive.. (Drug Trials, Diet Foods, etc...)

    1. Anonymous Coward
      Big Brother

      Re: Downhill all the way ??

      I don't see any reason why there wouldn't be. If one set that showed a negative trend was generally higher than another set that showed a negative trend, when combined, the resultant graph could trend upwards. It's just people aren't usually that interested in negative correlations to try combining them. Unless there was devious intent afoot...

    2. AaronG

      Re: Downhill all the way ??

      @kmac499

      Data Set 1:

        A    B
        4   15
        7   13
        9   10
       13    5

      Data Set 2:

        A    B
      122   95
      127   82
      133   71
      137   55

      Both show a negative trend.

      Combined, albeit very artificial looking, they show a positive trend.
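      A quick numpy check of the trends, using exactly the numbers above:

      ```python
      import numpy as np

      # AaronG's two data sets from the comment above
      a1, b1 = np.array([4, 7, 9, 13]), np.array([15, 13, 10, 5])
      a2, b2 = np.array([122, 127, 133, 137]), np.array([95, 82, 71, 55])

      slope1 = np.polyfit(a1, b1, 1)[0]     # trend within set 1 (negative)
      slope2 = np.polyfit(a2, b2, 1)[0]     # trend within set 2 (negative)

      # Pool both sets and fit one line through all eight points:
      slope_all = np.polyfit(np.concatenate([a1, a2]),
                             np.concatenate([b1, b2]), 1)[0]

      print(slope1, slope2, slope_all)      # negative, negative, positive
      ```

      The combined slope flips sign because set 2 sits far up and to the right of set 1, so the between-set offset swamps the within-set trends.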

  10. Richard Barnes

    For real expertise...

    ..... in manipulating different data sets to produce misleading statistics, look at the marketing departments of Fund and Asset Managers. When one fund gets closed down and merged with another, all sorts of statistical shenanigans are possible.

    1. Jimbo 6

      Re: For real expertise...

      The whole article reminded me of a favourite Dilbert cartoon -

      Boss : Use the CRS database to size the market.

      Dilbert : That data is wrong.

      Boss : Then use the SIBS database.

      Dilbert : That data is also wrong.

      Boss : Can you average them ?

      Dilbert : Sure. I can multiply them too.

      (That and many more at http://stats.stackexchange.com/questions/423/what-is-your-favorite-data-analysis-cartoon )

    2. Michael Wojcik Silver badge

      Re: For real expertise...

      Or check out Huff's classic text How to Lie with Statistics. Full of fun projects for the whole family!

  11. Anonymous Coward
    Anonymous Coward

    Thanks for the series

    It is very enlightening

  12. William Boyle

    Statistics...

    To quote the Bard - there are lies, damned lies, and then there are statistics! Having marketing people use statistics to prove a point is a pure oxymoron, and proof of the previous statement.

    1. veti Silver badge

      Re: Statistics...

      If by "the Bard" you mean Shakespeare, I'm pretty sure the word "statistics" occurs precisely zero times in the whole of his collected works.

      That bon mot comes from someone in the 19th century, though it's by no means clear who.

  13. Mark Simon

    The moral of the story …

    Never use averages as the source of your data. Anything which combines data has already lost important detail.

    This also applies to democratic elections.

    Government is won by the party which gains the most seats. However, the majority of a majority is not always a majority: for example, 60% of 60% is only 36%.

    At least twice in recent Australian political history, the government had the majority of seats but the opposition had the majority of the overall popular vote. On another occasion, one minor party had 10% of the popular vote, but not one seat.

    Isn’t statistics wonderful? Lies and Damned Lies …

    1. Evil Auditor Silver badge

      Re voting

      I don't know much about Aussie voting but as far as I see it, it's got nothing to do with averages and more with the particular way the voting system is set up, e.g. thresholds, definition of constituencies.

    2. Frumious Bandersnatch

      Re: The moral of the story …

      Never use averages as the source of your data. Anything which combines data has already lost important detail.

      Oh, I don't know about that. While reading the first article in the series (and again, with the German tank problem) I was slightly disappointed not to see Little's Law listed. Now there's an interesting (and valid) application of averages...

  14. Anonymous Coward
    Anonymous Coward

    Normalization???

    Getting numbers normalized, when appropriate, works wonders. Spending x% more yielding y% more revenue should correlate in both sets. But for cripe's sake don't just add numbers to the same pool.

    With statistics and fallacy you can prove smoking is healthier, or my favorite: 30% of car crashes are caused by drunk drivers... so you'd rather be drunk than be the cause of the other 70% of crashes. And don't ever call them accidents when a drunk driver is involved, because *it was no accident*.

  15. willi0000000

    hmmmmm..

    my takeaway on this is that the only good statistician is an unemployed* one.

    * assuming that the unemployment was caused by a refusal to use statistics to "prove" his last boss' completely untenable position.

  16. Herby

    Lies, Damn lies, and Statistics.

    All bad. When my mom took a statistics class, the professor went further. He had a bunch of examples of "this is how you can alter the results to your liking". It happens all the time. Of course EVERY politician uses these techniques, so nothing is new in the world.

    Of course, my dad also told me something: "Don't average averages, it doesn't work". You can't average 100 MPH and hope to catch up doing 300 MPH to compete with a 200 MPH vehicle. It just doesn't work.

    Mom & Dad were smart! MBAs do that to you (yes, they both had MBAs)!

    1. veti Silver badge

      Re: Lies, Damn lies, and Statistics.

      Actually, that works just fine. If you drive at 100 m.p.h for an hour, then 300 m.p.h. for another hour, you'll have covered 400 miles - and you will catch up with the car that was doing 200 m.p.h. the whole time.
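      Both commenters are right under different assumptions: an average of speeds is time-weighted in veti's scenario, but distance-weighted (the harmonic mean) if each speed is held for the same distance. A quick sketch of the two:

      ```python
      # Averaging two speeds depends on the weighting.
      v1, v2 = 100.0, 300.0

      # Equal TIME at each speed: the arithmetic mean applies (veti's scenario).
      # One hour at each speed covers 400 miles in 2 hours.
      time_avg = (v1 + v2) / 2

      # Equal DISTANCE at each speed: the harmonic mean applies instead.
      # e.g. 100 miles at 100 mph (1 h) then 100 miles at 300 mph (1/3 h)
      # is 200 miles in 4/3 hours, i.e. 150 mph, not 200.
      dist_avg = 2 / (1 / v1 + 1 / v2)

      print(time_avg, dist_avg)
      ```

      So "don't average averages" really means: know what weighting your average carries before you combine them.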

  17. J.G.Harston Silver badge

    I saw this all the time observing the election counts last week.

    "Ooo! we're 20% ahead in Box D, it's looking good for us!"

    Ah, but only 60 people voted in Box D, about 600 people voted in each of Boxes A, B and C and the other candidate only has to be 0.666% ahead in all three other boxes to win.

  18. Joe Loughry
    Happy

    colour coding is brilliant

    Thank you for this. It's the first time I really understood Simpson's Paradox, despite having seen it before. The breakthrough was the colour-coded plot in Figure 4. Brilliant way of explaining it.

    Would you care to step into a time machine and take the place of my *first* statistics teacher in 1982?

  19. Frumious Bandersnatch

    Doh

    The author mustn't have got the memo at Vulture Towers. I thought the current rule was "no Simpsons jokes, please – we're adults here..." [paragraph 3].

  20. Anonymous Coward
    Anonymous Coward

    Meta-analyses

    See the paradox quite often.

  21. Qwixel

    What problem? Without normalizing your scales, you are comparing apples to oranges.

    That's it. Problem solved. Someone give me government grant money.
