Wiley shuts 19 scholarly journals amid AI paper mill problems

US publishing house Wiley this week discontinued 19 scientific journals overseen by its Hindawi subsidiary, the center of a long-running scholarly publishing scandal. In December 2023 Wiley announced it would stop using the Hindawi brand, acquired in 2021, following its decision in May 2023 to shut four of its journals "to …

  1. sitta_europea Silver badge

    What happened to the review processes?

    1. lnLog

      a right mess

      The whole system is a machine to make profit and keep pension funds ticking over.

      Editors are seldom paid positions, and the peer review process is not paid either; researchers have to pay to submit papers, and if they want the paper to be available to all they must pay extra, or it goes behind a paywall.

      The prevalence of fraudulent and/or poor-quality papers means a vast number of academics simply do not respond to requests to provide peer review, and yet their institutions actively push them to generate as many papers as possible, with no recognition for time spent reviewing.

      Many editors at the 'higher end' journals will ignore papers that do not have someone significant on the author list, or that are submitted from an institution or country that does not fit their preferences, either ignoring or favouring 'third world' submissions depending on the field.

      1. This post has been deleted by its author

      2. gobaskof

        Re: a right mess

        "Editors are seldom paid positions, similarly the peer review process is also not paid, researchers have to pay to submit papers and if they want the paper to be available to all they must pay extra, or it goes behind a pay wall."

        I've never seen pay-to-submit in the 30 or so papers I have submitted, but yes, peer review is never paid, and it is very expensive to have your paper open access. Open access has become a massive scam: many funders require papers to be open access, but a journal is allowed to take payment for open access on some papers while keeping others behind a paywall. The result is that many institutions are now paying the journal to publish, and then paying them again for a subscription to read other work. It would be one thing if the system worked well, but we are expected to do all of the typesetting, proofreading, and very specific formatting for them. Many journals don't provide much more than a website and a brand name. Does this make it fast... no! I was just told my next paper will be released this June; it was accepted for publication in November 2023, and I submitted it in August 2023. I have seen it take much longer.

        Think how much faster science would be if we didn't spend years waiting for the person we paid £2k to put a PDF on their website. Peers can still review the work even if it is available from day one (this is what F1000 do). The system is broken. The system was incredibly broken before AI. If AI can highlight how broken the system is, then that is finally a great use for AI.

        1. Ian Johnston Silver badge

          Re: a right mess

          It has long been the case - in my areas of interest anyway - that actual publication is irrelevant. Everybody works on preprints ... by the time the paper is published the field has moved on and so all publication counts for is the distorted REF score.

      3. Snowy Silver badge
        Joke

        Re: a right mess

        Considering many institutions are just paper mills, adding AI just makes it quicker.

        1. Ian Johnston Silver badge

          Re: a right mess

          95% of all papers are worthless, rising to 99% in pseudosciences like psychology.

          1. DrBobK
            Headmaster

            Re: a right mess

            Why do you think psychology is pseudoscience? What credentials and scientific background do you have that might give your assertion any credence? Do you even know what academic psychologists do? What about neuroscientists? Are they conducting pseudoscience too? What about biologists? More pseudoscience? Is computer science 'science' in your opinion? I picked those fields because there are plenty of academic psychologists who publish in journals in those areas. For my part, I've published in The Journal of Neuroscience, Current Biology, Nature Human Biology, Experimental Brain Research, and Parallel Computing amongst many others. I work in a psychology department.

            1. Anonymous Coward
              Anonymous Coward

              Re: a right mess

              The reason people think psychology is a pseudoscience is that multiple replication studies have indicated the majority of published psychology papers are not replicable. Some of the most famous and often-quoted experimental results have turned out to be illusory when anyone has tried to replicate them. It's not the only field to have a reproducibility crisis, but it's hard to think of any that has it worse.

              1. DrBobK
                Headmaster

                Re: a right mess

                You are wrong to paint all of psychology as being hit by the replication crisis. There were plenty of terrible problems in e.g. social psychology, but other areas (the basic science stuff, sensory systems, the neuroscience end of things) were untouched. I've been on a couple of panels discussing these issues in psychology and other areas. Some of the problems were down to outright fraud (social priming studies and the like) and some were down to (very) poor statistical practices. The latter tend to affect sub-disciplines in which researchers put undue emphasis on single values derived from statistical tests ('p-hacking') rather than considering patterns of findings in relation to theory (that's probably why work on sensory systems stands up pretty well: it is heavily theory-driven). It really isn't clear whether some disciplines outside psychology have similar problems; people haven't looked closely enough, having concentrated on fraud rather than poor scientific practice. Anyway, I'm glad you did have a decent rationale for your comment rather than thinking that we're all still Freudians, God forbid! All the business at Hindawi was, of course, plain old fraud.
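
                To make the 'p-hacking' point concrete, here is a minimal simulation (hypothetical numbers, nothing from any real study): measure twenty unrelated outcomes on two groups drawn from the same population, report only whichever test happens to come out "significant", and a spurious p < 0.05 turns up in well over half of the pretend studies.

                ```python
                # p-hacking in miniature: many tests, no real effect, report only the best p-value.
                import numpy as np
                from scipy import stats

                rng = np.random.default_rng(0)
                n_studies, n_outcomes, n_per_group = 1000, 20, 30

                false_positives = 0
                for _ in range(n_studies):
                    # Both groups come from the same distribution, so the null is true for every outcome.
                    a = rng.normal(size=(n_outcomes, n_per_group))
                    b = rng.normal(size=(n_outcomes, n_per_group))
                    pvals = stats.ttest_ind(a, b, axis=1).pvalue
                    if pvals.min() < 0.05:   # the "hack": keep only the best-looking test
                        false_positives += 1

                print(f"'Significant' finding in {false_positives / n_studies:.0%} of studies")
                # Expect roughly 1 - 0.95**20, i.e. about 64%, despite there being no effect at all.
                ```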

                1. Anonymous Coward
                  Anonymous Coward

                  Re: a right mess

                  The comment about psychology being pseudoscience was not mine, though I understood why Ian Johnston made it, hence my reply.

                  I have wondered if there is a more fundamental problem with social psychology experiments: that the initial conditions are not well controlled. You have a set of human test subjects whose brains are already full of unmeasurable preconceptions and life experience before the experiment starts. The results might depend more on that than on the method of the experiment. If that's the case, any attempt to replicate with a different group of subjects is likely to fail.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: a right mess

                    Maybe it's too late on a Friday afternoon to be thinking about such things, but if I'm right about the above, it occurs to me we should be glad that there is a reproducibility crisis in social psychology.

                    How would you build a well-controlled social psychology experiment? The only way I can think of is to build a computer so powerful that it can simulate an entire human brain - or society of human brains. You could do your experiment on those simulated brains, reset them back to their initial conditions, and repeat your experiment to see how reproducible it was.

                    If you look around you and find you're living in a world where social psychology doesn't have a reproducibility crisis, despite the non-existence of such powerful computers, then you're probably in one.

            2. Anonymous Coward
              Anonymous Coward

              Take a breath

              While you may have devoted plenty of study time to your chosen field, you kinda blew your defense of it in the first line. I worked in neuroscience, and I tend to call out people who are gatekeeping or using a weak appeal to authority to slap someone down, even in an uncivil exchange. So you lost any hope of making your case when you started putting words in the other guy's mouth and implying, without actual knowledge, that he wasn't qualified to hold an opinion, let alone to challenge yours.

              In both my field and yours there has been a serious reproducibility crisis for some time. For too long, poor methods and conjecture were pervasive, and as this article points out, the peer review process is a hot mess in general, but especially in the social sciences. If you publish a cryptography paper, no one just takes your word for it. People do the math. Neuroscience is slowly facing these issues, but is also taking a "don't fight in front of the kids" approach, where much of that discussion is not in the public sphere. We are a long way from where the particle physics teams are. They don't go to print without a long row of 9s.
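
              For a sense of scale on that "row of 9s" (rough arithmetic, not a claim about any particular experiment): the usual particle physics bar for a discovery is 5 sigma, which corresponds to roughly a 1-in-3.5-million chance of a fluke, versus the 1-in-20 implied by the p < 0.05 convention leaned on elsewhere.

              ```python
              # Rough comparison of evidence thresholds: 5 sigma vs p < 0.05.
              from scipy.stats import norm

              p_5sigma = norm.sf(5)   # one-sided tail probability beyond 5 standard deviations
              print(f"5 sigma: p ~ {p_5sigma:.1e} (about 1 in {1 / p_5sigma:,.0f})")
              print("p < 0.05: a 1-in-20 false-positive rate when there is no real effect")
              ```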

              Psychology hasn't (from the outside at least) seemed to want to tackle that head on. Sociology is even worse. So if I hear a barb about bad research in my field, or a barb in general, I take it as an opportunity to acknowledge failings like 20 years of "let's blast 40 people's heads with radiation and declare we found the X region" papers, claiming weak statistical correlations on diffuse patterns that never hold up in subsequent tests. I talk about failures in peer review, and about jargon so dense that even specialists find everything but the abstract nearly impenetrable, thanks to poor writing and explanation in the body of the paper. I do that because it's better for all the fields I work in not just to acknowledge the problems but to try to fix them, and that takes thick skin.

              1. DrBobK

                Re: Take a breath

                See upcoming comment... The original critical comment was actually OK because it turned out it was based on a decent rationale (the replication issue), rather than some old 'all psychologists put people on couches and ask them about their mothers/fathers' trope. You are right, I shouldn't have jumped to the wrong conclusion.

      4. Ian Johnston Silver badge

        Re: a right mess

          I once said that a paper I was sent for review was unsupported gibberish, because it was unsupported gibberish. The paper was published and I was never asked to review for that journal again.

        1. imanidiot Silver badge

          Re: a right mess

          I assume that particular paper was also duly struck off your list of reliable sources?

          1. GrumpenKraut

            Re: a right mess

            Rather the journal.

        2. tiggity Silver badge

          Re: a right mess

          Decades ago (before switching from the biochemistry/biology field to computing) I had an interview with a company, let's call them Many Oceans. I was sent a paper in the post (like I said, ages ago) to prepare a presentation on for the interview; it was by a US academic (though nationality is irrelevant).

          The paper was dire: high on opinion, low on evidence (as is the case with so many papers, lacking the hard numeric data for you to analyse yourself), and the few graphs & stats presented were a farce, as the conclusions drawn bore no relationship to the stats & graphs presented.

          So my presentation was metaphorically ripping the paper to shreds.

          The interviewers explained that the academic who was lead author on the paper was their star new signing, and the role I was applying for was on his team. I walked out at that point, explaining that as the paper's claims were due either to fraud or to incompetence*, I would not be able to work "under" that person, whichever of the two it turned out to be.

          Until ALL** the experimental data is available for the work done, looking at most papers all you can do is fact-check the small amount of data presented, and if no egregious stats errors leap out, assume it is OK. If all the results were there, you could do your own statistical analysis. Decent researchers will provide raw data on request, but it would be nice if raw data had to be published somewhere.
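
          (To illustrate what I mean by doing your own analysis: a minimal sketch with made-up numbers standing in for published raw values. If a paper only gives you means and a tidy bar chart, you cannot even do this much.)

          ```python
          # Re-analysis sketch: with the raw per-sample values published, anyone could
          # re-run the basic statistics instead of taking the headline claim on trust.
          # The numbers below are invented purely for illustration.
          from scipy import stats

          control = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]
          treated = [4.6, 4.2, 4.9, 4.4, 4.1, 4.8, 4.5, 4.3]

          result = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
          print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
          # A genuinely "significant" effect should survive this kind of check; cherry-picked
          # or massaged summaries often do not once the raw data are on the table.
          ```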

          There are wider issues with papers too: journals are mostly looking for "positive" results, with little interest in publishing an article that says compound X has no significant effect on biological activity Y, but these "negative" results are still very useful. Often the only way to include some "negative" results is within an overall "positive" paper, mentioning that it was of note that A, B and C had significant effects but V, W and X did not.

          After all, a lot of time and effort is spent on things that produce "negative" results.***

          * TBF, a lot of scientists are clueless on stats and so do not realise they are often making claims without any significant numbers to back them up. I happened to have a solid maths grounding, and also did some more maths courses at uni; in fields like biology and psychology, where maths A levels were not mandatory, you could complete a degree without ever having had a strong maths grounding.

          ** It is too easy to cherry-pick the data, discarding "outlier" results that do not fit your hypothesis. You sometimes get hints of this, as the stats can occasionally look a bit too good to be true.

          *** I have a friend who works in cancer research: an awful lot of "negatives", but at least that's a field where people are interested in knowing that compounds that looked a decent bet turned out not to be, so "negatives" are easier to publish there. Similar with room-temperature superconductors, which are not really a thing yet; people would currently be ecstatic over something that could be produced easily and cheaply, at useful size and strength, that was a superconductor at something as "warm" as -50 C.

    2. DS999 Silver badge
      Trollface

      Since researchers hate reviewing papers

      Maybe they decided to bypass them and have ChatGPT do the reviewing

      1. Snowy Silver badge
        Trollface

        Re: Since researchers hate reviewing papers

        Written by AI, Reviewed by AI, then read by AI to improve the AI.

        AI all the way down!!

  2. that one in the corner Silver badge

    "Evidence-Based Complementary and Alternative Medicine"

    sounds a rather paradoxical title, but it has gone now. "Journal of Oncology" was apparently respectable, at least up to a few years ago (when my correspondent retired from the field), whilst "Scientific Programming" is another one that sounds, if not contradictory, then outrageously optimistic[1].

    The idea of an AI-spammed "Mathematical Problems in Engineering" receiving submissions along the lines of "if Jane has a 2km bridge, along which Jack drives a 5 tonne lorry containing 17 apples, what is the required Young's Modulus of the support cables?" is scary enough to give the now ex-editors of "Sleep Disorders" nightmares.

    But it is good to know that they promised all the extant material would be kept in the archives and would not be forgotten.

    Otherwise, "Scanning" lived in vain.

    [1] Don Knuth chose his titles carefully

    1. An_Old_Dog Silver badge
      Boffin

      Job Title: "Scientific Programmer"

      I knew someone whose job title was "Scientific Programmer." It simply meant that he wrote application programs which supported the company's scientists' needs and requests.

      1. Bebu
        Windows

        Re: Job Title: "Scientific Programmer"

        David Gries's classic The Science of Programming (Springer-Verlag, New York, 1981) is a glimpse of what scientific programming might be.

        Not exactly for the faint-hearted perhaps. :)

      2. LionelB Silver badge

        Re: Job Title: "Scientific Programmer"

        Yes - while ambiguous, that is the commonly-understood meaning of "scientific programming" (at least amongst scientists, if not programmers at large).

        1. tiggity Silver badge

          Re: Job Title: "Scientific Programmer"

          Ironically, I first did intensive programming for a reason (rather than the hobbyist for-fun stuff that started back with an Acorn Atom) when I was a science student: I needed to model the effects of various drugs on sodium transport in cell membranes (for my graduate thesis), and there was nothing around to do that (this was a looong! time ago), just a few pieces of pharmacokinetics software, but Mickey Mouse stuff really, so it made sense to code it myself, as the maths was non-trivial and there were a lot of data points.

          I was a scientific programmer for myself.

          1. LionelB Silver badge
            Pint

            Re: Job Title: "Scientific Programmer"

            "... that started back with an Acorn Atom) ... (this was a looong! time ago) ..."

            He, he. I'll take your Acorn Atom and raise you IBM System/370, Fortran 66 on punched cards (yes, I is that old).

            As I recall, the first program I ever wrote was a Lotka–Volterra predator–prey simulation as part of an Applied Maths course. When I came to the UK in the late 70s as a postgrad student (pure maths), I found myself much in demand in particular by the Engineering students, as I was the only person they knew who had any programming experience whatsoever. Many beers ensued.
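
            (For the curious, something like that first program, sketched in present-day Python rather than Fortran 66 on cards, with made-up parameters rather than whatever the original coursework used.)

            ```python
            # Toy Lotka-Volterra predator-prey model, illustrative parameters only.
            import numpy as np
            from scipy.integrate import solve_ivp

            alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # invented rate constants

            def lotka_volterra(t, y):
                prey, predators = y
                return [alpha * prey - beta * prey * predators,
                        delta * prey * predators - gamma * predators]

            sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], t_eval=np.linspace(0, 50, 11))
            for t, prey, predators in zip(sol.t, *sol.y):
                print(f"t = {t:4.1f}   prey = {prey:7.2f}   predators = {predators:7.2f}")
            ```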

            After a lengthy stint as a software engineer (Telecoms) and a brief stint as a quant, I moved back to academia. Currently in research (computational neuroscience, statistics), I mostly code in Matlab these days, as it's the de facto standard in the field (although Python is encroaching), but also quite a lot of C99. And even the odd bit of Fortran.

    2. LybsterRoy Silver badge

      Re: "Evidence-Based Complementary and Alternative Medicine"

      "if Jane has a 2km bridge, along which Jack drives a 5 tonne lorry containing 17 apples, what is the required Young's Modulus of the support cables?"

      Bit of a guess here but I'd go for 42

    3. Munchausen's proxy
      Pint

      Re: "Evidence-Based Complementary and Alternative Medicine"

      I wish I could buy the power to give an extra upvote for '"Scanning" lived in vain'.

  3. KittenHuffer Silver badge

    Kinda ironic ....

    .... that in the last paragraph they say that they'll be making money this quarter because of "Q4 content rights deals for training AI models".

    And just what would these AI models trained on papers submitted to their publications be used for?

    I'm sure they'll be used for writing children's stories, or something like that!

  4. Anonymous Coward
    Anonymous Coward

    When I published a paper on ArXiv

    I was inundated with offers of publication in numerous dubious 'scholarly journals'.

    The 'journals' sought my acceptance without first disclosing their publication fees, which when queried often turned out to be hundreds or thousands of dollars.

    Needless to say I declined their kind offers.

    1. Anonymous Coward
      Anonymous Coward

      Re: When I published a paper on ArXiv

      This is a *very common* practice in Brazil. Predatory publishers scan lists of authors/papers in conferences and invite them to publish in their very generically titled journals ("Revista Foco", "Brazilian Journal of Development", "Cuadernos de Educación y Desarrollo", "South Florida Journal of Development"), then send you a message saying your paper is pre-approved, and if you just send them a fee equivalent to 80-300 US dollars they will publish it in a few days!

      My hobby is to locate the name of the editor-in-chief of these journals, which is usually a researcher or professor at a university, and write to the journal, CC'ing the editor, asking if he knows he is being listed as editor of a predatory journal. Curiously, some of them don't mention being editors of that journal in their CVs...

    2. GrumpenKraut

      Re: When I published a paper on ArXiv

      That's why my email address is no longer on the first page of my arXiv articles. Zero such offers since then.

    3. LionelB Silver badge

      Re: When I published a paper on ArXiv

      Having been in research for a couple of decades, I get on average 10-15 spammy journal emails a day. The spam filters at my institution are pretty rubbish (Outlook with Mimecast). They flag about half of them, but I also get a fair number of false positives, e.g. genuine review requests from reputable journals I've published in or reviewed for previously.

      It's an endemic problem in academia.

  5. JavaJester

    Hindawi due diligence?

    Did they do any due diligence before buying Hindawi? Whoever was responsible for that due diligence should be shown the door.

    1. Paul Crawford Silver badge
      Trollface

      Re: Hindawi due diligence?

      They asked HPE for advice - no problem they said!

  6. Andrew Williams

    Are the various institutions going to shed those who are churning out AI papers? As in, the humans who chose to have essentially zero involvement in the science.

  7. Tron Silver badge

    Put your stuff on Academia.

    It should stand or fall based on its quality, not on the journal it appeared in.

    Academic publishers have been gutting Uni. budgets for years. The internet should have made them obsolete. Too many vested interests and too much reliance on tick boxes to rate depts. and staff.

    This will be less of a problem in the UK now as the basic funding model has been broken by the government. The cap on fees and restrictions on international students that subsidised local ones will see UK courses, departments and even whole unis vanish - including highly rated ones. I guess a post-developed country no longer needs educated people. 'After the Romans left' etc.

  8. steelpillow Silver badge
    Devil

    Who shall peer review the peer reviewers?

    You can always find fruitloops and cash-hungry scallies in academia. Bring the few higher-placed fruitloops together as an editorial team and bingo! you have a "peer review" journal dedicated to pseudoscience, all ready for interested parties to sponsor a scally or two.

    I had a happy time on Wikipedia a few years back, exposing some of this crap and getting those journals blacklisted as reliable sources. Tip of the iceberg, sadly.

    Seems like now, all they need to do is point an AI at the backnumbers.

  9. Anonymous Coward
    Anonymous Coward

    Long live and success to Alexandra Elbakyan!!!

    Www.libgen.is

    Vast repository of scientific works known as Library Genesis
