Uni revealed it killed off its PhD-applicant screening AI – just as its inventors gave a lecture about the tech

A university announced it had ditched its machine-learning tool, used to filter thousands of PhD applications, right as the software's creators were giving a talk about the code and drawing public criticism. The GRADE algorithm was developed by a pair of academics at the University of Texas at Austin, and it was used from 2013 …

  1. IGotOut Silver badge

    When will they learn

    AI is as shit as the data you give it, it just does a better job of making shit decisions.

    Amazon learnt this years ago.

    1. Doctor Syntax Silver badge

      Re: When will they learn

      They don't seem to have applied this lesson to whatever AI chooses the products offered in response to searches. I think it might be self-reinforcing as the results get ever shittier.

      1. Muscleguy

        Re: When will they learn

        Amazon punts stuff I've recently bought at me, especially if it's a consumer durable you don't need two of or will only buy every few years. It is that witless. My Wish List seems to play absolutely no role in what products it tries to punt at me.

        As a system designed to tempt me to buy things it’s pretty much pointless.

    2. AMBxx Silver badge

      Re: When will they learn

      Even before AI there were similar systems. Ignoring any actual bias, the use of such systems implies that all previous decisions were perfect and there's no need to change.

  2. Anonymous Coward

    Oh dear! "Designed to replicate" - more like "Designed to perpetuate". There are more subtle ways of discriminating than simply by gender or race. For example, how did the algorithm do on strong candidates from weak universities vs weak candidates from strong universities?

  3. Mike 137 Silver badge

    Double talk?

    "While every application is still looked at by a human reviewer," the 2014 paper noted, "GRADE makes the review process much more efficient. [...] GRADE reduces the total number of full application reviews the committee must perform."

    So those whom GRADE didn't tag highly got less rigorous scrutiny? Hawking would probably have scored low on GRADE, and might therefore have been rejected. But maybe the use of "AI" is part and parcel of a trend. Almost everywhere the PhD is not what it used to be - it's become something of a diploma mill, not least because you practically never get to research your own choice of topic, but are typically used as a research assistant on your supervisor's project.

  4. Anonymous Coward

    Was it trained on their own acceptance data from previous years?

    So that sounds like a lose-lose then: either use the AI and get the historical biases or carry on with a wetware evaluation and get mostly the same historical biases (because the current staff doing the evaluations were recruited in the image of those whose biases are in the data.)
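    That "garbage in, garbage out" loop is easy to demonstrate with a toy simulation (everything here - the groups, the 20-point penalty, the crude frequency-count "model" - is invented for illustration, not how GRADE worked): fit even the simplest model to historically biased decisions and it faithfully reproduces the gap for equally qualified applicants.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy illustration (all names and numbers invented): historical committee
# decisions effectively discount group "B" scores by 20 points.
def committee_decision(score, group):
    effective = score - (20 if group == "B" else 0)
    return effective >= 50

history = []
for _ in range(10_000):
    score = random.uniform(0, 100)
    group = random.choice(["A", "B"])
    history.append((score, group, committee_decision(score, group)))

# "Train" the crudest possible model: empirical admit rate per
# (group, 10-point score band) -- i.e. pure replication of past decisions.
counts = defaultdict(lambda: [0, 0])  # key -> [admits, total]
for score, group, admitted in history:
    key = (group, int(score // 10))
    counts[key][0] += admitted
    counts[key][1] += 1

def model_admit_rate(score, group):
    admits, total = counts[(group, int(score // 10))]
    return admits / total

# Two equally qualified applicants with a score of 60: the model simply
# reproduces the historical gap instead of correcting it.
rate_a = model_admit_rate(60, "A")
rate_b = model_admit_rate(60, "B")
```

    No amount of extra training data fixes this: the bias lives in the labels, not the sample size.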

    1. Muscleguy

      Re: Was it trained on their own acceptance data from previous years?

      Ah but you can put your staff through expensive awareness training punted by some critical studies SJW on the make.

      How would you do that to your AI even if you were stupid enough to try?

  5. Korev Silver badge

    Completion

    Unless I missed it, they also didn't include completion of their PhD in the model, which sounds like an obvious thing to have.

    (Although I do wonder if certain groups drop out more often and then feeding that in would be counted as discrimination)

    1. Nigel Sedgwick

      Re: Completion

      Korev makes (unless we have both missed something) an excellent point.

      What 'AI' or ML should be trying to do is maximise PhD outcomes as a function of the applicant assessment process. If one is just looking to copy previous applicant assessments, the 'AI'/ML algorithms are never adapted for the ability to get a PhD (or, especially, to get a highly rated PhD).

      And, obviously, one can only assess PhD quality some 3+ (more likely 4+) years after the applicant was assessed for initial suitability.

      This makes me think (given the ratios of masters to doctorate slots and the 2+ to 4+ years of delay) that any such 'AI'/ML assessment is better targeted at masters courses than at doctoral positions.

      Keep safe and best regards

      1. Ken Hagan Gold badge

        Re: Completion

        At a more mundane level, the same is true for undergraduate admissions, which is a point generally missed by the press in the UK when they decide to take a swipe at "positive discrimination". You aren't trying to let in the applicants with the best grades from the exams they've just taken (or, in the UK, are about to take -- since we do uni admissions before the exam results come out). You are trying to let in the applicants who will get the top grades in 3 or 4 years' time. That's harder, but with thousands of students going through the system each year, it *is* possible to show that basing admission purely on exam grades is *not* the best strategy.

        1. AMBxx Silver badge

          Re: Completion

          But is basing your selection on future results correct? Couldn't underperforming students have been let down by the university by some other bias?

          1. W.S.Gosset

            Re: Completion

            In practice (at least in Australia), it is overwhelmingly the students who do the letting-down, not the uni.

            Certainly my experience (and everyone I knew), both as "student" and lecturer.

  6. Anonymous Coward

    AI isn't, ML doesn't

    Artificial Intelligence isn't intelligence; it's just Machine Learning.

    Machines don't learn from a database of human experience, they learn from human feedback.

    Without a substantial effort to review individual results and mark them as right or wrong, a machine won't learn, it will only replicate.

    The problem is that few want to put in the effort to teach the machine or the programming to allow the machine to be taught.

  7. Pascal Monett Silver badge

    "It was never used to make decisions to admit or reject prospective students"

    And who exactly do you think you're kidding apart from yourselves ?

    This application "reduced the number of full reviews required per applicant by 71 percent and, by a conservative estimate, cut the total time spent reviewing files by at least 74 percent". You'll excuse me if I infer that you only reviewed the applications that were favorably noted by your AI, which clearly indicates that it chose who you would spend your time on; therefore, anyone it didn't like, you didn't spend time on.

    I'm sorry, but your statement is factually incorrect.

    It is also a blatant lie.

  8. Eclectic Man Silver badge

    Problem

    Surely one other problem is that people might actually have applied to other universities for postgraduate research?

    I got a research grant at Leeds University in the UK not because I was the best candidate, but because the best candidate accepted an offer to go to Cambridge University instead. (He got the best first in our year, so fair dues, and is now a full professor.)*

    Training the AI only on people they accepted clearly misses out all those other people they would have accepted if only they had read their applications carefully, invited them for interview, or if those candidates had not been offered a place at Harvard, Yale, Stanford, Cornell etc.

    *In case you are interested, I completed in 3 years, and did get a Ph.D., but am not a professor.

    1. W.S.Gosset

      Re: Problem

      You got your PhD in the scheduled 3yrs?!


      1. IGotOut Silver badge

        Re: Problem

        He didn't say 3 consecutive years.

    2. Muscleguy

      Re: Problem

      Mine took 5 years, big project and the last two I had a teaching fellowship and was part time.

      I’m not a professor either.

      PhD thesis I’ve looked at in the last 15-20 years get thinner and thinner with less and less work in them.

      The reality is that PhD students are cheap research labour and are used as stuff even if in a formal taught program. Got a project you want done? A PhD stipend is way cheaper than a postdoc salary and may be available from your institution so you just have to provide the research funds and a bit of teaching and you get to pick their young keen iconoclastic brains for your next grant application to bolster your flagging creative juices.

      Too many are doing PhD’s often for the wrong reasons and are chasing a diminishing number of jobs which are casualising more with every passing year. Full professors will be on short term contracts before too long. Some treat the job like that anyway moving on regularly.

  9. DavCrav

    There are a few uses of this software, surely

    I don't mean using it to decide who gets in, that would be ridiculous. I can think of two uses:

    1) Each year, one can compare what GRADE thinks to what actually happened, and then retrain the model with a new year's data. This should let you see how your admissions team's views are changing over time, especially as personnel change.

    2) You can use this model to look out for biases in your choices. If the model is biased, then your original choices were too.
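    Point 2 needs only a few lines of code (a hypothetical sketch - the scores and group labels are invented, and GRADE's actual internals aren't public): group the model's ratings by a demographic attribute and compare the per-group averages. A persistent gap between groups that look equally strong on paper points at bias baked into the training decisions.

```python
from statistics import mean

# Hypothetical bias audit (all data invented): group the model's scores
# by a demographic attribute and compare the per-group averages.
def audit_by_group(scores, groups):
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    return {g: mean(vals) for g, vals in by_group.items()}

# Applicants judged equally strong on paper, yet group "B" rates lower:
averages = audit_by_group([0.9, 0.8, 0.4, 0.3], ["A", "A", "B", "B"])
```

    A real audit would control for qualifications before comparing, but even this crude cut flags where to look.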

  10. Cuddles

    Easier method

    "reduced the number of full reviews required per applicant by 71 percent and, by a conservative estimate, cut the total time spent reviewing files by at least 74 percent."

    Assuming courses are significantly oversubscribed, simply sending a randomly selected 71% of applications to the circular filing cabinet would have exactly the same effect. Based on my interviewing experience, you're usually looking at at least 10% of applications that would likely be worth interviewing. If you have 50 applicants for a post, you can safely throw 30 of them out without even reading them and still be left with too many good picks to actually interview. No-one really likes to admit it, but the whole selection and interview process is inherently arbitrary anyway, based heavily on personal feeling and "I know it when I see it" decisions. When it comes down to it, that's the entire point - if we could quantify the process we wouldn't need to worry about having humans read applications and perform interviews at all. So while discarding applications without even looking at them sounds like it should be a bad idea, the fact is that it would have no effect whatsoever on the final outcome - considering 20/50 applications at random is identical to only having 20 applications in the first place.

    The only time you might actually have to worry about missing out on that hypothetical single perfect candidate is if there might not be enough decent candidates left in the remaining applications. But that just means you didn't have that many applications in the first place, so you don't need to worry about saving time by pre-filtering them.
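    A quick simulation bears this out (using the numbers above - 50 applicants, 10% genuinely interview-worthy, 30 binned unread - with everything else invented): with these toy numbers, roughly nine culls in ten still leave at least one good candidate in the pile, and about two on average.

```python
import random

random.seed(1)

# Toy check of the claim above (numbers invented): 50 applicants, 10%
# genuinely worth interviewing, 30 binned at random before anyone reads them.
def good_left_after_cull(n_applicants=50, frac_good=0.10, n_keep=20):
    pool = [i < round(n_applicants * frac_good) for i in range(n_applicants)]
    return sum(random.sample(pool, n_keep))  # good candidates surviving the cull

trials = [good_left_after_cull() for _ in range(10_000)]
share_with_one_good = sum(t >= 1 for t in trials) / len(trials)
avg_good_left = sum(trials) / len(trials)
```

    This is just the hypergeometric distribution in disguise: keep 20 of 50 when 5 are good and you expect 20 * 5/50 = 2 good survivors.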
