When will they learn
AI is as shit as the data you give it; it just does a better job of making shit decisions.
Amazon learnt this years ago.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
A university announced it had ditched its machine-learning tool, used to filter thousands of PhD applications, right as the software's creators were giving a talk about the code and drawing public criticism. The GRADE algorithm was developed by a pair of academics at the University of Texas at Austin, and it was used from 2013 …
Amazon punts stuff I've recently bought at me. Especially if it is a consumer durable you don't need two of, or will only buy every few years. It is that witless. My Wish List seems to play absolutely no role in what products it tries to punt at me.
As a system designed to tempt me to buy things it’s pretty much pointless.
"While every application is still looked at by a human reviewer," the 2014 paper noted, "GRADE makes the review process much more efficient. [...] GRADE reduces the total number of full application reviews the committee must perform."
So those whom GRADE didn't tag as high got less rigorous scrutiny? Hawking would probably have scored low on GRADE, and might therefore have been rejected. But maybe the use of "AI" is part and parcel of a trend. Almost everywhere the PhD is not what it used to be; it's become something of a diploma mill, not least because you practically never get to research your own choice of topic, but are typically used as a research assistant on your supervisor's project.
So that sounds like a lose-lose then: either use the AI and get the historical biases, or carry on with a wetware evaluation and get mostly the same historical biases (because the current staff doing the evaluations were recruited in the image of those whose biases are in the data).
Korev makes (unless we have both missed something) an excellent point.
What the 'AI' or ML should be trying to do is maximise PhD outcomes as a function of the applicant assessment process. If one is just looking to copy previous applicant assessments, there is no adaptation of the 'AI'/ML algorithms for the ability to get a PhD (or, especially, to get a highly rated PhD).
And, obviously, one can only assess PhD quality some 3+ (more likely 4+) years after the applicant was assessed for initial suitability.
This makes me think (given the ratios of masters to doctorate slots and the 2+ to 4+ years of delay) that any such 'AI'/ML assessment is better targeted at masters courses than at doctoral positions.
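To make that distinction concrete, here's a minimal sketch with synthetic data (nothing below is GRADE's actual code; every feature and label is invented): the target you fit decides what the model learns.

```python
# Toy sketch, not GRADE: the label you train on decides what the model learns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in for features from application files

# Two very different targets one could fit:
y_admitted = (X[:, 0] > 0).astype(int)    # what the old committee decided (biases included)
y_completed = (X[:, 1] > 0).astype(int)   # whether the applicant finished a PhD, known years later

copycat = LogisticRegression().fit(X, y_admitted)    # learns to reproduce past decisions
outcome = LogisticRegression().fit(X, y_completed)   # learns to predict actual success

# On data like this, the two models can rank the same applicant very differently.
```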
Keep safe and best regards
At a more mundane level, the same is true for undergraduate admissions, which is a point generally missed by the press in the UK when they decide to take a swipe at "positive discrimination". You aren't trying to let in the applicants with the best grades from the exams they've just taken (or, in the UK, are about to take -- since we do uni admissions before the exam results come out). You are trying to let in the applicants who will get the top grades in 3 or 4 years' time. That's harder, but with thousands of students going through the system each year, it *is* possible to show that basing admission purely on exam grades is *not* the best strategy.
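The shape of that argument is easy to demonstrate on synthetic data. A toy sketch (all relationships invented, so only the logic carries over): when the final outcome depends on more than entry grades, a model given more than entry grades predicts it better.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
grades = rng.normal(size=n)                  # exam grades at application time
context = rng.normal(size=n)                 # other signals (invented here)
# Final degree outcome depends on more than entry grades:
final = (0.5 * grades + 0.5 * context + rng.normal(scale=0.5, size=n) > 0).astype(int)

Xg = grades.reshape(-1, 1)                   # grades only
Xgc = np.column_stack([grades, context])     # grades plus context
Xg_tr, Xg_te, Xgc_tr, Xgc_te, y_tr, y_te = train_test_split(Xg, Xgc, final, random_state=0)

auc_g = roc_auc_score(y_te, LogisticRegression().fit(Xg_tr, y_tr).predict_proba(Xg_te)[:, 1])
auc_gc = roc_auc_score(y_te, LogisticRegression().fit(Xgc_tr, y_tr).predict_proba(Xgc_te)[:, 1])
print(f"grades only: AUC {auc_g:.3f}   grades + context: AUC {auc_gc:.3f}")
```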
Artificial Intelligence isn't intelligence; it's just Machine Learning.
Machines don't learn from a database of human experience; they learn from human feedback.
Without a substantial effort to review individual results and mark them as right or wrong, a machine won't learn; it will only replicate.
The problem is that few want to put in the effort to teach the machine, or to do the programming that allows the machine to be taught.
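A minimal sketch of what that teaching loop looks like (everything here is synthetic; the reviewer is simulated): without the review-and-correct step, the model can only replicate its seed labels.

```python
# Minimal human-in-the-loop sketch: the feedback step is what actually teaches.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X_seed = rng.normal(size=(200, 4))
y_seed = (X_seed.sum(axis=1) > 0).astype(int)    # stand-in historical labels

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_seed, y_seed, classes=[0, 1])

def human_review(x, predicted):
    """Stand-in for a human reviewer marking the result right or wrong."""
    return int(x.sum() > 0)                      # pretend ground truth

for x in rng.normal(size=(50, 4)):
    pred = model.predict(x.reshape(1, -1))[0]
    label = human_review(x, pred)                # review each individual result
    model.partial_fit(x.reshape(1, -1), [label]) # skip this and it only replicates
```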
And who exactly do you think you're kidding apart from yourselves?
This application "reduced the number of full reviews required per applicant by 71 percent and, by a conservative estimate, cut the total time spent reviewing files by at least 74 percent". You'll excuse me if I infer that you only reviewed the applications that were favorably noted by your AI, which clearly indicates that it chose who you would spend your time on; therefore anyone it didn't like, you didn't spend time on.
I'm sorry, but your statement is factually incorrect.
It is also a blatant lie.
Surely one other problem is that people might actually have applied to other universities for postgraduate research?
I got a research grant at Leeds University in the UK not because I was the best candidate, but because the best candidate accepted an offer to go to Cambridge University instead. (He got the best first in our year, so fair dues, and is now a full professor.)*
The training of the AI only on people they accepted is clearly missing out all those other people they would have accepted if only they had read their applications carefully or invited them for interview, or people who would have come had they not been offered a place at Harvard, Yale, Stanford, Cornell etc.
*In case you are interested, I completed in 3 years, and did get a Ph.D., but am not a professor.
Mine took 5 years: big project, and for the last two I had a teaching fellowship and was part time.
I’m not a professor either.
PhD theses I've looked at in the last 15-20 years get thinner and thinner, with less and less work in them.
The reality is that PhD students are cheap research labour and are used as staff, even if on a formal taught program. Got a project you want done? A PhD stipend is way cheaper than a postdoc salary and may be available from your institution, so you just have to provide the research funds and a bit of teaching, and you get to pick their young, keen, iconoclastic brains for your next grant application to bolster your flagging creative juices.
Too many are doing PhDs, often for the wrong reasons, and are chasing a diminishing number of jobs which are casualising more with every passing year. Full professors will be on short-term contracts before too long. Some treat the job like that anyway, moving on regularly.
I don't mean using it to decide who gets in; that would be ridiculous. I can think of two uses:
1) Each year, one can compare what GRADE thinks to what actually happened, and then retrain the model with a new year's data. This should let you see how your admissions team's views are changing over time, especially as personnel change.
2) You can use this model to look out for biases in your choices. If the model is biased, then your original choices were too.
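A sketch of that second use on synthetic data (the bias is planted deliberately, and every name here is invented):

```python
# Synthetic audit sketch: the bias is planted on purpose so the model can expose it.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                  # stand-in protected attribute
merit = rng.normal(size=n)
admitted = (merit + 0.8 * group > 0.5).astype(int)  # historical decisions, biased by group

X = np.column_stack([merit, group])                 # what the model sees
model = LogisticRegression().fit(X, admitted)

audit = pd.DataFrame({"group": group,
                      "score": model.predict_proba(X)[:, 1],
                      "admitted": admitted})
# If the model scores one group systematically higher, the decisions it copied were biased.
print(audit.groupby("group")[["score", "admitted"]].mean())
```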
"reduced the number of full reviews required per applicant by 71 percent and, by a conservative estimate, cut the total time spent reviewing files by at least 74 percent.”
Assuming courses are significantly oversubscribed, simply sending a randomly selected 71% of applications to the circular filing cabinet would have exactly the same effect. Based on my interviewing experience, you're usually looking at at least 10% of applications that would likely be worth interviewing. If you have 50 applicants for a post, you can safely throw 30 of them out without even reading them and still be left with too many good picks to actually interview.

No-one really likes to admit it, but the whole selection and interview process is inherently arbitrary anyway, based heavily on personal feeling and "I know it when I see it" decisions. When it comes down to it, that's the entire point: if we could quantify the process we wouldn't need to worry about having humans read applications and perform interviews at all. So while discarding applications without even looking at them sounds like it should be a bad idea, the fact is that it would have no effect whatsoever on the final outcome. Considering 20/50 applications at random is identical to only having 20 applications in the first place.
The only time you might actually have to worry about missing out on that hypothetical single perfect candidate is if there might not be enough decent candidates left in the remaining applications. But that just means you didn't have that many applications in the first place, so you don't need to worry about saving time by pre-filtering them.
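The arithmetic behind this is easy to check with a quick simulation, using the numbers above (50 applicants, 5 worth interviewing, bin a random 30):

```python
# Quick check of the claim: 50 applicants, 5 worth interviewing, bin a random 30.
import random

trials = 100_000
kept_a_good_one = 0
for _ in range(trials):
    pool = [True] * 5 + [False] * 45    # 5 of 50 worth interviewing
    kept = random.sample(pool, 20)      # read only a random 20
    if any(kept):
        kept_a_good_one += 1

# Expected good applications kept: 5 * 20/50 = 2; at least one survives ~93% of the time.
print(f"P(at least one good candidate survives): {kept_a_good_one / trials:.3f}")
```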