It’ll all end in tears
How about… hmmm… actually interviewing people? Call me radical, but weed out the obvious outliers and interview the rest. Otherwise companies will eventually just recruit AI bots.
Job seekers who use the same AI model to compose their resumes as the AI model used to evaluate their application are more likely to advance through the hiring process than those submitting human-written materials, according to researchers. The findings, detailed in a preprint paper titled "AI Self-preferencing in Algorithmic …
1. Ask several major LLMs to generate CVs fitting a published job description (let's assume the job is AI-related), and send them all in. The CV generated by the same LLM that the employer uses to classify applications will have a higher chance of being shortlisted for the first interview (the main purpose of a CV, of course).
2. The prospective employer may have a "guardrail" in place to cross-check incoming CVs (e.g., by the names and the supplied contact details) to detect applicants who employ the above trick, to weed them out...
3. ... Or maybe to give such applicants bonus points on their way to the shortlist? After all, this demonstrates AI skills, and the job is, by assumption, AI-related. And AI skills are deemed useful, possibly essential, to all jobs in our brave new world.
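The cross-check described in point 2 could be as simple as normalising names and contact details and flagging collisions. A minimal sketch of that idea, assuming a flat list of application records (all names, fields, and function names here are invented for illustration):

```python
# Hypothetical guardrail sketch: flag applicants who have submitted
# several CVs under the same (lightly normalised) identity.
from collections import defaultdict

def find_duplicate_applicants(applications):
    """Group applications by normalised name/email/phone; return only
    the groups containing more than one submission."""
    groups = defaultdict(list)
    for app in applications:
        key = (
            app["name"].strip().lower(),
            app["email"].strip().lower(),
            app["phone"].replace(" ", "").replace("-", ""),
        )
        groups[key].append(app["cv_id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

# Invented example data: cv-1 and cv-2 are the same person with
# cosmetic differences in capitalisation and phone formatting.
apps = [
    {"cv_id": "cv-1", "name": "Jane Doe", "email": "jane@example.com",
     "phone": "01234 567890"},
    {"cv_id": "cv-2", "name": "jane doe", "email": "Jane@example.com",
     "phone": "01234-567890"},
    {"cv_id": "cv-3", "name": "John Smith", "email": "john@example.com",
     "phone": "09876 543210"},
]
dupes = find_duplicate_applicants(apps)
print(dupes)
```

A real system would need fuzzier matching (aliases, multiple addresses, shared household phones), but even this crude version catches the naive "send one CV per LLM" trick when the contact details stay the same.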
Shortlisting by HR and many managers probably misses a lot of good people. When I was hiring, I tried to find engineers who were intelligent and could think, rather than those who just listed the right experience or vendor exams. A person with the right characteristics can learn and adapt to changing requirements. I wanted people who could create automations rather than become them.
Yup - going back a few years, we advertised a deputy DBA post. HR stepped in, did the initial advertisement and picked the people they thought were the best fit to be interviewed.
So, that was a bunch of people who used Excel, some of whom were looking for secretarial work. Sure, we could train them up, but... well, some left when they saw the aptitude test, and the few that remained needed a LOT of training.
Were they all technically inept? Nope, HR had removed all the people who had actual IT skills, including the two who had some experience in database management (found that out in the review with HR as to why we didn't pick anyone).
Had to bin all the applicants, rewrite the job advert (back to what we'd originally proposed) and try again. Managed to get a much smaller, but better qualified, mix of applicants. One even got the job.
Now, how is AI going to do any of this better? Will it understand what we're looking for, and which skills are needed, desired, or merely beneficial?
I'll not hold my breath waiting...
(And AC to fit in with the others :p )
Although the authors are careful to say "it's not necessarily discrimination", it does seem to be a particularly pernicious form of discrimination: which AI a candidate used says nothing whatsoever about the candidate or their ability to learn different techniques. We appear to be moving from ageism to AI-ism.
The first thing to do in any hiring process is to announce that AI-generated content is unwanted.
Because there is no reliable test, I would go back to suspect applicants and say that their material seems artificial, and that if so, they need to send human-written documents. If they insist they wrote it themselves, fair enough; but if it turns out it was done by AI, instant rejection.
Using AI well to generate material is a skill, and that is a positive, as long as candidates are honest about it; but interviewers should also know what candidates can do themselves.