Re: Do you trust Yale or Stanford more?
Yale. Stanford is conflating correlation with causation. And with stupidity.
SHORT VERSION: Big Tech has been cutting jobs for almost three years now. This long predates any possible effect from AI. AI adoption is just the latest convenient excuse for cost reductions.
LONG VERSION:
What is actually going on with IT jobs is this:
- Up until a handful of years ago, Big Tech firms (you know who the usual suspects are) were wildly over-hiring in anticipation of growth needs, partly to "fill the bench" with trained, fully on-boarded people ready to go, and partly to deny that talent to their competitors. Basically, like Chelsea FC.
- Then a cold dose of reality sets in. They realize that they are paying lots of people lots of money to twiddle their thumbs, learn Rust, read El Reg, and otherwise fill time. But nobody wants to be the first to admit a mistake* and start firing people. (Or "rightsizing" or "balancing the workforce" or "reskilling" or whatever.)
- And then along comes Musk, who buys Twitter and starts slashing jobs in the name of efficiency -- not because he has spotted overstaffing and has the courage to address it, but because he has no idea what people at Twitter do, and in his mind anything he doesn't understand can't be important. Of course, his cuts are entirely capricious and in many cases hit essential people, but they get the payroll down. (Remind anybody of anything?) This is right around Nov 2022. (By an odd coincidence, that is also when ChatGPT was first publicly released, but years before it would be credible as a scapegoat.)
- The other tech bros sit up and pay attention, and suddenly every Big Tech firm "realizes" that it too can make "efficiency savings", and they also start cutting jobs. Not only are they letting experienced and inexperienced people go, they are also rescinding job offers to grads and cutting way back on making new offers to the Class of '23 (as the Americans say). IT unemployment starts to rise. This is well-documented online if anybody wants to verify, both in news stories and in financial notes from the likes of JP Morgan.
- Fast forward a couple of years. The universities have still been pumping out IT grads (because people can't switch tracks on a dime), there are few to no entry level jobs to be had, and the newly minted grads are competing with the laid off people who have real experience on their resumes. No surprise, it's very hard for a lot of them to get hired.
- Obviously, Big Tech can't keep firing people in the name of "efficiency" forever. But then along comes AI, and the usual carnival barkers are shouting that AI will devastate employment *even though there is absolutely no evidence for this*, *lots of evidence that efficiency gains are marginal at best, especially in programming jobs*, and *even if it does happen, it is years away at least*. Never mind reality: here is another perfect excuse for companies to cut payroll. So all these firms are continuing to cut, but now they are attributing it to "AI automation", for which they have not a sliver of evidence.
- (In truth, some companies are also firing people and attempting to replace them with AI, but mainly because the managers have no idea what the workers do. The remaining workers take up the slack, and when things start to fall apart, the companies will have to walk it back. But that is secondary, especially among Big Tech.)
- And then along comes Stanford, probably with an outcome already in mind, one that will grab far more headlines than Yale's "null result"**. They note that (a) IT employment is down, (b) lots of firms are *saying* that they cut jobs because of AI, and (c) the downward trend started right around Nov 2022, and they draw a conclusion that is simple, obvious, and wrong (to paraphrase H.L. Mencken), and point at ChatGPT. This is, of course, complete nonsense: although it was released at the same time as the Twitterpocalypse, it would be at least two years before job replacement could be even remotely plausibly attributed to ChatGPT.
----
*Compare also: wildly overspending on AI infrastructure. Not wanting to be the first to admit a mistake is a fixed feature of recent tech fads. There is a powerful herd mentality. (I feel like there is a "nerd mentality" joke in there somewhere.)
**Sadly this happens a lot with all kinds of studies, including scientific ones. Null results are dull, and go in the bottom drawer.