Instructions from HR
The AI-rollout beatings will continue until staff morale improves!
If your company isn't seeing great returns from its investment in AI, you might want to look at the humans tasked with deploying it and how you can motivate them. Right now, many employees fear AI-driven job losses and aren't well trained to use the tech, according to Forrester. The research and advisory biz says in its latest …
The economy is such a shambles right now. I took a gander at a job board the other day, and the only listings were for some Christian college and like six postings for the same MSP under different names. I feel like my boss can do whatever right now, because what am I going to do? Take a job at the donut hut?
That was already a big part of the article, but I think Doctor Syntax has it right. In my case, I'm not too afraid of losing my job to AI, but when pushed to use it, I am afraid I'll be expected to speed up by using it and then blamed for its mistakes. I can't automate something without taking the risk of consequences if the AI doesn't work, but if I automate it and still check the results to prevent that, they'll be disappointed with the lack of benefits from having automated it. This is especially true of AI stuff implemented by someone else, whom I don't trust to have tested how well their thing works; at least when I write it myself, I have seen what its deficiencies are and can make some plans around them. This probably doesn't apply to everyone, since I'm coming from a very programming-based background which also allows me to customize any AI tools I develop, but I think it's likely another of the reasons why people are averse to using this untested and mostly untestable software.
> Fire most of the workers. The rest can pick up the slack with our fancy shiny new tools.
> Force the workers to use the tool on the threat of firing. Measure success by metrics that encourage usage of tools.
> When something goes wrong due to tool unreliability, blame the worker. Fire a few more people. Change no metrics. No, intensify the metrics.
> Wonder why no one likes the tool, or the current situation, and why everyone is angry and unproductive all the time.
> “Oh, it must be because they are scared of the AI, like the subhuman Luddites they are.”
> Repeat until breakage.
They keep repeating the benefits of AI and how the bots will save us time, but we're seeing an uptick in the number of support calls for "this is doing that and the AI says" or "I can't do what the AI says, can you fix it?" - and I've even had a shiny-eyed AI devotee tell me I'm doing my job wrong because "the AI says this and I can't find the setting, so I want to know why you didn't include that in the app".
PAs… Don’t see status-conscious (male) managers getting rid of their (young female) PAs, regardless of how capable AI becomes.
I don't know. With the right polymers and mechatronics, embodied AI mannequins might give flesh and bone a run for the manager's money.
Although you still occasionally spot the nail-polishing bimbo, it is extremely rare, and more often nepotism than status or bonking.
More often the reality is the PA is the only one who knows how everything is done or works (not quite the same thing though) in the office of her glove puppet of an employer.
Lawyers get caught misusing LLMs on a fairly regular basis now.
Which makes you wonder how many of them would happily make up citations of legal precedent before LLMs were available. The citations almost certainly couldn't be checked properly in any case taking less than a day or two, if the other barrister didn't have enough staff assigned and available.
I'll do you a deal on a used carby and distributor - no aftermarket tat, original parts... very original - and although I know I am cutting my own throat, I'll also throw in timing and mixture adjustment.
At Dibbler's Dealerships we aim to please (ourselves.)
"the knocking noises and smoke from my car"
Doubtless half these numpties are driving Teslas and deserve to be cremated by the electrical fire and ensuing battery explosion.
I'm watching a previous employer go down the road of "embracing" AI, and suffice to say I'm glad to be well out of that place.
E.g. supposedly most workerbees are now simply mandated to use AI for their jobs. The in-house tools for search, code dev, QA testing and so on now have AI widgets bolted on the side, sometimes effectively, other times poorly, but always with a module for monitoring, tracking, and reporting employee use of AI.
And, those AI usage metrics are now directly tied to quotas which are used for bonuses, reviews, and presumably performance plans to manage them out the door if they don't measure up.
No surprise this place also counts badge-ins at corporate offices, with a mandated on-site quota "or else". They were happy to have us all WFH during the pandemic times, of course.
They've adopted the "unlimited PTO" scam too, marketed as flexible time off or something similar, and promptly loaded it up with restrictions. Of course the main goal is getting accrued vacation time off the books and implicitly discouraging the rank and file from ever using any more.
All this on top of an ongoing trend of "strategic" layoffs. Read: small enough to fly under the radar of WARN notices, most of the time.
Any one of those things alone is bad enough. In the aggregate I think it's pretty safe to assume the executive class is out to get you; or rather, wring the last penny out of your used-up carcass and then cast you aside.
It's just stupid. What ever happened to judging employees based on how good a job they do instead? If someone excels without AI, what's the problem?
If AI is genuinely useful, those who use it should excel, and this should be evident in the quantity and quality of their work. In that situation, those who don't use it and whose work is suffering in comparison may need to adapt. But when your only metric is "Barry used AI 23% more than you, so he's getting a bonus and you aren't", you're just part of the slop machine.
Rewarding people based on how good their work is presumes that managers know what their people are doing and understand it well enough to evaluate it. How often is that true?
Mind you, I did have a boss once who told me "I don't really understand what you do, but people keep telling me how good you are and asking for you on their projects, so keep doing that." At least he was honest about it.
In the part of the US where I live, "PTO" also applies to any time off because you're ill. It's a bit of a scam because you can easily eat up vacation time, and even accrued vacation, with anything more serious than a cold. The amount you can accrue is also strictly limited -- once you've hit the limit, you either use it or lose it.
If you run out of vacation days (hours) then you can be 'lent' days to prevent them from docking your wages and so causing a potential legislative firestorm. (These sorts of policies are applied with a fairly light touch at first; you hardly notice the change. In fact it's sold as a plus, because you get a measly extra allowance (in hours, note) and nobody asks what you're doing. Being in hours, of course, means that any absence -- a dentist's appointment, for example -- comes out of your PTO. It's definitely a "Heads we win, Tails you lose" proposition.)
We have a similar PTO system (also in the US), and I really enjoy it. The cap starts at 160 hours and goes up with seniority. As I'm rarely sick, it means extra vacation days for me. If I have to miss multiple weeks of work, that's heading for short-term disability territory rather than PTO.
No he's not - though he's almost as slavish in support of Israel's Gaza genocide & the illegal war in Iran, & his popularity with the public is in freefall.**
.. and we are still waiting for the East European male models* damaging Starmer property case to go to trial.
* Let's make use of "AI" - these are the words of "AI", not mine...
If I ask Google search.
"does a male model often mean a rent boy in media stories"
Then its "AI" answer gives
"Yes, in media stories, literature, film, and common tropes, "male model" is frequently used as a euphemism or a cover story for a "rent boy" (male prostitute) or high-end male escort."
** I despise Starmer, not because I'm a Farage fan or similar, but because I'm a socialist & Starmer definitely is not.
And you call that good?
For us it just highlights how awful the US workplace is because most of the rest of the civilised world gets far more than 7 days PTO, we don't have to use our PTO if we're ill because we get paid sick leave on top of that as well as paid public holidays.
Still enjoying your piss poor "benefits"?
160 hours is a few more than 7 days...
My first job out of university, the official leave policy was you got no PTO at all until your 1st year of employment, at which time you got 40 hours, and went up from there with seniority.
Luckily, my boss realized the official policy was crap and allowed us to take whatever we needed, within reason. We were salaried, so as long as stuff got done, we came and went as we pleased. You know, like adults or professionals (take your pick).
(I'm the AC who posted the "We have a similar PTO system" comment)
As spuck pointed out, 160 hours is 4 weeks, not 7 days - in your first year. After that it goes up (as does the cap). And there are also 9 (I think) scheduled holidays, and 2 "floating holidays" you can take anytime you like, i.e. it's more vacation time.
I, too, am salaried. As unofficial compensation for the (infrequent!) times we have to work late, or being on call (usually 1 call a month, and can be handled remotely), we're told to not use PTO for things like doctor's appointments. "Just don't abuse it" is the rule.
So yes, I think starting with 6 weeks of combined vacation/holiday/sick time in the first year and increasing after that is pretty good. It would be difficult for me to believe that other places would give, say, double that - three months off a year?
"43 percent of business leaders expect to reduce entry-level roles"
So where will the future senior staff, team leaders and "managers" come from? The entry-level job is the first step on the path to all of these. This is the same silly thinking as the drive towards electronic systems mediating all human interaction - eventually the population will die out (think about it).
Right now, many employees [...] aren't well trained to use the tech
If the boosters would just spend some time thinking of good reasons to use it, rather than just proclaiming that it's going to revolutionise everything...
I recently sat in on a Copilot "training" session (provided by Microsoft!) and we were shown how to filter some data in Excel and... check the weather, of all things. I can already do those without a chat bot.
We did a pilot of Copilot. I found it couldn't do bar graphs in Excel, or half the other things I asked for. The other half I could do at better quality than it could. I take slightly longer, but probably still less time than trying half a dozen prompts to get what I want and then double-checking that the results are accurate.
As for summarizing emails and Teams messages, if someone gets too many to read themselves, that's an issue that needs to be addressed, not covered up by an LLM that hopefully is accurate when producing summaries.
Did perform a Teams meeting with Copilot transcribing.
To be fair it does a decent job, but then has a sort of seizure at various points and spits out whatever gobble-de-gook it feels like.
You only spot that 3-4 months later when you're trying to work out what "Langostine the bitch-goblet potatoes" actually meant
>” when you're trying to work out what "Langostine the bitch-goblet potatoes" actually meant”
I frequently see this problem - totally random garbage text - with the current generation of AI-assisted grammar checkers (Windows, iPad & Android, over the last year or so). Didn't have it previously, when the default was to simply keep the typos and poor grammar the human actually entered and give the problematic text an underline - leaving it to the user to review and change if they wished.
… in helping employees toward redundancy by more effective use of AI.
We are in an interim stage where AI can be a useful tool, but is not yet capable of reducing headcount significantly.
It won’t be long until this is no longer true.
I’ve made a number of posts over the last few years decrying AI as being mostly useless for my role as a software engineer.
Not anymore. Now it has become genuinely useful as the ability of AI to create decent code has exponentially improved. I can produce prototypes to validate ideas in a tenth of the time it once took. I can constrain AI to specific rules for each project with much greater accuracy.
This is great for me as a senior engineer, but for anyone starting their career, it’s going to prove catastrophic.
AI is now way more effective than an intern at medium complexity tasks.
However as each day passes, it feels like I’m assisting in making software engineers redundant.
I’m lucky, close to retirement.
If I wasn’t, I’d be scared shitless by now.
It’s the exponential improvement I’ve seen first hand that’s done it. I can no longer kid myself that it only generates slop.
> AI can be a useful tool, but is not yet capable of reducing headcount significantly.
It doesn't seem like AI's capabilities in that area really matter that much.
The actual factor is if corporate executives can _claim_ AI is capable of reducing headcount.
It's worth remembering the primary factor in most executives' compensation is ultimately the company stock price. The people with say-so over the CEO, the Board of Directors, are usually compensated the same way.
So their shared motivation is to pump the share price regularly. Since there's rarely any actual innovation at a lot of these outfits, their playbook (from MBA school, McKinsey Bros, et al) is often along the lines of:
1) latch on to whatever the latest buzzword fad in the marketplace seems to be
2) cut costs, usually by getting rid of employees
Even better if those things line up together, or at least can be made to appear that way. The reality doesn't matter so much -- pay no attention to the man behind the curtain; even better if there's no man at all, since he'd probably want a paycheck.
I work for a company where AI is being promoted heavily internally.
I would actually say that the majority of users are quite positive towards it. More positive than me in general.
We offer internal prompt training, tbh I didn't find it helpful at all.
So we have it, people generally like it and.....
It doesn't really drive any efficiencies for most, because it doesn't really help that much with most people's day-to-day jobs.
For me it lets me do 'more', but only to add things I wouldn't bother with otherwise; so rather than just a written response from me to a query, I might use AI to add a checklist or a little guidance note.
So I'm using it to add glitter. It's not more efficient since I simply wouldn't have bothered with those extras before and they wouldn't have been missed.
The Devs love it and think it's doing a lot of the basic drudge work for them. Which may be true.
Bid guys are using it to generate custom bid documents or parse tender docs
So it helps a bit, but it's not this gigantically transformative tool for most
Once again the 'AI' is not at fault !!!
The 'Holding it wrong' excuse always works.
If 'AI' worked and did the job it was supposed to do people would use it ... because people are lazy and will use ANY tool that makes their job easier.
The 'AI' sales people sell a huge gain on the back of using 'AI' then when it does not appear look around for someone to blame.
Show me someone who is using 'AI' for real and making the huge profits off it that the sales dept pitched !!!
You will never get buy-in by holding a gun to someone's head.
It needs to deliver real value and make people's jobs easier.
P.S.
Who is checking the 'slop' that is being produced now ... ???
Just because the 'AI' machine is producing 1000's of pages of output does not mean it is correct and usable ... other than if you measure success by the poundage of paper produced !!!
:)
I’m not worried about AI taking my job. It can have it. I’m worried it’ll rewrite history afterwards and make it look like I spent the last five years doing nothing except aggressively clicking delete on every email in my inbox. Truth is I’ve been flat out busting my balls doing all sorts of things, working silly hours. But this AI thing’s going to stroll in, generate perfect reports, and quietly imply nothing existed before it and all the workers were lazy.

Think about it: priority job number one for it will be to make everyone prior look absolutely useless and lazy to justify its own existence, even if the reports are not true. Cheers for that. Not only replaced, but retroactively made useless and called lazy. I can see it now, spitting out reports detailing how and why humans were always the problem. At this rate my legacy is going to be “unknown employee, oxygen thief.”

No wonder morale’s on the floor right now. Not only is it going to replace us and take our jobs, it will lie about what effort we put in during those 12-hour shifts where overtime was mandatory but unpaid. AI won’t care about that. There isn’t a thing I can do about this either, as managers will believe everything it spits out.
This is largely the case where I work. Managers see AI as this great fountain of magical things (and powerpoint charts). They just want a slide, lack the skills and ability to determine if the information is correct or not, but have an idea of the answer they WANT to see. Which is not always the real situation. So they're completely OK with wrong answers as long as it supports their bad ideas.
So it will quickly satisfy the managers that are "shopping for the right answer". Why wait for the humans to run an analysis if they can get the answer they want from AI? Seems like the actual contents of the answer don't matter. By the time they realize they've been "managing" based on hallucinations the company will be far too long gone to be saved.
Anon, obviously.
Big "AI" push where I work
My manager & plenty of us had to go on an "AI" course as we were deemed to not be using it enough, all that means is we make pointless use of it to keep our personal metrics up.
e.g. previously we had an automated "AI" integration with the support ticket system that would automatically analyse the ticket, see if a solution was obvious from the ticket itself (and, if not, would analyse based on what it had ingested of other tickets on the system) and update the ticket with its findings.
When a human looked at the ticket they would decide if "AI" commentary was useful (or not)
Unfortunately that showed as zero "AI" use on our metrics (as only certain types of "AI" use show as related to a specific person)*.
So, we ditched that automated approach and when a human looks at a ticket we then fire off an "AI" analysis, using the tooling where metrics are recorded.
End result is less efficient than previous "AI" use as instead of opening a ticket with "AI" analysis (good or bad) already added, we have to manually fire it off: Obviously "AI" analysis takes a while, so you fire off analysis on multiple tickets for efficiency (I wrote a browser plugin to do that to avoid prompt writing, I fire the plugin on ticket web page and it adds ticket url and (editable) prompt configured in the plugin & launches "AI" session using configurable creds in the plugin).
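The batch-firing idea described above (substituting each ticket URL into an editable prompt template before launching the analysis) could be sketched roughly as below. This is a minimal illustration, not the actual plugin's code: `buildAnalysisPrompt`, `buildBatch` and the `{TICKET_URL}` placeholder are all invented names for the sake of the example.

```javascript
// Hypothetical sketch of the prompt-assembly step such a plugin might perform.
// Substitutes the ticket URL into a user-editable prompt template; the template
// is assumed to contain the placeholder exactly once.
function buildAnalysisPrompt(ticketUrl, template) {
  return template.replace("{TICKET_URL}", ticketUrl);
}

// Build prompts for several tickets in one go, matching the "fire off analysis
// on multiple tickets for efficiency" workflow in the comment.
function buildBatch(ticketUrls, template) {
  return ticketUrls.map((url) => buildAnalysisPrompt(url, template));
}
```

The actual launching of the "AI" session (credentials, which endpoint, how the metrics get attributed to a person) is deliberately left out, since none of that detail is given in the comment.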
"AI" sometimes gives useful comments, but also sometimes get's things extremely wrong: It still needs a lot of product & software knowledge t decide in which category the "AI" comments fall. It can be useful on an issue you yourself has not encountered before (but was in an older ticket that a colleague resolved) as would give you the related old ticket, so avoids that sort of search through old tickets grunt work.
(Note: to keep the stupid metrics up, fire off the "AI" request before even reading the ticket! Even if it turns out to be a standard, obvious, well-known issue - got to keep those numbers up!)
Is there any meaningful gain?
A small amount of "grunt work" time saved.
Still have to read the ticket.
Then, if not 100% sure of the solution having read the ticket
Have to read the "AI" analysis.
If "AI" analysis is obviously good then time saved.
If "AI" analysis obviously incorrect then just lost the time of reading "AI" comments, then begin your own investigations.
Worst case is when "AI" suggestions looks superficially plausible, you then (based on using "AI" rules we have to follow) have to investigate that "AI" proposed solution (rather than starting your own investigation going your preferred investigative route) - when (often happens) the superficially plausible "solution" is BS, you can then start investigating how you want to. This can be time costly.
Overall probably some small time savings, but given "AI" costs, more staff would be a more financially cost effective solution to getting through tickets.
* We did say at the time that the metrics were totally stupid, but we are software people not C suite so our views & skills do not count compared to an MBA with an IQ not dissimilar to that of my cat (a cute, friendly & lovable cat, but definitely amongst the most dim of all the cats that have shared a home with me).
Do any of the metrics for usage of AI include all the time spent telling it that it's a useless pile of garbage because it has confidently answered your prompt with incorrect information a child could spot - complete with a made-up chain of validation?
I "play" a lot with Gemini as a means of keeping track of how one of the market leaders is doing. Earlier today I asked it to confirm the price and content of a new product being released later this year - this product is described by the manufacturer, including contents but not price. Gemini gave a completely incorrect contents list - linking to the manufacturer product description, and confirmed the release price (which has not been released).
When I challenged it, I got the standard "Good catch! Actually, I hallucinated that, basing my answer on a previous product by a different name from the same manufacturer released 3 years ago."
I wouldn't trust AI for an accurate spagbol recipe without double checking it elsewhere. It'd probably include "3 teaspoons of ground glass" or something just for the lolz.