With Gartner's track record
Will they be using AI on their Itanium workstations?
Mainstream adoption of AI in the office and among employees remains around two years off, according to analysis from consultancy Gartner. So-called “everyday AI” is at the peak of inflated expectations in the view of the global research biz, yet …
> Payback from office AI expected in around two years
Does "payback" mean revenge?
That the managers who recommended AI solutions will find themselves replaced by a computer program that falls for all the hype of new, shiny solutions and recommends them upwards, with no understanding of what they do or what their limitations are?
Or as we call them here, the "Ministry of Stating the Bl**dy Obvious".
When I worked in the public sector, senior management lapped up the annual Gartner reports like they were gospel. Always reminded me of the tale of the emperor's new clothes. I'm surprised organisations still fall for their guff in this day and age.
As this article alludes to, machine learning is now a giant gamble that affects us all. When Zuckerborg had a brain fart and thought spending the GDP of a small country on us all wearing VR headsets was a good idea, he was really only throwing away Facebook investors' money. With machine learning, though, so many large companies have thrown so much money at it that if there is no return on the investment it will affect all of us. Massive job losses and stock market crashes hurt most people eventually. Maybe society needs a way to temper this kind of 'innovation', because gambling with other people's money is too much fun.
I am wondering who or what is going to read or ingest these "reports"? More AI?
Hordes of AI "assistants" all "eating" the shite the others are producing.
Even in nature that won't work out in the long run without the injection of energy from a bit of sunlight (photosynthesis ;) or some other energy source, as all you end up with is a great pile of dead shit.
"summarise chats and email messages to services that can write a report with minimal guidance"
and then summarise the report back to itself...
The assumption behind the obsession with summarisation as the killer app (the *only* app?) seems to be that we are all shit communicators who have no idea how to express ourselves effectively or any ability to comprehend what others are saying. Which is probably fair, but AI ain't gonna change that.
To quote Galaxy Quest - "Explain, as you would a child"...
> Workstyle analytics suggests we need a clear focus on curating the analytics required to optimize the combination of technology, talent and business outcomes facilitated by the digital workplace
BAM! What a knockout sentence. Who isn't impressed? I can imagine the report's creator literally crying with happiness as the words appeared before them, much like the words of God appeared to Moses on the Mount. Proof that it really works. Now for the implementation ...
If anyone from Gartner reads this: if you're so sure, let's make a bet. Let's pick some milestones to be hit by [consults calendar] August 16, 2026. Let's pick a stake and some judges.
Of course nobody from Gartner would ever take such a bet, because they know they have no idea what the state of the industry is going to be like in two years, and it's not their job to know. It's their job to make whatever mouth noises will support the case of whoever hires them.
Lines up with personal work experience. We spent 10 months developing a product based on gen AI. Total accuracy was around 30% for the first 6 months. We nearly scrapped the project, but after several reworks we got to 80%. It has allowed a team of 2 to do the same work as a team of 6. Same work output and quality.
The biggest pivot was moving away from the idea that it could be fully automated. Instead, it does most of the busy work and a human can make final decisions.
AI has been awesome for finding information, summarizing, and giving suggestions. It's abysmal for making critical business decisions.
When AI is able to spot the weak signals that are potentially important, and to identify minor mutterings that have great significance, then I'll be impressed.
Until then (perhaps 2324) all AI will be good for is the sort of thing Gartner themselves do - interview a bunch of self important bigwigs on what they think the future is, spew out in a report the median view, and then sell this report back to said bigwigs.
I've tried using AI to summarise organisational annual reports I've written, and the results were dismal. Apparently written in coherent English, but utterly unable to separate the routine from the significant.
In the workplace we got used to the "six-week event horizon", which meant that anything over six weeks out was going to happen "sometime in the future" and could be postponed indefinitely, as it was just long enough for everyone to forget exactly what was promised.
We've seen similar event horizons with other technologies, fusion energy being an obvious example. (Note that this doesn't mean the technology won't eventually yield results, they're just not around the corner.) AI, being primarily composed of existing hardware technology and a lot of hot air (both from the hardware and the salesforces) is going to yield tangible benefits for all "real soon". Two years being long enough for everyone to forget exactly what was promised.
The only thing that's certain about the effects of mass AI adoption on most of us is that it's going to cost us. Any jobs that can be eliminated or downgraded will be. Prices will be sliced 'n diced to the nth degree, ensuring that everyone yields the maximum (i.e. everything's going to cost too much). We should do better than this, but historical experience tells us that whatever use AI has will be rapidly and irrevocably subverted to the single goal of screwing as many people as possible.