Electric Monk?
Just brand it “Electric Monk” for Slack and no-one ever needs to apply any thought/effort to their interactions with colleagues…
Douglas Adams was close to the mark again!
Slack previewed generative AI tools on Thursday, aimed at helping boost worker productivity by automatically summarizing text in channels users might miss, or even drafting messages of a specific tone or length. The fresh features are currently under development and bundled under the banner Slack GPT. Customers will be able …
People are using AI to help them do their job. Eventually, the AI will know enough about the job to do the job by itself. You, the trainer, lean back to watch TV while the AI does your job. After 3 months of flawless work by the AI, you get a call from the company, announcing that you have been made redundant.
I actually went through something similar about 15 years ago. I worked with a rather large telecom that had a penchant for buying small companies and not integrating anything. Finally, the Powers That Be came to us and asked what they could do to help us do our jobs better, and we made the HUGE mistake of telling them. For 3 years they worked on our suggestions, and eventually worked out ways to integrate everything. The job became so easy a child could do it, and productivity increased to the point that nobody had to work hard at the job anymore. Next thing you know, they sent the work overseas and walked almost everyone out the door, keeping only a few US-based techs to tackle the occasional hard problem. I still work with those guys from time to time, and they are miserable.
My current company recently started asking what they can do to make our jobs easier. I haven't responded to The Boss on this, but I have relayed the above story to my coworkers.
Back in 2011, when IBM was still involved in artificial intelligence technologies, they put up their latest “Watson” software against humans on a quiz show about arcane trivia. And for its final, on-air answer (not long before it was retired For Good) it spectacularly guessed “Toronto” as the “U.S. City” whose largest airport was named after a WW-II hero and whose second-largest was named after a WW-II battle.
(Hint: “Toronto” ain’t a “U.S. City”, not even to this day.)
Methinks A.I. is not yet ready for mission-critical tasks; it's still liable to commit Stunned Blunders, ye jest don’t trust et!
https://www.cs.toronto.edu/~sheila/384/w11/why-toronto.pdf
It's fine. AI is fine. People have been concerned about AI for years, but it's fine. We've had the Turing Test to determine if AIs may be becoming sophisticated enough to pass for humans. To prevent tomorrow becoming today*, tomorrow and too late, the President has appointed VP Harris as AI tzar. Or some other title TBA, because tsar sounds a bit too Russian. But basically anything that demonstrates more intelligence and can fool one Harris is too dangerous for America, and thus the Free World. The Harris will set the gold standard. No machine will be allowed to become more self-aware than a Harris, and Judgement Day will be avoided.
*See also- https://factcheck.afp.com/doc.afp.com.33E89QM which points out that some versions of the Harris have been mildly altered, but the alterations don't really change the word salad much.
As AI worms its way into more and more areas, there will come a point where someone higher up the food chain will think and then say out loud:
"If you are using AI to do most of your work, what the hell are we paying you for?
In fact, why should we continue paying you at all?"