
Doh!
Yet another reason I'm so glad I'm retired. Commiserations for you folks having to deal with this crap.
Among the optimism and opportunities perceived around AI agents, Gartner has spotted some risks – namely that organizations might create "thousands of bots, but nobody now remembers what those bots do or why they were built." The collision of muddied management thinking and much-hyped autonomous agents will be interesting to …
Yeah, until your retirement account is upgraded to management by agentic AI bots, so as to enhance your user experience, with improved security and management options ... Mine's already so geoblocked that I can hardly access it from where I live, in a dungeon, abroad. Soon enough, cash and precious metals, stuffed in a mattress, might be the only reliable way to ensure the availability of these hard-earned funds, again!
In this day and age, anything computerized seems to be at risk of becoming agentic-AI restricted from access and management by its proper owner imho, for our own safety and benefit of course (licensed software, cars, bank accounts, studded straitjackets, ...).
Yet another reason I'm so glad I'm retired.
The manglement at work have gotten drunk on the AI kool-aid, and coupled it with their latest initiative to create a fungible workforce. Whilst I understand where they are coming from on the latter, it's the execution of it that I have a problem with. I thought this job would be my last till retirement, hopefully in a couple of years - now I think I'll jump before that.
Our company has decided to fill Teams with bots. They can't be uninstalled, they're named something cute that bears no relation to their function (so who knows what they're for now), and all they seem to do is randomly and suddenly nag me to sign in when I'm already signed into Teams. Thank goodness we are now all more productive.
The room/desk-booking system at my company now features an AI "assistant". I asked it to tell me the airspeed velocity of an unladen swallow, but it did not understand the question. It was also unable to tell me the answer to Life, the Universe and Everything. Clearly, its training data did not include any of the classics.
AI seems good on initial glance because it's basically a pretty-good bullshit generator. But as soon as you ask questions about accuracy, truthfulness, and judgement you find AI is actually not very much use to anyone.
If you write software by copying patterns of code you've found on the internet, without understanding what they do or how they work, your code will be useless. That's what AI code-writing agents do: they simply find something similar that appears to match what you've typed so far. They have no concept of "understanding" or "intelligence".
I've yet to see an AI product that I would actually use.
I've never seen one that does something I couldn't do better with the same data, a tiny bit of code, and a slow processor, whereas they cost billions to train an inadequate statistical model.
It's glorified autocomplete (especially LLMs, which are literally this) but without the rigour and predictability. And I don't see any evidence that they actually "learn"... if they learned you wouldn't have to keep "retraining" the model... you could just expose it to the data, provide it with corrections and off it would go on the SAME MODEL adjusting as it encountered new information.
They reek of an over-complicated statistical model that - like all statistical AI models - requires you to "train" undesirable behaviour out by overwhelming it statistically. If you trained it on 1,000,000 pieces of incorrect data, then to train it to be correct you need to retrain with 1,000,001 pieces of data to the contrary, and so on. It's just working on the probability it finds in the dataset, nothing else.
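To put "it's just working on the probability it finds in the dataset" in concrete terms, here's a minimal, purely illustrative toy - a bigram counter knocked up for this post, nothing to do with any real LLM's internals: the "answer" is simply whatever follows most often in the training data, so a wrong association only disappears once the corrections statistically outweigh it.

# Toy sketch only: "autocomplete" by counting which word follows which in the training data.
from collections import Counter, defaultdict

def train(corpus):
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1   # pure frequency counting, no "understanding"
    return follows

def complete(follows, word):
    # pick the statistically most common follower of the given word
    return follows[word].most_common(1)[0][0] if word in follows else "<unknown>"

# Three wrong examples vs one correction: the wrong answer still wins on probability.
model = train(["the moon is cheese"] * 3 + ["the moon is rock"])
print(complete(model, "is"))   # -> "cheese", until corrections outnumber the errors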
And the nonsense about just using canned phrases to CONTROL ITS RESPONSE (where the LLM creators feed it huge hidden initial prompts that tell it what it should and should not answer, etc.) is the biggest load of manure I've ever seen in my life. It's so easily overridden (again, statistically) and so un-rigorous that it's worthless.
Another few years and the fad will die and be consigned to the "oh, look, it can paint a picture" level of app again, and we can get on with some real work on AI. Like actually trying to solve the inference problem (which is what you're talking about), rather than continuing to build statistical models that we somehow hope will magically learn, turn intelligent and form AGI spontaneously.
Since the 60's the cries have been "if only we have more processing power", "if only we had more connectivity", "if only we had more training data", "if only we had more funds"... then AGI will just jump out of the ether like the soul of a person and somehow magically become intelligent.
Turns out, now that we're literally spending billions of dollars training models on billions of nodes, each with billions of instructions per second of processing, on the ENTIRE INTERNET of data, that just hasn't happened.
So maybe now is finally the time to just go away quietly and think about WHY that is and WHAT we actually need to do, rather than cross our fingers and hope Frankenstein's monster just jolts into life after cobbling some body parts together.
I've been saying that ever since I studied AI in the late 90's, and nothing has changed except that we now have LITERALLY what people were asking for... and it's made absolutely no difference to the actual intelligence of the system.
Translation is actually a machine learning product that works pretty well. Of course, this is nothing new. We've been using it for a couple of decades, even though the current translators are much better than ancient Google Translate (for instance). You still need to check and correct the output. But it's got to the point where it is consistently easier to start from machine translation than from scratch (which has not always been the case). And you generally get usable results even when translating to languages you are not able to check and correct.
AI seems good on initial glance because it's basically a pretty-good bullshit generator.
Recently there was a presentation by a group of senior manglement - all I could think, during and after it, was how fluent they were at spewing out gobbledygook. They would be ideal candidates to be replaced by AI - we'd not see much of a difference.
Institutional problem. Groups and shared email addresses get created and then forgotten when the creator leaves and never tells - or even meets - their replacement, or the person whose newly created job should cover those responsibilities. There may also be security issues related to things like this that aren't documented or passed on to new employees or replacements.
They are busy creating "an ideal customer experience", although they have no idea who their customer is, what they want, or how to achieve any changes - but that's OK, because the customers will LOVE the ideas our marketing team came up with, and even though the uni graduate we have developing it couldn't quite get it working, the customers will appreciate its potential...
I mean Excel is actually really good at what it was designed for - being a financial spreadsheet. Somewhat less good when used as a predictive device or database.
The problem, surely, is that both are highly customisable tools: they can be made to do things we do not fully understand, can be programmed in ways that are obscure and convoluted, and are for some reason relied upon as having almost divine oracular knowledge to impart to us mortals. Yet both are really easy to make mistakes with when building them, including use outside the boundaries the programmer/creator expected.
So have agents supervise agents. And those agents supervised by others, greater still. And so on, ad infinitum.
Or just have all agentic actions presented to a central entity at runtime, before execution, along with the justification for the action. Something like an ERP. That way agentic actions can be validated and audited.
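As a rough sketch of what that central entity could look like (all names here are hypothetical, just to show the shape of the idea): every agent action gets submitted with its justification, checked against a policy, written to an audit log, and only then executed.

from dataclasses import dataclass, field
import datetime

@dataclass
class ProposedAction:
    agent: str              # which bot wants to act
    action: str             # what it wants to do
    justification: str      # why it says it needs to

@dataclass
class ActionGateway:
    policy: callable                          # validation rule: a rules engine or a human sign-off
    audit_log: list = field(default_factory=list)

    def submit(self, proposal, execute):
        """Validate and audit the proposed action; run it only if approved."""
        approved = self.policy(proposal)
        self.audit_log.append(
            f"{datetime.datetime.now().isoformat()} | {proposal.agent} | "
            f"{proposal.action} | {proposal.justification} | "
            f"{'APPROVED' if approved else 'REJECTED'}"
        )
        if approved:
            execute()
        return approved

# Example: only actions explicitly scoped to the sandbox get through.
gateway = ActionGateway(policy=lambda p: p.action.startswith("sandbox:"))
gateway.submit(
    ProposedAction("booking-bot", "sandbox:reserve desk 42", "user asked for a desk"),
    execute=lambda: print("desk reserved"),
)
print(gateway.audit_log)

The point being that the approvals and the audit trail live in one place, rather than scattered across whichever agent happened to act.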