To be pedantic: you should only trust code you understand.
I can compile some nasty little bits and bytes while understanding naught of it. Might even have written the nasty bits myself if I were so inclined...
17 publicly visible posts • joined 24 Nov 2023
I don't exactly disagree, but I think there are narrow use cases. As an example, a chatbot that can take free-text user input, then analyse it and decide which canned chat journey to put the user on (or direct them to a real person).
It's a hybrid between 'pick from these options' and 'go describe things in your own words'. The AI's involvement? A single chat step that takes the ambiguous input and puts it on an appropriate track.
So there can be some benefit. However having an AI handle the entire interaction with the user is a dumb idea... And I've now seen it far too often.
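That single-AI-step pattern can be sketched roughly as below. Everything here is made up for illustration: a trivial keyword matcher stands in for the one LLM classification call (which, in a real system, would be constrained to return exactly one of the known journey keys), while the journeys themselves stay canned and deterministic.

```python
# Sketch of the 'one AI step, everything else deterministic' pattern.
# route_intent() stands in for the single LLM call that maps free-text
# user input onto a fixed set of canned chat journeys.

CANNED_JOURNEYS = {
    "billing": "Starting the billing journey...",
    "outage": "Starting the outage-report journey...",
    "human": "Transferring you to a real person...",
}

def route_intent(user_text: str) -> str:
    """Stand-in for the single ambiguous-input step.

    A real implementation would be one LLM classification call whose
    output is constrained to the keys of CANNED_JOURNEYS.
    """
    text = user_text.lower()
    if "bill" in text or "invoice" in text or "charge" in text:
        return "billing"
    if "down" in text or "broken" in text or "not working" in text:
        return "outage"
    return "human"  # when in doubt, hand off to a real person

def handle_message(user_text: str) -> str:
    # Deterministic from here on: look up the canned journey and run it.
    return CANNED_JOURNEYS[route_intent(user_text)]
```

The point of the shape: the AI touches exactly one narrow decision, its output is forced into a small closed set, and a safe fallback (a human) catches anything it can't place.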
At what point does it just become more effective to create simple, deterministic processes, or to do the work yourself? Rather quickly, I'd venture.
I posit that almost any task you need an agent to handle correctly will fall into one of two categories:
1. It's simple enough that you can handle it almost entirely through a simple chain of processes in low-code solutions (Power Automate et al.), and end up with a much more reliable outcome, much more quickly.
2. It's complex enough that to get a reliable output you may as well write a full UML diagram alongside your prompt and feed it to the LLM. At which point just write the code yourself I guess?
The main way I could see these "agents" being useful is handling ambiguity within a narrow scope; a single cog/process as part of an otherwise classic and deterministic chain of processes.
But of course the zealots continue to insist on all or nothing. After all, if it can't do *everything*, why bother, right?
When (if?) AGI rolls around I fully suspect we'll have been dumb enough to train it on the entire corpus of the internet. Including the plethora of research articles regarding exfiltrating data over air gaps.
£10 says we'll be dumb enough to not air gap it in the first place. £20 says we'll try and fail miserably.
I've found that doing CompSci in college, and writing my fair share of boilerplate instant-legacy Java at the time, has helped me better handle various aspects of IT in my career - writing better PowerHell scripts being one example.
I can't shake the feeling that most devs would benefit greatly from the inverse: working in IT for a year or two.
An issue I find most prevalent when dealing with outsourced support teams:
Ticket: "Is it possible to do 'this' ?"
Support: "Yes, that's all done for you now"
That... is not... what I asked. Undo that right the hell now.
I'm unsure what's so difficult to understand about a query for information.
Isn't that just the entire problem? Screw mere sandboxes...
With all the inventive ways we have managed to exfiltrate data over air gaps, we can't even be entirely sure that "correctly" air-gapping a hypothetical AI capable of exponential growth will be sufficient.
In cases like this, where possible, the most appropriate route is to get these systems on a subnet disconnected from the internet for security/compliance reasons. And ideally have that entire VLAN unroutable from other nets.
If they also need to be managed remotely 24/7... Well, you're kind of out of luck for some certifications, but well-managed firewall rules and conditional access restrictions go a long way.
I remember back in college working on my project for Comp Sci, worth approximately 60% of the grade.
Always kept the code on a USB stick to work on in class or at home (just in case I dreamt up a solution at 2am and needed to wake up and code it before I forgot...)
Anyway, predictably, the thing dies about a month before the submission deadline.
Luckily I had the mental wherewithal to always copy its contents to the local machine I was working on before unplugging it... Crisis averted.
Funny story that.
I had the inverse: I had to point out to my bank that they were trying to send my solicitors £3m instead of £30k for the house deposit... It only didn't happen because the screen was facing both of us and I could see them typing it out.
You'd think that would be the end of it, but then they do it A SECOND time a few seconds later, which I duly pointed out.
While I'm hopeful this would have triggered some internal anti-fraud measure, at this point I'm honestly not convinced.
Unfortunately the additional layers of law and litigation will only make it prohibitively expensive to start up or run a small business.
And they'll likely do little to curb the greed of IT-illiterate shareholders. In large enough organisational structures, there's also a limit to how far you can hold the higher-ups to account when a single mistake or omission may have been made by the boots on the ground.
But I do agree with the sentiment that something must be done, I just don't think law alone is the best way to do it. Just look at all the GDPR non-compliant cookie implementations about...
If we were to truly go above and beyond in nit-picking, the technically correct use of Lego is LEGO in all caps, or LEGO™ or LEGO®.
And it is only ever an adjective, not a noun, if the company had its way.
I'll happily take the down votes on the chin for being an ass in my first sentence. But hold my opinion that language is not a static affair and evolves over time; policing it to this extent, in my opinion, is a waste of time.
Up voted to adjust for irrelevant pedantry.
Regarding the made-up convention on the plurality of 'lego' - which brings to mind the old debate surrounding GIF - keeping track of the general inconsistency of the English language becomes a tired charade in edge cases like this, although at least there is precedent for the singular matching the plural in idiosyncratic ways (moose/moose vs goose/geese).
My opinion stands that there is not enough time in the world to care about these kinds of grammatical and phonetic exceptions. It is, after all, pronounced GIF.
At this point in time with tech more important than ever, and only ever more complicated, it's a must for any business which loses more than a few £ per hour due to outages.
For all those reasons, we've just gone through the process at our company of deploying calendars in Asana for each department, scheduling the monthly/quarterly/yearly tasks that are easily forgotten and must be done, then rolling them all up into a company-level calendar overseen by the Ops Director via some small automated rules.
So you're saying a company delivering important services in a highly regulated sector failed to set and test appropriate RTOs and RPOs for critical systems.
I mean... They probably did, but clearly the testing didn't cover this scenario properly.
When are people going to learn to do the basics properly, rather than only after it bites them in the ass?