ChatGPT, comment on this article in a pithy and thought-provoking way.
I'm going for a cuppa.
Some knowledge workers risk becoming over-reliant on generative AI and their problem-solving skills may decline as a result, according to a study penned by researchers from Microsoft Research and Carnegie Mellon University. In a paper titled “The Impact of Generative AI on Critical Thinking”, the seven researchers report and …
I came into work today and that stupid advertising, I mean lock screen, had the following useful information.
Top right: "The end of the year is upon us once again and the holidays are coming up"
Centre: "Prioritise your well being this holiday season with Microsoft Search ...."
Checks date. Yup 10th of February.
Still makes a change from all the Copilot crap...
Bearing in mind that most people who cite Dunning-Kruger¹ don't really understand Dunning-Kruger and don't believe it applies to them².
I will now retreat to my hardened underground downvote shelter until the storm abates...
________________
² Engineer Syllogism. Also, probably, the poster of this comment. DK is recursive.
No, upvote from me. I solve the "Dunning-Kruger effect dilemma" by explaining it this way:
There is the "popular" Dunning-Kruger effect and the "original" Dunning-Kruger effect. Normally I should write the first in caps, bold², blinking, underscored, italic, inverted, and font size="+100".
It's so simple:
If you think Dunning-Kruger applies to others, you're a victim of the effect.
If you think the Dunning-Kruger effect applies to yourself, it doesn't.
If you let this sink in, it is much easier to understand any comments that mention Dunning-Kruger. On the downside, you are less likely to cite Dunning-Kruger [1] yourself.
[1] Kruger, Justin; Dunning, David (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology. 77 (6): 1121–1134.
They cite Bainbridge's 1983 "Ironies of automation" right in the intro, after Socrates, Trithemius, and teachers of arithmetic, which is quite sensible imho. They then go on to state:
"by mechanising routine tasks [...] you deprive the user of [opportunities to] strengthen their cognitive musculature, leaving them atrophied and unprepared when [thinking is needed]."
Homerun! Welcome to the age of cerebral obesity, clogged neurons, and synaptic arrhythmias, for which new types of diets, pacemakers, and bypass surgeries will likely be required. Couple that with the hormonal tendency of grandiose models of language to turn any red-blooded meatsack into a stark raving lunatic (by speech alone), and you end up with a public health nightmare scenario of stupendously colossal mammoth proportions!
GenAI should be available by MD prescription only, in experimentally-verified metabolically appropriate dosages IMO!
Each time I had a question it completely failed. I only went there three times with actual questions; it didn't grasp the topic, and came up with solutions that are either invented nonsense or programming solutions that contradict each other and cannot work. It is still just a mix-and-match tool. If no one has posted the solution to your problem on the net, "AI" won't come up with it either.
@HuBo: Your "deprive the user of their cognitive musculature" is spot on.
All that automatic stuff is only a half-fit anyway, and that applies to cars as well. I drive a manual, 'cause an automatic is weird with its gear choosing and cannot look ahead to what I'll need in a few seconds (electric is fine, but not until my current car breaks down). Automatics are quite often compensated for with a 20% to 50% stronger engine. The automatic wiper control either wipes not often enough or too often. I am lucky NOT to have the automatic collision-warning braking system, 'cause I see way too many phantom brakers on the street. Bought new in 2018, and nothing is touch control (though a touch-control radio would have been optional). Rear-facing (always dirty) camera: no need, the Fresnel lens I attached is better.
No wait, one thing is touch: The ~130 € Garmin navi :D.
Oh, that is a nice one. Which Soma color do you prefer?
Each time I had a question it completely failed. I only went there three times with actual questions; it didn't grasp the topic, and came up with solutions that are either invented nonsense or programming solutions that contradict each other and cannot work. It is still just a mix-and-match tool. If no one has posted the solution to your problem on the net, "AI" won't come up with it either.
Just this, it's similar to Google 25 years ago - before it went crap. Ask it a question and it'll give you an answer. If you're lucky it may in fact be correct. What it will not be is presented in the proper context.
The difference is that old Google would always present the source material front and centre; if the first result happened to be a passing reference that just happened to trip the issue at hand, it was easily dismissed. There would generally also be a ten-minute read discussing the issue in the round. The problem even there was that people couldn't be bothered, far too time-consuming, as opposed presumably to going to a library and reading a book on the issue, perhaps actually gaining some insight along the way. So you end up with the single-sentence answer used as definitive, without the underpinning to establish whether the answer is complete or appropriate, or even whether the question makes sense.
AI, at least in its current form, is exactly the same but more difficult to diagnose because of the surrounding verbiage. Sure, it all looks relevant, but a critical eye often shows it to be nothing more than "Here's what else I know about that topic". There's no indication as to whether your question omits some key understanding that you didn't know to ask about in the first place.
Last summer work was really pushing Copilot (they must have spent money on it) so I figured it was worth playing with. At the time I had just built a new shed from a flat pack, so I asked about lining it. Each time I asked I got a different answer: "So I have the outer skin, the insulation, the breathable membrane and the inner lining, what order do they go in?" would give one. Asking where the damp-proof membrane goes gives a different answer, often omitting the breathable membrane unless led to do so. Following that up with a query about an air gap gives yet more contradictory results. By the end the lining was going to be 8" thick and use a third of the interior volume of my shed. I decided to talk to someone who knows what they are talking about instead.
In a similar vein, I've just set it a deliberate trap that demonstrates exactly my point: "how do i change the spark plugs on my diesel car?" The answer:
Changing the spark plugs on a diesel car involves a few steps. Here's a general guide to help you through the process:
Preparation: Make sure your engine is cool and disconnect the battery to avoid any electrical issues.
Locate the Spark Plugs: Refer to your car's manual to find the spark plugs. They are usually on the top or side of the engine.
Remove the Ignition Leads: Carefully remove the ignition leads (thick, insulated wires) attached to the spark plugs. It's best to do this one at a time to avoid confusion.
Remove the Old Spark Plugs: Use a spark plug socket and an extension if needed to unscrew the old spark plugs.
Prepare the New Spark Plugs: Check the gap on the new spark plugs using a feeler gauge or gap tool. Adjust if necessary according to your car's specifications.
Install the New Spark Plugs: Apply a small amount of anti-seize compound to the threads of the new spark plugs and screw them into place. Tighten them with a torque wrench to the manufacturer's specifications.
Reconnect the Ignition Leads: Reattach the ignition leads to the new spark plugs.
Reconnect the Battery: Once all the new spark plugs are in place, reconnect the battery and start the engine to ensure everything is working properly.
Would you like more detailed instructions or a video tutorial to help you through the process?
"How do i change the spark plugs on my diesel car?"
Copilot said: " Actually, diesel engines don't use spark plugs. Instead, they use glow plugs to help start the engine, especially in cold weather. If you're looking to change the glow plugs on your diesel car, here's a general guide to help you: [...]"
Progress!
I have the only working installed copy of Google Desktop for Linux. Old school pre-AI Google Boolean keyword search.
I asked ChatGPT to update the only Linux version I could find, a 2008 32-bit build.
Several hours of feeding error messages back to ChatGPT and Google-searching for info.
Result: a 64-bit .deb.
Works great.
You must have met my subcontractors in the Indian subcontinent - except their code comes primarily from:
Stack Overflow
Programiz
GeeksforGeeks
freeCodeCamp
I guess it will come from AI more in the future.
I just ask "Did you understand what the code does in the code review and was the 'programmer' able to explain it to you?"
Usually get blank faces and their management blustering until the end of the meeting, at which point I just reject the code - again!
Same here at work, except that my company is the contractor. As a cost-saving measure, the client hires low-cost employees. I need to do a bunch of knowledge-transfer sessions to train a few team members similar to those you described. Ironically, I'm training them to replace me by the end of this year. I haven't reached the stage where I could do code reviews as you described; here we started lower. First I need to teach how to make a pull request, and some git basics.
The copy/paste mentality of my "trainee" is hard to break. They only use Windows with the Notepad editor. To refactor a simple variable, they go line by line to replace it; I showed them a tip to go faster by just using the editor's replace function. cat, grep, ls?!?! Forget it. Windows is the only tool allowed. IntelliJ? Yes, but used as a glorified Notepad; the advanced features of the IDE are ignored.
The "copy/paste first" mentality smothers all creativity. Reading code (and interpreting what it means) seems to be secondary. The warnings and guidelines I put abundantly in comments don't seem to be yielding any positive effect yet.
In my specific case here, I think it would be much simpler if I trained an AI to replace me. At least I would learn something useful.
So when you have no idea what you're doing, you trust the AI's hallucinated answers, because you lack the ability to comprehend exactly where it's gone off the rails. And since you're obviously in a job you're not qualified for, you tend to lean on the AI to do your job for you. That all makes total sense?
And there was I, going to comment on who is using AI. Well, here at least: Management.
Oh, and the Directors.
The one saving grace, however, is they've acknowledged the 80/20 law: That AI has to be correct 80% of the time and the other 20% it's close enough, meaning it's 'Good enough' to use. Else it's too great a risk to use.
But they use it for formatting emails and reports, and for quick data analysis to suggest trends and areas for further attention, but thankfully not for actual decision making.
At least, not yet.
What I find hilarious about this misappropriation of the 80/20 rule to a language model, of all things, is that you have no idea where or what that 20% is without going over it with a fine-tooth comb, and when it is wrong it isn't "good enough", it's catastrophically wrong.
The C-Suite regard this as "I can fire 80% and the 20% will pick up the slack!".
Not that I would want to stand in the way of the inevitable march of 'progress' here.
"Some knowledge workers risk becoming over-reliant on generative AI and their problem-solving skills may decline as a result"
This has already happened so much that I've bookmarked every AI-detector site I could lay my hands on!
Recently, I spent hours trying to figure out what a list of questions written by someone else actually meant, only to discover at the end that it was all BS generated by ChatGPT!
Civil engineer here. We are already getting pushed into using generative design tools. There's this little thing called CDM (the Construction (Design and Management) Regulations) that makes you liable for the safety of your design, up to and including charges for corporate manslaughter. What fuck-wit is going to put themselves in a position where they have to defend their design in court and their defence is going to be "Not down to me yer honour, it was ChatGPT what done it."? And yet our management (who rely on individual responsibility for design to get themselves off the hook for corporate responsibility) are pushing this AI on us. Yes, we are supposed to check what the AI tells us, but errors slip through (1 in 100 is typical in checked transcription, let alone complex trains of thought).
Elon's going to be putting AI chips in our heads soon, so that will be it for the proles of humanity.
Only two years left to get through, then I'm done. After that it will just be navigating the AI-filled pension world. Looking forward to the post-Terminator Mad Max years.
Oh yes! Even before AI, we already had the issue that students had a hard time grasping orders of magnitude in the results of calculations done by calculator or computer (as compared with slide rules), so they couldn't pre-validate that their results made any sort of physical sense whatsoever. I just can't imagine how many-fold that's going to be compounded by outputs produced by authoritative-sounding genAIs that already design humans with multiple heads, detached limbs, and cyclomatic torsos!
Our infrastructure is about to get a whole lot more "interesting"!
Yes, this. A long time ago, when I was checking and authorizing results, I invariably used mental arithmetic. A checkee said that he was impressed that I could do that so quickly in my head. I disillusioned him by saying that I was just approximating: e.g. a calculation like 3.73 x 1000/40 is approximately (a bit less than) 4/4 x 100, so a bit under 100, call it ~90. He said that isn't accurate; I said that all I was interested in was that the reported number was ~90 and not 9.3 or 930 or 0.15, particularly as the uncertainty on the "correct" number was probably > ±3.
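For anyone who wants that habit written down, here is a minimal sketch in Python of the same order-of-magnitude check. The function name same_magnitude and the example numbers are mine, purely for illustration; they aren't from the original exchange.

    from math import floor, log10

    def same_magnitude(reported, estimate):
        # Crude sanity check: do the two values share the same power of ten?
        # (Values sitting right on a power-of-ten boundary can trip it; it's only a sketch.)
        return floor(log10(abs(reported))) == floor(log10(abs(estimate)))

    # 3.73 * 1000 / 40 = 93.25; the mental estimate was "a bit under 100", call it ~90.
    print(same_magnitude(93.25, 90))   # True  -> the reported figure is plausible
    print(same_magnitude(9.3, 90))     # False -> a slipped decimal point is caught
    print(same_magnitude(930, 90))     # False -> so is a factor-of-ten slip the other way

Only the power of ten matters here, which is exactly the checker's point: ~90, not 9.3 or 930.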
The fun part about software is that it's completely unregulated. We've been screwing things up at scale for 70 years now, and we're still skipping regulating the software side of things and moving straight on to vague 'AI regulation' instead.
The advantage of Civil Engineering is when you guys catastrophically screw up, everyone knows what you did while they're digging through the rubble, and even the great unwashed understand the need for oversight.
From David L. Parnas, one of the preeminent pioneers of software engineering, who developed the important concept of information hiding, so crucial to modular programming, among very many other accomplishments:
"When someone builds a bridge, he uses engineers who have been certified as knowing what they are doing. Yet when someone builds you a software program, he has no similar certification, even though your safety may be just as dependent upon that software working as it is upon the bridge supporting your weight."
...and...
"THERE ARE NO STANDARDS FOR COMPUTER PROGRAMMERS and no group to certify them."
While I agree with the final statement, we've been building bridges for several tens of thousands of years. For programming, at the speed of technical development, the equivalent would be somewhere between 250 and 1,000 years.
Yet there are standards for programming and testing, just not certified or required. NASA HAD (note the past tense here) such standards, where the test team was nearly as big as the development team, and both had to be satisfied. And that is only part of what was actually behind it. At astronomical cost (literally).
At a much smaller scale, real software testing is still applied, though only in very limited environments, usually with no UI except for a few buttons to press. Elevators are one example with a lot of buttons to press, but limited to the needed functionality if done right. (Now, where is the "Doom runs on an elevator" example, 'cause "limited to needed functionality" gets stretched a lot lately...)
That thing about moving from information gathering to information verification. How? If you've asked the AI to fetch some information for you, then that most likely means you don't have the information, and to verify what the AI has given you, you have to go back to the source, i.e. find the information yourself anyway.
I may be missing something but I'm not getting how this helps.
Rosie
Now if you get a task you don't understand, you may assume AI has the answers.
Most of my lack of understanding comes in the form of "I don't understand why the heck you're wasting everyone's time with this task."
I can program a shell script to give me that answer.
> AI tools, the team suggests, should incorporate mechanisms to support long-term skill development and encourage users to engage in reflective thinking when interacting with AI-generated outputs.
So we should teach AI tools to teach people to think?
Am I the only one who sees an unnecessary middle-man in this configuration?