Re: Microslop
The analogy is missing one step, said AI client would need to autonomously access the web behind your back and compromise your security.
This is much worse than having a chat client installed. This is autonomous malware.
252 publicly visible posts • joined 27 Apr 2022
I have a friend who breathlessly falls for every AI fad, he's been excited about OpenClaw (ClawdBot) for about 2 months at this point. That's how I knew for sure I'd need to look into it much deeper before I considered using it.
Having an automated LLM with access to all your accounts, that can't tell the difference between your instructions and anything it's reading off the web? What happens when people add malicious commands to the bottom of web pages or even ads? What about spam messages?
The whole thing is brainless, both literally and figuratively.
No, it's a complicated remixer. It just creates a pastiche of the input so it can't be better than it or novel in any meaningful way.
I'm just waiting for IP laws to catch up and require all the input be licensed. The current state is essentially allowing all this content be stolen with no compensation.
Constitutional republic, that's what they always say. And of course the first like of the description of constitutional republic says that it's a form of a democracy. Even Wikipedia still says this after years of vandalism by people who blindly repeat dogma.
Are you a senior or higher level dev who uses this technology daily?
Because your post reads like it was written by someone with no firsthand experience at all. I use several different AI models and tools every day and the chance of these text generators replacing any developers no matter how junior is laughable. You need someone who understands what they're doing to operate these tools, because they make so many mistakes. In comparison to something simple like writing marketing copy there are so many factors involved in software engineering that are completely absent from LLM generated code that anyone with any experience with the technology would never write any of the things you're writing here.
The current LLM tools may enhance productivity, but require a lot of knowledge to use successfully. It sounds like you have knowledge in something substantially less complex that LLMs can do reasonably well and are applying that to the massively move complex task of software engineering. Even if the LLMs were good at producing code, that's the easiest part of the software engineering process. The author of this article is massively ahead of your understanding in this case.
I personally disagree, I've run in to lot of situations where junior developers have tried to check in poorly designed AI solutions just because they don't have the experience to understand their problems with the code.
LLMs can greatly exaggerate the Dunning-Kruger effect and that causes a lot of problems.
I normally apply the assistants in extremely limited scopes where I already know what I want. The error message lookup/explainer functionally can be helpful, particularly in the first thing you do is check sources.
I think we'll eventually have a good idea of what we should be using these things for, but the idea that inexperienced devs are the ones who get the most out of them seems completely backwards to me.
The main issue OpenAI has is that most of the ChatGPT use has no economic value, it's people using it like a search engine or chatting to it or making intentionally silly images. People aren't going to pay the cost for things that don't make them money. It's not worth $100/month to most people and I'm not even sure that's enough to operate profitably.