Convenient
This is a convenient excuse for a laughable vaporware product.
AI slop isn't worth reading. I use it to fill in paper-pusher requirements that no one will ever read, but it can't make anything worthwhile. Fill in the blanks occasionally, but create? Only the lowest-quality dreck.
If your job happens to be writing marketing copy then it's great. It's not like anyone actually reads it anyway. It might as well be lorem ipsum.
Yeah, all it needs to do is be 1000X worse and it'll be as bad as the classic version of Outlook.
I'm counting the days until it's no longer supported. That's exactly the excuse corporate IT departments need to remove it from everything. Then we'll be free from having to make all the HTML emails IE 5 compatible (yes, seriously).
The premise this article takes for granted is fictitious. We need a bunch more data centres because...? LLMs are limited-application AI, and we're already using them for at least half the things they apply to. More efficient models are being built now. Inference can be done on commodity hardware. What do we need a bunch more data centres for?
I disagree with this statement entirely: "Just because its original vendor no longer supports a proprietary OS doesn't mean that you must do as instructed and cough up for an upgrade."
An unsupported OS is a ticking time bomb; keeping software up to date is not optional unless you're air-gapping your system. Information on exploits is too readily available these days for that to be an option. Windows 7 should not be used for any network-connected application at this point. If you don't want to pay Microsoft more money, there are plenty of open source OSes you could be running.
They don't currently do this, but it could be done. You'd have to keep track of where the weights of the nodes in the neural network came from. It would definitely increase the complexity of the model and cost of "training" it though, which is why they aren't doing this. Also, the number of works attributed to even small pieces of text would be absolutely huge. We're talking about pages and pages for a paragraph.
It would be a lot easier to list ALL of the input in the LLM somewhere and link to it. Said list would be a huge Wikipedia-like monster but it's more practical.
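To see why per-output attribution blows up so fast, here's a toy sketch (all the work names and text are invented for illustration): an inverted index from word trigrams to the "works" they appeared in. Crediting every work that shares any trigram with a short output already pulls in most of the corpus.

```python
# Toy attribution index: trigram -> set of source works (data hypothetical).
from collections import defaultdict

def trigrams(text):
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

def build_index(corpus):
    """corpus: dict of work_id -> text. Returns trigram -> set of work_ids."""
    index = defaultdict(set)
    for work_id, text in corpus.items():
        for gram in trigrams(text):
            index[gram].add(work_id)
    return index

def attribute(output_text, index):
    """Credit every work that shares any trigram with the output."""
    sources = set()
    for gram in trigrams(output_text):
        sources |= index.get(gram, set())
    return sources

corpus = {
    "work_a": "the quick brown fox jumps over the lazy dog",
    "work_b": "a lazy dog sleeps while the quick brown cat watches",
    "work_c": "the quick brown fox is a common pangram phrase",
}
index = build_index(corpus)
print(attribute("the quick brown fox", index))  # four words credit all three works
```

Scale the same idea to trillions of training tokens and you get the "pages and pages of attributions per paragraph" problem, which is why a single linked list of all input is the more practical shape.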
I disagree with the idea that the models should be public domain, unless it's with the express agreement of EVERY copyright holder (which is functionally impossible). The remedy I personally support is to delete the offending models AND provide restitution to all copyright holders. Said restitution should be all income made off the model, not taking expenses into account, divided equally among all claimants. This should also have the needed deterrent effect of bankrupting any company selling a commercial model containing stolen IP.
This is compatible with existing IP law, whereas putting the offending derivative work into the public domain isn't, because that would give companies a way to steal your work and launder it into the public domain.
What Microsoft is trying to achieve here is to avoid responsibility for security or stability on machines without a TPM. If anyone hacks Windows 11 to install it on unsupported hardware, Microsoft gets to duck liability.
And liability is something corporates don't like to have to deal with.
It's not illegal if you've paid off the regulators and politicians in totally legal (because of your lobbying) ways to make it legal. This is how big business controls the US government, and how it's trying very hard to control every other government on earth.
No, you don't have it right. MongoDB's schema is very loose and defined by the client application, which can be really great for application developers. But it's not really designed for or compatible with DBAs, so I completely understand why any DBA would hate it. The actual database is basically a back end for a MongoDB client library, and all the validation occurs on the client. This can be convenient for software developers because your web servers (which are cheap and plentiful) can do more of the work.
I work with a mix of different database types, and there are a few things that MongoDB is massively better at than a relational database, like complex data structures with a lot of nesting. For general CRUD it's not really any better or worse than a relational database, and it's very good for distributed and very large databases because of the ways the data can be sharded. What MongoDB sucks at is managing a bunch of related data in different collections (which are like tables). It's mostly about structuring your data in a way that makes sense for the database: if you build your MongoDB database like a relational database it'll suck, because there is no referential integrity. Each collection should be a contained concept (possibly a very complex one).
It's a tool in the belt, just like relational. But it's good to have some experience with different types of database so you don't end up in the "if all you have is a hammer everything looks like a nail" situation. Next time you build a trivial project I suggest you give it a try.
Can we dispense with marketing BS terms like "hallucinations" that imply LLMs are thinking? Factuality isn't even a thing the models are designed to test for; they're just computing text output that approximates a response to the prompt.
It doesn't know anything, it doesn't think and it certainly doesn't "hallucinate" because that requires a conscious mind.
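The point is easy to demonstrate with a toy next-token model (the training text below is invented): it only counts which word followed which, then emits the most frequent continuation. Nothing in it models truth, so fluent-but-wrong output is the expected behaviour, not a malfunction. Real LLMs are vastly bigger, but the objective is the same kind of likelihood computation.

```python
# Toy bigram "language model": count word successors, emit the most likely one.
from collections import Counter, defaultdict

training_text = (
    "the moon is made of cheese . "
    "the moon is made of rock . "
    "the moon is made of cheese ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_text(start, n=5):
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # most frequent next word
    return " ".join(out)

print(continue_text("moon"))  # "moon is made of cheese ." (fluent, and false)
```

"Cheese" wins purely because it appeared twice in the training text and "rock" once; no conscious mind, no belief, just counting.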
Something OpenAI doesn't want most people to know is that LLMs aren't that complicated. The only reason they weren't a thing years ago is the practicality and cost of putting together the enormous amounts of information needed to build the models. Their moat is very small and that's not good business-wise, especially since they're so unprofitable.
Really? I was just about to post about how much CPU progress has slowed down. When I was a kid a new CPU would come out and the older generation would be completely obsolete, because each generation was massively faster. Now you can sometimes keep the same CPU for as long as ten years and still play new games on it.
Those models were created before the most recent debate, how would they have any information about what happened? All they're doing is stringing together likely responses to the question.
Every time I read an article like this it mostly just shows how people's expectations of the performance of LLMs are not anchored in reality.
When Windows started disabling SMB 1.0 by default, I updated my old NAS device (Buffalo LinkStation) with a newer version of Samba. I did this by telnetting into it and updating the software it was running manually. I was a little shocked that this was possible, but it's stayed like that for years so it seems to have worked. I'm sure a firmware update would set it back to stock, but as we all know that's not happening with a 10+ year old NAS device.
So some of these things might be fixable if you try hard enough.
"The policy, which exempts field employees, is controversial because it states that choosing remote status will hinder career advancement and increase the chance of being selected for a layoff, among other downsides."
This is true of remote work everywhere, it limits your career advancement opportunities. Dell is just being honest about it.
As someone who works in web development, it's not going to happen. Users constantly demand more functionality, more responsiveness and browsers keep rolling out new features. The trend is towards more complexity, not less. It would take a big user revolt to change this and I'm not seeing anything like that, I'm seeing the opposite. Everyone seems to be happy to keep updating Chrome every time a release comes out.
Flying a jet fighter through a wide open sky is a significantly less complex AI problem than piloting a car through a city street full of pedestrians, cyclists and other cars.
This is why I'm sure self-driving cars are decades away: there are a lot of intermediate steps that don't exist yet. We don't even have semi trucks that self-drive only on freeways, and there is a lot of money in that.