Re: Bullshit
> a fairly new technology
New? In comparison to other computer tech?
Neural Nets predate WYSIWYG word processors. By decades.
The only "new" thing here is the scope of the hype, the way it has reached pretty much everyone who uses computing devices on a daily, nay, minute-by-minute basis.
Actually, even that is barely true. As noted, NNs are old, and people have been building them and offering them to compulsive computer users pretty much the whole time - without much success, beyond making money before the bubble bursts (to the point of just-inside-the-letter-of-the-law fraud[1]).
There have been - still are - places where NNs have done real good - have saved lives[2] - so there is no irrational anti-'net bias at work here: but those genuine successes are not the subject of the massive hype machine.
The change is that so many, many more of us are now in the minute-by-minute computer user category, and that economy of scale means spending one dollar per user has allowed a (very small) number of players to build some very big 'nets.
But being big doesn't mean the maths and logic behind these things have in any way suddenly changed[3]. All the flaws are still there[4].
[1] e.g. 1980s/90s, as automated trading took hold: take a stock exchange feed, train up as many 'nets as you can get PCs to run - don't worry, the data rates are low enough and you don't need to waste 'net nodes on pseudo-parsing natural language - then switch them to output mode before trading opens the next day: ta-da, predictions for the market that day. Repeat for, say, a month. Most models' predictions made losses - but a handful won Big Time! With a totally straight face, sell copies of that handful of 'nets (with witnessed guarantees that they did make those predictions!) to anyone with a big old pot of cash.
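The arithmetic behind that trick is plain survivorship bias, and it takes a few lines to demonstrate. A minimal sketch (every number here is invented, and the "trained" 'nets are replaced with coin-flippers, which for this purpose is all they ever were):

    import random

    N_NETS = 500   # as many 'nets as you can get PCs to run
    N_DAYS = 20    # roughly one trading month
    market = [random.choice([+1, -1]) for _ in range(N_DAYS)]   # market up/down each day

    def hit_rate():
        # stand-in for one "trained" net: its daily calls are no better than chance
        calls = [random.choice([+1, -1]) for _ in range(N_DAYS)]
        return sum(c == m for c, m in zip(calls, market)) / N_DAYS

    records = sorted((hit_rate() for _ in range(N_NETS)), reverse=True)
    print("best 'net':  ", records[0])            # typically 0.75+, purely by luck
    print("median 'net':", records[N_NETS // 2])  # ~0.5, as you would expect

Sell the top few, keep the witnessed track record, and the buyers do the rest of the work of fooling themselves.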
[2] 'nets used in medical image screening - *but* we just happen(!) to give very high value to the true positives, enough that we are willing to forgive the false positives - and quietly shrug our shoulders over the false negatives, because we are still catching more than we did previously.
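For a sense of how that weighting works, here is a toy expected-value calculation; all of the numbers are invented for illustration, not taken from any real screening programme:

    prevalence  = 0.01    # 1 scan in 100 actually has the condition
    sensitivity = 0.90    # fraction of real cases the 'net flags (true positives)
    specificity = 0.70    # fraction of healthy scans it correctly clears

    value_tp = +100.0     # a caught case is valued very highly
    cost_fn  = -100.0     # a missed case is valued just as badly
    cost_fp  = -1.0       # a false alarm only costs a follow-up exam

    tp = prevalence * sensitivity
    fn = prevalence * (1 - sensitivity)
    fp = (1 - prevalence) * (1 - specificity)

    per_scan = tp * value_tp + fn * cost_fn + fp * cost_fp
    print("expected value per scan: %+.3f" % per_scan)   # positive despite ~30% false alarms

Shrink value_tp and exactly the same 'net, with exactly the same error rates, stops looking like a success.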
[3] there are good reasons why AI research spent time on more than just NNs - and it wasn't just because the hardware wasn't available: Moore coined his law a while ago (and the current LLMs are appearing at a time when we are concerned that said law is losing steam).
[4] big bugbear: the inability to explain their reasoning *and* to have that reasoning path tweaked to improve the results; instead, on the next run (irrespective of whether you change the prompt) you get a totally new output and need to go over every inch of it again to look for flaws.
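On the "totally new output next run" point, the mechanics are just sampling; a toy sketch (vocabulary and logits invented, and it only illustrates the non-reproducibility half of the complaint, not the explainability half):

    import numpy as np

    vocab  = ["the", "model", "answer", "differs", "each", "run", "."]
    logits = np.array([2.0, 1.5, 1.2, 1.0, 0.8, 0.8, 0.5])

    def sample_sentence(temperature=1.0, length=6):
        rng = np.random.default_rng()          # unseeded: a fresh roll of the dice per call
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return " ".join(rng.choice(vocab, p=probs) for _ in range(length))

    print(sample_sentence())   # run it twice with an identical "prompt"...
    print(sample_sentence())   # ...and you will almost certainly get different output

There is no stable intermediate reasoning path you can inspect and patch; the only handle you have is re-rolling.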