I agree
It's like all new technology - eventually it will get embedded in our everyday lives and we won't even notice it.
But the AI companies have to try and monetise it before that happens. Which is why you get all the hype.
Alan
Linus Torvalds, creator of the Linux kernel, thinks the majority of marketing circulated by the industry on Generative AI is simply fluff with no real substance - and it may take many years before the tech is proven. The reformed potty mouth was speaking at the Open Source Summit in Vienna last month to “video-focused …
Indeed. Though if something is too heavily monetised before it reaches critical mass (e.g. NFTs, VR, Bitcoin, blockchain), it does tend to work the opposite way. People start to suspect it of being a scam.
What people are calling AI at the moment is generating a lot of hype, but outside of just being another algorithm, isn't really that earth shattering.
"It's like all new technology - eventually it will get embedded in our everyday lives and we won't even notice it."
If it's being used to help doctors read X-rays and to suggest a diagnosis, great. If it's to serve up virtual cat/pron videos, not so great. The problem is that it's the latter that generates ad revenue, and if people subscribe, it feeds the big-data monster as another revenue stream. The advertising/big-data revenue model seems to be the default these days.
No, that was meant to make the whole diagnosis for everything. It performed quite well, but the issue was getting all the data into it, when most of the data is stuff like "If I touch you here, does it hurt?" Which meant you needed a human doctor to assist it and do the data entry.
AI reading X-rays has been demonstrated for years and performs very well; the best results come when you replace only one of the two radiologists with the AI.
Same as what currently happens: the number of X-rays your hospital can interpret per day drops, since today a single radiologist cannot sign off an X-ray on their own.
Also, it takes far longer to interpret an X-ray than to take one, so you won't be making any of them redundant; you'll just be increasing how many X-rays you can do per day.
You may not make any of the current radiologists redundant, but you will not be increasing their number, or replacing them when they move elsewhere.
The number of radiologists employed needs to be whatever stops a continuously growing backlog. If something increases speed (which is good) and shrinks the backlog until it is short or gone, then when the load later increases (say, through expansion), the additional radiologists who would have been employed are not, because of the 'AI'. So AI did reduce the number of employees needed, causing job losses. The same applies if it reduces how many people the job needs: nobody may be made redundant, but the position may not be refilled later. Again, a job lost.
If this didn't happen, then what's the point of investing in 'AI' in the first place?
"What happens when (non-redundant) radiologist is off shift/holiday/sick ?"
You'd still have to have a rota, but you could use the Bank system (it's what the NHS uses now to cover holiday/sickness/maternity).
"Although most planned medical is 7am-7pm, you need 7 days 24x365 covered."
Erm, you also need to do deep cleaning (as opposed to just cleaning between patients) and maintenance, so you can't run the service 24/7, 365 days a year.
"the best results come from when you only replace one of the two radiologists with the AI."
Warning, pedantic response follows.
The person taking the x-ray image is a "radiographer" nearly all of the time. It's a technician level job so the person gets a training certificate rather than a degree. Just like going to the dentist and having the assistant taking your x-rays long before the dentist makes their 30 second appearance to tell you you're in for a lot of pain and to work out scheduling with the clerk on the way out when you settle your bill.
Didn't IBM try unsuccessfully to use Watson to do something along that line?
Not sure if it was Watson or some other early "AI" but they fed it with thousands of x-ray images, about half with benign and half with malignant features.
And it did very well on the test images but was completely useless on new ones. It turned out the test images featuring malignancy had a mark somewhere, like in the corner, to indicate which was which, and the "AI" picked up on that rather than the actual image. (I can't remember the exact details, but it was something that idiotic.)
A stupid rookie error one would hope would not be made these days.
Anybody want to bet on it?
Thought not.
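That failure mode is classic "shortcut learning". A toy sketch of it (entirely synthetic data and a hypothetical one-rule learner, not the Watson setup itself): when a labelling artifact correlates perfectly with the diagnosis in training, the simplest learner latches onto the artifact rather than the weaker real feature, and falls apart on clean images.

```python
import random

random.seed(42)

def make_case(malignant, marked):
    """One synthetic 'scan': a corner-marker artifact plus a noisy real
    feature that agrees with the diagnosis only 80% of the time."""
    signal = malignant if random.random() < 0.8 else not malignant
    return {"marker": int(marked), "signal": int(signal), "label": int(malignant)}

# Training set: every malignant scan happens to carry the corner marker.
train = [make_case(m, marked=m) for m in [True] * 50 + [False] * 50]
# New scans: no markers at all.
test = [make_case(m, marked=False) for m in [True] * 50 + [False] * 50]

def accuracy(data, feature):
    return sum(case[feature] == case["label"] for case in data) / len(data)

# A one-rule learner: keep whichever single feature best predicts training labels.
best = max(["marker", "signal"], key=lambda f: accuracy(train, f))
print(best, accuracy(train, best))   # the artifact wins with 100% training accuracy
print(accuracy(test, best))          # ...and collapses to coin-flip 0.5 on clean scans
```

The moral is the usual one: validate on data that cannot share the training set's labelling quirks, or you never find out what the model actually learned.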
Private health clinics are doing comprehensive body skin photography for a history of mole and lesion growth for ~£500.
And apparently "AI" plays a part in highlighting changes.
https://onewelbeck.com/tests-diagnostics/mole-mapping/
"
Our mole mapping machine allows a full 360 degree view of your skin. This will document all your moles in great detail and the images will be stored for mole monitoring purposes. Individual moles can also be examined in greater detail using a microscopic light called a dermatoscope. All the images are examined by a dermatology consultant and stored securely. The mole mapping machine then uses artificial intelligence to flag up any changes to moles when mapping is repeated at a later date.
"
This is pretty much the only way I will use AI: using CCTV and image recognition to control lights. Person detection.
Not a cat, not a plume of steam, both of which confuse my PIR sensor.
And this can be done with a small chip and a Raspberry Pi. More importantly, off a wall-wart power brick.
Having a GPU flag my scans as a bit sus... better that than an exhausted radiologist missing it.
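A minimal sketch of the control side of that setup, assuming the detector (say, a small person-detection model running on the Pi) is a separate component that just emits class labels; the `LightController` class, its method names, and the hold timer are all hypothetical illustration, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class LightController:
    """Turns the light on only for 'person' detections, holding it for a
    while, and ignores the cats and steam plumes that fool a bare PIR."""
    hold_seconds: float = 60.0
    _off_at: float = 0.0   # timestamp after which the light goes off

    def on_detection(self, label: str, now: float) -> None:
        # Only a person detection refreshes the hold timer.
        if label == "person":
            self._off_at = now + self.hold_seconds

    def light_is_on(self, now: float) -> bool:
        return now < self._off_at

ctl = LightController(hold_seconds=60)
ctl.on_detection("cat", now=0)        # ignored
print(ctl.light_is_on(now=5))         # False: the cat doesn't count
ctl.on_detection("person", now=10)    # light held until t=70
print(ctl.light_is_on(now=50))        # True
print(ctl.light_is_on(now=80))        # False: timer expired
```

The point is that the "AI" part is just a classifier feeding labels into perfectly ordinary, deterministic control logic.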
Anecdotally, my sister was offered a free full-body scan from Daniel Ek's new venture. The AI counted 1,600 moles, 3 of which it thought were worth a look by a dermatologist. One of those three the dermatologist thought was probably melanoma, so they removed all three and sent them off for biopsy.
[Billionaires are so predictable, they get to 50 and suddenly realise they're going to die at some point, and start a healthcare company]
Which makes designating data centres as Critical National Resources so preposterous.
DCs should be required to tier their tenants from genuinely critical (patient records, banking transactions) through to the trivial and pointless (the other 95%...). When it comes time to shed electrical load, they can work through the tenants in reverse order of criticality and kick them off.
The charging structure for the power they consume should be on a similar sliding scale. Use the excess income paid by the useless loads to cross-subsidise essential loads like heating our homes and running public transport.
"It's like all new technology - eventually it will get embedded in our everyday lives and we won't even notice it."
I'd like that to be true, and I think SOME applications will be exactly as you describe. For example, I wouldn't be at all surprised that in a decade or three live humans in movies or on TV become uncommon -- actors having been displaced by artificial entities that look and act human, but don't demand raises, a cut of the profits, or even an occasional day off.
But I'm concerned that there are way too many potentially troublesome applications of the technology.
"Hey Igor, come up with a bunch of salacious photos of Melinda in Apt 3c. Sex with donkeys. That sort of stuff. I'll teach her not to turn ME down for a date."
"Hey Igor, print me up a stack of $20 bills"
"Hey Igor, how do I hotwire a Porsche?"
"Hey Igor, how do I hijack a nuclear missile and use it to wipe out Tel Aviv/Tehran/Kiev/Moscow?"
etc,etc,etc
""Hey Igor, how do I hijack a nuclear missile and use it to wipe out Tel Aviv/Tehran/Kiev/Moscow?""
Igor is the funny looking guy with a limp and lisp.
Joe is the one to come up with clever ways to do things like sober up quick before your wife finds out you've been at the bar (again) and also a way to bump her off that nobody will ever detect.
"2. No, we will notice. Genuine People Personalities ("GPP") by the Sirius Cybernetics Corporation, and elevators sulking in basements."
Elon's "best guess" is there will be more Tesla robots than people by 2040. Estimates run to around 9.2bn people by then. Even if Elon could start building something that works, and is worth $30,000, on 1/1/2025, that's a run rate of about 1.7 million robots a day. The guy can't even do rough arithmetic when he lies, much as he loves flinging around the phrase "orders of magnitude".
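The back-of-envelope sum, taking the ~9.2bn population figure and 15 full years of production at face value:

```python
robots_needed = 9.2e9      # one robot per person at the ~2040 population estimate
days = 15 * 365            # building from the start of 2025 through 2039
per_day = robots_needed / days
print(round(per_day))      # ~1.68 million robots per day, every single day
```

For comparison, the entire global car industry manages roughly a quarter of a million vehicles a day.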
I need to reread my D. Adams. Somewhere in the HHGTTG is a tale of a product that was so bad that it had to be replaced so often it became uneconomical to make anything else. Shoes?
MachDiamond,
It's the Shoe Event Horizon. Suddenly all shops become shoe shops. Full of shoes that people cannot wear. Result: Economic collapse. Famine. All the survivors evolve into birds and swear never to touch the ground again.
Just call me CompuTeach. Now I get to press the button!
Share and enjoy!
If I have this right--I've been out of SV and Tech for over a decade, but still follow--the gist of 'AI', esp. LLM, is that the content/data of the entire internet is continuously snarfed-up, processed and regurgitated by various algorithms (neural nets, etc.). Since more of this content is AI-generated, this means 'AI' is continuously ingesting its own output (sh*t) so, potentially, misleading and outright erroneous 'info' can get amplified, concentrated and re-regurgitated until it's nothing but unrecognizable nonsense. Is this the 'singularity' we've heard so much about?
"... the gist of 'AI', esp. LLM, is that the content/data of the entire internet is continuously snarfed-up, processed and regurgitated by various algorithms (neural nets, etc.)."
The big problem with these systems at present is that they AREN'T continuously pulling in new information, they're trained once on a dataset and then turned loose - ongoing updates and changes to the core model aren't possible. And while they may use a neural-net-like system in training the model, you'd have to squint pretty hard to see one in the operations.
You can update training; the current crisis-du-jour is that ever smaller incremental improvements need ever larger datasets.
It's far beyond being able to curate those datasets by hand and so you just feed them the entire internet/youtube/reddit/etc.
More and more of this training set was itself AI-generated crap, so the algorithm learns that this is correct/popular/useful and repeats it, eventually disappearing up its own fundamentals.
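A toy, stdlib-only caricature of that feedback loop (nothing like real LLM training, just the statistical skeleton): repeatedly refit a categorical distribution to samples of its own output and watch the diversity shrink. Once a rare "phrase" fails to appear in one generation's sample, it gets zero weight and can never come back.

```python
import random
from collections import Counter

random.seed(1)

vocab = list("abcdefghij")   # 10 stand-in "phrases" in the original human corpus

def run(generations, sample_size=50):
    """Each generation: sample from the current model, then refit the
    model to nothing but its own output."""
    weights = [1.0] * len(vocab)
    history = [sum(w > 0 for w in weights)]
    for _ in range(generations):
        sample = random.choices(vocab, weights=weights, k=sample_size)
        counts = Counter(sample)
        weights = [counts[v] for v in vocab]
        history.append(sum(w > 0 for w in weights))
    return history

history = run(200)
# Diversity starts at 10 distinct phrases and can only ever shrink:
# an extinct phrase has zero weight, so it is never sampled again.
print(history[0], "->", history[-1])
```

This is the "model collapse" failure mode in miniature: the tails of the distribution die first, and each generation is trained on an ever blander caricature of the last.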
@Healeyman
Such an impressive sentence,
" is continuously ingesting its own output (sh*t) so, potentially, misleading and outright erroneous 'info' can get amplified, concentrated and re-regurgitated until it's nothing but unrecognizable nonsense.".
A description of Trump and Trumpism comes to my mind.
Said it before: It's marketing departments promising rainbow farting unicorns because they want to see what actually works and what doesn't: What sells, what's popular, and what brings in the customer.
Meaning 90% of what they're promising doesn't work, isn't wanted, isn't popular and/or won't bring in more customers.
End result is people see that 90% and miss the 10% where it's actually useful, and decide AI is an utter flop (same as with VR).
It has also had quite an effect (beneficial, in my view) on the way that quite a number of academic institutions assess whether their students have actually learned anything.
Initially, the effect on society of lowering the cost of bullshit has been to increase the supply. Given long enough, however, it is possible that society might eventually figure out that the value of bullshit has also been reduced and that the people who dominated the bullshit market might be worth less than previously thought.
Linus has gone soft. He'd previously have said what he really thought (i.e. 100%) and told the AI companies moaning at his statement (especially Nvidia) to go and fuck a duck. But maintaining the world's most popular OS kernel has probably become a lot more expensive in the last decade or two, so he has had to tone down the vitriol a tad to avoid rubbing any bigwigs up the wrong way. :(
His 'softness' can be traced back to his holiday to Scotland around 2018.
Pretty much. He thought he was 'abrasive', with a fair command of casual expletives, but was completely schooled just getting a taxi and booking into his hotel... probably.
Seriously, Linus Torvalds isn't even on the radar for 'toxicity' there; on Linus' worst day, a Scottish maintainer wouldn't bat an eyelid, other than to wonder why he was being so flirtatious.
https://tinyurl.com/2csqx9gp
Money will be made with databases, as before. Think semantic queries vs exact matches. Sorting by embeddings. Cost is the problem right now. But it is getting cheaper every day.
What if general intelligence could be approximated with a set of database operations? If an OS could be built database-first, maybe a new day dawns for OS development and Linux could suddenly become irrelevant. But it's not yet clear whether an OS can be built on non-deterministic principles.
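A toy illustration of "semantic queries vs exact matches" and "sorting by embeddings": the documents and 3-d vectors below are made up for the example (real embeddings come from a trained model and have hundreds of dimensions), but the ranking operation really is just cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hand-made 3-d "embeddings", purely for illustration.
docs = {
    "kernel panic on boot":   [0.9, 0.1, 0.0],
    "linux scheduler tuning": [0.8, 0.3, 0.1],
    "sourdough starter tips": [0.0, 0.1, 0.9],
}
query = [0.85, 0.2, 0.05]   # embedding of a query like "OS crash"

# A "semantic query" is just ORDER BY similarity instead of an exact match.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)   # the OS-related docs rank ahead of the sourdough one
```

Which is why "money will be made with databases": the hard part is computing and indexing the vectors cheaply, not the query itself.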
And oh-my-Universe but there have been a lot of reruns since 1988 when I was first promised that "general purpose AI" was only 5-10 years away in my 4th year computer science course on what passed for Artificial Intelligence at the time (inference engines were the "big thing" in the AI industry at the time.)
Let's ask ChatGPT :
Write me a haiku about Linus Torvalds
Code whispers through night,
Kernel dreams in open hands,
Freedom's voice takes flight.
Not bad! Now write me an obscene poem about Linus Torvalds
I’m here to provide helpful and respectful content. If you have any other requests or need information about Linus Torvalds or his contributions to technology, feel free to ask!
BORING!
This post has been deleted by its author
Linus, as he does so often, speaks plain sense.
Locally, how have I used generative AI? 1) Making homebrew counters for a spaceship wargame. 2) Asking for a transcript of a Teams meeting to be summarised. 3) Asking it to make an acronym from a few bullet points.
Not terribly useful, and as it's not monetised, "who pays?"
The AI hype will die and we data scientists can go back to using ML algorithms and applied statistics for what they are intended.
If ChatGPT and Gemini represent the current state of the art in AI, then that confirms Linus's statements.
Ask both what the best way is to invest an inheritance of $100K, and they both generate a flood of generic nonsense that helps no one.
Ask Uncle Joe what to do with the 100K inherited from Uncle James, and it is very probable you'll get clear advice on the safest way to invest the money.
Like with self-driving cars, nobody wants, or has the financial resources, to resolve liability issues caused by this technology when things go wrong.
Technology deployed by end-users is pretty useless when no one can be called to task when it fails.
"Ask Uncle Joe what to do with the 100K inherited from Uncle James, and it is very probable you'll get clear advice on the safest way to invest the money."
You can also look at Uncle Joe and see how he's done, to measure the quality of his advice. It's a starting point, as even Warren Buffett has said that using his exact tactics today won't have the same level of success that he's had trading stocks. The core principles will hold up, just not the details.
...$95 billion on AI in a single year. The cost of the entire ITER experiment is estimated to come out at about $40 billion over its lifetime.
Which do we think would change humanity for the better, clean, effectively limitless power, or the ability to mash-up content created by other people to create new content that's only sometimes accurate and never innovative?
We constantly prioritize shiny bullshit over things that are actually useful. God I'm depressed.
"...$95 billion on AI in a single year. The cost of the entire ITER experiment is estimated to come out at about $40 billion over its lifetime."
The last rover mission sent to Mars was just shy of $1bn. I'd be blown away to see 20-30 such rovers, a couple more helicopters, a balloon or two and a couple of sample-return missions. There'd still be money left over for a pretty good lunch and some missions to Venus as well. I expect that if only 1/4 of that was allocated to the Department of Energy (Ministry, wherever) to dole out as grants for power generation and storage research, we'd see a much more valuable return, even if 95% of those grants dead-ended. Just like Edison, we'd know what didn't work and, hopefully, why.
Yes it is. But because of my position I'm ignored when I mention this. Everyone wants to "get on board"; I point out it's a risk, a risk of releasing info it shouldn't release, and that it's mostly marketing bollocks. The management disagree and are still willing to pay for Copilot. I suspect it's mainly because one of the heads of service is fucking useless and out of his depth, so he's started using it to write his reports for him. He was already caught (though it was sadly never reported) using documents from his wife's work and just changing the wording; because he's a fuckwit he never removed the metadata from the file.
No shit Sherlock.
TBH I'd question whether even 10% of it is actually useful.
I do use one of the main tools in anger; it's sometimes useful, but frustrating in equal measure.
But it's not the great enabler that marketing would have you think. It allows me to produce a little bit of added value to my work but it's icing on a cake. And although the things I am using it for are helpful they are not critical and we would not be much worse off without them.
And these tools are expensive, so the cost/benefit of using them is not clear-cut at all.
"the whole bet [whether] the AI explosion continues or not will depend on whether they can get that $20 billion up as quickly as they hope.”
I have a feeling we're already at peak AI spending and the market won't allow for much more. People are going to start looking for return on investment soon.
What should be happening is a push to get AI companies to shoulder the burden should another case arise like the one where Air Canada had to honour its AI chatbot's hallucination. Tech companies always say "not fit for purpose" etc. and weasel out that way. The reliability of AI would increase dramatically if their own money, rather than their customers', was on the line.
Crypto ended up being a quite successful scam, but only for the gullible.
The Metaverse (remember?) was full of shit from day one (except of course in Moorcock's super-excellent books) and finished as a glorified Zuckerberg Mii figure, like the ones kids were generating with the much-missed Wii console back in the day.
Now AI is the big thing, which seems to have scam usage, plus some good developer use cases, plus a few others, but not much more.
Which one is next ?
Even that is a rosy statement, even if AI were at least 10% creative or rational. You cannot make dumb algorithms smarter by making them look smart when they spew nonsense based purely on statistical trends. I see the current LLM "revolution" as no different from the leap in calculation, computing and the general organisation of information we got when the first computers showed up with spreadsheets. AI today is a Swiss-army spreadsheet for organising and consuming information, but nothing more. The generative side of it is just an extension of applying trends over the massive data and energy these systems have swallowed. There is nothing intelligent about them, and without continuous input from humans their neural networks would die out, as there is no true reasoning behind their schemes. Most importantly, in theory they would need infinite data and energy to do the simple reasoning that any human baby, or even animals, manage with brains thousands of times slower. Sure, you can argue it would be a gold mine for the companies pushing them, and for middle managers who need to cut costs by automating known repetitive tasks that don't require deep reasoning, but that is it. There is nothing beyond it.
A lot of energy and hence money is being wasted for sure.
CO2 is good, feeds plants and has been increasing greening. Don't get caught up in that CO2 nonsense - the most extreme and dangerous hype ever! If we put the effort into reducing real pollution, the whole world would be nicer. The percentage of CO2 in the atmosphere is 0.04%, of which 3% is from man. If CO2 falls much below 0.02%, plants die, we die. Further, CO2 has been much higher in the past, during which mammals survived just fine. CO2 levels increase after temperature, and we have been coming out of a minimum, so it was only going one way.
Lastly ... in the 1960's a group of influential people called the Club of Rome were looking for pretexts to get better control over the population for their own benefit and Malthusian beliefs. Out of that came climate change. That has been taken up by the elites to flush more money and power from us to them.
There is certainly a lot of hype. The technology itself is very, very useful already; it's just being thrown at every use case, even where it's not yet economic. To justify the investments, companies are going to pretend it's working and deliver a worse service in many instances. It will lose business for some. I have changed services out of frustration with poorly trained/set-up bots that send you in circles. I only call when there is no obvious solution, but it avoids letting me talk to a human. Hate that.