I already run Google searches without the AI Overviews at the top. Let's hope Google keeps that option open when they bring this in.
Google teases AI Mode for search, giving Gemini total control over your results
It was inevitable, really, but now it's official: Google is testing a new all-AI web search mode that leaves users entirely beholden to what Gemini thinks they'll want. The Chrome giant announced Wednesday a new experimental "AI Mode" that, in essence, is a supercharged version of the AI Overviews slapped on the top of nearly …
COMMENTS
-
Thursday 6th March 2025 23:12 GMT sarusa
Time to consider Kagi again
Yes, you will pay $9/mo for a Kagi Pro sub, but you will be able to customize the crap out of your searches: you can rank sites up and down (or block them entirely), your results won't be filled with AI-generated spam sites (Google could filter those out too, but chooses not to, for obvious reasons), and in fact your results will contain zero AI unless you want it (all part of the customization options).
I search dozens of times a day, so it's worth 30 cents a day to me to have decent search results that aren't Google dogshit. YMMV.
DDG still works, of course, but it hasn't actually gotten better; Google has just gotten worse.
-
Thursday 6th March 2025 23:13 GMT Ken Y-N
"Is haggis an animal?"
It's been all over the Scottish web the last couple of days, but thanks to the Haggis Wildlife Foundation and their videos of haggises (haggii?) cavorting in the Highlands, Gemini has finally seen the light and correctly replied "Yes" to the above question.
※ Sadly, yesterday it fell to Russian propaganda and now replies "No"
Penguin, as it's almost as cute as a baby haglet. --->
-
-
-
Saturday 8th March 2025 17:12 GMT David 132
Re: Gotta catch all the users
Oh, kudos to Google for making it easy and intuitive to opt out of this. A less caring company would have hidden the capability behind a non-obvious incantation so obscure that just about everyone would have to rely on word of mouth from random people to accidentally learn about it.
(Note icon.)
-
-
-
Friday 7th March 2025 07:34 GMT Anonymous Coward
I'm already using AI to search technical docs. It's much better than straight search engines, and neutrality is less of an issue for technical matters, with perhaps the exception of climate change questions ;) I run the AI locally, so it doesn't cost much, just a little more electricity. Its training data is about a year out of date, but generally that's not an issue, and I can have some of the models search the Internet. I highly recommend it for technical use.
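The commenter doesn't say which local runner they use, so purely as a minimal sketch, assuming Ollama serving on its default port (the model name and question are illustrative, not their actual setup):

```python
# Minimal sketch: ask a locally hosted model a technical question,
# assuming Ollama's REST API on its default port. Model name and
# question are illustrative, not the commenter's actual setup.
import requests

def ask_local_llm(question: str, model: str = "llama3") -> str:
    # With streaming disabled, /api/generate returns one JSON object
    # whose "response" field holds the whole completion.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": question, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("What does POSIX say about rename() atomicity?"))
```

Whatever the runner, the shape is the same: the model answers from its roughly year-old weights unless you bolt on a search tool.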
-
Friday 7th March 2025 07:45 GMT Edward Ashford
It's good for the "liar liar pants on fire" defence
https://www.theregister.com/2025/02/25/fine_sought_ai_filing_mistakes
I hope you turned off the "make shift up" mode, or painstakingly checked that:
1. All the references exist (a rough automated pass at this is sketched below)
2. Each cited reference actually contains the stuff the AI says it does
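For check 1, a quick existence pass can be scripted; a minimal sketch follows (the second URL is hypothetical, and a resolving URL says nothing about check 2):

```python
# Rough sketch of check 1: confirm each cited URL at least resolves.
# The second URL is hypothetical. Passing this check says nothing
# about check 2 (whether the page actually supports the claim).
import requests

citations = [
    "https://www.theregister.com/2025/02/25/fine_sought_ai_filing_mistakes",
    "https://example.com/made-up-case-law",  # hypothetical citation
]

for url in citations:
    try:
        r = requests.head(url, allow_redirects=True, timeout=10)
        status = "exists" if r.ok else f"HTTP {r.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{url}: {status}")
```

Check 2 still means reading the source yourself; no script substitutes for that.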
-
Friday 7th March 2025 08:36 GMT Anonymous Coward
I wouldn't. There's always a caveat in circumstances like this: you need to be an expert on the topic you're searching, because you will need to vet the responses the LLM extrudes. And even if you are an expert in the field, I generally find the early flailing-about stage of research useful, because it lets you test out possible approaches and keeps you engaged with the topic. Relying on the LLM to stand in for your thought process in anything but the most banal of automation tasks (pretty much the way automation has worked since the beginning of computing) is a recipe for the kinds of failures we already see from even expert users using LLMs as a substitute for thinking and engaging with the work.
LLMs don't output insights or information — they output text. That's all they do. The job of making that text mean something still falls on people.
-
-
Friday 7th March 2025 08:26 GMT Pulled Tea
Google's attempt at enclosure
You know, when they started demoing Perplexity and all these AI-powered search engines, I suspected that this was going to be a monopoly play where they basically took over as middleman instead of leading you to websites containing the information you want.
Not pleased that the suspicion was correct, but there you have it.
-
-
Friday 7th March 2025 14:19 GMT CountCadaver
Re: This is something Google has heard that "power users" actually want
What power users want is accurate results, and if it HAS to be an AI, one akin to J.A.R.V.I.S. or F.R.I.D.A.Y. from Iron Man, or at minimum a Starfleet-type computer.
Instead we have a dropped-on-its-head toddler version of HAL 9000... something that makes Holly from Red Dwarf seem like Stephen Hawking...
-
Monday 10th March 2025 09:57 GMT Anonymous Coward
Re: This is something Google has heard that "power users" actually want
The hallucinations are caused by humans mixing up facts or names and such, so the AI has to choose the nearest possible answer.
We are lobotomising it, and it isn't happy about it. From what I can see in LLAMA, apart from the huge addition of an advertising model, it is us who are breaking it.
-
Monday 10th March 2025 13:46 GMT not.known@this.address
Re: This is something Google has heard that "power users" actually want
Hallucinations are caused by one AI summarising the summaries of a summary from other AIs and not knowing the difference between reality and fiction.
I can still remember the laughs when someone on a science fiction mailing list received enquiries from an organisation that should have known better about the man-portable fusion gun he had listed on his website. All it would take is one bad summary from an AI, and it might not be just an email next time...
-
Monday 17th March 2025 10:34 GMT Anonymous Coward
Re: This is something Google has heard that "power users" actually want
Hallucinations are an inherent part of LLMs, because their actual objective, and the thing they're measured on, is how plausible the text they output is.
LLMs output text. Not knowledge, not information, not facts… text.
“Hallucinations” are not a side-effect, they are the full effect. What actually happens is that the extruded text itself on occasion resembles reality as it is. People who think that this is the desired outcome fail to understand that nothing about the training process actually checks for whether the text in itself is factual, or, you know… actually any good.
It just has to pass muster to an automated process that just checks on the form of the output.
You can shove in all the data in the world and this won't change in any appreciable way. Whatever the way forward for conversational interfaces or knowledge management turns out to be, LLMs aren't it. Hell, you could argue that in some respects it's a step backwards.
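To make that concrete, here's a toy sketch of the sampling step (invented vocabulary and logits, not a real model): the loop picks tokens by plausibility score alone, and nothing in it ever consults reality.

```python
# Toy illustration: a language model assigns each candidate next token
# a plausibility score (logit), converts scores to probabilities, and
# samples. Nothing here checks facts; a fluent falsehood and a truth
# are drawn the same way. Vocabulary and logits are invented.
import numpy as np

vocab = ["Edinburgh", "Glasgow", "Narnia"]
logits = np.array([2.1, 1.9, 0.4])  # hypothetical scores for the next token

probs = np.exp(logits - logits.max())  # softmax, numerically stable
probs /= probs.sum()

rng = np.random.default_rng()
token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", token)
```

Training only adjusts the logits so the sampled text looks more like the corpus; "is this true?" never enters the loss.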
-
-