It should be illegal to force people to use generative AI for things it is not needed for.
Seeing Microsoft's plans to add AI to Windows was the last straw that made me switch to Linux.
I wish that was the case but sadly most of them are basically Bing or Google frontends or belong to entities that I trust even less. As far as I can tell there are very few independent crawls out there.
SearXNG is great at what it does but it falls into the Bing/Google/etc-frontend category since it just forwards your query to one of the search engines it has modules for. It doesn’t have its own crawl and index.
Kagi has been doing a decent job for me, with the downside that it’s paid, and does use results from other places.
They go into detail about how they work: they pay for results from lots of engines, run their own engine, and then heavily filter the combined results.
Plus an ML results summarizer you can trigger after searching.
Isn’t it the training of the models that is the most energy intensive? Generating some text in answer to a question is probably not super intensive. Caveat: I know nothing.
Yes, training is the most expensive, but inference still costs an additional trillion or so floating point operations per generated token of output. That’s not nothing computationally.
Just consider how long it takes GPT-4 to answer a question: anywhere from a few seconds to a minute in my experience. There’s at least one A100 at probably 400 W going full throttle that whole time, plus all the supporting hardware.
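The arithmetic above can be sketched as a back-of-envelope estimate. All of the numbers below are assumptions for illustration (FLOPs per token, response length, GPU utilization), not measurements; only the ~1e12 FLOPs/token and 400 W figures come from the comments above.

```python
# Rough per-response inference cost estimate. Every constant here is an
# assumption, not a measured value.
flops_per_token = 1e12        # ~2x parameter count, per the "trillion or so" claim above
tokens_per_response = 500     # assumed length of a multi-paragraph answer
a100_peak_flops = 312e12      # A100 peak dense FP16 throughput (spec sheet figure)
utilization = 0.3             # assumed; token generation runs well below peak
gpu_power_w = 400             # A100 board power, per the comment above

total_flops = flops_per_token * tokens_per_response
seconds = total_flops / (a100_peak_flops * utilization)
energy_wh = gpu_power_w * seconds / 3600

print(f"~{seconds:.1f} s of GPU time, ~{energy_wh:.2f} Wh per response")
```

Under these assumptions the answer lands in the single-digit-seconds, sub-watt-hour range per response, which is consistent with the "few seconds to a minute" observation once supporting hardware and batching overheads are added.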
Technically, if you submit a query to a search engine, you do so because you want an answer to a question in the best way possible without having to do too much digging.
So does it matter if it uses AI to help you? I say it’s a great feature.
No. A lot of times I’m looking to compare many answers. I’ll give you an example.
If I want to look for interesting barbecue rubs that I haven’t tried before I’ll query a search engine. Historically (not so much recently) Google has been better at searching through forums than a direct forum search. So I can check many different sources for the ratios people are using and make my decision.
Google’s half-baked AI is really terrible right now. It has a memory of about two answers, barely understands context, and hallucinates more often than both Copilot and ChatGPT.
Now I’m looking for a coffee rub and it’s giving me injection advice (happened when I tested Gemini), it gets barbecue styles mixed up, doesn’t follow dietary restrictions that are explicitly stated, and will give you recipes for the wrong cut and type of meat.
It’s not ready, and anyone trusting it for an answer to a question is going to have a bad time. If you have to verify it by checking a bunch of links anyway then it’s not only worthless, it’s making search take longer and take up screen real estate.
Brisket coffee rub is fairly common. I only know one guy who uses a coffee injection because it’s not common, although other injections are pretty common for brisket.
I have no idea why it went off on that particular tangent. I guess whatever barbecue data it was trained on had a lot of injection advice along with the coffee rubs.
I’m searching to get specific information, and good information. I’ve seen LLMs make shit up and be wrong enough times for me not to trust them. I’d rather turn that feature off.
We’re in the technology sub. People here are old enough to know how to Google (old forums, preferably Reddit, as Lemmy is absent), they don’t know how to use an AI effectively (just look at how they’re trying to justify that). Don’t worry about the downvotes and their nonsense responses. Those are the same people who microwave their water instead of using an electric kettle.
As on all social media people here (group)think that they are the smartest. But Lemmy is also a bubble, one with people who don’t want to innovate or experience new things. Very weird for something so tech focused.
This may actually be a net improvement to the Google Search experience, since the engine is borderline unusable without uBlock Origin. But it also feels weird that Google would make an AI-generated answer the focal point rather than the entire rows of sponsored ads that litter all search results.
How did the big tech industry get this terminally stupid?
Lol. The generated result is incomplete and slower than the rest of the search. I usually scroll past it because it’s not done generating, and if it does generate fast enough, it’s usually too vague or broad.
Almost every time I ask a direct question, the two AI answers directly contradict each other. Yesterday I asked if vinegar cuts grease. I received explanations both for why it’s an excellent grease cutter and for why it doesn’t work because it’s an acid.
I think this will be a major issue with AI. Just because it was trained on a huge wealth of knowledge doesn’t mean that it was trained on correct knowledge.
Just because it was trained on a huge wealth of knowledge doesn’t mean that it was trained on correct knowledge.
Which makes its correct answers and its confidently wrong answers look equally plausible. One needs to apply real intelligence to determine which to trust, making the AI tool mostly useless.
I don’t see any reason why being trained on writing informed by correct knowledge would cause it to be correct frequently, unless you’re expecting it to just lift sentences verbatim from the training data.
Normal search results are already littered with useless AI-generated, SEO-optimized crap. It’s gotten to the point where sometimes it’s quicker to learn the knowledge you seek the old-fashioned way: by reading books.
I just assume that these people never have any problems that they have to solve personally. Otherwise they would be frustrated by the inability to find necessary information. They are either rich or children or both.