At least it’s citing sources, so you can check them yourself. And from my anecdotal experience it has been pretty good so far. On some occasions it even told me that the queried information wasn’t in its sources instead of just making something up. It’s not perfect for sure, and manual research is always better, but for a first impression and for finding some entry points I’ve found it useful so far.
The problem is that you need to check those sources to make sure it’s not just making up bullshit, and at that point you didn’t gain anything from the GenAI.
As I said, the links provide some entry points for further research. It’s providing some use to me because I don’t need to check every search result. But to each their own, and I understand the general scepticism of generative “AI”.
If you don’t check every source, it might just be bullshitting you. There are people who followed your approach and got into hot water with their bosses and judges.
(on mobile, so sorry for any formatting weirdness)
English teachers will only give you an arbitrary, subjective answer about whether it’s a word - you want a linguist if you want an objective answer.
Since we’re dealing with two different “words” (roots) here, factory and overclocked, the first thing to look for is compound stress. Many compound words in English get initial stress: compare “blackbird” and “a black bird”.
This isn’t foolproof, however. For some speakers there are compounds that don’t get compound stress - some speakers say “PAper towel” with initial compound stress, as expected, while others say “paper TOWel” with phrasal stress, but it’s still a compound either way.
So how can we actually tell that paper towel is one word? See if the first member of the potential compound (the non-head) can be modified in any way.
For example, we know doghouse is a compound because in “a big doghouse” big can only refer to the house, and cannot refer to “the house of a big dog”. Similarly, blackboard must be one word because it can take what appear to be contradictory modifiers: “a green blackboard”.
So, in the same way, paper towel and toilet paper are one word because “big paper towel” can’t mean “a towel made from big paper” and “pink toilet paper” can’t mean “paper for a pink toilet”. (Toilet paper also gets compound stress.)
Yet another way to test is by semantic drift (meaning shift). As mentioned earlier, blackboards don’t have to be black, so the meaning of the compound doesn’t perfectly correspond to the pieces of the word - instead, the fact that it’s a vertical board you write on in chalk is much more important to the meaning. This is because once the pieces combine to form a new word, that new word can start to shift away from the meaning of the pieces. Again, however, this process takes time, so it’s not a perfect test.
So, back to the original question: is “factory-overclocked” one word?
Well, it doesn’t get compound stress, and for me I can still say things like “it’s home-factory-overclocked” to mean that it was overclocked in its home factory, so the first member can take modifiers. And, the whole thing still means what the pieces mean.
So, in my grammar, “factory-overclocked” is two words. But for some of you “home factory overclocked” may not be possible, which would indicate that it’s started to become one word for you. Everyone’s grammar is different, but we can still test for these categories.
If you instead mean by your question, “can factory and overclocked be combined with a hyphen?”, however, I can’t help you, because language-specific writing conventions are subjective and arbitrary, and not something that linguists usually care very much about.
Yeah AI can be wonky, but what idiot would spend a shitload of money on a graphics card without even being willing to click an article and read a bit?
It’s on you if you do that. Even if the AI shit worked way better, why would you trust that there aren’t shady things happening behind the scenes to influence the AI and make you spend more money?
My dad falls into this category: he constantly replaces things with far worse things just because they’re new. I can’t get my head around how he can replace really good, working stuff with new junk that isn’t even capable of doing the job.
On top of that, he tends to give the old things to the conmen installing the junk. I’ve tried to stop him wasting his money so many times, but it’s futile. I tried to use his latest oven (he had an amazing one before), and even following every instruction he gave and the actual manual I looked up online myself, the food came out absolutely fridge-cold.
He insists his new cooker is brilliant. What can you do with that response to something that’s obviously broken from new and that he should be returning?
He might need a care home, but truthfully I can’t do that to him. I just wish he had some common sense when it comes to things like appliances.
It goes without saying that this shit doesn’t really understand what it’s outputting; it’s stringing words together into a grammatically coherent whole, with barely any regard for semantics (meaning).
It should not be trying to provide you info directly, it should be showing you where to find it. For example, linking this or this*.
To add insult to injury, in this case it isn’t even providing you info, it’s bossing you around. Typical Microsoft “don’t inform a user, tell it [yes, “it”] what it should be doing” mindset. Especially bad in this case because the cost vs. benefit varies a fair bit depending on where you are; often there’s no single “right” answer.
*OP, check those two links, they might be useful for you.
LLMs don’t “understand” anything, and it’s unfortunate that we’ve taken to using language related to human thinking to talk about software. It’s all data processing and models.
Yup, 100% this. And there’s a crowd of muppets arguing “ackshyually wut u’re definishun of unrurrstandin/intellijanse?” or “but hyumans do…”, but come on - that’s bullshit, and more often than not sealioning.
Don’t get me wrong - model-based data processing is still useful in quite a few situations. But they’re only a fraction of what big tech pretends that LLMs are useful for.
Yeah, I’m far from anti-AI, but we’re just not anywhere close to where people think we are with it. And I’m pretty sick of corporate leadership saying “We need to make more use of AI” without knowing the difference between an LLM and a machine learning application, or having any idea *how* their company could make use of one of the technologies.
It really feels like one of those hammer-in-search-of-a-nail things.
The internet as we knew it is doomed to be full of AI garbage. It’s a signal-to-noise-ratio issue. It’s also part of the reason the fediverse and smaller, moderated, interconnected communities are so important: they keep users more honest by making moderators more common, and, if you want to, you can strictly moderate against AI-generated content.
And you can use multiple models, which I find handy.
There is some stuff that AI, or rather LLM search, is useful for, at least for the time being.
Sometimes you need some information that would require clicking through a lot of sources just to find one that has what you need. With DDG, I can ask the question of their four models*, using four different Firefox containers, and copy and paste the answers.
Then I see how their answers align, and identify keywords from their responses that help me craft a precise search query to find the obscure primary source I need.
This is especially useful when you don’t know the subject that you’re searching about very well.
*ChatGPT, Claude, Llama, and Mixtral are the available models. Relatively recent versions, but you’ll have to check for yourself which ones.
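The “see how their answers align” step above can even be roughed out in code. This is just a naive sketch under my own assumptions (simple word overlap, a tiny hand-rolled stopword list, mock answer strings standing in for real model output), not anything DDG actually provides:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real one would be much longer.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in",
             "it", "that", "for", "on", "as", "with", "from"}

def shared_keywords(answers, min_models=3):
    """Return words appearing in at least `min_models` of the answers.

    Each answer contributes each distinct word once, so a word's count
    is the number of models that mentioned it.
    """
    seen = Counter()
    for text in answers:
        words = set(re.findall(r"[a-z0-9]+", text.lower())) - STOPWORDS
        seen.update(words)
    return {word for word, count in seen.items() if count >= min_models}

# Mock answers from four models to the same question.
answers = [
    "The card uses the RDNA3 architecture with chiplet design.",
    "It is based on RDNA3, AMD's chiplet-based architecture.",
    "RDNA3 chiplet GPU from AMD.",
    "A chiplet design built on the RDNA3 architecture.",
]
print(shared_keywords(answers))  # terms most models agree on
```

The words that survive the overlap filter are exactly the kind of precise search terms I’d then feed back into a regular search engine.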
Neither is worth it. But if you have unlimited money, the XTX is the better card and therefore the better deal. If money is a factor, get the XT, because the performance per dollar of the XTX isn’t worth selling a kidney.
I think AMD is worth its money though; they scale their prices to match their Nvidia counterparts performance-wise. I mean, the 7900 XTX costs about as much as a 4080 and performs as such, but it has more memory and is a better match for Linux gaming.
The AI response reads like when you’re looking for something on Reddit and you get three very different responses in three different threads about the same topic.
ChatGPT-4o can do some impressive and useful things. Here, I’m just sending it a mediocre photo of a product with no other context; I didn’t type a question. First, it identifies the subject, a drink can. Then it identifies the language used. Then it assumes I want to know about the product, so it translates the text without being asked, because it knows I only read English. Then it provides background and also explains what tamarind is and how it tastes. That’s enough for me to make a fully informed decision. Google Translate would require me to type the text in, and then would only translate it without giving other useful info.
This is not going to stop me from wanting to do better research. I’ll still go to UserBenchmark just to be sure. AI isn’t going to tell me what it thinks and expect me to take it at face value.
A good AI summary would tell you the benchmark scores and general pricing, but yes, it’s better to go to UserBenchmark anyway, especially since the whole ploy from search is to keep you from going there, robbing the original source of its ad revenue.