"Bullshit is 'any utterance produced where a speaker has indifference towards the truth of the utterance'. That explanation, in turn, is divided into two "species": hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.
"ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users') agenda."
To see anything at all in a mirror, there are three simple options:
I can spend my time flexing.
I can make the time to shape myself up.
I can ask better questions.
Pick yours. Stop laying it on #LLMs. They only work with what humans put into them at that second. #Compassion for any struggle to make order out of the #other.
What a load of BS hahahaha. LLMs are not conversation engines (wtf is that lol, more PR bullshit hahahaha). LLMs are just statistical autocomplete machines. Literally, they just predict the next token based on the previous tokens and their training data. Stop trying to make them more than they are.
You can make them autocomplete a conversation and use them as chatbots, but they are not designed to be conversation engines hahahaha. You literally have to feed everything in the conversation, including the LLM's own previous outputs, back into the LLM to get it to autocomplete a coherent conversation. And it's only coherent if all you care about is shape. When you care about content they are pathetically wrong all the time. It's just a hack to create smoke and mirrors, and it only works because humans are great at anthropomorphizing machines, and objects, and …
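The "re-feed everything back in" mechanic can be sketched in a few lines of Python. This is a toy illustration only: `fake_complete` stands in for any real next-token predictor, and all names here are invented. The point it shows is that every call receives the entire transcript so far; the model itself keeps no memory between turns.

```python
def fake_complete(prompt: str) -> str:
    # Stand-in for a real LLM: just reports how much context it was handed.
    return f"(reply generated from {len(prompt)} chars of context)"

def chat_turn(history: list, user_message: str) -> list:
    history = history + [{"role": "user", "content": user_message}]
    # Flatten the WHOLE conversation back into one prompt string each turn:
    prompt = "".join(f"{m['role']}: {m['content']}\n" for m in history)
    reply = fake_complete(prompt)
    return history + [{"role": "assistant", "content": reply}]

history = [{"role": "system", "content": "You are helpful."}]
history = chat_turn(history, "Hello!")
history = chat_turn(history, "What did I just say?")
# The second turn only "remembers" the first because we re-sent it ourselves.
```

The chat UI hides this loop, which is exactly the smoke-and-mirrors effect described above.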
Then you go and compare ChatGPT to literally the worst search feature in Google. Like, have you ever met someone who has used the I'm Feeling Lucky button in Google in the last 10 years? Don't get me wrong, fuck Google and their abysmal search quality. But ChatGPT is not even close to being comparable to that, which is pathetic.
And then you handwave away the real issue with these stupid models when it comes to search results. As if getting 10 or so equally convincing, equally good-looking, equally bullshit-filled answers from an LLM were equivalent to getting 10 links from a search engine hahahaha. Come on man, the way I filter search engine results is by the reputation of the linked sites, by looking at the content surrounding the "matched" text that google/bing/whatever shows, etc. None of that is available in an LLM output. You would just get 10 equally plausible answers, good luck telling them apart.
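The filtering described above can be made concrete with a toy sketch. The domain scores and field names here are made up for illustration; the point is only that search results come with metadata (domain, snippet) you can score, while a bare LLM answer carries no such handles.

```python
# Invented illustrative reputation values, not real data:
REPUTATION = {"docs.python.org": 0.9, "stackoverflow.com": 0.7, "content-farm.biz": 0.1}

def score(result: dict, query: str) -> float:
    # Combine site reputation with whether the snippet actually matches the query.
    rep = REPUTATION.get(result["domain"], 0.3)  # unknown domains get a default
    snippet_hit = 1.0 if query.lower() in result["snippet"].lower() else 0.0
    return rep + 0.5 * snippet_hit

results = [
    {"domain": "content-farm.biz", "snippet": "Best sort algorithm tips!!!"},
    {"domain": "docs.python.org", "snippet": "sorted() returns a new sorted list"},
]
ranked = sorted(results, key=lambda r: score(r, "sorted"), reverse=True)
# The reputable, on-topic result rises to the top; ten LLM answers with no
# provenance give you nothing comparable to rank on.
```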
I’m stopping here, but jesus christ. What a bunch of BS you are saying.
Actually, what really matters is not the quality of your code or the disruptiveness of your paradigm, or whether you can outlive the competitors that existed when you started up, but whether you can keep the money coming. The rideshares in particular will fail over time in any country with labour laws that allow drivers to unionize—if the drivers make a sane amount of money, the company’s profits plummet, and investors and shareholders head for the hills. Netflix is falling apart already because the corporations with large libraries of content aren’t so happy to license them anymore, and they’re scrambling to make up the revenue they’ve lost. Google will probably survive only because its real product is the scourge of humanity known as advertising.
Again, it’s all business considerations, not technical ones. Remember the dot-com boom of the 1990s, or are you not old enough? A lot of what’s going on right now looks like the 2.0 (3.0? 4.0?) release of the same thing. A few of these companies will survive, but more of them will fold, and in some cases their business models will go with them.
I actually don’t disagree with you, and I think we’re on the same page. Basically, you can summarise our whole discussion as: all companies are doomed to fail at the end of the day.
If you don’t change and innovate you will fail.
If you change and innovate too much you will fail.
Finding the middle ground is rough and most companies will fail.
🤖⚔️ @fnieser.bsky.social and I built #ParzivAI, an #llm based on Mixtral-8x7B-Instruct-v0.1 and fine-tuned on Middle High German literature.
Still a proof of concept, but perhaps soon in school lessons?
Amazon isn’t doing this; their sellers are. What it shows is how full Amazon’s product listings are of counterfeits sold by lazy scammers from China. Don’t trust Amazon for anything.
As of this writing, it is not yet available, but you can see the Install button. Just click it to pre-register, and it will auto-download and install once OpenAI launches it.