
programmer_humor


schnurrito , in It must be a silent R

This is hardly programmer humor… there are probably infinitely many wrong responses from LLMs, which is not surprising at all.

DragonTypeWyvern ,

I don’t know, programs are kind of supposed to be good at counting. It’s ironic when they’re not.

Funny, even.
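For contrast, deterministic code really is good at counting; a trivial Python sketch gets it right every time:

```python
# Counting letters the boring, deterministic way: a string is just
# a sequence of characters, so this is exact, not statistical.
word = "strawberry"
count = word.lower().count("r")
print(count)  # 3
```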

KairuByte ,

Eh

If I program something to always reply “2” when you ask it “how many [thing] in [thing]?”, it’s not really good at counting. Could it be good? Sure. But that’s not what it was designed to do.

Similarly, LLMs were not designed to count things. So it’s unsurprising when they get such an answer wrong.

pkill ,

the ‘I’ in LLM stands for intelligence

logorok ,

I can evaluate this because it’s easy for me to count. But how can I evaluate something else? How can I know whether the LLM is good at it or not?

KairuByte ,

Assume it is not. If you’re asking an LLM for information you don’t understand, you’re going to have a bad time. It’s not a learning tool, and using it as such is a terrible idea.

If you want to use it for search, don’t just take it at face value. Click into its sources, and verify the information.

stevedidwhat_infosec , in It must be a silent R

You’ve discovered an artifact!! Yaaaay

If you ask GPT to do this in a more math-question-y way, it’ll break it down and do it correctly. Just gotta narrow top_p and temperature down a bit.
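For anyone curious what narrowing those knobs actually does, here’s a rough, self-contained sketch of temperature scaling plus top_p (nucleus) sampling; the logits are made-up toy values for illustration, not anything from GPT:

```python
import math
import random

def sample(logits, temperature=1.0, top_p=1.0):
    # Temperature rescales logits: lower values sharpen the
    # distribution toward the most likely token.
    scaled = [l / temperature for l in logits]
    # Softmax to turn scores into probabilities.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top_p (nucleus) sampling: keep only the smallest set of
    # tokens whose cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return random.choices(kept, weights=[probs[i] / mass for i in kept])[0]

# With temperature and top_p narrowed, only the top token survives,
# so the output becomes effectively deterministic.
logits = [2.0, 1.0, 0.5]  # toy scores for tokens 0, 1, 2
print(sample(logits, temperature=0.2, top_p=0.1))  # always 0
```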

VitaminF ,

ChatGPT just told me there is one R in “elephant”.

stevedidwhat_infosec ,

Is this satire or

VitaminF ,

No, it actually did. Try for yourself.

CodexArcanum , in It must be a silent R

I was curious if (since these are statistical models and not actually counting letters) maybe this or something like it is a common “gotcha” question used as a meme on social media. So I did a search on DDG and it also has an AI now which turned up an interestingly more nuanced answer.

https://lemmy.world/pictrs/image/883cf1b0-39c9-45ec-9df9-f476d9ac3480.jpeg

It’s picked up on discussions specifically about this problem in chats about other AI! The ouroboros is feeding well! I figure this is also why they overcorrect to 4 if you ask them about “strawberries”, trying to anticipate a common gotcha answer to further riddling.

DDG correctly handled “strawberries” interestingly, with the same linked sources. Perhaps their word-stemmer does a better job?

sus ,

Many words should run into the same issue, since LLMs generally use fewer tokens per word than there are letters in the word. So they don’t have direct access to the letters composing the word, and have to go off indirect associations between “strawberry” and the letter “R”.

DuckAssist seems to get most of them right, but it claimed “ouroboros” contains 3 o’s and “phrasebook” contains one c.
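To make that concrete, here’s a toy sketch of greedy subword tokenization; the vocabulary here is invented for illustration (real tokenizers like BPE learn theirs from data):

```python
# Toy illustration of subword tokenization: the model receives
# opaque token IDs, not individual letters.
vocab = ["straw", "berry", "berries"]

def tokenize(word, vocab):
    """Greedily split a word into the longest known subword pieces,
    falling back to single characters when nothing matches."""
    tokens = []
    while word:
        match = max((v for v in vocab if word.startswith(v)),
                    key=len, default=word[0])
        tokens.append(match)
        word = word[len(match):]
    return tokens

print(tokenize("strawberry", vocab))  # ['straw', 'berry']
# The model sees two token IDs here, not ten letters, so
# "how many r's?" has no direct answer in its input.
```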

ReveredOxygen ,

DDG’s isn’t a straight LLM; they’re feeding web results in as part of the prompt.

glimse , in It must be a silent R

Copilot may be a stupid LLM, but the human in the screenshot used an apostrophe to pluralize, which, in my opinion, is an even more egregious offense.

It’s incorrect to pluralize letters, numbers, acronyms, or decades with apostrophes in English. I will now pass the pedant stick to the next person in line.

homesweethomeMrL ,

I salute your pedantry.

Beanie ,

That’s half-right. Upper-case letters aren’t pluralised with apostrophes but lower-case letters are. (So the plural of ‘R’ is ‘Rs’ but the plural of ‘r’ is ‘r’s’.) With numbers (written as ‘123’) it’s optional - IIRC, it’s more popular in Britain to pluralise with apostrophes and more popular in America to pluralise without. (And of course numbers written as words are never pluralised with apostrophes.) Acronyms are indeed not pluralised with apostrophes if they’re written in all caps. I’m not sure what you mean by decades.

ryannathans ,

Why use apostrophes for lowercase?

GetOffMyLan ,

Because English is stupid

SlopppyEngineer ,

It’s not stupid. It’s just the bastard child of German, Dutch, French, Celtic and Scandinavian, and it tries to pretend this mix of influences is cool and normal.

DragonTypeWyvern ,

Victim blaming and ableism!

The French and Scandinavian bits were NOT consensual.

(Don’t forget Latin btw)

roguetrick ,

There are plenty of non-Norman, consensually borrowed French words, and the Danes had as much right to be there as the Angles and Saxons did when they kicked the Celts out. Let’s not even talk about whether the Anglo-Saxons had a more legitimate claim than the Norse-Gaels.

bisby ,

Because otherwise, if you have too many small letters in a row, it stops looking like a plural and more like a misspelled word. Because of capitalization differences, you can make more sense of “As” than of “as”.

psud ,

As

That looks like an oddly capitalised “as”, which is exactly the reason it’s acceptable to use an apostrophe when pluralising that sort of case.

Mouselemming ,

By decades they meant “the 1970s” or “the 60s”

I don’t know if we can rely on British popularity, given y’all’s prevalence of the “greengrocer’s apostrophe.”

ProfessorProteus ,

Never heard of the greengrocer’s apostrophe so I looked it up. thoughtco.com/what-is-a-greengrocers-apostrophe-1…

I absolutely love that there’s a group called the Apostrophe Protection Society. Is there something like that for the Oxford Comma? I’d gladly join them!

SkyezOpen ,

I will die on both of those hills alongside you.

Beanie ,

Hah, I do not like the greengrocer’s apostrophe. It is just wrong no matter how you look at it. The Oxford comma is a little different - it’s not technically wrong, but it should only be used to avoid confusion.

VirtualOdour ,

I use it for fun, frivolity, and beauty.

Beanie ,

Oh right - that would be the same category as numbers then. (Looked it up out of curiosity: using apostrophes isn’t incorrect, but it seems to be an older/less formal way of pluralising them.)

Mouselemming ,

Now, plurals aside, which is better,

The 60s

Or

The '60s

?

Bunnylux ,

Oooh, pedant stick, pedant stick! Give it to me!!

warbond ,

Thank you. Now, insofar as it concerns apostrophes (he said pedantically), couldn’t it be argued that the tools we have at our immediate disposal for making ourselves understood through text are simply inadequate to express the depth of a thought? And wouldn’t it therefore be more appropriate to condemn the lack of tools rather than the person using them creatively, despite their simplicity? At what point do we cast off the blinders and leave the guardrails behind? Or shall we always bow our heads to the wicked chroniclers who have made unwitting fools of us all; and for what? Evolving our language? Our birthright?

No, I say! We have surged free of the feeble chains of the Oxfords and Websters of the world, and no guardrail can contain us! Let go your clutching minds of the anchors of tradition and spread your wings! Fly, I say! Fly and conformn’t!

I relinquish the pedant stick.

Melvin_Ferd ,

English is a filthy gutter language and deserves to be wielded as such. It does some of its best work in the mud and dirt behind seedy boozestablishments.

NeatNit ,

Prescriptivist much?

Toneswirly , in It must be a silent R

Ladies and gentlemen: The Future.

itsraining ,

“In the Future, people won’t have to deal with numbers, for the mighty computers will do all the number crunching for them”

The mighty computers:

Rhaedas , in It must be a silent R

I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure and it then corrected itself (maybe reexamining the word in a different way?) I then asked how many Rs in "strawberries" thinking it would either see a new word and give the same incorrect answer, or since it was still in context focus it would say something about it also being 3 Rs. Nope. It said 4 Rs! I then said "really?", and it corrected itself once again.

LLMs are very useful as long as you know how to maximize their power, and you don't assume whatever they spit out is absolutely right. I've had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I've found some of the simplest errors in the middle of a lot of helpful things. It's at an assistant level, and you need to remember that an assistant helps you; they don't do the work for you.

sun_is_ra , in It must be a silent R

First mentioned by Linus Tech Tips.

I had fun arguing with ChatGPT about this.

kubica , in It must be a silent R

5% of the times it works every time.

some_guy ,

You can come up with statistics to prove anything, Kent. 45% of all people know that.

hades , in It must be a silent R

The T in “ninja” is silent. Silent and invisible.

todd_bonzalez , in Ah yes, the I in LLM

Yes, asking an LLM a nonsense question can result in a nonsense answer.

I see it as a feature. Asking an LLM a stupid question can be fun.

My favorite is “Can I take a duck home from the park?” or “How do I teach a crab math?”.

Amanduh ,

I asked chatgpt 4o how to teach a crab math and it called it a “playful exercise since crabs cannot learn math”

captainlezbian ,

I agree, but it’s also an important thing to remember when asking sensical questions. Something that responds to nonsense with nonsense is prioritizing an answer that sounds right over an answer that is right.

trolololol ,

Hey that’s not new, people have been doing this since language was invented. Today they’re most successful in areas such as country presidents and company CEOs.

lorty ,

Good thing they never make a mistake when we ask actual questions!

ma1w4re ,

“Generally, it is not advisable or legal to take a duck home from a park. Many parks have rules and regulations designed to protect wildlife, and removing animals from their natural habitat can harm both the animal and the ecosystem. If you find a duck that appears to be injured or in distress, it's best to contact a local wildlife rehabilitation center or animal control for assistance.”

😖😖😭😭😭😭

x00z , (edited ) in Ah yes, the I in LLM

llama3.1 (8B) by Facebook:


>>> what does the I in LLM stand for?

In the context of natural language processing (NLP) and artificial intelligence, "LLM" typically stands for **L**arge **L**anguage **M**odel.

A Large Language Model is a type of machine learning model that has been trained on large datasets to understand and generate human-like language. These models are often used in applications such as text classification, sentiment analysis, question answering, and even chatbots like myself!

I'm glad you asked!

>>> but the I !!!!

In the specific case of an LLM being a Large **L**anguage **M**odel, the "LL" part doesn't explicitly mention the letter "I".

However, in the context of AI and ML models, there are other types of models that do involve the letter "I", such as:

* ILM: Image Language Model (combining computer vision with natural language processing)
* IMLM: Interpretable Machine Learning Model
* MLM: Masked Language Model (a specific type of LLM)

And then there's also...

* DALL-E, but that's a different story!

But in the case of the original LLM, it typically stands for Large **L**anguage **M**odel.
sheogorath ,

Well I’ll be damned.

https://lemmy.world/pictrs/image/da60cf75-f2fc-45f6-9816-fd359e5440d6.jpeg

Text for my screen reader using friends.

Me: What is the I in LLM stands for?

Gemini: The “I” in LLM doesn’t stand for anything. LLM stands for Large Language Model.

Swedneck ,

It just struck me that LLMs would be so massively improved by simply making them prepend “I think” before every statement, instead of having them confidently state absolute nonsense and then, right after, confidently state that they were completely incorrect.

JackbyDev ,

I’ve been experimenting with ChatGPT a little more the past couple of weeks. It sounds confident and authoritative. What is funny is when you find inaccuracies. It seems good at knowing you’re trying to correct it. I haven’t tried lying to it when I’m correcting it yet but I wonder if it would also accept those even if they’re nonsensical lol.

bouh , in Ah yes, the I in LLM

Oh so managers are biological LLMs! It explains everything!

brucethemoose , in Ah yes, the I in LLM

LLMs are like improv machines. You give them a prompt, and they steamroll through it, no matter what.

There is literally no going back and changing it. It improvises as it goes.

ChronosTriggerWarning , in Ah yes, the I in LLM

Now do the blgznz’x’llstrni in LLM!
