
FooBarrington, (edited)

We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols.

No, LLMs understand a tree to be a complex relationship between many, many individual numbers. Can you clearly define how our understanding is based on something different?
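To make the “many individual numbers” point concrete, here is a toy sketch (plain Python, invented values, far fewer dimensions than any real model uses) of how a token like “tree” lives inside an LLM as a vector, with meaning encoded in the relationships between vectors rather than in the symbol itself:

```python
# Toy illustration: inside an LLM, "tree" is not a symbol but a vector
# of numbers (an embedding), and "meaning" lives in the relationships
# between such vectors. The values below are made up for the example.

import math

embeddings = {
    "tree":       [0.81, 0.12, 0.55, -0.20],
    "forest":     [0.78, 0.15, 0.49, -0.18],
    "carburetor": [-0.30, 0.90, -0.10, 0.40],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related concepts end up close together in this vector space.
print(cosine_similarity(embeddings["tree"], embeddings["forest"]))      # high
print(cosine_similarity(embeddings["tree"], embeddings["carburetor"]))  # low
```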

When they create output, they don’t decide that one synonym is more appropriate than another; it’s chosen by which collection of symbols is more statistically likely.

What is the difference between “appropriate” and “likely”? I know people who use words to sound smart without understanding them - do they decide which words are appropriate, or which ones are likely? Where do you draw the line?

Take, for example, attempting to correct GPT: it will often admit fault yet not “learn” from it. Why not? If it understands words, it should be able to, at least in that context, stop outputting the incorrect information, yet it still does. It doesn’t learn from it because it can’t.

This is wrong. If you ask it something, it replies, and you correct it, it will absolutely “learn” from it for the rest of the session. That’s due to the architecture (the correction stays in the conversation context the model is fed), but it refutes your point.
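The reason it behaves this way is that the whole conversation, correction included, is sent back to the model on every turn. Here is a rough sketch of that loop, with a hypothetical complete() function standing in for whatever LLM API is actually called:

```python
# Sketch of why a chat model appears to "learn" a correction within a
# session: the entire conversation, including the correction, is re-sent
# as context on every turn. complete() is a placeholder, not a real API.

def complete(prompt: str) -> str:
    """Stand-in for a real next-token-prediction API call."""
    raise NotImplementedError("plug in a real model here")

history = []

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = complete(prompt)               # the model sees every prior turn,
    history.append(f"Assistant: {reply}")  # corrections included
    return reply

# chat("What year did X happen?")        -> possibly wrong answer
# chat("No, that's wrong, it was 1987.") -> apology
# chat("So what year was it?")           -> conditioned on the correction,
#                                           the likely continuation is "1987"
```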

It doesn’t know what words mean. It knows that when it sees the symbols representing “You got {thing} wrong”, the most likely symbols to follow represent “You are right, I apologize”.

So why can it often output correct information after it has been corrected? This should be impossible according to you.

That’s all LLMs like GPT currently do. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior: you can talk to it, and it will respond as if you are having a conversation.

Aaah, the old “stochastic parrot” argument. Can you clearly show that humans don’t analyse inputs and then output what they determine to be most likely to follow?
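For concreteness, the mechanism being described on both sides looks roughly like this toy sketch (invented vocabulary and hard-coded probabilities; a real model computes the distribution with a neural network over the full context):

```python
# Toy version of the "collection of symbols" pipeline being discussed:
# text -> token IDs -> probability distribution over the next token -> pick one.
# The vocabulary and probabilities are invented for illustration.

vocab = {"You": 0, "are": 1, "right": 2, "wrong": 3, "I": 4, "apologize": 5}
id_to_token = {i: t for t, i in vocab.items()}

def tokenize(text: str) -> list[int]:
    # Real tokenizers split into subwords; whitespace splitting is enough here.
    return [vocab[w] for w in text.split()]

def next_token_distribution(token_ids: list[int]) -> dict[int, float]:
    # A real model computes this over the whole context with a neural network.
    # Here one plausible-looking distribution is hard-coded.
    if token_ids[-1] == vocab["are"]:
        return {vocab["right"]: 0.7, vocab["wrong"]: 0.3}
    return {vocab["I"]: 1.0}

context = tokenize("You are")
probs = next_token_distribution(context)
most_likely = max(probs, key=probs.get)
print(id_to_token[most_likely])  # "right" - chosen because it is most probable
```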

If you’d like, we can move away from the purely philosophical questions and go to a simple practical one: given some system (LLMs, animals, humans), how do I figure out whether the system understands? Can you give me concrete steps I can take to figure out whether it’s “true understanding” or “LLM-level understanding”? Your earlier approach (tell it when it’s incorrect) was wrong. Do you have an alternative? If not, how is this not a “god of the gaps” argument?
