
lvxferre, (edited)

> Did you try this with an LLM?

No, for two reasons.

One is that the point of the example is to show how humans do it, the internal process. It highlights that we don’t simply string words together and call it a day; we process language mostly through an additional layer that I’ll call “conceptual” here (see note*).

The second reason I didn’t bother trying this example in a chatbot is that you don’t need to do it to know how LLMs work. You can instead refer to the many, many texts on the internet explaining how they do it.

> Because GPT-4 analyzes it exactly the same way you did and then some:

You’re confusing the output with the process.

Sometimes the output resembles human output that went through a conceptual layer. Sometimes it does not. When it doesn’t, it’s usually brushed off as “just a hallucination”, but the way those hallucinations work confirms what I said about how LLMs work, confirms the texts explaining how LLMs work, and shows that LLMs do not conceptualise anything.

> Part of what is surprising about LLMs is that they have emergent properties you wouldn’t expect from them being autocomplete on steroids.

Emergent properties are cute and interesting, but at the end of the day LLMs are still autocomplete on steroids.
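
To make “autocomplete on steroids” concrete, here is a minimal sketch of the same mechanism at toy scale: a bigram model in Python that only tracks which word tends to follow which. The corpus and names here are invented for illustration; a real LLM swaps the frequency table for a neural network over a huge context window, but the loop is the same in spirit: predict the next token, append it, repeat.

```python
import random
from collections import Counter, defaultdict

# Toy training data. A real LLM ingests trillions of tokens, but the
# mechanism is the same in spirit: predict the next token from the
# previous ones, with no conceptual layer anywhere in the pipeline.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed: str, length: int = 10) -> str:
    """Sample the next word from the conditional frequencies, append,
    repeat. Autocomplete, just without the steroids."""
    out = [seed]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat chased"
```

Note that the output can come out fluent and grammatical while no concept of cats, dogs, or mats exists anywhere in the process, which is also why hallucinations read so smoothly.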

I think that people should be a bit greedier than that, and expect a language model to actually be able to handle language, instead of just words.

*Actually two layers, semantic and pragmatic. I’m simplifying both into one layer to show that, at least in theory, this could actually be implemented in a non-LLM language model (see the sketch below).
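
As a loose illustration of the footnote, here is what a language model with an explicit conceptual layer could look like in shape. Everything in this sketch is hypothetical: the Proposition class, the KNOWN_FACTS set, and the conceptual_check/realise pipeline stages are invented to show the idea, not any real architecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    """Conceptual layer: meaning as structured data, not word strings."""
    subject: str
    predicate: str
    obj: str

# A tiny "world model" the system can check claims against.
KNOWN_FACTS = {
    Proposition("water", "boils_at", "100 C at sea level"),
    Proposition("lead", "is_heavier_than", "feathers, per unit volume"),
}

def conceptual_check(p: Proposition) -> bool:
    """Only assert what the world model supports. A pure next-token
    predictor has no step like this one."""
    return p in KNOWN_FACTS

def realise(p: Proposition) -> str:
    """Surface layer: only now do concepts get turned into words."""
    templates = {
        "boils_at": "{s} boils at {o}.",
        "is_heavier_than": "{s} is heavier than {o}.",
    }
    return templates[p.predicate].format(s=p.subject.capitalize(), o=p.obj)

claim = Proposition("water", "boils_at", "100 C at sea level")
if conceptual_check(claim):
    print(realise(claim))   # Water boils at 100 C at sea level.
else:
    print("I don't know.")  # Refuse instead of hallucinating.
```

The point of the shape: the checking happens over structured meaning before any words are produced, whereas an LLM only ever has the words.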
