
lvxferre, @lvxferre@lemmy.ml

I also think that they should go back to the drawing board, to add another abstraction layer: conceptualisation.

LLMs simply split words into tokens (similar-ish to morphemes) and, based on the tokens in the input and in the answer generated so far, they roll a weighted die to pick the next token.
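A toy sketch of what that "weighted die roll" means (this is an illustration of the sampling idea, not any real LLM's code; the probability table is made up):

```python
import random

def next_token(probs):
    """Sample the next token from a probability distribution --
    literally a weighted die roll over candidate tokens."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution the model might assign given some context:
probs = {"bald": 0.6, "green": 0.3, "king": 0.1}  # made-up numbers
token = next_token(probs)
```

Note that nothing here consults what "bald" or "green" *means*; the only thing in play is the distribution over token strings.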

This sort of “automatic morpheme chaining” does happen in human Language¹, but it’s fairly minor. More importantly, we associate individual morphemes and sets of morphemes with abstract concepts². Then we handle those concepts against our world knowledge³, assign them truth values, moral assessments etc., and then we recode them back into words. LLMs do nothing remotely similar.

Let me give you an example. Consider the following sentence:

The king of Italy is completely bald because his hair is currently naturally green.

A human being can easily see a thousand issues with this sentence. But more importantly, we do it based on the following:

  • world knowledge: Italy is a republic, thus it has no king.
  • world knowledge: humans usually don’t have naturally green hair.
  • logic applied to the concepts: complete baldness implies absence of hair. Currently naturally green hair implies presence of hair. One cannot have absence and presence of hair at the same time.
  • world knowledge and logic: to the best of our knowledge, the colour of someone’s hair has nothing to do with baldness.

In all those cases we need to refer to the concepts behind the words, not just the words.
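To make the contrast concrete, here is a deliberately crude sketch of concept-level checking: the sentence's claims are represented as propositions and tested against stored world knowledge plus a logical constraint. (All names and facts here are my own toy encoding, not a real knowledge base.)

```python
# Toy world knowledge: concepts, not token strings.
world = {
    "Italy is a republic": True,            # hence no king
    "hair is naturally green in humans": False,
}

# Claims the example sentence commits to:
claims = {
    "Italy has a king": not world["Italy is a republic"],
    "hair can be naturally green": world["hair is naturally green in humans"],
    # logic: "completely bald" (no hair) and "green hair" (has hair)
    # cannot both hold, so this claim is false by contradiction.
    "bald and has hair at the same time": False,
}

issues = [claim for claim, holds in claims.items() if not holds]
# Every claim fails -- which is exactly what a human notices instantly.
```

A token chain has no `world` to consult; it only has co-occurrence statistics over strings.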

I do believe that a good text generator could model some conceptualisation, and even world knowledge. If such a generator were created, it would easily surpass LLMs even with considerably less linguistic input.

Notes:

  1. By “Language” with capital L, I mean the human faculty, not stuff like Mandarin or English or Spanish etc.
  2. Structuralism calls those concepts the “signified” and the morphemes conveying them the “signifier”, if you want to look for further info; Saussure is a good starting point.
  3. “World knowledge” refers to the set of concepts that we have internalised, that refer to how we believe that the world works.