
Traister101,

We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols. When it creates output, it doesn’t decide that one synonym is more appropriate than another — the choice falls on whichever collection of symbols is statistically more likely.

Take, for example, attempting to correct GPT: it will often admit fault yet not “learn” from it. Why not? If it understood words, it should be able to, at least within that conversation, stop outputting the incorrect information — yet it still does. It doesn’t learn from the correction because it can’t. It doesn’t know what the words mean. It only knows that when it sees the symbols representing “You got {thing} wrong”, the most likely symbols to follow represent “You are right, I apologize”.

That’s all LLMs like GPT currently do. They analyze a collection of symbols (not actual text) and then output whatever they determine is most likely to follow. That produces very interesting behavior: you can talk to one and it will respond as if you are having a conversation.
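A toy sketch of that idea — this is not GPT’s actual architecture, just the simplest possible “most likely symbols to follow” model, a bigram counter over a made-up corpus:

```python
from collections import Counter

# Made-up "training data": the model never sees meaning, only symbol order.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Return the statistically most common follower of `word`."""
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(candidates, key=candidates.get)

print(most_likely_next("the"))  # "cat" follows "the" more often than any other word
```

The model “knows” that “cat” tends to follow “the”, but it has no concept of what a cat is — the same gap, scaled up enormously, is the argument above.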
