
barsoap ,

Second, it’s not even clear to me that a model which only saw the text of a book during training, but not any descriptions or summaries of it, would even be particularly good at producing a summary.

Summarising stuff is literally all ML models do. It’s their bread and butter: see what’s out there and categorise it into a (ridiculously) high-dimensional semantic space. Put a bit flippantly: you shouldn’t be surprised if it gives you the same synopsis for both Dances with Wolves and Avatar, because they are indeed very similar stories, occupying the same approximate position in that space. If you ask not for a summary but for a full screenplay, it’s going to invent random details to fill in what it discarded while categorising; the results will again look similar if you squint right because, again, they’re at the core the same story.
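The "same approximate position in space" claim can be made concrete with a toy sketch. The vectors below are made up for illustration (real embedding models use hundreds or thousands of dimensions, not four), but the mechanism is the same: two stories that land near each other in the space score high on cosine similarity, and anything the model generates from that shared position will come out alike.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: near 1.0 means
    # "pointing the same way", i.e. occupying the same region of the space.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional "semantic" embeddings, invented for this example.
dances_with_wolves = [0.9, 0.8, 0.1, 0.2]    # outsider joins native culture
avatar             = [0.85, 0.75, 0.15, 0.3]  # same core story, different skin
the_matrix         = [0.1, 0.2, 0.9, 0.8]    # a different region entirely

print(cosine_similarity(dances_with_wolves, avatar))      # high: ~same position
print(cosine_similarity(dances_with_wolves, the_matrix))  # much lower
```

With real models you would get the vectors from an embedding API rather than writing them by hand, but the comparison step is exactly this.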

It’s not even really necessary for those models to learn the concept of “summary” – only that, in a prompt, it means “write a 200-word output instead of a 20,000-word one”. The model will produce a longer or shorter description of that position in space, hallucinating more or fewer details along the way. It’s really no different from police interviewing you as a witness to a car accident: they have to be careful not to prompt you wrong, including not assuming you saw certain things, or you, too, will come up with random bullshit (and believe it). It’s all a reconstructive process, generating a concrete thing from an abstract representation. There’s no real art to summarising; it’s inherent in how semantic abstraction works.
