fhein,

llama.cpp uses the GPU if you compile it with GPU support and tell it to use the GPU…

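For reference, on a recent llama.cpp checkout with an NVIDIA card it looks roughly like the sketch below. The cmake flag, binary name, and model path are assumptions that vary by version and backend (CUDA here; Metal, ROCm, and Vulkan have their own flags), so check the README for your build.

```
# Build with CUDA offloading enabled (older releases used `make LLAMA_CUBLAS=1` instead)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Tell it to use the GPU at run time: -ngl / --n-gpu-layers sets how many
# transformer layers get offloaded to VRAM (a large value offloads everything that fits)
./build/bin/llama-cli -m models/your-model.gguf -ngl 99 -p "Hello"
```

If you never pass `-ngl` (or build without a GPU backend), everything runs on the CPU, which is the most common reason people think the GPU is being ignored.
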
Never used koboldcpp, so I don’t know why it would give you shorter responses if both the model and the prompt are the same (also assuming you’ve generated multiple times and it’s consistently shorter). If you don’t want to use Discord to ask on the official koboldcpp server, you might get more answers from a more LLM-focused community such as !localllama
