
h3ndrik (edited)

It depends on the exact specs of your old laptop, especially the amount of RAM and the VRAM on the graphics card. It’s probably not enough to run any reasonably smart LLM, aside from maybe Microsoft’s small “Phi” models.

So unless it’s a gaming machine with 6 GB+ of VRAM, the graphics card probably won’t help at all, and without one, inference is going to be slow. For that kind of computer I recommend projects that are based on llama.cpp or use it as a backend; it’s the best/fastest way to do inference on slow computers and plain CPUs.
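For example, with the llama-cpp-python bindings (a Python wrapper around llama.cpp), CPU-only inference comes down to a few lines. This is just a minimal sketch; the model file name is a placeholder for whatever quantized GGUF model you actually download:

```python
# Minimal CPU inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The model path is hypothetical;
# point it at any small quantized GGUF model you have downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./phi-3-mini-q4.gguf",  # placeholder file name
    n_ctx=2048,     # context window; smaller values save RAM
    n_threads=4,    # roughly match your physical CPU core count
)

out = llm("Explain what a quantized model is in one sentence.",
          max_tokens=64)
print(out["choices"][0]["text"])
```

On an old laptop you’d pick a heavily quantized (e.g. 4-bit) model small enough to fit in RAM, since that’s usually the binding constraint without a GPU.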

Alternatively, you could use online services, or rent a cloud computer with a beefy graphics card by the hour (or minute).
