
fhein ,

For LLMs it entirely depends on what size models you want to use and how fast you want it to run. Since there are diminishing returns to increasing model sizes, i.e. a 14B model isn’t twice as good as a 7B model, the best bang for the buck will be achieved with the smallest model you think has acceptable quality. And if you think generation speeds of around 1 token/second are acceptable, you’ll probably get more value for money using partial offloading.

If your answer is “I don’t know what models I want to run”, then a second-hand RTX3090 is probably your best bet. If you want to run larger models, building a rig with multiple (used) RTX3090s is probably still the cheapest way to do it.
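Partial offloading here just means splitting the model’s layers between GPU and CPU, e.g. with llama.cpp. A rough sketch (the model path and the -ngl layer count are placeholders to adjust for your VRAM; older llama.cpp builds call the binary main instead of llama-cli):

# put as many layers as fit in VRAM on the GPU, run the rest on the CPU
./llama-cli -m ./models/mistral-7b-instruct.Q4_K_M.gguf -ngl 20 -p "Hello"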

maxwellfire ,

I feel like this really depends on what hardware you have access to. What are you interested in doing? How long are you willing to wait for it to generate, and how good do you want it to be?

You can pull off like 0.5 words per second with one of the Mistral models on the CPU with 32GB of RAM. The Stable Diffusion image models work okay with like 8-16GB of VRAM.

possiblylinux127 ,

“Bang for Buck”

Good luck. I would wait for the AI phase to crash

hendrik ,

Buy the cheapest graphics card with 16 or 24GB of VRAM. In the past people bought used Nvidia 3090 cards. You can also buy a GPU from AMD; they're cheaper, but ROCm is a bit more difficult to work with. Or if you own a MacBook or any Apple device with an M2 or M3, use that. And hopefully you paid for enough RAM in it.

thirdBreakfast ,

An M1 MacBook with 16GB cheerfully runs llama3:8b, outputting about 5 words a second. A second-hand MacBook like that probably costs half to a third of a second-hand RTX3090.

It must suck to be a bargain hunting gamer. First bitcoin, and now AI.
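If you want to try the same thing, Ollama makes it a one-liner once it's installed (llama3:8b is the tag for the model mentioned above; the first run downloads roughly 4-5GB of quantized weights):

# downloads the model on first run, then drops into an interactive chat
ollama run llama3:8b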

Damage ,

Patient gamers at least have the Steam Deck option now

Fisch ,

I actually use an AMD card for running image generation and LLMs on my PC on Linux. It’s not hard to set up.

s38b35M5 ,

Details on your setup?

russjr08 ,

I’m not the original person you replied to, but I also have a similar setup. I’m using a 6700XT, with both InvokeAI and stable-diffusion-webui-forge set up to run without any issues. While I’m running Arch Linux, I have it set up in Distrobox so it’s agnostic to the distro I’m running (since I’ve hopped between quite a few distros) - the container is actually an Ubuntu-based container.

The only hiccup I ran into is that while ROCm does support this card, you need to set an environment variable for it to be picked up correctly. At the start of both sd-webui's and InvokeAI's launch scripts, I just use:


export HSA_OVERRIDE_GFX_VERSION=10.3.0

That sets it up, and it works perfectly. This is the link to the Distrobox container file I use to get that up and running.
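If the link ever dies, a rough sketch of the same idea (the container name and image here are illustrative, not the actual file):

# create and enter an Ubuntu-based container; the host distro stays untouched
distrobox create --name rocm-box --image ubuntu:22.04
distrobox enter rocm-box
# inside it, before launching sd-webui / InvokeAI on a 6700XT:
export HSA_OVERRIDE_GFX_VERSION=10.3.0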

kata1yst , (edited )

KoboldCPP or LocalAI will probably be the easiest way out of the box that has both image generation and LLMs.

I personally use vLLM and HuggingChat, mostly because of vLLM’s efficiency and speed.
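For anyone curious, the usual vLLM pattern is to serve an OpenAI-compatible endpoint and point a chat frontend (HuggingChat / chat-ui, for example) at it. A minimal sketch, assuming vLLM is installed and the model fits in VRAM (the model name is just an example):

# serves an OpenAI-compatible API on localhost:8000 by default
python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2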

DarkThoughts ,

Easy Diffusion is probably dead by now, but imo it's the easiest for image generation.

KoboldCPP can be a bit weird here and there, but it was the first thing that worked for me for local text gen + GPU support.
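For reference, launching KoboldCPP with GPU offload looks roughly like this (flags differ a bit between the CUDA/ROCm/Vulkan builds, and the model path is a placeholder):

# offload some layers to the GPU and start the local web UI
python koboldcpp.py --model ./mistral-7b.Q4_K_M.gguf --usecublas --gpulayers 20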

istanbullu ,

Automatic1111 for Stable Diffusion and Ollama for LLMs
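A minimal way to get those two running on Linux, assuming git, Python, and a supported GPU (the web UI script sets up its own venv on first launch):

# Automatic1111's Stable Diffusion web UI
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui && ./webui.sh
# Ollama via its official install script, then pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3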
