
Is there any actual standalone AI software?

Is there any computer program with AI capabilities (like the generative ones seen in ChatGPT, online text-to-image generators, etc.) that is actually standalone, i.e. able to run in a fully offline environment?

As far as I understand, the most popular AI technology right now consists of a bunch of matrix algebra, convolutions, and parallel processing of many low-precision floating-point numbers, which works because of statistics and absurdly huge datasets. So if any such program existed, how would it even have a reasonable storage size if it needs the dataset?

Wahots ,
@Wahots@pawb.social avatar

lmstudio.ai

You can load up your own models, and it has some of its own, too. Most of these are pretty good, though many run on synthetic data. Storing and processing something the size of ChatGPT would bankrupt most people.

This program can use significant amounts of computer resources if you let her eat. I recommend closing other programs and games.

Evotech ,

ComfyUI is the best for image AI

astrsk ,
@astrsk@kbin.run avatar

GPT4All for chat and Automatic1111 for image generation with downloaded models work great. The former does not require a GPU, but the latter generally does.

Starbuck ,

If you are into development, the setup I use is ollama running codegemma:7b along with the Continue.dev plugin for vscode.

ichbinjasokreativ ,

Stable Diffusion and Ollama for image and text generation locally. Super easy to set up on Linux, and both support GPU acceleration out of the box.

random72guy ,
@random72guy@lemmy.world avatar

For LLMs, I’ve had really good results running Llama 3 in the Open WebUI docker container on an Nvidia Titan X (12 GB VRAM).

For image generation, though, I agree more VRAM is better, but the algorithms still struggle with large image dimensions, so you wind up needing to start small and iteratively upscale, which afaik works OK on weaker GPUs but will cause problems. (I’ve been using the Automatic1111 mode of the Stable Diffusion Web UI docker project.)

I’m on thumbs so I don’t have the links to the git repos atm, but you basically clone them and run the docker compose files. The readmes are pretty good!

bobburger ,

Llamafile is a pretty good option for 100% local LLMs. The smaller models are pretty good for basic applications. They run at a reasonable speed on my Samsung laptop and really fast on my M2 macbook.

Sabata11792 ,

If you have a good GPU, you should be able to run a model without issue. The big ones are technically usable with tweaking, but slow enough to be useless on normal hardware. A small model may be 4-8 GB, but a larger one could be 100+ GB. You don’t need the training data (if it’s even public) to run them; you only need it if you’re building or retraining the model. There’s a crap ton of different software to run AI on.

To get started, assuming you’ve got a beefy PC, you need a model and software to interact with it. I started with Mistral 7B and textGenWebUi and have been trying out different software and models since. Text gen has the basics to load and chat with a model and is a good starting point.

Model: https://mistral.ai/technology/
Software: https://github.com/…/text-generation-web…

For images, you can choose models based on what their sample images look like; they tend to be specialized for certain styles or content. You can add LoRAs to further change how the output looks (think specific characters or poses). It’s very much trial and error getting good images.

Models: https://civitai.com (potentially NSFW)
Software: https://github.com/vladmandic/automatic

There are more models and software out there than I can keep track of, so if something is crap you should be able to find an alternative. Youtube guides are your friend.

bjoern_tantau ,
@bjoern_tantau@swg-empire.de avatar

Krita has an AI plugin that’s pretty painless to set up if you’ve got an Nvidia card. AMD setup has to be done manually, or you can fall back to slow CPU generation. It uses ComfyUI in the background.

Grimy ,

You need a GPU for any kind of performance.

For text I suggest:

  • Ollama backend: command-line interface, very easy to download models with one line. It supports most models, and you can talk with the model inside the terminal, so it’s standalone.
  • OpenWebUI: easy install with Docker, and it’s meant to work easily with Ollama. It comes with web search features and PDF uploading, and a bunch of different community tools and modules are available.

For images I suggest either:

  • Automatic1111: traditional UI using Gradio, with lots of extras you can download through the UI to do different things.
  • ComfyUI: node-based UI, a bit more complicated but more powerful than Automatic1111.

For models, you can go on civitai and just download whatever you need and drop it into their respective folders for both auto and comfy.

For text, there’s also LM Studio, which is very user friendly, though it’s closed source and much slower than Ollama in my experience. I have a 4060 in my laptop (8 GB VRAM) and I’m getting an image about every 2 seconds using Stable Diffusion 1.5 models, and text speed is on par with ChatGPT with the smaller 8B-9B models. For text I suggest Gemma 2, which is probably the best small model out right now.

sunzu ,

Do you have a 24 GB GPU?

If so, then you can get decent results from running local models.

FaceDeer ,
@FaceDeer@fedia.io avatar

You can get decent results with much less these days, actually. I don't have personal experience (I do have a 24GB GPU) but the open source community has put a lot of work into getting models to run on lower-spec machines. Aim for smaller models (8B parameters is common) and low quantization (the values of the parameters get squished into smaller numbers of bits). It's slower and the results can be of noticeably lower quality but I've seen people talk about usable LLMs running CPU-only.

CaptDust ,

Local LLMs can be compressed to fit on consumer hardware. Model formats like GGUF and EXL2 can be loaded up with an offline hosted API like KoboldCpp or Oobabooga. These formats lose resolution relative to the full floating-point model and become “dumber,” but it’s good enough for many uses.

Also note these models are like 7, 11, or 20 billion parameters, while hosted models like ChatGPT reportedly run closer to 8×220 billion.

FaceDeer ,
@FaceDeer@fedia.io avatar

Though bear in mind that parameter count alone is not the only measure of a model's quality. There's been a lot of work done over the past year or two on getting better results from the same or smaller parameter counts; lots of discoveries have been made about how to train better and run inference better. The old GPT-3 from back at the dawn of all this was really big and was trained on a huge number of tokens, but nowadays the small downloadable models fine-tuned by hobbyists compete with it handily.

CaptDust ,

Agreed, and it's especially true with Llama 3; their 8B model is extremely competitive.

FaceDeer ,
@FaceDeer@fedia.io avatar

Makes it all the more amusing how OpenAI staff were fretting about how GPT-2 was "too dangerous to release" back in the day. Nowadays that class of LLM is a mere toy.

lung ,
@lung@lemmy.world avatar

Whenever these corps talk up the danger of AI, all I think is “nice marketing dept bro”

KoboldCoterie ,
@KoboldCoterie@pawb.social avatar

Stable Diffusion (AI image generation) runs fully locally. The models (the “datasets” you’re referring to) are generally around 3 GB in size. It’s more about the processing power needed to run them (it’s very GPU-intensive) than the storage size on disk.
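To put numbers on that: a trained model ships only its learned weights, never the training dataset, so its file size is roughly parameter count times bytes per parameter. A minimal sketch (the parameter counts and precisions below are illustrative, not figures for any specific release):

```python
# Back-of-envelope model file sizes: on-disk size is roughly
# (parameter count) x (bytes per parameter); the dataset isn't stored.
def model_size_gb(params_billion, bits_per_param):
    """Approximate model file size in decimal gigabytes."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(model_size_gb(7, 16))   # 7B parameters at fp16        -> 14.0 GB
print(model_size_gb(7, 4))    # same model, 4-bit quantized  -> 3.5 GB
print(model_size_gb(70, 16))  # 70B parameters at fp16       -> 140.0 GB
```

This is why quantized small models land in the single-digit-GB range that fits on consumer hardware, while the frontier hosted models do not.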

Rhaedas ,

The text, image, and audio models that can run on a typical PC have all been broken down from originally larger models. How this is done affects what the models can do and their quality, but the open source community has come a long way in making impressive stuff. First question is hardware: do you have an Nvidia GPU that can support these types of generation? They can be done on the CPU alone, but it's painfully slower.

If so, then I would highly recommend looking into Ollama for running AI models (using WSL if you're on Windows) and ComfyUI for image generation. Don't let ComfyUI's complicated workflow scare you; starting from the basics, with plenty of Youtube help out there, it will make sense. As for TTS, there's a lot of constant "new stuff" out there, but for actual local processing in "real time" (it still takes a bit) I have yet to find anything to replace my Coqui TTS copy with Jenny as the model voice. It may take some digging and work to get that together; it's older and no longer supported.

hendrik ,

I don't think they break them down. For most models the math requires you to start at the beginning and train each model individually from the ground up.

But sure, a smaller model generally isn't as capable as a bigger one. And you can't train them indefinitely. So for a model series you'll maybe use the same dataset but feed more of it into the super big variant and not so much into the tiny one.

And there is a technique (distillation) where you use a big model to generate questions and answers and use them to train a different, small model. That small model will learn to respond like the big one.

Rhaedas ,

The breaking down I mentioned is the quantization that forms a smaller model from the larger one. I didn't want to get technical because I don't understand the math details myself past how to use them. :)

hendrik ,

Ah, sure. I think a good way to phrase it is to say they lower the precision. That's basically what they do: convert the high-precision numbers to lower-precision formats. That makes the computations easier/faster and the files smaller.

And it doesn't apply equally to text, audio, and images. As far as I know, quantization is mainly used with LLMs. It's also possible with image and audio models, but generally people don't do that; as far as I remember, it leads to degradation and distortions pretty fast. There are other methods, like pruning, used with generative image models, and those bring their size down substantially.
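As a concrete illustration of that precision-lowering idea, here's a minimal sketch of symmetric 8-bit quantization: store each weight as a small integer plus one shared scale factor, and reconstruct approximate weights at run time (the weight values are made up for the example):

```python
# Minimal sketch of symmetric 8-bit quantization: keep one float scale
# and store each weight as a small integer, trading precision for size.
weights = [0.42, -1.37, 0.08, 2.91, -0.55]  # made-up float32 weights

scale = max(abs(w) for w in weights) / 127        # map the largest weight to 127
quantized = [round(w / scale) for w in weights]   # int8 range: -127..127
dequantized = [q * scale for q in quantized]      # approximate weights at run time

worst_error = max(abs(w - d) for w, d in zip(weights, dequantized))
print(quantized)                         # [18, -60, 3, 127, -24], 1 byte each instead of 4
print(worst_error <= scale / 2 + 1e-12)  # True: error bounded by half a quantization step
```

The same idea at 4 bits leaves only 16 distinct values per weight, which is part of why heavily quantized models get noticeably "dumber."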

Ziggurat ,

There’s tons of “standalone” software that you can run on your own PC:

  • For text generation, the easiest way is to get the GPT4All package, which allows you to run text-generation models on the CPU on your own PC.
  • For image generation, you can try the Easy Diffusion package, which is an easy-to-use Stable Diffusion package; then, if you like it, it’s time to try ComfyUI.

You can check !localllama and !imageai for some more information

deranger ,

I’ve wanted to try these out for shits and giggles - what would I expect with a 3090, is it going to take a long time to make some shitposts?

Ziggurat ,

With SD 1.5 my old GTX 970 was doing fine (30 seconds per image). I upgraded to a Radeon 7600, and with SDXL I get like 4 images in those 30 seconds (though it sometimes crashes my PC when loading a model).

chicken ,

3090s are ideal because the most important factor is VRAM, and they're at the top of the plateau for VRAM until you get into absurdly expensive server hardware. Expect around 3 seconds to generate a 512x512 image, or 4 words per second generating text at around GPT-3.5 quality.
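Taking those figures at face value, a quick back-of-envelope for what that throughput means for shitposting (the rates below are chicken's rough estimates, not benchmarks):

```python
# Rough time estimates from the throughput figures above.
words_per_second = 4    # text generation, ~GPT-3.5 quality on a 3090
seconds_per_image = 3   # one 512x512 image

reply_words = 200       # a few paragraphs of text
print(reply_words / words_per_second)  # 50.0 seconds per reply

batch = 10
print(batch * seconds_per_image)       # 30 seconds for 10 images
```

So no, not a long time: a 3090 turns out shitposts in well under a minute.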
