
Self hosting an LLM for research

I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.

I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see if this model can answer some of my subjective course questions that I have set over my exams, or write small paragraphs about the topic I teach.

In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.

P.S: I am not technically very experienced. I run Linux and can do very basic stuff. Never self hosted anything other than LibreTranslate and a pihole!

d416 ,

The easiest way to run local LLMs on older hardware is Llamafile github.com/Mozilla-Ocho/llamafile

For non-nvidia GPUs, webgpu is the way to go github.com/abi/secret-llama

OpticalMoose ,

Probably better to ask on !localllama. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

The only issue is that you asked for a smart model, which usually means a larger one, plus the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models have a higher tendency to hallucinate - produce incorrect answers.

Short answer - yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer.
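To make the RAG idea above concrete, here is a toy sketch in Python. Real setups (Ollama plus a RAG frontend) use vector embeddings for retrieval; the naive word-overlap score here just stands in for that, and all the example text is made up.

```python
# Toy sketch of RAG: retrieve the most relevant chunk of your notes,
# then prepend it to the question before sending it to the model.
# A real system would use embedding similarity instead of word overlap.

def score(question: str, chunk: str) -> int:
    """Count how many question words appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def build_prompt(question: str, chunks: list[str]) -> str:
    """Pick the best-matching chunk and build a context-stuffed prompt."""
    best = max(chunks, key=lambda c: score(question, c))
    return f"Context:\n{best}\n\nQuestion: {question}\nAnswer:"

chunks = [
    "Romantic poetry emphasizes emotion and nature.",
    "The mitochondria is the powerhouse of the cell.",
]
print(build_prompt("What does Romantic poetry emphasize?", chunks))
```

The model never "learns" your notes this way; it just sees the retrieved chunk in its context window each time, which is why RAG adds to the memory cost.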

Evotech ,

There’s a few.

Very easy if you set it up with Docker.

Best is probably to just use Ollama with Danswer as a frontend. Danswer will do all the RAG stuff for you, like managing/uploading documents and so on.

Ollama is becoming the standard self-hosted LLM runner, and you can add any models you want / can fit.

ollama.com/…/ollama-is-now-available-as-an-offici…

docs.danswer.dev/quickstart

theterrasque ,

Reasonably smart… that would preferably be a 70B model, but maybe phi3-14b or llama3-8b could work. They’re rather impressive for their size.

For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B model you need roughly 40 GB.

And then there’s the context. Most models are optimized for around 4k to 8k tokens. One token is roughly 3-4 characters, so a word is usually one or two tokens. The VRAM needed for the context varies a bit, but is not trivial. For 4k I’d say roughly half a gig to a gig of VRAM.

As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model’s VRAM cost, and you will need specialized models to handle that big a context without going off the rails.
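The arithmetic above can be sketched as a back-of-the-envelope estimator. The constants (bits per parameter for a ~4-bit quantized model, per-token context cost) are rough assumptions for illustration, not exact figures for any specific model:

```python
# Rough VRAM estimate: quantized weights plus context (KV cache).
# Constants are ballpark assumptions, not specs for a real model.

def vram_estimate_gb(params_billions: float, context_tokens: int,
                     bits_per_param: float = 4.5,
                     mb_per_ktoken: float = 150.0) -> float:
    weights = params_billions * bits_per_param / 8        # GB for weights
    context = context_tokens / 1000 * mb_per_ktoken / 1024  # GB for context
    return weights + context

print(round(vram_estimate_gb(8, 4096), 1))   # small 8B model + 4k context
print(round(vram_estimate_gb(70, 4096), 1))  # 70B model + 4k context
```

With these assumptions an 8B model lands around 5 GB and a 70B model around 40 GB, consistent with the figures above.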

So no, you’re not loading all the notes directly, and you won’t have a smart model.

For your hardware and use case… try phi3-mini with a RAG system as a start.

applepie ,

You would need a 24GB VRAM card to even start this thing up. Prolly would yield shitty results

Bipta ,

They didn't even mention a specific model. Why would you say they need 24gb to run any model? That's just not true.

applepie ,

I didn’t say any. Based on what he is asking, he can’t just run this shit on an old laptop.

dlundh ,

I watched NetworkChuck’s tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo

slurpinderpin ,

NetworkChuck is the man

Fisch ,

What I’m using is Text Generation WebUI with an 11B GGUF model from Huggingface. I offloaded all layers to the GPU, which uses about 9GB of VRAM. With GGUF models, you can choose how many layers to offload to the GPU, so it uses less VRAM. Layers that aren’t offloaded use system RAM and the CPU, which will be slower.
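The offloading trade-off described above is simple arithmetic: layers fit on the GPU until the VRAM budget runs out, and the rest fall back to system RAM. The per-layer size below is a made-up example figure, not a real measurement of any model:

```python
# Sketch of GGUF layer offloading: how many layers fit in a VRAM budget.
# gb_per_layer is an illustrative assumption, not a real model's figure.

def layers_on_gpu(total_layers: int, gb_per_layer: float, vram_gb: float) -> int:
    """Layers that fit on the GPU; the remainder run on CPU + system RAM."""
    return min(total_layers, int(vram_gb / gb_per_layer))

# e.g. an 11B model with 48 layers at ~0.18 GB each, and 9 GB of VRAM:
print(layers_on_gpu(48, 0.18, 9.0))  # all 48 layers fit on the GPU
```

With less VRAM you would set a lower layer count in the UI, trading speed for fitting the model at all.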

Sims ,

You need more than an LLM to do that. You need a Cognitive Architecture around the model that includes RAG to store/retrieve the data. I would start with an agent network (CA) that already includes the workflow you’re asking for. Unfortunately I don’t have a name ready for you, but take a look here: github.com/slavakurilyak/awesome-ai-agents

RichardoC ,

Jan.ai might be a good starting point, or Ollama? There’s …fromprod.com/…/using-your-own-hardware-for-llms.… which has some guidance for using Jan.ai for both server and client

h3ndrik , (edited )

It depends on the exact specs of your old laptop. Especially the amount of RAM and VRAM on the graphics card. It’s probably not enough to run any reasonably smart LLM aside from maybe Microsoft’s small “phi” model.

So unless it’s a gaming machine with 6GB+ of VRAM, the graphics card will probably not help at all. Without it, it’s going to be slow. For that kind of computer, I recommend projects that are based on llama.cpp or use it as a backend. It’s the best/fastest way to do inference on slow computers and CPUs.

Furthermore you could use online-services or rent a cloud computer with a beefy graphics card by the hour (or minute.)

umami_wasbi ,

GPT4ALL with the LocalDocs plugin?

Skrufimonki ,

While you can run an LLM on an “old” laptop with an Nvidia graphics card, it will likely be really slow. Like several minutes per answer, to much, much longer. Huggingface.co is a good place to start and has a ton of different LLMs to choose from, ranging from small enough to run on your hardware to ones that won’t.

As you are a teacher you know that research is going to be vital to your understanding and implementing this project. There is a plethora of information out there. There will not be a single person’s answer that will work perfectly for your wants and your hardware.

When you have figured out your plan and then run into issues that’s a good point to ask questions with more information about your situation.

I say this cause I just went through this. Not to be an ass.

lemmyvore ,

Can they not get a TPU on USB, like the Coral Accelerator or something?

theterrasque ,

It’s less the calculations and more the memory bandwidth. To generate a token you need to go through all the model data, and that’s usually many, many gigabytes. So the time it takes to read through memory is usually longer than the compute time. GPUs have gigabytes of RAM that’s many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs.

Most TPUs don’t have much RAM, especially the cheap ones.
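The bandwidth argument above reduces to one division: each generated token requires streaming roughly the whole model through memory once, so tokens/sec is capped near bandwidth divided by model size. The bandwidth numbers below are illustrative ballparks, not measurements:

```python
# Why bandwidth dominates LLM inference: per token, the model weights
# are read from memory roughly once. Numbers are illustrative only.

def tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound on generation speed from memory bandwidth alone."""
    return bandwidth_gb_s / model_gb

print(tokens_per_sec(50, 5))   # ~dual-channel DDR4 CPU: 10 tok/s ceiling
print(tokens_per_sec(900, 5))  # ~high-end GPU VRAM: 180 tok/s ceiling
```

That roughly 18x gap is the same ratio as the bandwidth gap, which is the point being made about GPUs versus CPUs and cheap TPUs.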

stanleytweedle ,

I’m in the early stages of this myself and haven’t actually run an LLM locally, but the term that steered me in the right direction for what I was trying to do was ‘RAG’: Retrieval-Augmented Generation.

ragflow.io (terrible name but good product) seems to be a good starting point, but is mainly set up for APIs at the moment. I found this link for local LLM integration and I’m going to play with it later today. github.com/infiniflow/…/deploy_local_llm.md
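Whichever RAG tool you pick, the first step is splitting your documents into overlapping chunks for retrieval. A minimal sliding-window chunker looks like this (the chunk size and overlap are arbitrary choices, not anything a specific tool mandates):

```python
# Minimal sliding-window chunker: split text into word-count chunks
# that overlap, so sentences near a boundary appear in both chunks.

def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc)
print(len(chunks))  # 120 words -> 3 chunks of up to 50 words each
```

Tools like RAGFlow do this (plus embedding and indexing) for you; the sketch just shows what "chunking" means.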

pushECX ,

I’d recommend trying LM Studio (lmstudio.ai). You can use it to run language models locally. It has a pretty nice UI and it’s fairly easy to use.

I will say, though, that it sounds like you want to feed perhaps a large number of tokens into the model, which will require a model made for a large context length and may require a pretty beefy machine.

s38b35M5 ,

matilabs.ai/2024/02/07/run-llms-locally/

Haven’t done this yet, but this is a source I saved in response to a similar question a while back.

Sekki ,

While this will get you a self-hosted LLM, it is not possible to feed data to it like this. As far as I know there are 2 possibilities:

  1. Take an existing model and use the literature data to fine-tune it. The success of this will depend on how much “a lot” means when it comes to the literature
  2. Create a model yourself using only your literature data

Both approaches will require some programming knowledge and an understanding of how an LLM works. Additionally, they will require preparing the unstructured literature data into a kind of structured form that can be used to train or fine-tune the model.
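The data-preparation step usually means turning notes and exam questions into structured prompt/response pairs, commonly stored as JSONL. A minimal sketch (the field names vary by training tool, and the example pairs are invented):

```python
# Sketch of fine-tuning data prep: turn Q&A pairs into JSONL records.
# The "prompt"/"response" field names are an assumption; check your
# training tool's expected format.

import json

qa_pairs = [
    ("Summarise the themes of the set text.", "The main themes are ..."),
    ("Define dramatic irony.", "Dramatic irony occurs when ..."),
]

lines = [json.dumps({"prompt": q, "response": a}) for q, a in qa_pairs]
print(lines[0])
```

One such line per training example is the usual shape, whichever of the two routes above you take.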

I’m just a CS student, so not an expert in this regard ;)

s38b35M5 ,

Thx for this comment.

My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, “I did this.” I fully expect it to be painful and not very fulfilling…
