
neidu2 , (edited )

Technically possible with a small enough model to work from. It’s going to be pretty shit, but “working”.

Now, if we were to go further down in scale, I’m curious how/if a 700MB CD version would work.

Or how many 1.44MB floppies you would need for the actual program and smallest viable model.
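The floppy question above is just division with rounding up. A quick sketch, assuming standard 1.44 MB floppies (1,474,560 formatted bytes) and a hypothetical payload size; no compression or disk-spanning overhead is accounted for:

```python
import math

# Formatted capacity of a "1.44 MB" floppy: 1440 KiB = 1,474,560 bytes.
FLOPPY_BYTES = 1_474_560

def floppies_needed(payload_mb: float) -> int:
    """Disks needed to hold a payload of the given size in decimal MB."""
    return math.ceil(payload_mb * 1_000_000 / FLOPPY_BYTES)

# e.g. a hypothetical ~50 MB quantized micro-model plus a ~5 MB runtime:
print(floppies_needed(55))  # -> 38
```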

PixelatedSaturn ,

Might be a DVD. A 70b ollama LLM is like 1.5GB, so you could save many models on one DVD.

DannyBoy ,

It does have the label DVD-R

ignotum ,

A 70b model taking 1.5GB? So 0.02 bytes per parameter?

Are you sure you're not thinking of a heavily quantised and compressed 7b model or something? Ollama llama3 70b is 40GB from what I can find, and that's a lot of DVDs
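The size arithmetic here is easy to sanity-check: file size in bits divided by parameter count. A sketch, using decimal gigabytes:

```python
def bits_per_param(size_gb: float, params_b: float) -> float:
    """Bits of on-disk storage per parameter for a model file."""
    bits = size_gb * 1e9 * 8        # file size in bits (decimal GB)
    return bits / (params_b * 1e9)  # parameter count in billions

# 1.5 GB for 70b params: ~0.17 bits (~0.02 bytes) per param -- implausible.
print(bits_per_param(1.5, 70))
# 40 GB for 70b params: ~4.6 bits per param -- consistent with ~4-bit quantization.
print(bits_per_param(40, 70))
```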

PixelatedSaturn ,

Ah yes, probably the smaller version, you're right. Still a very good LLM, better than GPT-3

9point6 ,

Less than half of a BDXL though! The dream still breathes

errer ,

It is a DVD; you can faintly see DVD+R on the left side

lidd1ejimmy OP ,

Yes, I guess it would be a funny experiment for just a local model

kindenough ,

Maybe it doesn't have to be an LLM at all: https://en.wikipedia.org/wiki/ELIZA

Naz ,

squints

That says , “PHILLIPS DVD+R”

So we’re looking at a 4.7GB model, or just a hair under the tiniest, most incredibly optimized implementation of <INSERT_MODEL_NAME_HERE>

curbstickle ,

llama 3 8b, phi 3 mini, Mistral, moondream 2, neural chat, starling, code llama, llama 2 uncensored, and llava would fit.
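Checking which of those would fit is just a capacity comparison against the 4.7 GB printed on the disc. A sketch; the model sizes below are ballpark 4-bit-quantized download sizes I'm assuming for illustration, not figures from the thread:

```python
# Single-layer DVD capacity, decimal gigabytes as printed on the disc.
DVD_GB = 4.7

def fits_on_dvd(model_size_gb: float) -> bool:
    return model_size_gb <= DVD_GB

# Illustrative (assumed, not measured) quantized model file sizes in GB:
sizes = {"phi-3-mini": 2.3, "moondream-2": 1.7, "llama-3-8b": 4.7, "llama-3-70b": 40.0}
for name, gb in sizes.items():
    print(name, fits_on_dvd(gb))
```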

Num10ck ,

ELIZA was pretty impressive for the 1960s, as a chatbot for psychology.
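And ELIZA needs no model file at all: it's keyword rules plus pronoun reflection. A tiny sketch in the spirit of Weizenbaum's 1966 program (the rules below are made up for illustration, not the original DOCTOR script):

```python
import re

# Pronoun swaps applied to the captured text before echoing it back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, tried in order; the last one is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."

print(respond("I feel sad about my job"))  # -> "Why do you feel sad about your job?"
```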
