
How can I use a local LLM on Linux to generate a long story?

I’m interested in automatically generating lengthy, coherent stories of 10,000+ words from a single prompt using an open source local large language model (LLM) on low-spec hardware like a laptop without GPU and with i5-8250U, 16GB DDR4-2400MHz. I came across the “Awesome-Story-Generation” repository which lists relevant papers describing promising methods like “Re3: Generating Longer Stories With Recursive Reprompting and Revision”, announced in this Twitter thread from October 2022 and “DOC: Improving Long Story Coherence With Detailed Outline Control”, announced in this Twitter thread from December 2022. However, these papers used GPT-3, and I was hoping to find similar techniques implemented with open source tools that I could run locally. If anyone has experience or knows of resources that could help me achieve long, coherent story generation with an open source LLM on low-spec hardware, I would greatly appreciate any advice or guidance.

Andromxda ,
@Andromxda@lemmy.dbzer0.com avatar

Try out some of the options listed in this comment ttrpg.network/comment/6729305

astrsk ,
@astrsk@kbin.social avatar

GPT4All

tallricefarmer ,
@tallricefarmer@sopuli.xyz avatar

I feel like I’ve seen someone ask this exact question here not too long ago. Was it you?

julianh ,

You can get a really cool, coherent story of any length you want by writing one or hiring a writer.

AlligatorBlizzard ,

I’ve won NaNoWriMo twice and I can confirm that writing your own does not necessarily result in a cool or coherent story. One of the two is likely better than an LLM could come up with, though.

thebardingreen ,
@thebardingreen@lemmy.starlightkel.xyz avatar

I’ve written more than enough words to win, while failing to finish my story. I’ve also played a lot with local LLMs. Can confirm on all counts.

possiblylinux127 ,

You aren’t going to get a response that long. That is just a limitation of LLMs. Even if you do manage to get something that long, it won’t make sense, since the model can’t hold enough context as it generates.

vhstape ,
@vhstape@lemmy.sdf.org avatar

Ollama provides a Python API which may be useful. You could have it generate the story in chunks, producing a list of key points that gets passed to subsequent prompts. Maybe…

d416 ,

The limited context lengths of local LLMs will be a barrier to writing 10k words from a single prompt. One approach is to have the LLM hold a conversation with itself or with other LLMs. There are prompts out there that can simulate this, but you will need to intervene every few hundred words or so. Check out agent frameworks like AutoGen that can orchestrate this for you; CrewAI is one of the better ones. Hope this helps
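The "LLM conversing with itself" pattern can be sketched as a writer/editor loop. This is a hand-rolled illustration of the idea, not AutoGen's or CrewAI's actual API; `llm()` is a stub standing in for a real local model call, and the role names and round count are made up for the example.

```python
def llm(role: str, prompt: str) -> str:
    # Stand-in for a real local model call with a role-specific
    # system prompt (e.g. via the ollama Python package).
    return f"[{role}: {prompt[:30]}...]"

def writer_editor_loop(premise: str, rounds: int = 3) -> list[str]:
    # The "writer" drafts; the "editor" critiques and plans the next
    # beat; the writer continues using those notes. This is the
    # intervention a human would otherwise do every few hundred words.
    draft = llm("writer", f"Begin a story: {premise}")
    transcript = [draft]
    for _ in range(rounds):
        notes = llm("editor", f"Critique this and list what should happen next:\n{draft}")
        draft = llm("writer", f"Continue the story, following these notes:\n{notes}")
        transcript.append(draft)
    return transcript
```

Frameworks like AutoGen automate this turn-taking and message passing; the loop above is just the bare shape of it.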

breadsmasher ,
@breadsmasher@lemmy.world avatar

Have a peruse of this article. It covers various options for running LLMs locally.

matilabs.ai/2024/02/07/run-llms-locally

ChasingEnigma OP ,

I already use an LLM locally. What I’m looking for is a simple way to automate the process of making the LLM write long stories.

breadsmasher ,
@breadsmasher@lemmy.world avatar

I absolutely misunderstood your post, my bad
