I was coming from Lighttpd, which at the time had a very similar config syntax to Nginx. It was pretty much a no-brainer, considering I wanted to shift to an automated Let's Encrypt renewal process at the same time.
Sadly I wrote some Python web services for plain CGI (not Django/Flask) that can't be run anymore, since Nginx only supports FastCGI rather than plain CGI, as far as I can tell.
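For what it's worth, a FastCGI-to-CGI shim like fcgiwrap is a common workaround for this: it speaks FastCGI to Nginx and spawns the CGI script per request. A minimal sketch of a location block, where the socket path and document root are assumptions that vary by distro:

```nginx
# Hypothetical /cgi-bin/ handler: Nginx talks FastCGI to fcgiwrap,
# which executes the classic CGI script and relays its output.
location /cgi-bin/ {
    gzip off;
    include fastcgi_params;
    fastcgi_pass unix:/run/fcgiwrap.socket;              # path varies by distro
    fastcgi_param SCRIPT_FILENAME /srv/www$fastcgi_script_name;
}
```

The scripts themselves need no changes, since fcgiwrap sets up the usual CGI environment before exec'ing them.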
I remember that Asus did this back in the day at least, not sure if they still do. But I remember having RSS feeds for at least 2 of my motherboards in my reader, back when RSS was actually widely used. It's been like 10-15 years though…
For LLMs it entirely depends on what size models you want to use and how fast you want them to run. Since there are diminishing returns to increasing model size (a 14B model isn't twice as good as a 7B model), the best bang for the buck comes from the smallest model you think has acceptable quality. And if you think generation speeds of around 1 token/second are acceptable, you'll probably get more value for money using partial offloading, where only some of the model's layers live in VRAM and the rest run on the CPU.
If your answer is “I don’t know what models I want to run” then a second-hand RTX3090 is probably your best bet. If you want to run larger models, building a rig with multiple (used) RTX3090 is probably still the cheapest way to do it.
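To make the "what fits" question concrete, here's a rough rule-of-thumb sketch (my own back-of-envelope numbers, not an exact formula): weight memory is roughly parameter count times bits per weight, plus a flat allowance for KV cache and activations.

```python
def approx_vram_gb(params_billions: float, bits_per_weight: float,
                   overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weights plus a flat
    allowance for KV cache and activations (assumed ~1.5 GB)."""
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 7B model at 4-bit quantization: ~3.5 GB of weights, ~5 GB total,
# comfortable on a 24 GB RTX 3090.
print(approx_vram_gb(7, 4))   # 5.0

# A 70B model at 4-bit needs ~36.5 GB, so a single 3090 forces
# partial offloading (some layers on GPU, the rest on CPU).
print(approx_vram_gb(70, 4))  # 36.5
```

The overhead term grows with context length in practice, so treat these as lower bounds.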
Regular reminder about rule 3 to anyone answering. And if you do give links to top-level domains, please use base64 encoding or some other obfuscation mechanism.
My aunt works in oil; if it was made illegal tomorrow, that would disrupt her life. I would be glad to help her, and glad that the future of everybody just got brighter.
If oil was made illegal tomorrow, society would collapse.
Supply chains would collapse, artificial fertilizer would not be made, crops would die, massive famine would set in, medicine could not be manufactured, power generation would stall (including emergency generators), and a vast number of people would die globally.
Congratulations: whoever makes oil illegal will be responsible for the biggest mass death event in history.
Me personally, as a newb regarding proxies and homelabs, I use nginx because it was super easy to set up (Proxmox script), there were many tutorials available, and it just works great. I had to debug some things and that also went smoothly, so just a perfect package.
This is way better looking than mine. I'm new to Linux and I've got a long way to go before I can do something like this, but despite the hassle of using Linux, there's no way I'm going back to Windows.