Even though most people don’t agree with the stats, I think it makes sense because Arch users are never satisfied with their setup. That could lead many of them to pick a middling rating.
Fair moderation. The biggest problem with the largest instances is that they are heavily skewed towards communist ideals and censorship, and mods will ban you for holding (locally) controversial opinions despite not breaking any rules. And sometimes the rules are too arbitrary and get used as a pretext to ban you for your opinion.
Programming.dev has been a very good example of how moderation should be done, but it is for programmers and thus may not appeal to the typical user, who ends up on lemmy.ml instead and gets banned because a mod was in a bad mood and didn’t like their opinion.
I saw that already. Programming.dev was on point right away about hiding my RSS bot’s posts from users who weren’t subscribed, because it was spamming their users’ feeds and they didn’t want that. They’re clearly invested in their users having a good experience instead of, I guess, wanting to order them around? I’m not familiar with them, but it looks like programming.dev is doing it right.
I agree. The moderation on Lemmy is halfway to Reddit’s. There are random rules for no reason. I don’t fully get it.
As far as I know, lemmy.ml and hexbear are the only heavily communist and censorship prone servers out of the top twelve. They were here first, but we really need to stop perpetuating the notion that they represent or dominate Lemmy as a whole, along with the idea that they represent a typical moderation experience on this platform.
I feel like the numerous well-moderated instances don’t get enough credit. The actions of lemmy.ml moderators tend to shape the narrative about Lemmy moderation, which is unfair to other servers and repels new users from the platform. Other instances aren’t perfect with moderation either, but at least they generally try to moderate in good faith and with some degree of neutrality, which is the most you can really ask for.
The primary influence that remains is that lemmy.ml still hosts a disproportionate number of major communities, but that’s slowly changing.
Fair point. I said the biggest, but as you said, Lemmy has been outgrowing the original instances. lemmy.ml hosting so many major communities is still a problem, but if that is slowly changing, I see a good future in Lemmy. OP seems decent, so let’s hope it grows into a fine instance.
I think a 650 W PSU should be enough for a workload of 490 W idle. Please correct me if I am wrong.
You mean 490W under load, right? One would hope that your computer uses less than 100W idle, otherwise it’s going to get toasty in your room :)

I would say this depends on how much cheaper a 650W PSU is, and how likely it is you’ll upgrade your GPU. It really sucks saving up for a ridiculously expensive new GPU and then realizing you also need to fork out an additional €150 to replace your fully functional PSU. On the other hand, going from 650W to 850W might double the cost of the PSU, and it would be a waste of money if you don’t buy a high-end GPU in the future.

For PSUs, check out cultists.network/140/psu-tier-list/. If you’re buying a decent quality unit I wouldn’t worry about efficiency loss from running at a lower % of its rated max wattage; I doubt it’s going to be enough to be noticeable on your power bill.
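For a rough sense of the sizing, here’s the back-of-envelope math. Note that the 20% headroom figure is just a common rule of thumb I’m assuming, not any kind of spec:

```python
# Rough PSU sizing check: peak load plus a headroom margin.
# The 490 W figure is the load estimate from the question above;
# the default 20% headroom is an assumption, not a standard.
def min_psu_watts(load_w: float, headroom: float = 0.20) -> float:
    """Smallest PSU rating that keeps the given headroom above peak load."""
    return load_w * (1 + headroom)

print(min_psu_watts(490))        # 588.0 -> a 650 W unit clears it
print(min_psu_watts(490, 0.30))  # 637.0 -> still under 650 W even at 30%
```

So by this napkin math a 650W unit is indeed fine for a 490W load, as long as the load estimate itself is honest about GPU power spikes.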
I’ve always had Nvidia GPUs and they’ve worked great for me, though I’ve stayed with X11 and never bothered with Wayland. If you’re conscious about power usage, many cards can be power limited + overclocked to compensate. For example I could limit my old RTX3080 to 200W (it draws up to 350W with stock settings) and with some clock speed adjustments I would only lose about 10% fps in games, which isn’t really noticeable if you’re still hitting 120+ fps. My current RTX3090 can’t go below 300W (stock is 370W) without significant performance loss though.
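To put that power-limit tradeoff in perspective, here’s a quick perf-per-watt calculation using the numbers from my 3080 (anecdotal figures from my own card, so don’t treat them as universal):

```python
# Back-of-envelope perf-per-watt for the power-limit example above:
# 350 W stock vs a 200 W limit with ~10% fps loss after clock tuning.
stock_w, limited_w = 350, 200
fps_keep = 0.90  # fraction of stock fps retained at the lower limit

# Ratio of (fps / watt) at the limit vs stock settings.
efficiency_gain = (fps_keep / limited_w) / (1.0 / stock_w)
print(f"perf per watt: {efficiency_gain:.2f}x stock")  # roughly 1.6x
```

In other words, giving up ~10% fps for a 150W reduction buys you roughly 1.6x the frames per watt, which is why power limiting is so attractive if your card tolerates it.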
If you have any interest in running AI stuff, especially LLM (text generation / chat), then get as much VRAM as you possibly can. Unfortunately I discovered local LLMs just after buying the 3080, which was great for games, and realized that 12GB VRAM is not that much. CUDA (i.e. Nvidia GPUs) is still dominant in AI, but ROCm (AMD) is getting more support so you might be able to run some things at least.
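As a rough way to gauge how much VRAM a model needs: multiply the parameter count by bytes per parameter at your quantization level, then add some overhead for context and runtime buffers. A sketch, where the 1.2 overhead factor is my own fudge rather than anything official:

```python
# Very rough VRAM estimate for fitting an LLM's weights on a GPU.
# params_b: parameter count in billions; bits: bits per weight after
# quantization. The 1.2 overhead multiplier is a guess, not a spec.
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    bytes_total = params_b * 1e9 * bits / 8
    return bytes_total * overhead / 1024**3

# A 13B model at 4-bit quantization vs full 16-bit weights:
print(round(vram_gb(13, 4), 1))   # ~7 GB: fits on a 12GB card
print(round(vram_gb(13, 16), 1))  # ~29 GB: hopeless on 12GB VRAM
```

That’s why 12GB felt small so quickly: even mid-sized models only fit after aggressive quantization, and anything bigger needs partial CPU offloading.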
Another mistake I made when speccing my PC was to buy 2×16GB RAM. It sounded like a lot at the time, but once again when dealing with LLMs there are models larger than 32GB that I would like to run with partial offloading (splitting work between GPU and CPU, though usually quite slow). Turns out that DDR5 is quite unstable, and I don’t know if it’s my motherboard or the Ryzen CPU that is to blame, but I can’t just add 2 more sticks. There are 4 slots, but with all four populated it would run at 3800MHz instead of the 6200MHz that the individual sticks are rated for. Don’t know if Intel mobos can run 4x DDR5 sticks at full speed.
And a piece of general advice, in case this isn’t common knowledge at this point: be wary when trying to find buying advice using search engines. Most of the time they’ll only give you low quality “reviews” written solely to convince readers to click on affiliate links :( There are still a few sites which actually test the components instead of just AI-generating articles. Personally I look for tier lists compiled by users (like this one for mobos), and when it comes to reviews I tend to trust those which get very technical with component analyses, measurements and multiple benchmarks.