peter , to asklemmy in People who struggled with procrastination and now stopped, what made you stop procrastinating?
@peter@feddit.uk avatar

They’re not browsing Lemmy, I’ll tell you that for free.

Rocketpoweredgorilla ,
@Rocketpoweredgorilla@lemmy.ca avatar

Well hello there fellow procrastinator.

adj16 , to asklemmy in Help me stop accidentally hurting my dog

I have no advice for you, as I live in a very humid place without very much risk of static shocks. I just want to say this question and post are hilarious.

boogetyboo OP ,
@boogetyboo@aussie.zone avatar

Haha I admit to using a ragebait headline for attention

Chainweasel ,

There might be a solution in their comment though, do you have a humidifier?

sp00nix ,

This is the way. My last place was so dry, I would get zapped touching the metal frame of my desk and it would reboot my PC. I installed a humidifier into the central heat, no more zaps!

PonyOfWar , to asklemmy in Who is a Youtuber you find to be overrated ?

MrBeast and his many imitators. Don’t get the appeal and the constant shouting voice is very annoying to me. I guess I’m just too old.

Doorbook ,

The target audience is kids.

electrogamerman ,

That just makes it worse

LapGoat , to asklemmy in Do you believe in God?
@LapGoat@pawb.social avatar

nah, religion seems like a scam that usually results in unhinged beliefs and abuse.

Not a fan generally speaking.

if you dig into any religion’s beliefs, it goes into some wild fairy tale stuff that just… doesn’t happen.

Not to mention that folks tend to base their morals on religion, and religions have very flawed morals.

the difference between god and myself is that if I could, I would prevent a child from getting bone cancer.

Oka ,

Religion did have good morals in theory. Not in practice.

Also, unrelated to your points, religion didn’t evolve. It stayed about the same for thousands of years, despite new science.

theKalash ,

Religion did have good morals in theory

Which one is that?

Shiggles ,

That jesus dude had some pretty liberal thoughts. Buddhism was a nice reaction to the caste system. The method of delivery may not be inherently moral, but it is possible to manipulate a population in a way overall beneficial to society.

theKalash ,

That jesus dude had some pretty liberal thoughts

He personally, maybe. I didn’t know the guy. The religion that grew around him, though … not so much.

I’m not sure if it’s because of his father or he just had terrible editors for his posthumous book release. But some of the stuff in there is quite abhorrent.

folkrav ,

It’s quite easy to find a lot of legitimately disgusting stuff in there, true. I’m on the antireligious apatheist side of things, so you don’t have to convince me on that. But I wouldn’t go as far as saying some religions’ fundamental pillars don’t have any good messages behind them. “Love one another” alone isn’t too bad at face value, is it?

theKalash ,

We have so many other books now that contain all those good messages, even a lot more with more relevance to modern life, without all the terrible stuff and nonsense.

It just makes no sense to keep a 2000-year-old book around for a couple of good messages that are already taught in many other, more modern stories and contexts.

folkrav ,

The point was “do religions have any good in them”, not “are religious texts still relevant”.

theKalash ,

No, that was not the point. The point was “do religions have good morals” and the answer is clearly no.

folkrav ,

I see. You seem to interpret it as “are they moral as a whole”. I interpreted it as “do they have any good morals”. I don’t think the two interpretations contradict each other.

theKalash ,

I interpreted it as “do they have any good morals”

That seems like quite a low bar. Basically the broken clock being right twice a day.

No religious person goes around and says “never mind that jesus and god stuff, I’m just in it because of the ‘thou shalt not kill’”. It’s always about bundling in all the irrelevant crap. Those couple of good stories about helping neighbours don’t offset that.

folkrav ,

Yeah, indeed. Was just explaining that that’s how I interpreted the comment you initially replied to, hence my response.

LapGoat ,
@LapGoat@pawb.social avatar

i didn’t say religion only had bad morals. broken clocks and such.

but christianity specifically has a lot of flawed morals that christians handwave. like Mary being 12 when she gave birth to Jesus, or pretty much everything old testament.

claims of a perfect and just omnipotent god while stuff like that flies is sloppy.

galaxies_collide ,

If you need to rely on an external force and fear of hell to have morals, you’re not a good person.

UdeRecife , to asklemmy in What is an absurdity that has been normalized by society?
@UdeRecife@literature.cafe avatar

Having opinions about other people’s gender, sexual orientation, private matters. Also legislating about that.

angrymouse ,

Well, I agree that it is absurd, but it is nothing new

SirStumps ,
@SirStumps@lemmy.world avatar

It’s kind of like religion for me. Don’t try to preach to me and I don’t care what you do.

Anarchist ,
@Anarchist@hexbear.net avatar

hell yes. no government has any right to dictate our genders or who we love.

Whimsical , to asklemmy in What is an absurdity that has been normalized by society?

Once got in a conversation about nuclear power that hit the point of “Yes nuclear is safer and more efficient but what about the jobs of the coal employees? Do you want them all to starve?”

Took a while to digest because there’s a lot of normalization surrounding it, but after a while I realized what I had been told was:

“We have to intentionally gimp our efficiency in both energy production and pollution generation in order to preserve a harder, more costly industry, because otherwise people wouldn’t have a task that they need to do in order to feed themselves.”

Kinda disillusioned me with the underpinnings of capitalism, just how backwards it was to have to think this way. We can’t justify letting people live unless they’re necessary to society in some way - which might’ve made solid sense in older, very very different times in human history, but now means that so much of our culture is tied up in finding more excuses to make people do work that isn’t really necessary at all.

New innovations happen, and tasks are made easier, and that doesn’t actually save anyone any work, because everyone still has to put in 40 hours a week. New tech lets you do it in 10 hours? Whoops, actually that means that you’re out of a job, replaced with an intern or something. Making “life” easier makes individual lives harder, what the fuck? That isn’t how things should be at all!

Not exactly an easy situation to crack, but to circle back to the point of the thread - I hate how normal it is to argue on the basis that we need to create jobs, everywhere, all the time. I wish we’d have a situation where people can brag for political clout about destroying jobs instead, about reducing the amount of work people need to do to live and live comfortably, instead of trying to enforce this system where efficiency means making people obsolete means making people starve.

rodbiren ,

Whoa there comrade. Trying to build a world where extracting value from labor isn’t the ultimate goal? You’ll never be a delusional billionaire wannabe grinding your youth and passion into the labor that powers the elite classes’ whims with that attitude. Don’t you want to see Jeff Bezos sorta go to space? That can’t happen if you go spreading the wealth around. Stay hungry, my friend.

I_Has_A_Hat ,

The robots are creating art and music while the humans are working harder than ever.

Meowoem ,

That’s fun to say but not really a reflection of reality. Factories full of machine operators don’t exist like they used to - my parents talk about what it was like when the local factory’s day ended and everyone would flood the streets and fill the bars, all in their overalls… They actually still make the same product in a slightly different location, but only about fifty people work there and they produce far more units.

It’s the same in every industry, and all the extra profits are going into the pockets of the owners who live increasingly luxurious lifestyles. If the huge efficiency gains we’ve seen in recent decades were used to benefit society then we’d be living far better lives, but they’re being used to buy absurdly overpriced art to hang in super yachts and show off to their rich buddies.

Reverendender ,

Hi, where can I sign up for your blog and donate to your campaign?

PanaX , to asklemmy in What is an absurdity that has been normalized by society?

Destroying our only habitable planet.

doofer_name ,

I feel like this should have way more upvotes.

nous , to linux in Hey Linux devs - Build a GUI or gtfo

Hey Linux devs - Build a GUI or gtfo

No you can GTFO if that is your attitude towards people volunteering their time to bring you an open OS and all the tools you need for free.

Yes, there is still a lot of room for improvement but attacking devs for not providing a GUI is not a good way to interact with the community. If you really want to see improvements then you need to help make those improvements with constructive discussions not hostile statements. We owe you nothing.

mub OP ,

My title was intentionally flippant. But I think the automatic assumption that the command line is always fine for the Linux desktop needs to evolve. Not to say it hasn’t, but there are definitely some basic gaps.

nous ,

Flippantly insulting the Linux devs is not the way to improve things. It has evolved and continues to do so. There are far more GUI tools for managing things than there have ever been. The only thing you have mentioned in your post is AMD GPU overclocking - not something I would consider a novice task nor something most people are going to want to do. So the priority to get a GUI for this is quite low. Hell, it looks like there are no userland tools at all - only raw kernel interfaces. So it is really something where we are lacking any tooling at all - let alone GUI tools.
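
For anyone curious what “raw kernel interfaces” means in practice here: a minimal sketch that just dumps the amdgpu overdrive clock/voltage table from sysfs. The path and card index are assumptions for illustration (card0, and the pp_od_clk_voltage file typically only appears when overdrive is enabled, e.g. via the amdgpu.ppfeaturemask boot parameter); actually writing new clocks through it is a whole other exercise.

```c
#include <stdio.h>

/* Assumed sysfs path for the amdgpu "overdrive" clock/voltage table;
 * adjust card0 to match your GPU. The file is only present when
 * overdrive is enabled in the driver. */
#define OD_PATH "/sys/class/drm/card0/device/pp_od_clk_voltage"

int main(void) {
    FILE *f = fopen(OD_PATH, "r");
    if (!f) {
        perror("fopen " OD_PATH);
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof line, f)) /* print the current clock/voltage table */
        fputs(line, stdout);
    fclose(f);
    return 0;
}
```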

Better to advocate for these tools than insult devs for not having yet created them.

grue ,

My title was intentionally flippant.

No, your title was rude and condescending. “Flippant” is a different thing.

constantokra ,

Not evolving is a feature. I started using linux in the 90s, and you know what? About 90% of the stuff I learned then is still completely relevant.

I hate GUI apps for most things, because you have to search to figure out how to do anything. With CLI apps you read the man page and you know how to use it.

GregorGizeh , to asklemmy in What subscription finally gave you "subscription fatigue"?

It started with the Netflix enshittification. I have had a Spotify and Netflix account essentially since these services were available, and that was great. Now only the Spotify sub is worth it, though I started to loathe that one as well because it at some point deleted all my local files or replaced them with what it thought matched them in their database.

Also every fucking app, no matter how mundane, wants to sell me a subscription. I have a web based game boy emulator on my phone, it works fine but everything beyond the absolute basic functions is paywalled behind a subscription. Not even a one time purchase.

GreenMario ,

Bro. RetroArch, gambatte or MGBA core. Thank me later.

Caitlynn ,
@Caitlynn@feddit.de avatar

This

flat ,
@flat@reddthat.com avatar

if you truly need something web based, eclipse is completely free.

argv_minus_one ,

If it requires a subscription, it doesn’t exist.

That attitude has served me well. So far, at least.

massive_bereavement ,
@massive_bereavement@kbin.social avatar

I don't do mail though, I know some do and are successful but mail is too important for me (and everyone subjected to my technical whims) to fuck it up.

argv_minus_one ,

I meant software and media. With mail, somebody’s running a server and policing spammers, which costs time and money.

Landmammals ,

Enshittification didn’t kill Netflix. What killed it was all the studios pulling their content licenses so they could start their own Netflix. Enshittification happened afterwards, as Netflix desperately tried to stay consistently profitable. They killed a lot of good shows and messed with the algorithm that showed people what they actually wanted to see.

GregorGizeh , (edited )

I know what happened, I was there… Guess I should have used a different term than “enshittification” for all the content going from being in one place for a good price to being in a dozen places for exorbitant prices each.

Landmammals ,

I’m torn between feeling bad for Netflix because they tried to do something cool and got the rug pulled out from under them as soon as it started to work, and being mad at them for fucking up their algorithm and studio so badly.

Cabeza2000 , to asklemmy in What’s one word to prove you lived trough the 90’s?

Netscape

qjkxbmwvz ,

Linux and Mac users can hold on to a little piece of that history with the wonderful xscreensaver suite (its author, jwz, was a Netscape dude).

Juno ,

Add to that, flying toasters

wtvr ,

Jwz! He was like nerd Jesus to us 90s computer geeks

const_void ,

Even better, just use Firefox - it descended from Netscape Navigator.

morgan_423 , to asklemmy in How would you save face if you got caught sniffing seats at work?
@morgan_423@lemmy.world avatar

I’m getting major “I’m asking for a friend” vibes off of this post.

WtfEvenIsExistence , to memes in Its that time again

How about letting sh.itjust.works take its place, because their shit seems to always just work.

CookieJarObserver ,
@CookieJarObserver@sh.itjust.works avatar

We have an uptime that’s beyond reasonable; there were some smaller problems some days ago, but they just made things a little slower.

And lemmy.ml is full of tankies.

There are, however, many other instances as well; being on many small ones eases the burden for all and makes single-point-of-failure problems less likely.

JDubbleu ,

100% agree. I’ve yet to notice programming.dev go down, which makes sense when you consider the target demographic and that the admins probably fit right into it.

victron ,
@victron@programming.dev avatar

I feel so safe here lol

Zink ,

Once I made my non-world account here, Lemmy is like a whole new rock solid experience!

Combined with the choice in apps, even though they’re still evolving, it’s great.

victron ,
@victron@programming.dev avatar

Yeah, it’s incredible how your choice of instance can give you a completely different experience (in terms of stability). Still, you can sub to any community you want, so your instance is irrelevant, unless it’s defederated from a server you like.

Zink ,

Yeah, I think there’s a sweet spot in using a mid sized instance such as this. It’s big enough to have a bunch of users hitting communities all over the fediverse, and to have some resources dedicated to the server, but small enough to have its own identity and not be a target.

The magic happens because of how your home instance caches communities from other instances. So your experience depends on the reliability of the instance you’re using, but not so much on the reliability of other instances (within reason).

ElPussyKangaroo ,

I love how this is now similar to when people would have usernames relevant to the conversation on Reddit.

Now, y’all just swoop in with the relevant instance 😂

static_motion ,

Which was part of my reasoning for joining there, along with the fact that I’m a dev myself so it aligns with my interests.

randint ,

If you think lemmy.ml is full of tankies, you should check out lemmygrad.ml and hexbear.net. They are a hundred times worse

CookieJarObserver ,
@CookieJarObserver@sh.itjust.works avatar

The other two are defederated by most because they are actual commie nazis more than anything else.

randint ,

I hope mine, lemm.ee, defederates from those instances too

CookieJarObserver ,
@CookieJarObserver@sh.itjust.works avatar

You can check that at the bottom of the page, under Instances.

randint ,

I know that they have not been defederated from lemm.ee. I meant that I hope lemm.ee defederates from them in the future.

CookieJarObserver ,
@CookieJarObserver@sh.itjust.works avatar

Well, if not, come to Shitjustworks; we defederated from lemmygrad (hexbear is basically irrelevant, I’ve never seen a post from there, just some idiots I’ve blocked).

randint ,

Thanks for the invitation. Though I’ve had plenty of unpleasant encounters with hexbears, I think I’ll stay on lemm.ee for now. Maybe I’ll move in the future when things get even worse.

loaf ,
@loaf@sh.itjust.works avatar

I back this, as a sh.ithead

burgersc12 ,

You have my bow

OrangeXarot ,

it just works ¯\_(ツ)_/¯

16bitvoid ,
@16bitvoid@lemmy.world avatar

Just an anecdote, but I’ve been waiting for (I think) close to 36 hours for the email verification to complete my sign-up there. Since there’s no way to resend it, I’ve been stuck in sign-up limbo.

WtfEvenIsExistence ,

Meh, I can create an account using a Temp Email address. What’s your email provider?

16bitvoid ,
@16bitvoid@lemmy.world avatar

I’ve tried creating two accounts with the same issue. One with a Gmail address and one with a Fastmail account. Neither has received a verification email. I’ve checked the spam folders on both too.

Sh_ItDoesnt_Work ,

Hello from my new account at sh.itjust.works. I just created this, you can check my profile. I used a temp email: temp-mail[dot]org

After you sign up, you can change the email to some random invalid email so your account is safe from getting hijacked via password reset through email. Changing email doesn’t require re-verification as far as I can tell.

Perhaps your IP got flagged for some reason. Are you using a VPN or Tor? Those may trigger some spam filters. Is your network a public one? (Eg: University, Hospital, Workplace, etc.)

16bitvoid ,
@16bitvoid@lemmy.world avatar

That’s really weird. I’ve been staying at a hotel the last few days and maybe the IP is flagged for some reason, like you said. Well, I guess those two handles are just lost then.

FellowEnt , to linux in [Rant] I swear to fucking god. Windows is harder to use than Linux. Have any of you ever USED Windows lately? Holy fuck.

Definitely a skill issue at play here.

radioactiveradio ,

Yeah their K/D must be terrible in windows.

BigNote ,

I don’t think “skill” is the right word here. It’s more of a basic competence issue.

mindbleach , to nostupidquestions in What would 128 bits computing look like?

The PS3 had a 128-bit CPU. Sort of. “Altivec” vector processing could split each 128-bit word into several values and operate on them simultaneously. So for example if you wanted to do 3D transformations using 32-bit numbers, you could do four of them at once, as easily as one. It doesn’t make doing one any faster.

Vector processing is present in nearly every modern CPU, though. Intel’s had it since the late 90s with MMX and SSE. Those just had to load registers 32 bits at a time before performing each same-instruction-multiple-data operation.
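
To make “several values at once” concrete, here’s a minimal sketch using x86 SSE intrinsics (chosen purely for illustration; Altivec has its own equivalents): one 128-bit register holds four 32-bit floats, and a single instruction adds all four lanes. As noted above, doing one addition this way is no faster - the win only shows up when data really comes in groups of four.

```c
#include <stdio.h>
#include <xmmintrin.h> /* SSE intrinsics: 128-bit registers, four 32-bit float lanes */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    __m128 va = _mm_loadu_ps(a);      /* load four floats into one 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vsum = _mm_add_ps(va, vb); /* one instruction, four additions at once */
    _mm_storeu_ps(out, vsum);

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```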

The benefit of increasing bit depth is that you can move that data in parallel.

The downside of increasing bit depth is that you have to move that data in parallel.

To move a 32-bit number between places in a single clock cycle, you need 32 wires between two places. And you need them between any two places that will directly move a number. Routing all those wires takes up precious space inside a microchip. Indirect movement can simplify that diagram, but then each step requires a separate clock cycle. Which is fine - this is a tradeoff every CPU has made for thirty-plus years, as “pipelining.” Instead of doing a whole operation all-at-once, or holding back the program while each instruction is being cranked out over several cycles, instructions get broken down into stages according to which internal components they need. The processor becomes a chain of steps: decode instruction, fetch data, do math, write result. CPUs can often “retire” one instruction per cycle, even if instructions take many cycles from beginning to end.

To move a 128-bit number between places in a single clock cycle, you need an obscene amount of space. Each lane is four times as wide and still has to go between all the same places. This is why 1990s consoles and graphics cards might advertise 256-bit interconnects between specific components, even for mundane 32-bit machines. They were speeding up one particular spot where a whole bunch of data went a very short distance between a few specific places.

Modern video cards no doubt have similar shortcuts, but that’s no longer the primary way they perform ridiculous quantities of work. Mostly they wait.

CPUs are linear. CPU design has sunk eleventeen hojillion dollars into getting instructions into and out of the processor, as soon as possible. They’ll pre-emptively read from slow memory into layers of progressively faster memory deeper inside the microchip. Having to fetch some random address means delaying things for agonizing microseconds with nothing to do. That focus on straight-line speed was synonymous with performance, long after clock rates hit the gigahertz barrier. There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’

Video cards wait better. They have wide lanes where they can afford to, especially in one fat pipe to the processor, but to my knowledge they’re fairly conservative on the inside. They don’t have hideously-complex processors with layers of exotic cache memory. If they need something that’ll take an entire millionth of a second to go fetch, they’ll start that, and then do something else. When another task stalls, they’ll get back to the other one, and hey look the fetch completed. 3D rendering is fast because it barely matters what order things happen in. Each pixel tends to be independent, at least within groups of a couple hundred to a couple million, for any part of a scene. So instead of one ultra-wide high-speed data-shredder, ready to handle one continuous thread of whatever the hell a program needs next, there’s a bunch of mundane grinders being fed by hoppers full of largely-similar tasks. It’ll all get done eventually. Adding more hardware won’t do any single thing faster, but it’ll distribute the workload.

Video cards have recently been pushing the ability to go back to 16-bit operations. It lets them do more things per second. Parallelism has finally won, and increased bit depth is mostly an obstacle to that.

So what 128-bit computing would look like is probably one core on a many-core chip. Like how Intel does mobile designs, with one fat full-featured dual-thread linear shredder, and a whole bunch of dinky little power-efficient task-grinders. Or… like a Sony console with a boring PowerPC chip glued to some wild multi-phase vector processor. A CPU that they advertised as a private supercomputer. A machine I wrote code for during a college course on machine vision. And it also plays Uncharted.

The PS3 was originally intended to ship without a GPU. That’s part of its infamous launch price. They wanted a software-rendering beast, built on the Altivec unit’s impressive-sounding parallelism. This would have been a great idea back when TVs were all 480p and games came out on one platform. As HDTVs and middleware engines took off… it probably would have killed the PlayStation brand. But in context, it was a goofy path toward exactly what we’re doing now - with video cards you can program to work however you like. They’re just parallel devices pretending to act linear, rather than the other way around.

Vlyn ,

There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’

You massacred my boy there. It doesn’t say that at all. Amdahl’s law is actually a formula for how much speedup you can get by using more cores. Which boils down to: how many parts of your program can’t be run in parallel? You can throw a billion cores at something, but if you have a step in your algorithm that can’t run in parallel… that’s going to be the part everything waits on.

Or copied:

Amdahl’s law is a principle that states that the maximum potential improvement to the performance of a system is limited by the portion of the system that cannot be improved. In other words, the performance improvement of a system as a whole is limited by its bottlenecks.
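
For reference, a small sketch of the formula being described, assuming the usual statement of Amdahl’s law with parallelisable fraction p and n cores:

```c
#include <stdio.h>

/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
 * where p is the parallelisable fraction and n the core count. */
static double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    const double p = 0.9;                  /* 90% parallel, 10% stubbornly serial */
    const double cores[] = {1, 4, 16, 1e6};
    for (int i = 0; i < 4; i++)
        printf("%10.0f cores -> %5.2fx speedup\n", cores[i], amdahl_speedup(p, cores[i]));
    /* The result approaches but never exceeds 1 / (1 - p) = 10x. */
    return 0;
}
```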

mindbleach ,

Gene Amdahl himself was arguing hardware. It was never about writing better software - that’s the lesson we’ve clawed out of it, after generations of reinforcing harmful biases against parallelism.

Telling people a billion cores won’t solve their problem is bad, actually.

Human beings by default think going faster means making each step faster. How you explain that’s wrong is so much more important than explaining that it’s wrong. This approach inevitably leads to saying ‘see, parallelism is a bottleneck.’ If all they hear is that another ten slow cores won’t help but one faster core would - they’re lost.

That’s how we got needless decades of doggedly linear hardware and software. Operating systems that struggled to count to two whole cores. Games that monopolized one core, did audio on another, and left your other six untouched. We still lionize cycle-juggling maniacs like John Carmack and every Atari programmer. The trap people fall into is seeing a modern GPU and wondering how they can sort their flat-shaded triangles sooner.

What you need to teach them, what they need to learn, is that the purpose of having a billion cores isn’t to do one thing faster, it’s to do everything at once. Talking about the linear speed of the whole program is the whole problem.

Vlyn ,

You still don’t get it. This is about algorithmic complexity.

Say you have an algorithm that has 90% that can be done in parallel, but you have 10% that can’t. No matter how many cores you throw at it, be it 4, 10, or a billion, the 10% will be the slowest part that you can’t optimize with more cores. So even with an unlimited amount of cores, your algorithm is still having to wait on the last 10% that runs on a single core.

Amdahl’s law is simply about those 10% you can’t speed up, no matter how many cores you have. It’s a bottleneck.

There are algorithms you can’t run in parallel, simply because the results depend on each other. For example in a cipher where you first calculate block A, then to calculate block B you rely on block A. You can’t do block A and B at the same time, it’s not possible. Yes, you can use multi-threading to calculate A, then do it again to calculate B, but overall you still have waiting times while you wait for each result, which means no matter how fast you get, you always have a minimum time that you’ll need.
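
A sketch of that kind of dependency in code - the “cipher” here is a made-up stand-in, purely to show the data flow. The chained loop can’t overlap its iterations because each one reads the previous result; the independent loop parallelises trivially:

```c
#include <stddef.h>

typedef unsigned char block_t[16];

/* Hypothetical stand-in for a real cipher round, only here to show data flow. */
static void encrypt_block(block_t b) {
    for (int j = 0; j < 16; j++)
        b[j] = (unsigned char)(b[j] * 31u + 7u);
}

/* CBC-style chaining: block i mixes in block i-1 before encryption,
 * so iteration i cannot start until iteration i-1 has finished. */
static void chained(block_t *blocks, size_t n) {
    for (size_t i = 1; i < n; i++) {
        for (int j = 0; j < 16; j++)
            blocks[i][j] ^= blocks[i - 1][j];
        encrypt_block(blocks[i]);
    }
}

/* ECB-style independence: every iteration stands alone and could run
 * on its own core without changing the result. */
static void independent(block_t *blocks, size_t n) {
    for (size_t i = 0; i < n; i++)
        encrypt_block(blocks[i]);
}

int main(void) {
    block_t data[4] = {{0}};
    chained(data, 4);
    independent(data, 4);
    return 0;
}
```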

Throwing more hardware at this won’t help, that’s the entire point. It helps to a certain degree, but at some point the parts you can’t run in parallel will hold you back. This obviously doesn’t count for workloads that can be done 100% in parallel (like rendering where you can split the workload up without issues), Amdahl’s law doesn’t apply there as the amount of single-core work would be zero in the equation.

The whole thing is used in software development (I heard of Amdahl’s law in my university class) to decide if it makes sense to multi-thread part of the application. If the work you do is too sequential then multi-threading won’t give you much of a benefit (or makes it run worse, as you have to spin up threads and synchronize results).

mindbleach ,

I am a computer engineer. I get the math.

This is not about the math.

Speeding up a linear program means you’ve already failed. That’s not what parallelism is for. That’s the opposite of how it works.

Parallel design has to be there from the start. But if you tell people adding more cores doesn’t help, unless!, they’re not hearing “unless.” They’re hearing “doesn’t.” So they build shitty programs and bemoan poor performance and turn to parallelism to hurry things up - and wow look at that, it doesn’t help.

I am describing a bias.

I am describing how a bias is reinforced.

That’s not even a corruption of Amdahl’s law, because again, the actual dude named Amdahl was talking to people who wanted to build parallel machines to speed up their shitty linear code. He wasn’t telling them to code better. He was telling them to build different machines.

Building different machines is what we did for thirty or forty years after that. Did we also teach people to make parallelism-friendly programs? Did we fuck. We’re still telling students about “linear portions” as if programs still get entered on a teletype and eventually halt. What should be a 300-level class about optimization is instead thrown at people barely past Hello World.

We tell them a billion processors might get them a 10% speedup. I know what it means. You know what it means. They fucking don’t.

Every student’s introduction to parallelism should be a case where parallelism works. Something graphical, why not. An edge-detect filter that crawls on a monster CPU and flies on a toy GPU. Not some archaic exercise in frustration. Not some how-to for turning two whole cores into a processor and a half. People should be thinking in workloads before they learn what a goddamn pointer is. We betray them, by using a framing of technology that’s older than disco. Amdahl’s law as she is taught is a relic of the mainframe era.

Telling kids about the limits of parallelism before they’ve started relying on it has been an excellent way to ensure they won’t.

Vlyn ,

At this point you’re just arguing to argue. Of course this is about the math.

This is Amdahl’s law, it’s always about the math:

upload.wikimedia.org/…/1024px-AmdahlsLaw.svg.png

No one is telling students to use or not use parallelism, it depends on the workload. If your workload is highly sequential, multi-threading won’t help you much, no matter how many cores you have. So you might be able to switch out the algorithm and go with a different one that accomplishes the same job. Or you re-order tasks and rethink how you’re using the data you have available.

Practical example: The game Factorio. It has thousands of conveyor belts that have to move items in a deterministic way. As to not mess things up this part of the game ran on a single thread to calculate where everything landed (as belts can intersect, items can block each other and so on). With some clever tricks they rebuilt how it works, which allowed them to safely spread the workload over several cores (at least for groups of belts). Bit of a write-up here (under “Multithreaded belts”).

Teaching software development involves teaching the theory. Without that you would have a difficult time to decide what can and what can’t benefit from multi-threading. Absolutely no one says “never multi-thread!” or “always multi-thread!”, if you had a teacher like that then they sucked.

Learning about Amdahl’s law was a tiny part of my university course. A much bigger part was actually multi-threading programs, working around deadlocks, doing performance testing and so on. You’re acting as if the teacher shows you Amdahl’s law and then says “Obviously this means multi-threading isn’t worth it, let’s move on to the next topic”.

mindbleach ,

“The way we teach this relationship causes harm.”

“Well you don’t understand this relationship.”

“I do, and I’m saying: people plainly aren’t getting it, because of how we teach it.”

“Well lemme explain the relationship again–”

Nobody has to tell people not to use parallelism. They just… won’t. In part because of how people tend to think, by default, and in part because of how we teach them to think.

We would have to tell students to use parallelism, if we expect graduates to choose it freely. It’s hard and it’s weird and you can’t just slap it on at the end. It should become what they do first.

I am telling you in some detail how focusing on linear performance, using the language of the nineteen goddamn seventies, doesn’t need to say multi-threading isn’t worth it, to leave people thinking multi-threading isn’t worth it.

Jesus, even calling it “multi-threading” is an obstacle. It makes parallelism sound like some fancy added feature. It’s the version of parallelism that shows up in late-version changelogs, when for some reason performance has become an obstacle.

Vlyn ,

Multi-threading is difficult, you can’t just slap it on everything and call it a day.

There are languages where it’s easier (Go, Rust, …) but parallelism is an advanced feature. Do it wrong and you get race conditions or deadlocks. There is a reason you learn about this later in programming, but you do learn about it (and get to use it).

When we’re being honest most programmers work on CRUD applications, which are highly sequential, usually waiting on IO and not CPU cycles and so on. Saving 2ms on some operations doesn’t matter if you wait 50ms on the database (and sometimes using more threads is actually slower due to orchestration). If you’re working with highly efficient algorithms or with GPUs then parallelism has a much higher priority. But it always depends on what you’re working with.

Depending on your tech stack you might not even have the option to properly use parallelism, for example with JavaScript (if you don’t jump through hoops).

mindbleach ,

“Here’s all the ways we tell people not to use parallelism.”

I’m sorry, that’s not fair. It’s only a fraction of the ways we tell people not to use parallelism.

Multi-threading is difficult, which is why I said it’s a fucking obstacle. It’s the wrong model. The fact you’d try to “slap it on” is WHAT I AM TALKING ABOUT. You CANNOT just apply more cores to existing linear code. You MUST actively train people to write parallel-friendly code, even if it won’t necessarily run in parallel.

Javascript is a terrible language I work with regularly, and most of the things that should be parallel aren’t - and yet - it has abundant features that should be parallel. It has absorbed elements of functional programming that are excellent practice, even if for some goddamn reason they’re actually executed in-order.

Fetches are single-threaded, in Javascript. I don’t even know how they did that. Grabbing a webpage and then responding to an event using an inline function is somehow more rigidly linear than pre-emptive multitasking in Windows 95. But you should still write the damn things as though they’re going to happen in parallel. You have no control over the order they happen in. That and some caching get you halfway around most locks.

Javascript, loathsome relic, also has vector processing. The kind insisted upon by that pedant in the other subthread, who thinks the 512-bit vector units in a modern Intel chip don’t qualify, but the DSP on a Super Nintendo does. Array.forEach and Array.map really fucking ought to be parallelisable. Google could use its digital imperialism to force millions of devs to adopt better standards, just by following the spec and not processing keys in a rigid order. Bad code treating it like a simplified for-loop would break. Good code… wouldn’t.

We want people to write that kind of code.

Not necessarily code that will run in parallel. Just code that could.

Workload-centric thinking is the only thing that’s going to stop “let’s add a little parallelism, as a treat” from producing months of needless agony. Anything else has to be dissected, warped beyond recognition, and stitched back together, with each step taking more effort than starting over from scratch, and the end result still being slow and unreadable and fragile.

Spedwell , (edited )

Amdahl’s isn’t the only scaling law in the books.

Gustafson’s scaling law looks at how the hypothetical maximum work a computer could perform scales with parallelism - the idea being that for certain tasks like simulations (or, to your point, even consumer devices to some extent) which can scale to fully utilize the added hardware, this is a real improvement.

Amdahl’s takes a fixed program, considers what portion is parallelizable, and tells you the speed up from additional parallelism in your hardware.

One tells you how much a processor might do, the other tells you how fast a program might run. Neither is wrong, but both are incomplete pictures of the colloquial “performance” of a modern device.

Amdahl’s is the one you find emphasized by a Comp Arch 101 course, because it corrects the intuitive error of assuming you can double the cores and get half the runtime. I only encountered Gustafson’s law in a high performance architecture course, and it really only holds for certain types of workloads.

lte678 ,

I am unsure about the historical reasons for moving from 32-bit to 64-bit, but wouldn’t the address space be a significantly larger factor? Like you said, CPUs have had vector instructions for a long time, and we wouldn’t move to 128-bit architectures just to be able to compute with numbers of that size. Memory bandwidth is, also as you say, limited by the bus widths and not the processor architecture. IMO, the most important reason we transitioned to 64-bit is primarily the larger address space without having to use stupidly complex memory mapping schemes. There are also some types of numbers like timestamps and counters that profit from 64-bit, but even here I am not sure if the more complex architecture would yield a net slowdown or speedup.

To answer the original question: 128 bits would have no helpful benefit for the address space (already massive) and probably just slow everyday calculations down.

mindbleach ,

8-bit machines didn’t stop dead at 256 bytes of memory. Address length and bus width are completely independent. 1970s machines were often built with bit-slice memory, with however many bits of addressing, and one-bit output. If you wanted 8-bit memory then you’d wire eight chips in parallel - with the same address lines. Each chip would deliver a different part of the same logical byte.

64-bit math doesn’t need 64-bit hardware, either. Turing completeness says any computer can run the same code - memory and time allowing. As an object example, Javascript exclusively used 64-bit double floats, even when it was defined in the late 1990s, and ran exclusively on 32-bit machines.
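
A tiny sketch of the “64-bit math doesn’t need 64-bit hardware” point - 64-bit addition done entirely with 32-bit operations, the way compilers targeting 32-bit machines emulate wider integers (add the low halves, carry into the high halves):

```c
#include <stdint.h>
#include <stdio.h>

/* 64-bit addition built from 32-bit halves: add the low words, detect
 * the carry, then add it into the high words. */
static void add64_with_32bit_ops(uint32_t a_hi, uint32_t a_lo,
                                 uint32_t b_hi, uint32_t b_lo,
                                 uint32_t *r_hi, uint32_t *r_lo) {
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = (lo < a_lo); /* unsigned wrap-around means a carry out */
    *r_lo = lo;
    *r_hi = a_hi + b_hi + carry;
}

int main(void) {
    uint32_t hi, lo;
    add64_with_32bit_ops(0x00000001u, 0xFFFFFFFFu,  /* 0x1FFFFFFFF */
                         0x00000000u, 0x00000001u,  /* + 1         */
                         &hi, &lo);
    printf("0x%08X%08X\n", hi, lo);                 /* 0x0000000200000000 */
    return 0;
}
```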

lte678 ,

Clearly you can address more bytes than your data bus width. But then why all the “hacks” on 32-bit architectures? Like the 36-bit address bus via memory mapping on SPARCv8 instead of using paired index registers (or ARMv7 with LPAE). From a performance perspective, using an address width that is not the native register width/internal data bus width is an issue. For a significant subset of operations, multiple instructions are required instead of one.

Also is your comment about turing completeness to be taken seriously? We are talking about performance and practicality. Go ahead and crunch on some 64-bit floats using purely 8-bit arithmetic operations (or even using vector registers). Of course you can, but the point is that a suitable word size is more effective for certain computational tasks. For operations that are done frequently, they should ideally be done at native data-bus width. Vectored operations will also cost performance.

mindbleach ,

If timestamps and counters represent a bottleneck, you have problems larger than bit depth.

lte678 ,

Indeed, because those two things were only examples, meaning they would be indicative of your system having a bottleneck in almost all types of workloads. Supported by the generally higher performance in 64-bit mode.

vrighter ,

slight correction. vector processing is available on almost no common architectures. What most architectures have is SIMD instructions. Which means that code that was written for sse2 cannot and will not ever make use of the wider AVX-512 registers.

The RISC-V ISA is going down the vector processing route. The same code works on machines with wide vector registers or on ones with no real parallel ability, where it will simply loop in hardware.

Simd code running on a newer cpu with better simd capabilities will not run any faster. Unmodified vector code on a better vector processor will run faster.

mindbleach ,

Fancier tech co-opting an existing term doesn’t make the original use wrong.

Any parallel array operation in hardware is vector processing.

vrighter ,

Fancy that, vector processing predated simd. It’s how Cray supercomputers worked in the 90s. You’re the one co-opting an existing term :)

And it is in fact a big deal, with several advantages and disadvantages to both.

mindbleach ,
vrighter ,

from the very first paragraph in the page:

a vector processor or array processor is a central processing unit (CPU) that implements an instruction set where its instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called vectors. This is in contrast to scalar processors, whose instructions operate on single data items only, and in contrast to some of those same scalar processors having additional single instruction, multiple data (SIMD) or SWAR Arithmetic Units.

Where it pretty much states that scalar processors with simd instructions are not vector processors. Vector processors work on large 1 dimensional arrays. Call me crazy, but I wouldn’t call a register with 16 32-bit values a “large” vector.

It also states they started in the 70s. That checks out. Which dates were you referring to?

mindbleach ,

This is rapidly going to stop being a polite interaction if you can’t remember your own claims.

SIMD predates the term vector processing, and was in print by 1966.

Vector processing is at least as old as the Cray-1, in 1975. It was already automatically parallelizing what would’ve been loops on prior hardware.

Hair-splitting about whether a processor can use vector processing or exclusively uses vector processing is a distinction that did not exist at the time and does not matter today. What the overwhelming majority of uses refer to is basically just SIMD extensions. Good luck separating the two when SIMT is a thing.

vrighter ,

I’m not hair splitting over whether they can or not. scalar processors with simd cannot do vector processing, because vector processing is not simd.

yes an array of values can be called a vector in a lot of contexts. I could also say that vector processing involves dynamically allocated arrays, since that’s what c++ calls them. A word can be used in multiple contexts. When the word vector is used in the term “vector processor” it specifically excludes scalar processors with simd instructions. It refers to a particular architecture of machine. Just being able to handle a sequence of numbers is not enough. Simd can do it, as can scalar processors (one at a time, but they still handle “an array of numbers”). You can’t even say that they necessarily have to execute more than one at a time. A superscalar processor without simd can do that as well.

A vector processor is a processor specifically designed to handle large lists. And yes, I do consider gpus to be vector processors (the exact same shader running on better vector hardware does run faster). They are specifically designed for it. simd on a scalar processor is just… not

mindbleach ,

A word can be used in multiple contexts.

Says user insisting an umbrella term has one narrow meaning.

A meaning that would include the SoundBlaster 32.

Etterra , to asklemmy in What are some notable blunders in history that resulted in huge loss?

When the Spanish were raping the New World in the 1500s for gold, they dumped enormous quantities of platinum into the ocean because it was the wrong kind of shiny metal. Nobody in Europe had any clue how valuable the stuff was, only that it was often used to counterfeit gold. But since it wasn’t gold, or even silver, everyone thought it was worthless. This was exacerbated by the fact that nobody could melt the stuff until the 1800s. But mostly it was just not yellow enough for the idiots at the time.

Hyperi0n ,

You don’t hold onto a useless material for 400 years hoping it has some value in the future.

PitzNR ,

Ever heard of the cables drawer? Bet you feel real stupid now

Hyperi0n ,

No. I pare down and clean mine out monthly.

collegefurtrader ,

HODL
