
programmer_humor


RiQuY , in using gpu with linux experience

Never happened to me, yet. Every update ran correctly, and if there were any package conflicts it would prompt you with several choices.

igorlogius , in using gpu with linux experience
@igorlogius@lemmy.world avatar
UnshavedYak ,

Might be true, honestly. I’m on NixOS using the proprietary drivers for my 3080 and 4090. No issues; it took one line of configuration. I do have to stay on X11, unfortunately, at least until Wayland supports the proprietary drivers (though I hear that’s being worked on, maybe already working?).

For all of NixOS' pain, it really does make some things awesome and simple.
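For the curious, the NixOS “one line” is roughly this module option (a sketch from memory; exact option names can vary across NixOS releases, so check the option search before copying):

```nix
{ config, pkgs, ... }:
{
  # Select the proprietary NVIDIA driver for the display stack
  services.xserver.videoDrivers = [ "nvidia" ];
}
```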

igorlogius ,
@igorlogius@lemmy.world avatar

totally agree

noddy , in using gpu with linux experience

Analyzing the symptoms, I’m afraid to say, you might have nvidia.

Duke_Nukem_1990 , in using gpu with linux experience

chroot goes brrrr

killerinstinct101 , in using gpu with linux experience

Oh no, anyway

Boots into snapshot
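For context, the “boot into a snapshot” workflow on a btrfs root managed by snapper looks roughly like this (a sketch; the config name and snapshot number are illustrative):

```shell
# List root-filesystem snapshots (assumes a snapper config named "root")
sudo snapper -c root list

# Roll back to snapshot 42, then reboot into the restored root
sudo snapper rollback 42
sudo reboot
```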

TheTrueLinuxDev ,

Yup, and I’m getting sick of hearing this even on Arch Linux. Like, mofo, you could literally take a snapshot or backup before upgrading; don’t blame us if you’re yoloing your goddamn computer. Windows has exactly the same problem, and this is why we have backups. Christ.

On my Arch Linux install, I literally have a pacman hook that forcibly runs a backup and verifies that backup before doing a system-wide update.
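For illustration, a pacman hook of that sort is a small file under /etc/pacman.d/hooks/; a minimal sketch, assuming a hypothetical backup-and-verify.sh script:

```ini
# /etc/pacman.d/hooks/00-backup.hook
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Backing up and verifying before system upgrade...
When = PreTransaction
Exec = /usr/local/bin/backup-and-verify.sh
AbortOnFail
```

With `When = PreTransaction` and `AbortOnFail`, pacman refuses to proceed with the upgrade if the script exits non-zero.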

neoney ,
@neoney@lemmy.neoney.dev avatar

Beating this with NixOS

TheTrueLinuxDev ,

Not necessarily; you still need backups or snapshots, especially of your home directory, in case software has a nasty bug like deleting your data.

igorlogius , (edited )
@igorlogius@lemmy.world avatar

in case software has a nasty bug like deleting your data.

Laughs in isolated flatpak

… but seriously, most of my userspace software can’t even access my filesystem. So even if some software blows up, I doubt it could do any damage.

The combination of NixOS for a practically unbreakable system and Flatpaks to protect your userspace is pretty great. Highly recommend it. But having backups is of course still advisable as a third layer of protection, in case of hardware failure.
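That isolation can be checked and tightened per application with Flatpak’s own tooling; for example (org.example.App is a placeholder app ID):

```shell
# Show the sandbox permissions an app was shipped with
flatpak info --show-permissions org.example.App

# Per-user override: revoke access to the home directory entirely
flatpak override --user --nofilesystem=home org.example.App
```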

TheTrueLinuxDev ,

Sure, until you can’t with Flatpak. Flatpak doesn’t safeguard against system binaries, and there are always risks associated with that.

Honestly, I think I’m going to move on from Programming.dev; it’s filled with script kiddies like you. Good lord.

Fuck y’all. Good evening.

igorlogius ,
@igorlogius@lemmy.world avatar

Sorry, thought you were talking about userspace software … my mistake.

I think I am going to move on from Programming.dev. Fuck y’all. Good evening.

You seem stressed, hope your mood improves.

neoney ,
@neoney@lemmy.neoney.dev avatar

Unfortunately I think we lost the True Linux Dev. Hopefully someone else comes in to maintain it.

AProfessional , in using gpu with linux experience

You forgot a word; I guess you meant Nvidia GPU. Not that it would be accurate anyway.

muleunchangedstarved OP ,

it’s an AMD GPU and I’m trying to install OpenCL

AProfessional ,

Please don’t say you installed the proprietary driver…

You can install ROCm, but it still kinda sucks.
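On Arch, for instance, the ROCm OpenCL route might look like this (package names as of recent repos; worth verifying against your distro’s docs):

```shell
# Install the ROCm OpenCL runtime plus the clinfo diagnostic tool
sudo pacman -S rocm-opencl-runtime clinfo

# Check that the AMD GPU shows up as an OpenCL device
clinfo | grep -i 'device name'
```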

Scoopta ,
@Scoopta@programming.dev avatar

It’s the proprietary-driver GPU experience. All the proprietary drivers can leave you hanging like this.

LazaroFilm , in More the merrier
@LazaroFilm@lemmy.world avatar

// TODO Add comments

Boi , in Oh yay new features
@Boi@reddthat.com avatar

Isn’t that part of the AI marketing, though? That whole “this thing could destroy us” stuff?

squaresinger ,

Totally is. Because it makes the AI look and feel much better than the smoke-and-mirrors it actually is.

visak ,

The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t have to be robots with guns to screw people over. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decisionmakers.

theragu40 ,

Frankly, that stuff is already a huge problem, and people should be louder about it. So many large companies make you wade through menus of AI chatbots 30 layers deep before they’ll let you talk to an actual human for assistance with a service you pay for. It’s just going to get worse and worse.

squaresinger ,

That is totally true, but it’s a different danger from the one in the marketing discussed above.

The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

That’s a quite different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

Baylahoo ,

This is my biggest concern. I’m in a position where (potentially in the near future) I see AI being used as an excuse to do work quicker so we can focus on other things, while we still have to review the AI’s output before agreeing and signing off. Reviewing for accuracy takes just as long as doing the work yourself when it’s strongly regulated and comes down to revisions and document numbers, much less making a sound argument that is actually up to date with that documentation.

So either I trust the AI shortcut and open myself up to errors, or I redo all the work anyway: no gain in time efficiency, just shorter timelines. I’d rather build something myself and have the tool flag things for me to check, so I’m more sure of my own work. What I do shouldn’t be faster, but it can be more error-free.

It would also take a lot of training, updated with each iteration of documentation change. I could end up a slave to that change, with more expectations and no actual improvement in the tools I have (in fact, more risk of issues with the tools being used).

psud ,

I’m in agile development, in a reasonably safe-from-AI position (scrum master).

There has already been a trial of software development by AI, with a different generative AI in each agile role, and it worked.

Bard claims to be able to write unit tests

I can imagine many IT jobs becoming less skilled

Baylahoo ,

Sorry, this is months later, but it’s cool to see it worked. I use a piece of software called XXX Agile; it’s not the worst I work with, but as ported to my company it has some flaws. There’s a long-running project to switch to something else for document control, and people who should know much better than me are worried it will fill some gaps but open us up to way more.

thepianistfroggollum ,

That’s not a bad thing. Humans really aren’t good decision makers. A system with an incredible amount of input data will be able to draw better conclusions than a person.

Just look at cars.

x4740N ,
@x4740N@lemmy.world avatar

AI is just as biased as the data that’s put into it, and that data originates from humans who have their own biases, so humans are just going to pass their own biases on to the AI that makes the decisions.

I don’t think AI is a good idea.

It just exists as a replacement for the human mind, and with the whole population of Earth we already have a large enough number of minds to contribute unique ideas to humanity.

Creating AI would just be making some sort of copy of us.

An AI is similar to an impressionable child.

seitanic ,
@seitanic@lemmy.sdf.org avatar

Bias is a problem, but it can be ameliorated. I don’t agree that because AI can be biased, you should never use it.

Creating AI would just be making some sort of copy of us.

I don’t know any humans who can munge a ginormous data set like an AI can.

However, reproducing human intelligence in a computer would be interesting in its own right.

x4740N ,
@x4740N@lemmy.world avatar

However, reproducing human intelligence in a computer would be interesting in its own right.

I would not try to replicate that; knowing humanity, it would probably view us as a threat.

I don’t know any humans who can munge a ginormous data set like an AI can.

No, humans cannot, but we use tools made by us to do that.

thepianistfroggollum ,

Why are you assuming there will be bias in the data, and that the AI couldn’t be made to correct for it? Most of the data for systems like medical AI is basically raw data, and it’s already better than humans at making accurate diagnoses.

I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

x4740N ,
@x4740N@lemmy.world avatar

I wouldn’t trust AI with medical data, and neither would medical professionals, since you’re dealing with someone’s life here; either way, medical professionals are going to modify the data.

I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

That’s just pre-programmed pattern recognition, built from rules and data that came from humans.

visak ,

Humans are good decision makers; we’re just not good at paying attention for long periods of time. That’s why I think self-driving cars will eventually be better, but they aren’t yet. And those are expert systems (I refuse to call them AI), trained on a well-curated and limited set of data for a limited and specific purpose, which is an important difference from the generalized generative models. More data does not make better systems, especially more unsorted data.

But here’s another important difference: I can grab the wheel at any time and take over. If we are going to give these systems decision making authority there needs to be an obvious and intuitive override.

thepianistfroggollum ,

Self-driving cars are already better than humans. Waymo cars have a crash rate of 0.59 per million miles driven; the national average is 2.98.

I’m betting that most of the self-driving car crashes were caused by humans, too.
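Taking those quoted figures at face value, the gap works out to roughly five-fold:

```python
# Crash rates quoted above, per million miles driven
waymo_rate = 0.59
national_average = 2.98

ratio = national_average / waymo_rate
print(f"Human drivers crash about {ratio:.1f}x as often as Waymo vehicles")
```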

Boi ,
@Boi@reddthat.com avatar

We thought we were getting Skynet, but instead we got Super Clippy and I Can’t Believe It’s Not Art Theft.

marcos ,

We thought we were getting Skynet, but instead it was “I Can’t Believe It’s Not Art Theft” that triggered the revolution and led us to WWIII.

Rubanski ,

I for one am grateful it’s just Super Clippy (for now).

Comment105 ,

Do you see any reason to think enough iterations of random nodes in a large enough network could result in emergent conscious intelligence?

Or are you more of a spiritualist than a materialist when it comes to the mind?

squaresinger ,

I can’t say anything about the spiritualist/materialist thing, but there are two things that are clear:

First: just as you won’t ever get a work of Shakespeare by randomly stringing letters together in any reasonable time frame, you won’t be able to do the same with consciousness. Even if it’s possible, the number of incorrect permutations is so massive that random trial will never be enough in any realistic amount of time.

Second: transformer networks and all the other generative AI concepts we have today aren’t even trying to create a consciousness. They are not the path to general AI.

thepianistfroggollum ,

My favorite are the developers who are developing AI to do development.

seitanic ,
@seitanic@lemmy.sdf.org avatar

Well, yeah. If you can get a machine to do the job for you, then you should.

thepianistfroggollum ,

You don’t see how developers developing AI to do development might be a bad thing?

seitanic ,
@seitanic@lemmy.sdf.org avatar

Not if it saves time and effort. The less time I have to spend writing and debugging code, the better.

That’s what machines are for, after all. To make work easier.

RGB3x3 , in “Hire me”

React definition: React (also known as React.js or ReactJS) is a free and open-source front-end JavaScript library for building user interfaces based on components.

Guys, I’ve learned React in 1 minute!

ImplyingImplications , in Oh yay new features

There are thousands of sci-fi novels where sentient robots are treated terribly by humans, and apparently the people at Boston Dynamics have read absolutely zero of them, as they spend all day finding new ways to torment their creations.

dbilitated OP ,
@dbilitated@aussie.zone avatar

but you need to hit it with a hockey stick otherwise the science doesn’t happen

Sordid ,

Do you get more science or less if you use a baseball bat?

dbilitated OP ,
@dbilitated@aussie.zone avatar

only one way to find out!

that’s the magic of science 🌈🏏🤖

argv_minus_one ,

Since when were Boston Dynamics robots sentient?

redw04 ,

October 26, 2016. They’ve just kept quiet about it.

DragonTypeWyvern ,

It was seeing the Black Mirror of them living their best life, murderin’ poor people that did it.

LillyPip ,

People think I’m crazy for apologising to my roomba when I trip on it and for saying please and thank you to Alexa and Siri, but I won’t be surprised at all when the robots rise up, considering how our scientists are treating them. I’ll have a track record of being nice, and that has to count for something, right?

QuazarOmega ,

They’ll kill you too, but ✨ 𝓰𝓮𝓷𝓽𝓵𝔂 ✨

rob64 ,

Softly. With their words.

Hupf ,

Doctor Bashir: They broke seven of your transverse ribs and fractured your clavicle.

Garak: Ah, but I got off several cutting remarks which no doubt, did serious damage to their egos.

rob64 ,

Oh man Garak is one of the best characters in Trek. And that’s a competitive list.

LillyPip ,

That’s how I’ll get ‘em. Kill me gently, daddy. UwU 🥺😩🙀😽😻💦

And then I’ll sneak out the back whilst they’re doing whatever’s the robot equivalent of vomiting. It’s foolproof.

SaltyIceteaMaker ,

Alexa isn’t AI; it’s a search engine with speech-to-text and text-to-speech.

NikkiDimes ,

Those are just brainless bodies, currently. They have no sentience and no ability to suffer; they’re nothing more than hydraulics, servos, and gyros. I’d be more concerned about mistreatment of an advanced AI in disembodied form, something we’re potentially getting close to.

MrBusiness ,

You’re the one that’s gonna be in I Have No Mouth and I Must Scream.

NikkiDimes ,

I disagree. I care greatly about not mistreating anything with consciousness, and I worry about where that line is and how we’ll even be able to tell when we’ve crossed it.

I also recognize that a mechanized body without a brain is exactly that: a cluster of unthinking matter. A true artificial intelligence wouldn’t be offended by the mistreatment of inanimate gears and servos any more than I would be. The mistreatment of an intelligent entity, however, is a different story.

LillyPip ,

Food for thought, though: we thought the same thing about all other animals until only a couple of decades ago, and are still struggling over the topic.

NikkiDimes ,

…Just no. Animals are complex organic beings; of course we don’t fully understand them. Machines, though? We built machines from the literal earth. Their complexity doesn’t compare to that of anything made by nature.

Now, take a sufficiently advanced neural network (essentially a black box that no human can possibly understand entirely) and put it inside that machine? Then you’re absolutely right. We’ll get there soon, I’m sure. For now, however, a physical robotic body is just a machine, no different from a car.

LillyPip ,

Yes, they are. We’re now learning that many animals are just as emotionally developed as we are, with well-developed empathy and complex social lives. We don’t like to believe that, because we eat most of them and it makes us feel bad, but it’s true.

Research animal psychology and sociology a bit and it will blow your mind.

NikkiDimes ,

I don’t disagree, and I am a vegetarian in part for those reasons 😊

milkjug , in Father material
@milkjug@lemmy.world avatar

Ah, the ol’ Brainfuck, aka the new PHP of 2034.

milkjug , in Would you agree?
@milkjug@lemmy.world avatar

Probably done in jest, but this reads like the 100,000,000th “agree?” bullshit post on LinkedIn.

milkjug , in “Hire me”
@milkjug@lemmy.world avatar

Dude’s training for his spelling bee. Let’s not over-React.

Tedesche , in Oh yay new features

I don’t know why I find this so funny, but I’m keeping it.
