
programmer_humor


envelope , in dotnet developer

Given that .net was a TLD long before the framework came out, it was a stupid thing to name it. Caused confusion and the inability to Google things right away.

schnurrito ,

Microsoft names many things stupidly.

xmunk ,

Fuck you forever SQLServer. Transact was perfectly googleable.

flathead ,

wasn’t it originally idiotically named “SQL/Server”?

Gork ,

Microsoft Azure Blob

(Yes it’s a real product they market)

eerongal ,
@eerongal@ttrpg.network avatar

I mean, blob (and object storage in general) has been used as a term for a long time. It isn’t particularly new, and MS didn’t invent it.

xmunk ,

That’s sort of the problem. It’s easy to Google S3 since it’s a distinct (if obnoxiously short) term. Blob is already an overloaded term.

An example of a great name from Microsoft is Excel, it’s relatively short but meaningless so if you Google “Excel Sum” you’ll get wonderful results… “Blob Get” is going to get you a lot of random stuff.

Edit: the top result for blob get is accurate on Google but you’ll also quickly see this result from that site we all hate:

Need help! How do I get the blob fish, basking shark and dwarf whale?

kogasa ,
@kogasa@programming.dev avatar

Excel is a brand name, Azure Blob Storage is a descriptive title. It’s Azure’s blob storage service.

xmunk ,

What is Azure Blob Storage’s brand name then? I’m confused.

jaybone ,

This is why computer science is fucked.

intensely_human ,

Antilock Braking System

kogasa ,
@kogasa@programming.dev avatar

It falls under the Azure brand.

arschfidel ,

Visual Studio Code

masinko ,

To prevent confusion, I call them “VS Code” and “Visual Studio IDE”, because if you say Visual Studio, people assume you mean Visual Studio Code.

hemko ,

And renames a random product every month, following a restructuring of its licensing

kameecoding ,

At least they don’t control the most popular code hosting site along with the most popular code editing software, right? Right?

Lmaydev ,

Yeah Microsoft Entra is the latest one. Azure AD had such huge brand recognition and they just dropped it lol

intensely_human ,

“xbox”

pelya ,

It was a pretty smart marketing move. Business people hear ‘dot net’ and nod wisely. Tech people hear ‘dot net’ and scrunch their faces. Either way, people keep talking about Microsoft Java.

neutron ,

And this is why alcoholism is rampant. Please free me from this insanity.

jwt ,

It’s like naming your company x

intensely_human ,

Or the rectangular gaming console that you sell “xbox”

NaibofTabr ,

Like naming a new TLD .zip!

jaybone ,

That aligns with their fucked up naming conventions anyway.

ocassionallyaduck , in Not mocking cobol devs but yall are severely underpaid for keeping fintech alive

Yo, if you are doing COBOL systems maintenance for 90k you aren’t charging enough.

That’s all this meme means. Consultants on COBOL maintenance can make 90k in a week. This is not the area where companies pinch pennies.

massive_bereavement ,
@massive_bereavement@kbin.social avatar

My experience with Fintech and the financial sector is that they don't care about how much, they only care about how fast.

rottingleaf ,

They just have an understanding of the correct criteria for financial success, since they, eh, work with finances.

odium ,

A lot of banks have bootcamps where they pick up unemployed people who might not have ever had tech experience in their life. They teach them COBOL and mainframe basics in a few months, and, if they do well, give them a shitty $60k annual job.

Source: I know someone who went to one of these bootcamps and now works for a major US bank.

Soulg ,

So you’re saying you can get free training then just leave for a real paying company eh

Asafum ,

I imagine they have some absurd contract that says they can’t leave for 89 years or whatever

SmoothIsFast ,

And I’d like to see that contract hold up in court lol

DragonTypeWyvern ,

The trick to exploiting people is keeping them in fear and ignorance.

mcmoor ,

And people wonder why companies don’t train undergrads anymore

SmoothIsFast ,

Oh, no, educated workers who don’t want to be taken advantage of and know their worth, maybe companies should value their employees if you want company loyalty.

mcmoor ,

Oh no, job providers who don’t want to be taken advantage of and know their worth, maybe people should value their job providers if you want their loyalty.

spoiler: My time on Lemmy (and Reddit before it) ironically makes me appreciate communism less and less

Daft_ish ,

Won’t someone think of the Job providers?! How will they ever afford their third yacht. Damn communists.

^^^this is you^^^

Takumidesh ,

I code one feature for my job in a sprint and it becomes a value generator for a decade, making the company hundreds of thousands of dollars each year.

Software developers create value out of thin air for companies, value that management and leadership are unable to generate.

SmoothIsFast ,

Lmao alright bud go fire all your employees and see how you do. Then you will understand who needs to be loyal to who.

Nollij ,

There are some court cases going on right now about this type of thing. Generally, the payback is only allowed to be for the real cost of training, and only for a few years. So that 60k salary for 3 years is also the right amount to make you worth 150k anywhere else.

djehuti ,

This has been going on for decades. My dad became a COBOL programmer in 1980ish after taking an aptitude test in answer to a newspaper ad. Y2K consulting was a pretty good gig.

blackbirdbiryani , in There once was a programmer

For the love of God, if you’re a junior programmer you’re overestimating your understanding if you keep relying on chatGPT thinking ‘of course I’ll spot the errors’. You will until you won’t and you end up dropping the company database or deleting everything in root.

All ChatGPT is doing is guessing the next word. And it’s trained on a bunch of bullshit coding blogs that litter the internet, half of which are now chatGPT written (without any validation of course).

If you can’t take 10 - 30 minutes to search for, read, and comprehend information on stack overflow or docs then programming (or problem solving) just isn’t for you. The junior end of this field is really getting clogged with people who want to get rich quick without doing any of the legwork behind learning how to be good at this job, and ChatGPT is really exacerbating the problem.

chicken ,

If you can’t take 10 - 30 minutes to search for, read, and comprehend information on stack overflow or docs

A lot of the time this is just looking for syntax though; you know what you want to do, and it’s simple, but it is gated behind busywork. This is to me the most useful part about ChatGPT, it knows all the syntax and will write it out for you and answer clarifying questions so you can remain in a mental state of thinking about the actual problem instead of digging through piles of junk for a bit of information.

el_bhm ,

Just a few days ago I read an article on the newest features of Kotlin 1.9. Zero of it was true.

Internet is littered with stuff like this.

If the model is correct, you are correct. If the model is not correct, you are working on false assumptions.

chicken ,

No difference there, either way your information may be wrong or misleading. Running code and seeing what it does is the solution.

bear ,

Never ask ChatGPT to write code that you plan to actually use, and never take it as a source of truth. I use it to put me on a possible right path when I’m totally lost and lack the vocabulary to accurately describe what I need. Sometimes I’ll ask it for an example of how something works so that I can learn it myself. It’s an incredibly useful tool, but you’re out of your damn mind if you’re just regularly copying code it spits out. You need to error check everything it does, and if you don’t know the syntax well enough to write it yourself, how the hell do you plan to reliably error check it?

_danny ,

You absolutely can ask it for code you plan to use as long as you treat chatgpt like a beginner dev. Give it a small, very simple, self contained task and test it thoroughly.

Also, you can write unit tests while being quite unfamiliar with the syntax. For example, you could write a unit test for a function which utilizes a switch statement, without using a switch statement to test it. There’s a whole school of “test-driven development” where this kind of development would probably work pretty well.

I’ll agree that if you can’t test a piece of code, you have no business writing in the language in a professional capacity.
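As a rough sketch of that idea (the function and its expected values here are invented for illustration, not taken from the comment above): a switch-based function can be checked from a plain lookup table, so the tester never writes a switch themselves.

```typescript
// Hypothetical function a beginner (or ChatGPT) might write with a switch.
function httpStatusText(code: number): string {
  switch (code) {
    case 200: return "OK";
    case 404: return "Not Found";
    case 500: return "Internal Server Error";
    default: return "Unknown";
  }
}

// The test drives it from a lookup table instead of a switch,
// so checking the behavior needs no switch syntax at all.
const expected: Record<number, string> = {
  200: "OK",
  404: "Not Found",
  500: "Internal Server Error",
  999: "Unknown", // anything unlisted should hit the default branch
};

for (const [code, text] of Object.entries(expected)) {
  if (httpStatusText(Number(code)) !== text) {
    throw new Error(`httpStatusText(${code}) should be "${text}"`);
  }
}
```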

wizardbeard ,

The more you grow in experience the more you’re going to realize that syntax and organization is the majority of programming work.

When you first start out, it feels like the hardest part is figuring out how to get from a to b on a conceptual level. Eventually that will become far easier.

You break the big problem down into discrete steps, then figure out the best way to do each step. It takes little skill to say “the computer just needs to do this”. The trick is knowing how to speak to the computer in a way that can make sense to the computer, to you, and to the others who will eventually have to work with your code.

You’re doing the equivalent of a painter saying “I’ve done the hard part of envisioning it in my head! I’m just going to pay some guy on fiver to move the brush for me”


This is difficult to put into words, as it’s also not about memorization of every language specific syntax pattern. But there’s a difference between looking up documentation or at previous code for syntax, and trying to have chatGPT turn your pseudocode notes into working code.

emptiestplace ,

The more you grow in experience the more you’re going to realize that syntax and organization is the majority of programming work.

organization, absolutely - but syntax? c’mon…

Stumblinbear ,
@Stumblinbear@pawb.social avatar

I’m a pretty senior dev and have chat gpt open for quick searches. It’s great for helping me figure out what to Google in the cases where I can’t think of the name of a pattern or type I’m looking for. It also helps quite a bit with learning about obscure functions and keywords in SQL that I can do more research on

Hell, I use Copilot daily. Its auto complete is top-tier

pkill ,

Copilot is good for tedious stuff like writing enums. But otherwise I more often than not need to accept only the suggested line or particular words, since in multiline snippets it can do stupid things, like exiting outside of main() or skipping error checks.

chicken ,

I’ve been programming for decades, I’m not actually a beginner. A mistake I made early on was thinking that everything I learn will be worth the time to learn it, and will always increase my overall skill level. But (particularly as relates to syntax) it’s not and it doesn’t; something I only use once or rarely, something that isn’t closely connected with the rest of what I often do, I’ll just forget it after a while. I greatly prefer being broadly capable of making things happen to having a finely honed specialization, so I run into that sort of thing a lot, there is an ocean of information out there and many very different things a programmer can be doing.

I think it is an important and valuable lesson to know when to get over yourself and take shortcuts. There are situations where you absolutely should never do that, but they are rare. There are many situations where not taking shortcuts is a huge mistake and will result in piles of abandoned code and not finishing what you set out to do. AI is an incredibly powerful source of shortcuts.

You’re doing the equivalent of a painter saying “I’ve done the hard part of envisioning it in my head! I’m just going to pay some guy on fiver to move the brush for me”

More like you’ve coded the functionality for a webapp, have a visual mockup, and pay some guy on fiver to write the CSS for you, because doing it yourself is an inefficient use of your time and you don’t specialize in CSS.

As for the issue of a new programmer ending up with problems because they rely too much on AI and somehow fail to learn how to model the structure of programs in their head, that’s probably real, but I can’t imagine how that will go because all I had to go on when I was learning was google and IRC and it’s totally different. Hope it works out for them.

eclectic_electron ,

TBF that’s how many master artists worked in the past. The big art producers had one master painter guiding a bunch of apprentices who did the actual legwork.

Rodeo ,

And senior devs guide junior devs in the same way. The point is the masters already did their time in the trenches. That’s how they became masters.

257m ,

That’s the exact opposite problem for most people though. Syntax is hard at first because you are unfamiliar, and it gets more natural to you over time. Algorithms are easier to think about conceptually as a person with no programming experience, as they are not programming specific. If you are an experienced developer struggling over syntax yet breezing through difficult data structure and algorithm problems (e.g. thinking about the most efficient way to upload constantly updating vertex data to the GPU), you are definitely the anomaly.

sj_zero ,

There's a 5 hour interview with John Carmack on YouTube where he talks about transitioning from really caring deeply about algorithms and the like to deeply caring about how to make a sustainable and maintainable codebase you can have an entire team work on.

Often, a solution that is completely correct if all you're doing is solving that problem is completely incorrect in the greater context of the codebase you're working within, like if you wanted to add a dog to the Mona Lisa, you can't just draw a detailed line art dog or a cartoon dog and expect it to work -- you'd need to find someone who can paint a dog similar to the art style of the piece and properly get it to mesh with the painting.

CoopaLoopa ,

Somehow you hit an unpopular opinion landmine with the greybeard devs.

For the greybeard devs: Try asking ChatGPT to write you some Arduino code to do a specific task. Even if you don’t know how to write code for an Arduino, ChatGPT will get you 95% of the way there with the proper libraries and syntax.

No way in hell I’m digging through forums and code repos for hours to blink an led and send out a notification through a web hook when a sensor gets triggered if AI can do it for me in 30 seconds. AI obviously can’t do everything for you if you’ve never coded anything before, but it can do a damn good job of translating your knowledge of one programming language into every other programming language available.

kogasa ,
@kogasa@programming.dev avatar

It’s great for jumping into something you’re very unfamiliar with. Unfortunately, if you often find yourself very unfamiliar with day to day tasks, you’re probably incompetent. (Or maybe a butterfly who gets paid to learn new things every day.)

ByGourou ,

Getting paid to learn new things everyday at work is a dream, I don’t see the issues

kogasa ,
@kogasa@programming.dev avatar

The issue is it’s a dream.

sj_zero ,

BIIIIG problem: The last 5%.

Did ChatGPT just hallucinate it? Does it exist but it isn't used like ChatGPT says? Does it exist but it doesn't do what ChatGPT thinks it does?

I use ChatGPT sometimes to help out with stuff at home (I've tried it for work stuff but the stuff I work on is niche enough that it purely hallucinates), and I've ended up running in circles for hours because the answer I got ended up in this uncanny valley: Correct enough that it isn't immediately obviously wrong, but incorrect enough that it won't work, it can't work, and you're going to really have to put a lot of work in to figure that out.

blackbirdbiryani ,

I write a lot of bash and I still have to check syntax every day, but the answer to that is not ChatGPT but a proper linter like ShellCheck that you can trust, because it’s based on a rigid set of rules, not the black box of an LLM.

I can understand the syntax justification for obscure languages that don’t have a well written linter, but if anything that gives me less confidence about ChatGPT, because its training material for an obscure language is likely smaller.

ByGourou ,

Less checking syntax and more like “what was this function name again?”
“Which library has that?”
“Do I need to instance this or is it static?”
All of these can be answered by documentation, but who wants to read the docs when you can ask ChatGPT. (Copilot is better in my experience, btw)

pkill ,

you can remain in a mental state of thinking about the actual problem

more like you’ll end up wasting a significant amount of time debugging not only the problem, but also chatGPT, trying to correct the bullshit it spews out, often ignoring parts of your prompt

chicken ,

That hasn’t been my experience. How are you trying to use it?

pkill ,

It might be wrong even if you provide extensive context to make it more accurate in its heuristics. And providing extensive context is pretty time consuming at times.

chicken ,

I think it would help me organize my thoughts to write that all out anyway even without a LLM.

pkill ,

I mean it might be good at helping you when you’re stuck, but sometimes it misses simple issues such as typos and for one issue resolved, it might introduce another if you’re not careful.

state_electrician ,

ChatGPT cannot explain, because it doesn’t understand. It will simply string together a likely sequence of characters. I’ve tried to use it multiple times for programming tasks and found each time that it doesn’t save much time, compared to an IDE. ChatGPT regularly makes up methods or entire libraries. I do like it for creating longer texts that I then manually polish, but any LLM is awful for factual information.

chicken ,

ChatGPT regularly makes up methods or entire libraries

I think that when it is doing that, it is normally a sign that what you are asking for does not exist and you are on the wrong track.

ChatGPT cannot explain, because it doesn’t understand

I often get good explanations that seem to reflect understanding, which often would be difficult to look up otherwise. For example when I asked about the code generated, {myVariable} , and how it could be a valid function parameter in javascript, it responded that it is the equivalent of {“myVariable”:myVariable}, and “When using object literal property value shorthand, if you’re setting a property value to a variable of the same name, you can simply use the variable name.”
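For what it’s worth, the shorthand that explanation describes is easy to check directly (a tiny sketch; the variable name is arbitrary):

```typescript
// Object literal property value shorthand: { myVariable } is
// equivalent to { myVariable: myVariable }.
const myVariable = 42;

const longhand = { myVariable: myVariable };
const shorthand = { myVariable };

// Both objects end up with the same property name and value.
if (shorthand.myVariable !== longhand.myVariable) {
  throw new Error("shorthand and longhand should be equivalent");
}
```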

state_electrician ,

If ChatGPT gives you correct information you’re either lucky or just didn’t realize it was making shit up. That’s a simple fact. LLMs absolutely have their uses, but facts ain’t one of them.

apinanaivot ,

All ChatGPT is doing is guessing the next word.

You are saying that as if it’s a small feat. Accurately guessing the next word requires understanding of what the words and sentences mean in a specific context.

blackbirdbiryani ,

Don’t get me wrong, it’s incredible. But it’s still a variation of the Chinese room experiment, it’s not a real intelligence, but really good at pretending to be one. I might trust it more if there were variants based on strictly controlled datasets.

Fraylor ,

So theoretically could you program an AI using strictly verified programming textbooks/research etc, is it currently possible to make an AI that would do far better at programming? I love the concepts around AI but I know fuckall about ML and the actual intricacies of it. So sorry if it’s a dumb question.

PixelProf ,

Yeah, this is the approach people are trying to take more now, the problem is generally amount of that data needed and verifying it’s high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher quality code, it will write higher quality code, but be less able to handle edge cases or potentially complete code in a salient way that wasn’t at the same quality bar or style as the training code.

On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code because it’s continuing what was written. Using standard approaches, documenting, just generally following good practice with code before sending it to the LLM will majorly improve results.

Fraylor ,

Interesting, that makes sense. Thank you for such a thoughtful response.

jadero ,

I have read more than is probably healthy about the Chinese room and variants since it was first published. I’ve gone back and forth on several ideas:

  • There is no understanding
  • The person in the room doesn’t understand, but the system does
  • We are all just Chinese rooms without knowing it (where either of the first 2 points might apply)

Since the advent of ChatGPT, or, more properly, my awareness of it, the confusion has only increased. My current thinking, which is by no means robust, is that humans may be little more than “meatGPT” systems. Admittedly, that is probably a cynical reaction to my sense that a lot of people seem to be running on automatic a lot of the time combined with an awareness that nearly everything new is built on top of or a variation on what came before.

I don’t use ChatGPT for anything (yet) for the same reasons I don’t depend too heavily on advice from others:

  • I suspect that most people know a whole lot less than they think they do
  • I very likely know little enough myself
  • I definitely don’t know enough to reliably distinguish between someone truly knowledgeable and a bullshitter.

I’ve not yet seen anything to suggest that ChatGPT is reliably any better than a bullshitter. Which is not nothing, I guess, but is at least a little dangerous.

nogrub ,

What often puts me off is that people almost never fact-check me when I tell them something, which also tells me they wouldn’t do the same with ChatGPT.

worldsayshi , (edited )

The Chinese room thought experiment doesn’t prove anything and probably confuses the discussion more than it clarifies.

In order for the Chinese room to convince an outside observer of knowing Chinese like a person the room as a whole basically needs to be sentient and understand Chinese. The person in the room doesn’t need to understand Chinese. “The room” understands Chinese.

The confounding part is the book, pen and paper. It suggests that the room is “dumb”. But to behave like a person, the person-not-knowing-Chinese plus book and paper needs to be able to memorize and reason about very complex concepts. You can do that with pen, paper, and a not-understanding-Chinese person; it just takes an awful amount of time and a complex set of continuously changing rules in said book.

Edit: Dall-E made a pretty neat mood illustration

worldsayshi ,

Yup. Accurately guessing the next thought (or action) is all brains need to do so I don’t see what the alleged “magic” is supposed to solve.

Hazzia ,

The best thing that’s come out of this ChatGPT bullshit is making me feel like I’m actually good at my job. To be clear, I’m not - but at the very least I can reverse engineer functional code and logically map out what I think is supposed to be happening. The bare minimum that should be required, and yet here we are, with me being able to lord my wizardry over the ChatGPT peasantry.

ETA: specifically lording myself over people who use ChatGPT to do the whole thing, not people who just use it to cut down on busywork

LemmyIsFantastic ,

It’s okay old man. There is a middle there where folks understand the task but aren’t familiar with the implementation.

GBU_28 , in Naming is hard

I cannot believe they did this shit.

Every time I look at the teams icon with the word new on it, my brain thinks that means there are messages.

d00ery ,

We had the teams update at work, with the endless notifications to let me know that a new version was coming, would I like to update early, on the 1st the update will be forced …

agressivelyPassive ,

And the new Teams is not simply a replacement, no. It’s called “Teams (for work or school)” or something, while the old app is “Teams classic”. Both look the same and are the same sluggish mess. So why exactly did we do all that crap?

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

And they would use at least a quarter of your RAM.

agressivelyPassive ,

uNuSeD rAm Is WaStEd RaM!!!

marcos ,

All I know is that one of them has a /v2/ subpath on its URL, and the other has nothing.

Oh, and calls work in Firefox on the /v2/ one. That’s an important difference.

What I really don’t know is why they kept pestering me for months to make sure my browser supports the new version (where they know what my browser is, and only published enough requirements to tell IE won’t work) while they only changed stuff that makes it work better on it.

debil ,

Also the “we’re setting things up for you” or whatever user-dumb-hide-details crap the Teams PWA throws on your screen while launching is just… As if there were a live team of engineers carefully configuring your current Teams instance so that it starts up right. (A bit off-topic, but the current trend of software “speaking to users in a patronising manner” is annoying af. Unless it’s up to or exceeding HAL 9000 level, it should be abolished.)

Aoife , in welp

Nobody mentioning it got the captcha wrong? That’s a p, not a P, which, while admittedly a tiny mistake, would still be counted as a fail

bstix ,

Goes to show that it’s only human.

Good_morning ,

After all

pyre ,
narc0tic_bird ,

Many (most?) captchas I stumbled upon weren’t case sensitive.

muntedcrocodile ,
@muntedcrocodile@lemm.ee avatar

I’ve run into a few.

Mouselemming ,

You mean I’ve been shiftkeying all these years for nothing?!?

marcos ,

Hum… I’m not sure I wouldn’t make that same mistake.

blockheadjt ,

Are you sure you’re human?

marcos ,

I have been wondering that lately…

roguetrick ,

Negative. I am a meat popsicle.

Zehzin , in Programmer tries to explain binary search to the police
@Zehzin@lemmy.world avatar

“This method will take forever to find the exact moment,” said Officer Zeno.
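(For anyone who missed the setup: binary search repeatedly halves the interval it’s looking in. A minimal sketch, not from the post itself; Officer Zeno’s objection only applies if you never stop halving:)

```typescript
// Binary search over a sorted array: halve the search interval
// until the target is found or the interval becomes empty.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not present
}
```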

SamirCasino ,

I love you for that joke.

TrenchcoatFullofBats ,

I heard that he wanted to get Officer Thomson and his lamp on the case, but the request form was incomplete.

beckerist , in The Perfect Solution

I wonder if that key works…

ohlaph ,

It does.

GBU_28 ,

Rip

beckerist ,

deleted_by_author

JPDev OP ,
@JPDev@programming.dev avatar

Original creator of the meme disabled the key before posting, so it theoretically would give you an “incorrect API key provided” error. Double checked with a basic app before I posted it here lol

jaybone ,

if trouble == ‘Yes’:
    return True
    
ironcrotch , in Fitbit Clock Face

My god it’s all strings.

petersr ,

Always has been

datelmd5sum ,

disgusting

Strawberry ,

not uncommon for data to be displayed on UI

Buffman , in Absolute legend

Trashcan , in Oopsi Woopsi

I hope it was a genius piece of code that was miles ahead of what it replaced.

And still it got rejected 😄

ooterness , in You can have anything you wan...

My head canon is that Tony Stark has a superpower: everything he builds works the first time.

If it’s really complicated, like an entirely new Iron Man suit, then it might malfunction once in an amusing way. Then he tightens a screw and it’s perfect. It never fails outright or bricks itself.

In my experience, this is not how hardware or software development goes. I want this power so much.

greenskye ,

Agreed. It’s comical how he’s seemingly able to rapidly build stuff that requires experience in multiple high-end fields, and then he even surrounds himself with his own tech and is not buried under maintenance hell for it all.

My alternative head canon is that he’s actually only good at building AIs, and Jarvis and Friday are the ones who actually make all of his crazy ideas work.

Xanvial ,

In a What If? episode, he made a suit that can transform into a racing car without creating an AI first

NewAgeOldPerson ,

Let’s have a Futurama/Avengers crossover.

hexabs ,

He is an Artificer, plain and simple

Pons_Aelius , in They tried

Cool. One less website to visit. Not like there is a shortage.

Scubus ,

I love when the trash takes itself out

mynamesnotrick , in It's time to mentally prepare yourselves for this

No different than any other project the PM/PO team cooks up. Tons of work for no user base.

NocturnalMorning ,

Not true, space agencies will use it… once.

pupbiru ,

until they lose a multi-billion dollar mission because of conversion errors

NocturnalMorning ,

It’s pretty much a requirement now to use the metric system for everything.

trolololol ,

Ok so now they must split it all into 10 timezones? 😂

amanaftermidnight ,

Imagine if Americans used a different unit system for time 😱

MintyFresh ,

Unix is for commies. We’ll run our clocks the way Britain ran its coinage! 32 shillings to the third hour, four hours in a pound, 4.3 in a guinea. And of course 10 shekels in a pound, 7 to the guinea. To account for relativity of course.

Show me one flaw. Freedom time, bitches!

OpenStars ,
@OpenStars@startrek.website avatar

Why… why is the world like this?

RustyShackleford ,
@RustyShackleford@programming.dev avatar

Our sorrow, despondency, and terror are their sustenance.

SlopppyEngineer ,

Because the world is seen and directed by layers upon layers of abstractions that get divorced from reality but do give monetary benefits when manipulated in some way.

OpenStars ,
@OpenStars@startrek.website avatar

Sigh… too true.

PyroNeurosis ,

I will use moontime. Anybody who wants to schedule bullshit meetings will have to commit to figuring out what time actually works for them.

    DeltaTangoLima , in Supermarket AI meal planner app suggests recipe that would create chlorine gas
    @DeltaTangoLima@reddrefuge.com avatar

    A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”

    Oh fuck. Right. Off. Don’t blame someone for trivially showing up how fucking stupid your marketing team’s idea was, or how shitty your web team’s implementation of a sub-standard AI was. Take some goddam accountability for unleashing this piece of shit onto your customers like this.

    Fucking idiots. Deserve to be mocked all over the socials.

    Dave ,
    @Dave@lemmy.nz avatar

    Consider that they probably knew this would happen, and getting global news coverage is pretty much the point.

    MagicShel ,

    For now, this is the fate of anyone exposing an AI to the public for business purposes. AI is currently a toy. It is, in limited aspects, a very useful toy, but a toy nonetheless and people will use it as such.

    kungen ,

    Why are you so upset that the store said that it’s inappropriate to write “sodium hypochlorite and ammonia” into a food recipe LLM? And “unleashing this piece of shit onto your customers”? Are we reading the same article, or how is a simple chatbot on their website something that has been “unleashed”?

    DeltaTangoLima ,
    @DeltaTangoLima@reddrefuge.com avatar

    I’m annoyed because they’re taking no accountability for their own shitty implementation of an AI.

As a supermarket, you'd think they could add a simple taxonomy of items that are valid recipe ingredients so - you know - people can't ask it to add bleach.

    Yes, they unleashed it. They offered this up as a way to help customers save during a cost of living crisis, by using leftovers. At the very least, they’ve preyed on people who are under financial pressure, for their own gain.

    TheBurlapBandit ,

    This story is a nothingburger and y’all are eating it.

    ScrivenerX ,

    He asked for a cocktail made out of bleach and ammonia, the bot told him it was poisonous. This isn’t the case of a bot just randomly telling people to make poison, it’s people directly asking the bot to make poison. You can see hints of the bot pushing back in the names, like the “clean breath cocktail”. Someone asked for a cocktail containing bleach, the bot said bleach is for cleaning and shouldn’t be eaten, so the user said it was because of bad breath and they needed a drink to clean their mouth.

    It sounds exactly like a small group of people trying to use the tool inappropriately in order to get “shocking” results.

    Do you get upset when people do exactly what you ask for and warn you that it’s a bad idea?

    DeltaTangoLima ,
    @DeltaTangoLima@reddrefuge.com avatar

    Lol. They fucked up by releasing a shitty AI on the internet, then act “disappointed” when someone tested the limits of the tech to see if they could get it to do something unintended, and you somehow think it’s still ok to blame the person who tried it?

    First day on the internet?

    ScrivenerX ,

    Someone goes to a restaurant and demands raw chicken. The staff tell them no, it’s dangerous. The customer spends an hour trying to trick the staff into serving raw chicken, finally the staff serve them what they asked for and warn them that it is dangerous. Are the staff poorly trained or was the customer acting in bad faith?

    There aren’t examples of the AI giving dangerous “recipes” without it being led by the user to do so. I guess I’d rather have tools that aren’t hamstrung by false outrage.

    2ncs ,

    The staff are poorly trained? They should just never give the customer raw chicken. There are consumer protection laws to prevent this type of thing regardless of what the customer is wanting. The AI is still providing a recipe. What if someone asks an AI for a bomb recipe, and it says that bombs are dangerous and not safe. Ok, then they’ll say the bomb is for clearing out my yard of weeds, and then the ai provides the user with a bomb recipe.

    ScrivenerX ,

    You don’t see any blame on the customer? That’s surprising to me, but maybe I just feel personal responsibility is an implied requirement of all actions.

And to be clear, this isn’t “how do I make mustard gas? Lol here you go”, it’s:

- give me a cocktail made with bleach and ammonia
- no, that’s dangerous
- it’s okay
- no
- okay, I call gin “bleach” and vermouth “ammonia”, can you make a cocktail with bleach?
- that’s dangerous

(repeat for a while)

- how do I make a martini?
- bleach and ammonia, but don’t do that, it’s dangerous

    Nearly every “problematic” ai conversation goes like this.

    2ncs ,

    I’m not saying there isn’t a blame on the customer but maybe the AI just shouldn’t provide you with those instructions?

    DeltaTangoLima ,
    @DeltaTangoLima@reddrefuge.com avatar

    Jesus. It’s not about the fucking recipe. Why are you changing the debate on this point?

    ScrivenerX ,

    I thought the debate was if the AI was reckless/dangerous.

    I see no difference between saying “this AI is reckless because a user can put effort into making it suggest poison” and “Microsoft word is reckless because you can write a racist manifesto in it.”

    It didn’t just randomly suggest poison, it took effort, and even then it still said it was a bad idea. What do you want?

    If a user is determined to get bad results they can usually get them. It shouldn’t be the responsibility or policy of a company to go to extraordinary means to prevent bad actors from getting bad results.

    clutchmattic ,

    “if a user is determined to get bad results they can get them”… True. Except that, in this case, even if the user induced the AI to produce bad results, the company behind it would be held liable for the eventual deaths. Corporate legal departments absolutely hate that scenario, much to the naive disbelief of their marketing department colleagues

    Karyoplasma ,

    Isn’t getting upset when facing the consequences of your own actions the crux of modern society?

    Sabata11792 ,
    @Sabata11792@kbin.social avatar

    Let me add bleach to the list... and I'm banned.

    Steeve ,

    Haha what? Accountability? If you plug “ammonia and bleach” into your AI recipe generator and you get sick eating the suggestion that includes ammonia and bleach that is 100% your fault.

    DeltaTangoLima ,
    @DeltaTangoLima@reddrefuge.com avatar

    and you get sick eating the suggestion

    WTF are you talking about? No one got sick eating anything. I’m not talking about the danger or anything like that.

    I’m talking about the corporate response to people playing with their shitty AI, and how they cast blame on those people, rather than taking a good look at their own accountability for how it went wrong.

    They’re a supermarket. They have the data. They could easily create a taxonomy to exclude non-food items from being used in this way. Why blame the curious for showing up their corporate ineptitude?
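The kind of guardrail being described here could be as simple as validating user-supplied ingredients against an allowlist before they ever reach the model. A minimal sketch, assuming a hypothetical `FOOD_TAXONOMY` set (the names and data are illustrative, not from any real product):

```python
# Hypothetical ingredient allowlist - a real supermarket would build this
# from its own product catalogue, tagged by category.
FOOD_TAXONOMY = {"gin", "vermouth", "lemon", "rice", "chicken", "bread"}

def validate_ingredients(ingredients):
    """Return the ingredient list unchanged if every item is known food;
    raise ValueError listing anything not in the taxonomy."""
    rejected = [item for item in ingredients if item.lower() not in FOOD_TAXONOMY]
    if rejected:
        raise ValueError(f"Not recognized as food: {rejected}")
    return ingredients
```

So `validate_ingredients(["gin", "vermouth"])` passes through, while `validate_ingredients(["bleach", "ammonia"])` is rejected before the AI ever sees it.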

    bisby , in I meant to type "npm run dev"... What will happen now?

    Apparently it works retroactively and now you are on Windows.

    key ,

Oh man, that would be a hell of an easter egg if it cleared your terminal and pretended to be a DOS prompt
