There have been multiple accounts created with the sole purpose of posting advertisement posts or replies containing unsolicited advertising.

Accounts which solely post advertisements, or persistently post them, may be terminated.

dinckelman ,

I have a lot of empathy for a lot of people. Even ones, who really don’t deserve it. But when it comes to people like these, I have absolutely none. If you make a chatbot do your corporate security, it deserves to burn to the ground

YtA4QCam2A9j7EfTgHrH ,

This must sound terrible. So high pitched

aodhsishaj ,

Maybe so high pitched it’s out of the hearing range of most humans

fuzzy_feeling ,

ahahahaha…

Telorand ,

Wow, the text generator that doesn’t actually understand what it’s “writing” is making mistakes? Who could have seen that coming?

I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could catch. Nobody should trust LLMs with anything related to security, FFS.
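For illustration only (this is not the program from the comment), here's the flavor of first-year-catchable mistake being described, alongside a fix; the function names and the specific bugs are hypothetical:

```python
# Two classic slips a first-year CS student would catch:
# a mutable default argument and an off-by-one loop bound.

# Buggy version an LLM might plausibly produce:
def average_buggy(numbers, seen=[]):       # mutable default argument
    seen.append(len(numbers))
    total = 0
    for i in range(1, len(numbers)):       # off-by-one: skips index 0
        total += numbers[i]
    return total / len(numbers)

# Corrected version:
def average_fixed(numbers):
    if not numbers:
        raise ValueError("empty input")
    return sum(numbers) / len(numbers)

print(average_buggy([10, 20, 30]))  # ~16.67, silently wrong
print(average_fixed([10, 20, 30]))  # 20.0
```

The buggy version runs without error, which is exactly why untrained reviewers miss it.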

skillissuer , (edited )

Nobody should trust LLMs with anything

ftfy

also any inputs are probably scrapped and used for training, and none of these people get GDPR

mox , (edited )

also any inputs are probably scraped

ftfy

Let’s hope it’s the bad outputs that are scrapped. <3

curbstickle ,

Eh, I’d say mostly.

I have one right now that looks at data and says “Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause.”

Which is pretty well within what they are good at, especially if you are doing the training yourself.
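A minimal sketch of that kind of check, assuming a simple z-score flag plus a "what else moved at the same time" report; all metric names and thresholds here are made up:

```python
import statistics

def find_anomalies(series, threshold=3.0):
    """Indices more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series)
            if stdev and abs(x - mean) / stdev > threshold]

def related_changes(metrics, index):
    """How far each metric sits from its own prior baseline at the flagged point."""
    return {name: series[index] - statistics.mean(series[:index])
            for name, series in metrics.items()}

metrics = {
    "latency_ms":  [10, 11, 10, 12, 11, 90],
    "error_rate":  [0.1, 0.1, 0.2, 0.1, 0.1, 4.0],
    "cpu_percent": [40, 42, 41, 40, 43, 41],
}
for i in find_anomalies(metrics["latency_ms"], threshold=2.0):
    print(i, related_changes(metrics, i))  # latency and error_rate jump; cpu doesn't
```

Here the latency spike at index 5 gets flagged, and the report shows error_rate jumped with it while CPU stayed flat, i.e. "here are related things that are different when this weird thing happened."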

blackjam_alex ,

My experience with ChatGPT goes like this:

  • Write me a block of code that makes x thing
  • Certainly, here’s your code
  • Me: This is wrong.
  • You’re right, this is the correct version
  • Me: This is wrong again.
  • You’re right, this is the correct version
  • Me: Wrong again, you piece of junk.
  • I’m sorry, this is the correct version.
  • (even more useless code) … and so on.
TaintPuncher ,

That sums up my experience too, but I have found it good for discussing functions for SQL and PowerShell. Sometimes it’ll throw something into its garbage code and I’ll be like “what does this do?” It’ll explain how it’s supposed to work, I’ll then work out its correct usage and solve my problem. Weirdly, it’s almost MORE helpful than if it just gave me functional code, because I have to learn how to properly use it rather than just copy/paste what it gives me.

Telorand ,

That’s true. The mistakes actually make learning possible!

Man, designing CS curriculum will be easy in future. Just ask it to do something simple, and ask your CS students to correct the code.

saltesc ,

All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick which one is least wrong.

All the while realising you did this to save time but at this point coding from scratch would have been faster.

sugar_in_your_tea , (edited )

I interviewed someone who used AI (CoPilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and AI spat out the same mistake, which the developer again didn’t notice.

AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you’re doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It’ll do a decent job at certain prompts (i.e. generate me a function/class that…), but you’re going to need to go through line-by-line and make sure it’s actually doing the right thing. I find writing code to be much faster than reading and correcting code so I don’t bother w/ AI, but YMMV.

An area where it’s probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say, “find the docs for a function in library X that does…” I know what I want, I just may not remember the name or the module, and I certainly don’t remember the argument order.

9488fcea02a9 ,

AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion)

I’m not a developer and I haven’t touched code for over 10 years, but when I heard about my company pushing AI tools on the devs, I thought exactly what you said. It should be a tool for experienced devs who already know what they’re doing…

Lo and behold, they did the opposite… They fired all the senior people and pushed AI on the interns and new grads… and then expected AI to suddenly make the jr devs work like the expensive Sr devs they just fired…

Wtf

SketchySeaBeast ,

I wish we could say the students will figure it out, but I’ve had interns ask for help and then I’ve watched them try to solve problems by repeatedly asking ChatGPT. It’s the scariest thing - “Ok, let’s try to think about this problem for a moment before we - ok, you’re asking ChatGPT to think for a moment. FFS.”

USSEthernet ,

Critical thinking is not being taught anymore.

djsaskdja ,

Has critical thinking ever been taught? Feel like it’s just something you have or you don’t.

sugar_in_your_tea ,

I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn’t recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I’ve had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.

I absolutely think SW engineering is going to be a great career long-term, I just can’t encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you’re passionate about it, you’re going to ignore whatever I say anyway, and you’ll succeed. But if my recommendation changes your mind, then you probably aren’t passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.

I’m not worried at all about my job or anyone on my team, I’m worried for the next batch of CS grads who chatGPT’d their way through their degree. “Cs get degrees” isn’t going to land you a job anymore, passion about the subject matter will.

pirat ,

Altering the prompt will certainly give a different output, though. Ok, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.

However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of “reasoning”, thereby arriving at an output that’s more correct or of higher quality.

This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it “thinks” about the prompt behind the scenes, presenting to the user only the resulting output and a (by default hidden) generated summary of the secret raw “thinking”.

Of course, it’s unnecessary - maybe even stupid - to include nonsense or smalltalk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.
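As a toy illustration of the prompt difference being described (the exact wording here is made up, not a tested recipe):

```python
# Hypothetical prompts contrasting a terse request with a step-by-step one.
task = "A train leaves at 14:05 and arrives at 16:50. How long is the trip?"

terse_prompt = task

step_by_step_prompt = (
    "Think through the problem step by step, showing your reasoning, "
    "and then give the final answer on its own line.\n\n"
    + task
)

# The second prompt tends to elicit intermediate "reasoning" text before
# the answer, which is the effect the comment above describes.
```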

jaggedrobotpubes ,

AI created 17 Security Corporation™️s in response to this comment.

SuperFola ,

How come the hallucinating ghost in the machine is generating code so bad the production servers hallucinate even harder and crash?

Telorand ,

You have to be hallucinating to understand.

Drunemeton ,

I’ve licked the frog twice! How many does it take?

Telorand ,

A-one. A-two-hoo. A-three… Crunch

MelodiousFunk ,

I take it that frog hadn’t been de-boned.

henfredemars ,

I’m not sure how AI is supposed to understand code. Most of the code out there is garbage. Even most of the working code out there in the world today is garbage.

SuperFola ,

Heck, I sometimes can’t understand my own code. And this AI thing tries to tell me I should move this code over there and do this and that and then poof it doesn’t compile anymore. The thing is even more clueless than me.

elvith ,

Randomly rearranging non-working code one doesn’t understand… sometimes gets working code, sometimes doesn’t fix the bug, sometimes it won’t even compile anymore? Has no clue what the problem is and only solves it randomly by accident?

Sounds like the LLM is as capable as me /s

henfredemars ,

Sometimes you even get newer and more interesting bugs!

sugar_in_your_tea ,

As a senior dev, this sounds like job security. :)

henfredemars ,

You know you’re Sr. when it doesn’t even bother you anymore. It amuses you.

sugar_in_your_tea , (edited )

My boss comes to me saying we must finish feature X by date Y or else.

Me:

https://media.makeameme.org/created/continue-you-amuse.jpg

We’re literally in this mess right now. Basically, product team set out some goals for the year, and we pointed out early on that feature X is going to have a ton of issues. Halfway through the year, my boss (the director) tells the product team we need to start feature X immediately or it’s going to have risk of missing the EOY goals. Product team gets all the pre-reqs finished about 2 months before EOY (our “year” ends this month), and surprise surprise, there are tons of issues and we’re likely to miss the deadline. Product team is freaking out about their bonuses, whereas I’m chuckling in the corner pointing to the multiple times we told them it’s going to have issues.

There’s a reason you hire senior engineers, and it’s not to wave a magic wand and fix all the issues at the last minute, it’s to tell you your expectations are unreasonable. The process should be:

  1. product team lists requirements
  2. some software dev gives a reasonable estimate
  3. senior dev chuckles and doubles it
  4. director chuckles and adds 25% or so to the estimate
  5. if product team doesn’t like the estimate, return to 1
  6. we release somewhere between 3 and 4

If you skip some of those steps, you’re going to have a bad time.
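Tongue-in-cheek as the list is, the padding math in steps 3-4 works out like this (the numbers are made up):

```python
def padded_estimate(dev_estimate_weeks):
    """Apply steps 3-4 above: senior dev doubles, director adds ~25%."""
    senior = dev_estimate_weeks * 2
    director = senior * 1.25
    return senior, director

low, high = padded_estimate(2.0)  # dev says 2 weeks
print(low, high)                  # 4.0 5.0 -- release lands in between (step 6)
```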

henfredemars , (edited )

In my experience, the job of a sr. revolves around expectations: expectations of yourself, of the customer, of your bosses, of your juniors and the individual contributors working with you or that you’re tasking. It’s about managing those expectations, understanding how these things go, protecting your guys and gals, and trying to save management from poking out their own eyes.

And you may actually have time to do some programming.

sugar_in_your_tea ,

Can confirm. At our company, we have a tech debt budget, which is really awesome since we can fix the worst of the problems. However, we generate tech debt faster than we can fix it. Adding AI to the mix would just make tech debt even faster, because instead of senior devs reviewing junior dev code, we’d have junior devs reviewing AI code…

henfredemars , (edited )

AI can be a useful tool, but it’s not a substitute for actual expertise. More reviews might patch over the problem, but at the end of the day, you need a competent software developer who understands the business case, risk profile, and concrete needs to take responsibility for the code if that code is actually important.

AI is not particularly good at coding, and it’s not particularly good at the human side of engineering either. AI is cheap. It’s the outsourcing problem all over again and with extra steps of having an algorithm hide the indirection between the expertise you need and the product you’re selling.
