
Are LLMs capable of writing *good* code?

By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

GBU_28 ,

For basic boilerplate like routes for an API, an ETL script from sample data to DB tables, or other similar basics, yeah, it’s perfectly acceptable. You’ll need to swap out dummy addresses, and maybe change a choice or two, but it’s fine.
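A minimal sketch of the kind of boilerplate meant here, assuming a Python/Flask stack (route, table, and file names are made up for illustration):

```python
from flask import Flask, jsonify
import sqlite3

app = Flask(__name__)
DB_PATH = "example.db"  # dummy path you'd swap out for the real one

@app.route("/widgets/<int:widget_id>", methods=["GET"])
def get_widget(widget_id):
    # Plain read-by-id endpoint: the sort of route an LLM gets right first try.
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute(
            "SELECT id, name FROM widgets WHERE id = ?", (widget_id,)
        ).fetchone()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": row[0], "name": row[1]})

if __name__ == "__main__":
    app.run(debug=True)
```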

But when you’re trying to organize more complicated business logic or debug complicated dependencies, it falls over.

JeeBaiChow ,

Dunno. I’d expect to have to make several attempts to coax a working snippet from the AI, then spend the rest of the time trying to figure out what it’s done and debugging the result. Faster to do it myself.

E.g. I once coded Tetris on a whim (45 min) and thought it’d be a good test for a UI/game developer, given the multidisciplinary nature of the game (user interaction, real-time engine, data structures, etc.). I asked Copilot to give it a shot, and while the basic framework was there, the code simply didn’t work as intended. I figured that if we’d gone into each of the elements separately, it would have taken me longer than if I’d done it from scratch anyway.

Nomecks ,

I use it to write code, but I know how to write code and it probably turns a week of work for me into a day or two. It’s cool, but not automagic.

nous ,

They can write good short bits of code, but they also often produce bad and even incorrect code. The vast majority of the time, I find it more effort to read and debug their code than to just write it myself to begin with, and overall it just wastes more of my time.

Maybe in a couple of years they might be good enough. But their growth looks like it’s starting to flatten off, so it’s up for debate whether they will get there in that time.

WraithGear ,
@WraithGear@lemmy.world avatar

It’s the most ok’est coder, with the attention span of a 5-year-old.

Arbiter ,

No LLM is trustworthy.

Unless you understand the code and can double check what it’s doing I wouldn’t risk running it.

And if you do understand it, any benefit of time saved is likely going to be offset by debugging and verifying what it actually does.

FlorianSimon ,

Since reviewing someone else’s code is much harder than checking code you wrote yourself, relying on LLMs too heavily is just plain dangerous and bad practice, especially if you’re working with technologies that have lots of footguns (cf. C or C++). The amount of crazy and hard-to-detect badness you can write in C++ is insane. You won’t catch CVE-material bugs by just reading the output ChatGPT or Copilot spits out.
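A Python analogue of the kind of bug that slips past a quick read (the point above is about C/C++, where the failure modes are subtler still; names here are made up): the first query looks harmless but is injectable, the second is the safe form.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Reads fine at a glance, but interpolating user input into SQL
    # is classic CVE material (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchone()
```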

And there are lots of sectors, like aerospace and medical, where that untrustworthiness is completely unacceptable.

xmunk ,

No, a large part of what “good code” means is correctness. LLMs cannot properly understand a problem, so while they can produce grunt code they can’t assemble a solution to a complex problem, and, IMO, it is impossible for them to overtake humans unless we get really lazy about code expressiveness. And, on that point, I think most companies are underinvesting in code infrastructure right now, and developers are wasting too much time on unexpressive code.

The majority of work that senior developers do is understanding a problem and crafting a solution appropriate to it - when I’m working my typing speed usually isn’t particularly high and the main bottleneck is my brain. LLMs will always require more brain time while delivering a savings on typing.

At the moment I’d also emphasize that they’re excellent at popping out algorithms I could write in my sleep, but they require me to spend enough time double-checking their code that it’s cheaper for me to just write it by hand to begin with.
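Binary search is a fair example of that kind of algorithm; even when the generated version is correct, like this hedged sketch, the review cost is in re-checking every boundary condition:

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:            # "<=" vs "<" is the classic thing to re-verify
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1       # the +1/-1 updates are the other off-by-one trap
        else:
            hi = mid - 1
    return -1
```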

PenisDuckCuck9001 , (edited )

AI is excellent at completing low-effort, AI-generated Pearson programming homework, while I spend all the time I saved on real projects that actually matter. My Hugging Face model is probably trained on the same dataset as their bot. It gets it correct about half the time, and another 25% of the time I just have to change a few numbers or brackets around. It takes me longer to read the instructions than it takes the AI bot to spit out the correct answer.

None of it is “good” code but it enables me to have time to write good code somewhere else.

MajorHavoc , (edited )

Great question.

is there any legit reason anyone should learn advanced coding techniques?

Don’t buy the hype. LLMs can produce all kinds of useful things but they don’t know anything at all.

No LLM has ever engineered anything. And there’s no (or, conceding a good point made in a response, at best sparse) current evidence that any AI ever will.

Current learning models are like trained animals in a circus. They can learn to do any impressive thing you can imagine, by sheer rote repetition.

That means they can engineer a solution to any problem that has already been solved millions of times already. As long as the work has very little value and requires no innovation whatsoever, learning models do great work.

Horses and LLMs that solve advanced algebra don’t understand algebra at all. It’s a clever trick.

Understanding the problem and understanding how to politely ask the computer to do the right thing has always been the core job of a computer programmer.

The bit about “politely asking the computer to do the right thing” makes massive strides in convenience every decade or so. Learning models are another such massive stride. This is great. Hooray!

The bit about “understanding the problem” isn’t within the capabilities of any current learning model or AI, and there’s no current evidence that it ever will be.

Someday they will call the job “prompt engineering” and on that day it will still be the same exact job it is today, just with different bullshit to wade through to get it done.

chknbwl OP ,
@chknbwl@lemmy.world avatar

I appreciate your candor. I had a feeling it was cock and bull, but you’ve answered my question fully.

ConstipatedWatson ,

Wait, if you can (or anyone else chipping in), please elaborate on something you’ve written.

When you say

That means they can engineer a solution to any problem that has already been solved millions of times already.

Hasn’t Google already made advances through its AlphaGeometry AI? Admittedly, that’s a geometry setting, which may be easier to code than other parts of math, and there isn’t yet a clear indication AI will ever be able to reach the level of creativity that the human mind has, but at the same time it might get there by sheer volume of attempts.

Isn’t this still engineering a solution? Sometimes even researchers reach new results by having a machine verify many cases (see the proof of the Four Color Theorem). It’s true that in the Four Color Theorem researchers narrowed down the cases to try, but maybe a similar narrowing could be done by an AI (sooner or later)?

I don’t know what I’m talking about, so I should shut up, but I’m hoping someone more knowledgeable will correct me, since I’m curious about this.

MajorHavoc , (edited )

Isn’t this still engineering a solution?

If we drop the word “engineering”, we can focus on the point - geometry is another case where rote learning of repetition can do a pretty good job. Clever engineers can teach computers to do all kinds of things that look like novel engineering, but aren’t.

LLMs can make computers look like they’re good at something they’re bad at.

And they offer hope that computers might someday not suck at what they suck at.

But history teaches us probably not. And current evidence in favor of a breakthrough in general artificial intelligence isn’t actually compelling, at all.

Sometimes even researchers reach new results by having a machine verify many cases

Yes. Computers are good at that.

So far, they’re no good at understanding the four color theorem, or at proposing novel approaches to solving it.

They might never be any good at that.

Stated more formally, P may equal NP, but probably not.

Edit: To be clear, I actually share a good bit of the same optimism. But I believe it’ll be hard won work done by human engineers that gets us anywhere near there.

Ostensibly God created the universe in Lisp. But actually he knocked most of it together with hard-coded Perl hacks.

There’s lots of exciting breakthroughs coming in computer science. But no one knows how long and what their impact will be. History teaches us it’ll be less exciting than Popular Science promised us.

Edit 2: Sorry for the rambling response. Hopefully you find some of it useful.

I don’t at all disagree that there’s exciting stuff afoot. I also think it is being massively oversold.

PlzGivHugs ,

AI can only really complete tasks that are both simple and routine. I’d compare the output skill to that of a late-first-year university student, but with the added risk of hallucination. Anything too unique or too complex tends to result in significant mistakes.

In terms of replacing programmers, I’d put it more in the ballpark of predictive text and/or autocorrect for a writer. It can help speed up the process a little bit and point out simple mistakes, but if you want to make a career out of it, you’ll need to actually learn the skill.

ImplyingImplications ,

Writing code is probably one of the few things LLMs actually excel at. Few people want to program something nobody has ever done before. Most people are just reimplementing the same things over and over with small modifications for their use case. If imports of generic code someone else wrote make up 90% of your project, what’s the difference in getting an LLM to write 90% of your code?
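A retry helper is one example of code that has been rewritten with small variations in countless codebases, so it’s squarely in that comfort zone (a hedged sketch, not anyone’s actual output):

```python
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 1.0):
    """Re-run a flaky function a few times before giving up."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    # Give up only after the final attempt fails.
                    if attempt == attempts - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(attempts=5, delay=0.5)
def fetch_report():
    ...  # some flaky network call goes here
```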

chknbwl OP ,
@chknbwl@lemmy.world avatar

I see where you’re coming from, sort of like the phrase “don’t reinvent the wheel”. However, considering ethics, that doesn’t sound far off from plagiarism.

EmilyIsTrans ,
@EmilyIsTrans@lemmy.blahaj.zone avatar

After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that it sometimes misunderstands you or, if you’re writing UI code, makes very strange decisions (since it has no spatial/visual reasoning).

Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns to make it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns).
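As a sketch of what “structure” means here (illustrative names only, assuming Python): the individual lines below are trivial, but the decision that matters is putting storage behind an interface so the rest of the program doesn’t care which backend it gets, and that call is yours, not the LLM’s.

```python
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str | None: ...

class InMemoryStorage:
    """One concrete backend; a database-backed one could replace it later."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str | None:
        return self._data.get(key)

def record_event(storage: Storage, event_id: str, payload: str) -> None:
    # Application code depends on the interface, not on a concrete backend.
    storage.save(event_id, payload)
```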

chknbwl OP ,
@chknbwl@lemmy.world avatar

Very well put, thank you.

adespoton ,

The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.
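For instance, knowing that the standard library already covers a task turns a hand-rolled loop into a couple of calls (a sketch; the file name is made up):

```python
from collections import Counter
from pathlib import Path

def top_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    # Counter and pathlib do the heavy lifting that would otherwise be a
    # hand-written dictionary-and-loop implementation.
    words = Path(path).read_text(encoding="utf-8").lower().split()
    return Counter(words).most_common(n)

# top_words("notes.txt") might return [("the", 42), ("and", 17), ...]
```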

edgemaster72 ,
@edgemaster72@lemmy.world avatar

understanding what the machine spits out

This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.

chknbwl OP ,
@chknbwl@lemmy.world avatar

I very much agree, thank you for indulging my question.

667 ,
@667@lemmy.radio avatar

I used an LLM to write some code I knew I could write, but was a little lazy to do. Coding is not my trade, but I did learn Python during the pandemic. Had I not known to code, I would not have been able to direct the LLM to make the required corrections.

In the end, I got decent code that worked for the purpose I needed.

I still didn’t write any docstrings or comments.

adespoton ,

I would not trust the current batch of LLMs to write proper docstrings and comments, as the code it is trained on does not have proper docstrings and comments.

And this means that it isn’t writing professional code.

It’s great for quickly generating useful and testable code snippets though.

GBU_28 ,

It can absolutely write a docstring for a provided function. That and unit tests are some of the easiest things for it, because it has the source code to work from.
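For example, given a small provided function, the docstring and a test are mostly mechanical (a sketch with made-up names, not actual LLM output):

```python
import unittest

def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace in ``text`` into single spaces.

    Leading and trailing whitespace is removed.
    """
    return " ".join(text.split())

class TestNormalizeWhitespace(unittest.TestCase):
    def test_collapses_internal_runs(self):
        self.assertEqual(normalize_whitespace("a  b\t c"), "a b c")

    def test_strips_ends(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```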

visor841 ,

For a very long time people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can’t make it. If you ask it to write code to do something incredibly inefficiently, it’s going to give you code that is incredibly inefficient.
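For example, both of these deduplicate a list; if the prompt describes the first approach, that’s the one that comes back (illustrative sketch only):

```python
def dedupe_quadratic(items: list[str]) -> list[str]:
    # What you get if you describe the naive approach: O(n^2) membership checks.
    result: list[str] = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def dedupe_linear(items: list[str]) -> list[str]:
    # Same order-preserving behaviour, but O(n) via dict key uniqueness.
    return list(dict.fromkeys(items))
```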

saltesc ,

In my experience, not at all. But sometimes they help with creativity when you hit a wall or challenge you can’t resolve.

They have been trained on internet examples where everyone has a different style/method of coding, like writing style. It’s all very messy and very unreliable. It will be years before LLMs can code “good”, and it will require a lot of training that isn’t just scraping.

TootSweet ,

A broken clock is right twice a day.

A_A ,
@A_A@lemmy.world avatar

Yes … and it doesn’t know when it is on time.
Also, machines are getting better and they can help us with inspiration.
