
Are LLMs capable of writing *good* code?

By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

Septimaeus ,

Theoretically, I would say yes, it’s possible, insofar as we could break down most subtasks of the development process into training parameters. But we are a long way from that currently.

cley_faye ,

For repetitive tasks, it can take a first template you write by hand and extrapolate it into multiple variations almost automatically.
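A minimal sketch of what I mean, with invented names for the example; you write the first accessor by hand and completion fills in the rest:

```python
class Config:
    def __init__(self, data: dict):
        self._data = data

    # Hand-written template:
    @property
    def host(self) -> str:
        return self._data.get("host", "localhost")

    # From here, completion reliably extrapolates the variations:
    @property
    def port(self) -> int:
        return self._data.get("port", 8080)

    @property
    def timeout(self) -> float:
        return self._data.get("timeout", 30.0)
```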

Beyond that… not really. Anything beyond single-line completion quickly devolves into something messy, non-working, or worse, working but not as intended. For extremely common cases it will work fine; but extremely common cases are either moved out into shared code, or take less time to write than to “generate” and check.

I’ve been using code completion/suggestions regularly, and there have been times when I was pleasantly surprised by what it produced, but even then I had to go over it and fix some things. And while I can’t quantify how often it happens, a lot of the time it’s convincing gibberish.

muntedcrocodile ,
@muntedcrocodile@lemm.ee avatar

I worry for the future generations of people who can use chatgpt to write code but have absolutely no idea what said code is doing.

bear ,

No. To specify exactly what you want the computer to do for you, you’d need some kind of logic-based language that both you and the computer mutually understand. Imagine if you had a spec you could reference to know what the key words and syntax in that language actually mean to the computer.

edgemaster72 ,
@edgemaster72@lemmy.world avatar

> understanding what the machine spits out

This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.

chknbwl OP ,
@chknbwl@lemmy.world avatar

I very much agree, thank you for indulging my question.

667 ,
@667@lemmy.radio avatar

I used an LLM to write some code I knew I could write myself, but was a little too lazy to. Coding is not my trade, but I did learn Python during the pandemic. Had I not known how to code, I would not have been able to direct the LLM to make the required corrections.

In the end, I got decent code that worked for the purpose I needed.

I still didn’t write any docstrings or comments.

adespoton ,

I would not trust the current batch of LLMs to write proper docstrings and comments, because the code they are trained on mostly lacks proper docstrings and comments.

And this means that it isn’t writing professional code.

It’s great for quickly generating useful and testable code snippets though.

GBU_28 ,

It can absolutely write a docstring for a provided function. That and unit tests are some of the easiest things for it, because it has the source code to work from.
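For illustration, this is the kind of output that tends to come back for a small, self-contained function (a made-up example, not actual model output):

```python
import re

def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug.

    Lowercases the input, collapses runs of non-alphanumeric
    characters into single hyphens, and strips leading/trailing
    hyphens, e.g. "Hello, World!" -> "hello-world".
    """
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already Clean  ") == "already-clean"
```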

dandi8 ,
@dandi8@fedia.io avatar

In my experience LLMs do absolutely terribly with writing unit tests.

visor841 ,

For a very long time people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can’t make it. If you ask it to write code to do something incredibly inefficiently, it’s going to give you code that is incredibly inefficient.
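A toy illustration of that second point (both functions are hypothetical): each one meets the spec “tell me if the list has duplicates”, and you get whichever one your prompt implies:

```python
# Asked for naively: quadratic pairwise comparison
def has_duplicates_slow(xs: list) -> bool:
    return any(x == y for i, x in enumerate(xs) for y in xs[i + 1:])

# Asked for with efficiency in mind: linear pass with a set
def has_duplicates_fast(xs: list) -> bool:
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```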

ImplyingImplications ,

Writing code is probably one of the few things LLMs actually excel at. Few people want to program something nobody has ever done before. Most people are just reimplementing the same things over and over, with small modifications for their use case. If imports of generic code someone else wrote make up 90% of your project, what’s the difference in getting an LLM to write that 90% of your code?

chknbwl OP ,
@chknbwl@lemmy.world avatar

I see where you’re coming from, sort of like the phrase “don’t reinvent the wheel”. However, considering ethics, that doesn’t sound far off from plagiarism.

dandi8 ,
@dandi8@fedia.io avatar

IMO this perspective that we're all just "reimplementing basic CRUD applications" is the reason why so many software projects fail.

daniskarma ,

For small boilerplate or very common small pieces of code (for instance, a famous algorithm implementation)? Yes, since they are probably just giving you the top Stack Overflow answer to a classic question.
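Binary search is a good example of that “classic question” category; you’ll get something close to this canonical version every time:

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```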

Anything that the LLM would need to mix or refactor would be terrible.

slazer2au ,

No, because that would require it to be trained on good code, which is rather rare.

Nomecks ,

I use it to write code, but I know how to write code and it probably turns a week of work for me into a day or two. It’s cool, but not automagic.

Manifish_Destiny ,

I find it better at things under 100 lines. Otherwise it starts to lose context. Any ideas how to address this?

gravitas_deficiency ,

LLMs are just computerized puppies that are really good at performing tricks for treats. They’ll still do incredibly stupid things pretty frequently.

I’m a software engineer, and I am not at all worried about my career in the long run.

In the short term… who fucking knows. The C-suite and MBA circlejerk seems to have decided they can fire all the engineers because wE CAn rEpLAcE tHeM WitH AI 🤡 and then the companies will have a couple absolutely catastrophic years because they got rid of all of their domain experts.

GBU_28 ,

For basic boilerplate like routes for an API, an ETL script from sample data to DB tables, or other similar basics, yeah, it’s perfectly acceptable. You’ll need to swap out dummy addresses, and maybe change a choice or two, but it’s fine.
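A sketch of the kind of route boilerplate I mean (Flask here; the names and the in-memory store are dummies you’d swap for your actual stack):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS: dict[int, dict] = {}  # dummy in-memory store, not a real DB

@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify(error="not found"), 404
    return jsonify(item)

@app.put("/items/<int:item_id>")
def put_item(item_id: int):
    ITEMS[item_id] = request.get_json()
    return jsonify(ITEMS[item_id]), 201
```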

But when you’re trying to organize more complicated business logic or debug complicated dependencies, it falls over.

bionicjoey ,

This question is basically the same as asking “Are 2d6 capable of rolling a 9?”

etchinghillside ,

Yes, two six-sided dice (2d6) are capable of rolling a sum of 9. Here are the possible combinations that would give a total of 9:

  • 3 + 6
  • 4 + 5
  • 5 + 4
  • 6 + 3

So, there are four different combinations that result in a roll of 9.

See? LLMs can do everything!
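(And for the skeptics, a two-line brute force agrees with it:)

```python
from itertools import product

# Enumerate all 36 ordered rolls of two six-sided dice.
rolls = [r for r in product(range(1, 7), repeat=2) if sum(r) == 9]
print(rolls)            # [(3, 6), (4, 5), (5, 4), (6, 3)]
print(len(rolls) / 36)  # 0.111..., about an 11% chance
```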

Batman ,

Wow that’s pretty good

xmunk ,

Now ask it how many r’s are in Strawberry!

lord_ryvan , (edited )

I asked four LLM-based chatbots over DuckDuckGo’s anonymised service the following:

“How many r’s are there in Strawberry?”


GPT-4o mini

There are three “r’s” in the word "strawberry."

Claude 3 Haiku

There are 3 r’s in the word “Strawberry”.

Llama 3.1 70B

There are 2 r’s in the word “Strawberry”.

Mixtral 8x7B

There are 2 “r” letters in the word “Strawberry”. Would you like to know more about the privacy features of this service?


They got worse at the end, but at least GPT and Claude can count letters.
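The deterministic answer takes one line, of course; counting letters is exactly the kind of job to hand to code rather than to a token-predicting model:

```python
print("strawberry".count("r"))  # 3
```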

chknbwl OP ,
@chknbwl@lemmy.world avatar

I have no knowledge of coding, my bad for asking a stupid question in NSQ.

etchinghillside ,

I wouldn’t exactly take the comment as negative.

The output of current LLMs is hit or miss sometimes. And when it misses, you might find yourself in a long chain of persuading a sassy robot to write things as you intend.

chknbwl OP ,
@chknbwl@lemmy.world avatar

Thank you for extrapolating for them.

bionicjoey ,

Sorry, I wasn’t trying to berate you. Just trying to illustrate the underlying assumption of your question.

JeeBaiChow ,

Dunno. I’d expect to have to make several attempts to coax a working snippet from the AI, then spend the rest of the time trying to figure out what it’s done and debugging the result. Faster to do it myself.

E.g. I once coded Tetris on a whim (45 min) and thought it’d be a good test for a UI/game developer, given the multidisciplinary nature of the game (user interaction, real-time engine, data structures, etc.). I asked Copilot to give it a shot, and while the basic framework was there, the code simply didn’t work as intended. I figured that if we went into each of the elements separately, it would have taken me longer than if I’d done it from scratch anyway.

Arbiter ,

No LLM is trustworthy.

Unless you understand the code and can double check what it’s doing I wouldn’t risk running it.

And if you do understand it, any benefit of time saved is likely to be offset by debugging and verifying what it actually does.

FlorianSimon ,

Since reviewing code is much harder than checking code you wrote yourself, relying on LLMs too heavily is just plain dangerous and a bad practice, especially if you’re working with technologies that have lots of footguns (e.g. C or C++). The number of crazy, hard-to-detect bad things you can write in C++ is insane. You won’t catch CVE-material bugs by just reading the output ChatGPT or Copilot spits out.
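The same review problem exists even in memory-safe languages. A classic Python example (hypothetical, not actual model output) of code that reads fine at a glance:

```python
def append_event(event: str, log: list[str] = []) -> list[str]:
    # Bug: the default list is created once at definition time and
    # shared across every call that omits `log`.
    log.append(event)
    return log

print(append_event("login"))   # ['login']
print(append_event("logout"))  # ['login', 'logout'], state leaked
```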

And there are lots of sectors, like aerospace and medical, where that untrustworthiness is completely unacceptable.

EmilyIsTrans ,
@EmilyIsTrans@lemmy.blahaj.zone avatar

After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns to make it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns).

chknbwl OP ,
@chknbwl@lemmy.world avatar

Very well put, thank you.

adespoton ,

The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.
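A small sketch of the kind of glue I mean, stitching together stdlib pieces (pathlib, collections) you might not think to reach for:

```python
from collections import Counter
from pathlib import Path

def extension_histogram(root: str) -> Counter:
    """Count files per extension under root, e.g. Counter({'.py': 12})."""
    return Counter(p.suffix for p in Path(root).rglob("*") if p.is_file())

print(extension_histogram(".").most_common(5))
```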
