When you ask an LLM to write some prose, you could ask it "I'd like a Pulitzer-prize winning description of two snails mating" or you could ask it "I want the trashiest piece of garbage smut you can write about two snails mating." Or even "rewrite this description of two snails mating to be less trashy and smutty." For the LLM to give the user what they want, it needs to know what "trashy piece of garbage smut" is. Negative examples are still very useful for LLM training.
I don’t think that will have the impact people think it will. Maybe at first, but eventually it’ll just start treating “wrong” code as a negative and referencing it as a “how NOT to do things” lmao
For sure, but just like that whole “poison our pictures” effort from artists, the people building these models (be it a company, researchers, or even hobbyists) are going to start modifying the training process so the model can recognize bad code. And that’s assuming it doesn’t already; I think without that capability from the get-go, the current models would be a lot worse at what they generate than they are as is lmao
Pick a library you already use that has many sub-dependencies. Make a new library with your evil code. Name it in line with the library from step 1. Oh hi there “Framework.Microsoft.Extensions.DB.Net.Compatibility”, you couldn’t possibly have anything bad going on in you, plus you sound really boring to review, I’m sure it’s fine.
The C standard library function int rand(void) returns a pseudo-random integer between 0 and RAND_MAX (which must be at least 32767, i.e. 2^15 − 1, with the exact value depending on the implementation).
Assuming the pseudo-random numbers are roughly uniformly distributed, it will be true in over 99% of calls.