It should be possible, right? It’s not like we’ve gotten worse at coding. All the bloat is a function of people not caring, and to some degree different requirements.
I should check if lemmy.sdf.org is back online. Retrocomputing would love this.
Mentioning @CanadaPlus, so I can find this easier.
I’ve heard that before recently, but tbh I really don’t want to mess with my system rn (barring updates). It mostly acts as a server lately since I got my Steam Deck, and that’s probably how it’s gonna stay, as I really want to avoid accidentally breaking anything for the time being.
When I get my batch 2 Framework order I’m going to be a lot more willing to learn about and experiment with Wayland, just because I know whatever I do on that won’t interrupt what other people are doing who are connected to the aforementioned server on my older machine.
Will I upgrade my other computer to Wayland? Maybe eventually after I’ve learned more about it and played with it some, but for the time being it’s just gonna stay as it is
And that’s a totally valid approach. I didn’t want to push anyone into Wayland; I’ve dragged my X11 setup with me for as long as I wanted. I just wanted to show that Nvidia is not the barrier anymore.
I disagree. When I used it recently it was still very much subpar compared to the AMD experience. It’s usable, but not to the point that I would like to use it.
I set k to 50,000,000,000… that’s more items than my shitty computer can fit in memory (including swap), but I am now happy to celebrate my O(1) algorithm.
Any gains from LLMs now would barely offset the complexity bloat introduced in enterprise applications in the last decade alone. And that’s not even taking into account the sins of the past that are only hidden behind the topsoil layer of cargo cult architecture.
After the report that code written with Copilot’s assistance is actually shittier than code written manually, I’m feeling safe until the next breakthrough in AI development. Meanwhile I’m saving up gold for the eventuality.
It’s even worse then: that means it’s probably a race condition, and do you really want to run the risk of having it randomly fail in production or during an important presentation? Also, race conditions are generally way harder to figure out and fix than the more “reliable” kind of bug.
Legit happens without a race condition if you’ve improperly linked libraries that need to be built in a specific order. I’ve seen more than one solution that needed to be run multiple times, or built project by project, in order to work.
Isn’t that the definition of a race condition, though? In this case, the builds are racing, and your success is tied to the builds happening to finish in the right order.
Or do you mean “builds 1 and 2 kick off at the same time, but build 1 fails unless build 2 is done. If you run it twice, build 2 does “no change” and you’re fine”?
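That second pattern — the “run it twice and it works” build — can be sketched like this (hypothetical project names, with Python standing in for a build system; only the ordering logic matters):

```python
# Hypothetical sketch: build_app needs an artifact that build_lib
# produces, but the dependency is never declared, so success depends
# on whether a previous run already left the artifact behind.
artifacts = set()

def build_lib():
    artifacts.add("lib.o")       # publishes its artifact, always succeeds
    return True

def build_app():
    return "lib.o" in artifacts  # silently requires lib.o to exist already

def run_build(steps):
    results = [step() for step in steps]  # run every step, no short-circuit
    return all(results)

order = [build_app, build_lib]  # app is (wrongly) scheduled before lib
first = run_build(order)        # fails: lib.o doesn't exist yet
second = run_build(order)       # lib step is a no-op, app now passes
```

On the first run `build_app` fails, but `build_lib` still leaves `lib.o` behind, so the second identical run succeeds — which is exactly why “just run it again” appears to fix the build.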
This isn’t my experience. I’m way more focused in the morning and then it’s all downhill after lunch. By the time it’s the evening I have zero motivation to do any code.
lol, I'd love to see the fucking ruin of the world we'd live in if current LLMs replaced senior developers. Maybe it'll happen some day, but in the meantime it's job security! I get to fix all of the bugfuck crazy issues generated by my juniors using Copilot and ChatGPT.
And when “web frameworks mean we don’t need web developers anymore”, and when “COBOL is basically plain English, so anyone can code, so we don’t need specialists anymore”.
Millions did. It’s just that after a while the advantages stopped being convincing and the trend reversed. If the same thing happens here, expect to go jobless for a while until you’re needed again.
One of my uni lecturers does the whole “you are out of a job” thing. He’s a smart guy, but he’s barely written a line of code in his life. This comes up frequently, and every time I ask him: “Get ChatGPT to write FizzBuzz in x86 ASM.” Without fail, it crashes when trying to build, every time. This technology is very advanced, but I find people get it to do the simplest tasks and then expect it to solve the most complex ones.
I tried using AI tools to do some cleanup and refactoring of some legacy embedded C code and was curious if it could do any optimization or knew any clever algorithms.
It’s pretty good at figuring out the function of the code and adding comments, it did some decent refactoring of some sections to make them more readable.
It has no clue about how to work in a resource constrained environment or about the main concepts that separate embedded from everything else. Namely that it has to be able to run “forever”, operate in realtime on a constant flow of sensor data, and that nobody else is taking care of your memory management.
It even explained to me that we could do input filtering by using big arrays to do simple averaging on a device with only 1kB RAM, or use a long long for a never-reset accumulator without worrying about what will happen because “it will be years before it overflows”.
AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems the whole world is going to get a lot more glitchy.
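The overflow worry is easy to demonstrate. A minimal sketch (Python standing in for C, with a mask simulating a 32-bit unsigned accumulator; the sensor value and sample count are made up): a never-reset sum eventually wraps and produces garbage, while a constant-space filter such as an exponential moving average runs indefinitely in a few bytes of state.

```python
MASK32 = 0xFFFFFFFF  # simulate a 32-bit unsigned accumulator wrapping

def overflowing_average(samples):
    """Naive never-reset accumulator: the sum silently wraps past 2^32 - 1."""
    total = count = 0
    for s in samples:
        total = (total + s) & MASK32
        count += 1
    return total // count

def ema(samples, num=1, den=8):
    """Exponential moving average: O(1) state, safe to run 'forever'."""
    avg = samples[0]
    for s in samples[1:]:
        avg += (s - avg) * num // den
    return avg

# A steady 1000-unit sensor reading, sampled long enough for the sum
# to exceed 32 bits (5e9 > 2^32):
samples = [1000] * 5_000_000
wrapped = overflowing_average(samples)  # no longer anywhere near 1000
stable = ema(samples)                   # still exactly 1000
```

The EMA needs only one word of state, which is why it (or a small fixed-size ring buffer) is the usual choice on a 1 kB part — it filters continuously without ever accumulating toward an overflow.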
This is how AI is a threat to humanity. Not because it will choose to act against us, but because people will trust what it says without question and base huge decisions on faulty information.
A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models you have to review basically all output as it can confidently introduce small errors. Often code will compile and run, but it has small errors that can cause output to drift, or the aforementioned long-run overflow type errors.
Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries.
I will only use AI to write comments and documentation blocks and to get jumping off points for algorithms I don’t keep in my head. (“Write a function to sort this array”) It’s better than stack exchange for that IMO.
I was helping someone with their programming homework. Every time Copilot suggested anything, he just blindly added it, and every time I had to ask him “and why do you need those lines? What do they do?”, and he could never answer…
Sometimes those lines made sense, other times they were completely irrelevant to the problem, but he just added the suggestions on reflex without even reading them.
I had to pull aside a developer to inform him that he “would be” violating our national security by pasting code online to an AI and that there were potentially repercussions far beyond his job.
I was afraid of AI coming for my job, so I decided to learn about it. And by learning about it, I learned its limitations, which are numerous.
Someday maybe it will be strong enough to take over an entire engineer’s job – but it’s going to be a very long time until that happens. If anything, I’ve spent more time screwing with prompts, making sure they’re perfect, to try to get better outputs. Really, where I see our jobs going is prompt engineering, DevOps, and fine-tuning.
Absolutely. AI is really good at single tasks of specific types. For example, it’s great for organizing your emails, or creating filler content for a website, or helping suggest responses for customer support people. And sure, it did an amazing job creating code for a Google spreadsheet so I could easily scrape radio websites for their competitions and win festival tickets for the seventh year in a row. But in all these things it’s incredibly one dimensional, and still needs a human to guide it. People come to my demo calls thinking that AI agents are fully possible and capable. Nope, not yet.
programmer_humor