
programmer_humor


kamen , in What could possibly go wrong

Left pad is a good example of why you shouldn’t.
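
(For context: left-pad was a tiny npm package that its author unpublished in 2016, briefly breaking thousands of builds that depended on it. Below is a from-memory sketch of roughly what the entire package contained; not the exact published source.)

```typescript
// Approximately all of left-pad: pad a string/number to a target
// length by prepending a fill character (sketch, not the original code).
function leftPad(str: string | number, len: number, ch: string = " "): string {
  let s = String(str);
  while (s.length < len) {
    s = ch + s;
  }
  return s;
}

console.log(leftPad(5, 3, "0")); // "005"
```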

Caboose12000 ,

can you elaborate

v1605 ,
johannes ,

That was a rather nice read :) thank you!

milkjug ,
@milkjug@lemmy.world avatar

Thank you for sharing this. I learn something new every day, much appreciated.

Feirdro ,

This was excellent, but it conveniently left out any discussion of the fact that npm can “un-un-publish” a programmer’s code against their wishes, and apparently without repercussions?

Fuck npm, I guess.

mexicancartel ,

Absolutely they can un-unpublish, since the programmer has given everyone the right to use their code wherever they want, with its open license. Npm can actually use the older version of the code and give it to everyone. It’s actually a good thing.

Feirdro ,

Right, the “open” part of open source.

DarkenLM ,

Thank fuck for that, cause if they didn’t, faker.js and node-ipc would have caused a lot of trouble, with the developers adding malware to a new version and later deleting the entire packages, breaking tons of projects. And those were anything but small packages.

Anonymousllama ,

All for the greater good, especially if it’s the choice between one guy’s desire to nuke their own code VS tens / hundreds of thousands of projects that depend on it.

magic_lobster_party ,

Event stream as well. TL;DR: a popular npm library got infested with Bitcoin-stealing code.

https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident

Nioxic , in Hallelujah

Dir?

SturgiesYrFase ,
@SturgiesYrFase@lemmy.ml avatar

Well that’s rude…

Nintendo ,

what did you say? say that again to my face, I dare you.

Nioxic ,

I apologize. I didn’t mean to offend anyone!

lugal ,

Mir?

nodiet ,

Nothing for me, nothing for you

lugal ,

Ah I see, too bad, but nothing to be done

friendlymessage ,

Welp

AnUnusualRelic ,
@AnUnusualRelic@lemmy.world avatar

Why are DOS commands always so verbose?

SubArcticTundra ,
@SubArcticTundra@lemmy.ml avatar

Wait till I tell you about Pause

out , (edited )

deleted_by_author

    leviosa ,
    @leviosa@programming.dev avatar

    Old habits die hard, that’s the first alias on my list in .zshrc!

    mexicancartel ,

    ^L

    RaivoKulli ,

    Just makes the command prompt climb into a hole

    thepianistfroggollum ,

    Too many letters

    tentacles9999 , in It's easier to remember the IPs of good DNSes, too.

    Honestly we should just use 4-bit IP addresses, it’s too hard for me to remember IPv4 addresses anyways. Carrier-grade NAT will take care of the rest.

    floofloof ,

    Why compromise? Use 1-bit IP addresses.

    WeirdAlex03 ,
    @WeirdAlex03@lemmy.zip avatar

    Finally, a use for my [1-bit bloom filter](https://xkcd.com/2934/)!

    Semi_Hemi_Demigod , in Defragged Zebra
    @Semi_Hemi_Demigod@lemmy.world avatar

    Pro tip: Defragmenting only works on spinning drives because it puts the data nearer to the spindle so seek times are shorter. Solid-state drives wear out faster if you defragment them, since every write involves a little bit of damage.

    vocornflakes ,

    I was about to throw hands, but then I learned something new about how SSDs store data in pre-argument research. My poor SSDs. I’ve been killing them.

    Kenny ,

    No you didn’t. No somewhat-current operating system defrags SSDs; they just run TRIM, and it does not kill them.

    Semi_Hemi_Demigod ,
    @Semi_Hemi_Demigod@lemmy.world avatar

    Most modern OSes do defragmentation on the fly and you don’t really need to do it anymore.

    Which makes me sad because I have so many memories of watching a disk defragmenter do its thing from my childhood.

    AllHailTheSheep ,

    real actually. definitely one of the most memorable progress bars. well, that and the bios update progress bar

    greybeard ,

    Here’s a little game I made because I missed it too. dbeta.com/games/webdefragger/

    mrsgreenpotato ,

    It’s just Paint behind it, isn’t it?

    greybeard ,

    I’m guessing you were making a joke, but the real answer is it is a Godot tile map.

    indepndnt ,

    That was super cool.

    greybeard ,

    Thanks. It was a silly toy, but it scratched an itch, and was good for at least one chuckle.

    Kenny ,

    I loved watching the disk defragmenter doing its job as a kid. I miss it too!

    lseif ,

    well, defragging my ssd was the only thing that let me shrink the windows partition safely when i dualbooted… tho maybe that’s just windows being funky

    Semi_Hemi_Demigod ,
    @Semi_Hemi_Demigod@lemmy.world avatar

    That kinda makes sense. Putting all the partition sectors together would probably make it easier to resize. But as standard maintenance it’s like changing the oil on an electric car.

    lseif ,

    i see

    floofloof ,

    You just don’t want to do it regularly. It was an issue for a brief time when SSDs were new, but modern operating systems are smart enough to exclude SSDs from scheduled defrags.

    RonSijm ,
    @RonSijm@programming.dev avatar

    Defragging an SSD on a modern OS just runs a TRIM command. So probably when you wanted to shrink the windows partition, there was still a bunch of garbage data on the SSD that was “marked for deletion” but didn’t fully go through the entire delete cycle of the SSD.

    So “windows being funky” was just it making you do a “defragmentation” for the purpose of trimming to prepare to partition it. But I don’t really see why they don’t just do a TRIM inside the partition process, instead of making you do it manually through defrag

    lseif ,

    i used Defraggler, after nothing else worked to allow diskmgmt to shrink it, including all the normal stuff like disabling page files, snapshots, etc. it showed me how it was reordering parts of the ssd.

    Alawami ,

    Random reads are still slower than sequential reads on an SSD. Try torrenting for a year on an SSD, then benchmark, defragment, and benchmark again; the difference will be very measurable. You may need a Linux filesystem like XFS, as I’m not sure there is a way to defrag SSDs in Windows.

    LazerFX ,

    That’s because the drive was written to its limits; the defrag runs a TRIM command that safely releases and resets empty sectors. Random reads and sequential reads /on clean drives that are regularly TRIMmed/ are within random variance of each other.

    Source: ran large scale data collection for a data centre when SSDs were relatively new to the company so focused a lot on it, plus lots of data from various sectors since.

    Alawami , (edited )

    I’m pretty sure running XFS defrag will defrag without trimming no matter the type of block device.

    Edit: yea you might actually be right. I played with my fstab too much years ago, and never thought of that until now

    LazerFX ,

    I understood that XFS automatically mounted SSDs with XFS_XFLAG_NODEFRAG set? Is this not the case?

    Alawami ,

    yea you might actually be right. I played with my fstab too much years ago, and never thought of that until now

    But does that flag affect manually running xfs_fsr?

    LazerFX ,

    According to the man(8) page, it will avoid touching any files that have the chattr +f flag set, which is XFS_XFLAG_NODEFRAG… So I think if the docs are still accurate to the code, yes.

    A lot of ifs in that assumption.
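
    (For anyone who wants to poke at this on Linux, a sketch of the relevant commands, assuming util-linux and xfsprogs are installed; the mount point and file name are illustrative:)

    ```sh
    # Issue TRIM to all mounted filesystems that support it
    # (most distros schedule this via fstrim.timer):
    sudo fstrim -av

    # Manually defragment a mounted XFS filesystem:
    sudo xfs_fsr -v /mnt/data

    # Mark a file so xfs_fsr skips it (sets the XFS no-defrag flag):
    xfs_io -c 'chattr +f' somefile
    ```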

    lud ,

    Pro tip: That tip has been obsolete for a long time now. Running the defragmentation tool on an SSD in Windows optimizes the drive (pretty much just running TRIM). It’s not possible to defragment an SSD in Windows (maybe there is a way using some registry hack, but that’s out of scope).
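
    (Concretely, from an elevated PowerShell prompt; a sketch, and the drive letter is just an example:)

    ```powershell
    # 0 means Windows sends TRIM commands to the SSD:
    fsutil behavior query DisableDeleteNotify

    # What "optimizing" an SSD actually does on modern Windows:
    Optimize-Volume -DriveLetter C -ReTrim -Verbose
    ```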

    TheKMAP ,

    Defragging is about… defragging: making the data contiguous (a continuous stream along one arc of the same radius) so it doesn’t have to jump around.

    andrew_bidlaw , in AI Suggestions
    @andrew_bidlaw@sh.itjust.works avatar

    It knows the time is right.

    pyre ,

    when the AI says the time is right it sounds ominous

    andrew_bidlaw ,
    @andrew_bidlaw@sh.itjust.works avatar

    And then it replaces datetime output with the countdown.

    Klear ,

    The time is right when there’s no time left.

    SandbagTiara2816 , in huggingface.co

    I don’t get it. Can you explain?

    MrScottyTay ,

    Go to the website and you’ll likely be the caveman

    Korne127 ,
    @Korne127@lemmy.world avatar

    Can you explain what’s supposed to be so complicated about it?

    MrScottyTay ,

    If you don’t understand much about AI models, how they work, or how to install/use them, and you can’t parse all of the specific jargon that comes with the field…

    That site is very useful, but it’s not a great starting point. It doesn’t really help you understand anything beyond diving into the deep end and regularly troubleshooting via external help forums like Stack Overflow to figure it out.

    virku ,

    It’s basically GitHub for large language models.

    andnekon ,

    not necessarily llms, just ml models

    neo2478 ,

    I also don’t get it, even more so after the two answers to your comments.

    jnk ,

    I’m starting to think that not getting it probably means we are the caveman, but how could I know, I’m just a caveman after all

    breadsmasher ,
    @breadsmasher@lemmy.world avatar

    When OP visited huggingface, they felt very out of place - like a caveman being guided through Dexter’s Lab.

    AmidFuror , in When a real user uses the app

    Thank goodness the joke came with an explanation to suck the fun out of it.

    spongebue ,

    I hadn’t heard that story before. True or not, I’m glad it was there

    sbv ,

    I always enjoy hearing about other people’s bugs. It makes my imposter syndrome recede for a few moments.

    meathorse , in Has this ever happened to you?

    My dumb arse used to do this to Win 98/ME when I was a student. “Optimising” everything and deleting anything I would never use, trying to squeeze every MB out of my limited 2GB disk space, but the damn thing was so unreliable I was constantly reinstalling Windows.

    After one reload, I finished late at night and just left it alone, forgetting to perform all my “power user customisation” until I remembered a week later, when it suddenly dawned on me that it was running fast AND stable - I hadn’t had a single crash that week. As a final test, I applied all my “optimisations” again and “oh, look! It’s crashing constantly again”. I was a slow learner, and it turns out I don’t know better than the people that built the system!

    I always think of this when I see threads about Win7 - 11 being unstable, because it just isn’t. As you dig through the thread, the OP reveals more - they’ve chopped out all sorts of system components with registry hacks and third-party tools, or blocked updates, and then bitch about Windows being garbage. Don’t get me wrong, they simultaneously make it better and worse with every release, so I sympathize with why people try chopping out Edge, Copilot etc - but just don’t.

    Disabling services and uninstalling functions the non-hacky way ‘should’ be fine (and likely reversible), but if someone wants to bare-bone their OS or be data-gathering-free, they’d be better off learning Linux.

    WormFood ,

    the biggest causes of bsods and other crashes on windows up to xp were drivers. after xp, Microsoft required drivers for windows to go through their signing and verification program, which was controversial but it did solve the problem

    modern windows rarely crashes outright but in my experience it does break in small ways over time, without the user doing anything

    in terms of disabling windows components, it’s true that this can break your system, but I would argue this is still Microsoft’s problem. there are many windows components that are deeply coupled together when they have no reason to be

    meathorse ,

    That’s right! I remember those signed drivers were also why early XP (pre-SP2) had a bad rep. Not as bad as ME, but users were swearing on the graves of dead relatives that they would never give up W98 or W2k. Without new or signed drivers, a lot of hardware struggled, but by the time SP2 rolled out, hardware vendors had mostly caught up and the OS had matured.

    Vista had similar issues (so, so many issues with Vista) with its security changes, which made life difficult for badly written/insecure software (wanting admin rights to run, or write access to system folders/reg keys). Those changes in Vista paved the way for Win7 to be so much better at launch, since most software had caught up by then.

    I think the issue with disabling components is 90% how users remove them. Pulling them out via “official” methods hasn’t ever caused me issues - DISM is really handy - particularly for permanently removing the default apps. Those deeply connected functions can be a pain!

    SpaceXplorer_8042 ,
    @SpaceXplorer_8042@lemmy.zip avatar

    I have only used DISM (I think) for chkdsk. What else can you really do with it? I don’t even know what to search tbh, so pardon me if it’s just a quick search away

    meathorse ,

    Quite a few things. I’ve mostly used it for capturing images and loading drivers and updates into images, but it can also be used to pull apps out of the image too.

    For a live Windows install, there are PowerShell commands to do this:

    …microsoft.com/…/add-or-remove-packages-offline-u…
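
    (For example, run from an elevated prompt; a sketch, and the package name is only an illustration:)

    ```powershell
    # List the provisioned (default) apps baked into the running image:
    DISM /Online /Get-ProvisionedAppxPackages

    # Remove one so it stops installing for new user profiles:
    DISM /Online /Remove-ProvisionedAppxPackage /PackageName:<full package name>

    # The PowerShell equivalent:
    Get-AppxProvisionedPackage -Online |
      Where-Object DisplayName -like '*ZuneMusic*' |
      Remove-AppxProvisionedPackage -Online
    ```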

    Kindness ,

    or blocked updates

    This in particular. Windows intentionally destabilises itself if you prevent it from updating or powering off when it realises there is an update available.

    This could be non-malicious, such as refusing to collect spyware reports from a potentially infected box when the box needs to connect to MS to function properly. Or it could be a tool to force people to “reboot to solve your issues”. Hard to tell without running afoul of the Computer Misuse Act.

    meathorse ,

    I’m not sure about this one - it’s definitely not my experience, but yours could be very different.

    The system definitely reports data back to MS, but I’ve never seen a box have issues because we denied it the ability to dial home or update. Unless the PC is online and the user is actively trying to prevent the updates installing? I’ve seen users pull the plug on a PC that had started updates, or was midway through them, hoping to stop them, and it would often make a mess of things.

    We had a small handful of XP then Win7 boxes that were completely off the grid/standalone as SCADA access points/controllers? for several years without issues.

    Likewise, we had one box where the vendor did not allow any updates despite it being networked and online. They had disabled win updates completely without our input. It ran just fine for a few years until it was picked up in a security audit. We didn’t understand why updates were disabled at the time, so we switched them back on and updated. The PC ran just fine until its eventual retirement.

    Kindness ,

    Ah, forgive me. I’m referring to the latest and most miserable versions. Windows 10 will noticeably suppress results in the search area if the machine doesn’t power off and is not updated for too long. Among other things. It takes around a week of ignoring an update.

    It’s likely much the same with 11.

    Wirlocke ,

    If you want barebones Windows I’d suggest you cough cough obtain Windows 10 LTSC.

    It’s got most of the bloatware cut out, you just have to re-enable the old-style picture viewer.

    Though when I eventually make a new PC, I’m probably just gonna use Linux Mint because I hear running Windows games/software isn’t nearly as bad nowadays, thanks Steam.

    LemmyRefugee ,

    I don’t know why I thought of that, but for younger people: Windows ME was Windows Millennium Edition.

    floofloof ,

    Windows 11 is pretty unreliable on my 3 machines. I don’t see many blue screens but the Start menu, Explorer, Task Manager, search and other basic bits frequently become unresponsive. I haven’t changed or removed anything. My Linux machines don’t do this. I think Windows 11 just isn’t that stable.

    cyborganism , (edited ) in Rebase Supremacy

    I prefer to rebase as well. But when you’re working with a team of amateurs who don’t know how to use a VCS properly and never update their branch with the parent branch, you end up with lots of conflicts.

    I find that for managing conflicts, rebase is very difficult, as you have to resolve conflicts for every commit. You can either use rerere to repeat the conflict resolution automatically, or you can squash everything. But when you’re dealing with a team of Git-illiterate developers (which is VERY often the case), you can either spend the time to educate them and still risk having problems because they don’t give a shit, or you can just do a regular merge and go on with your life.

    Those are my two cents, speaking from experience.
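
    (Concretely, the options look something like this; a sketch, and the branch names are examples:)

    ```sh
    # Option 1: let git remember each conflict resolution so you
    # only have to fix it once across repeated rebases:
    git config rerere.enabled true
    git rebase origin/main

    # Option 2: squash the branch first so conflicts surface a single time:
    git rebase -i origin/main    # mark all but the first commit as "fixup"

    # Or just merge and go on with your life:
    git merge origin/main
    ```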

    technom ,

    I agree that merge is the easier strategy with amateurs. By amateurs I mean those who cannot be bothered to learn about rebase. But what you really lose there is a nice commit history. It’s good to have, even if your primary strategy is merging. And people tend to create horrendous commit histories when they don’t know how to edit them.

    agressivelyPassive ,

    Honestly, I’m pretty sure 99.9% of git users never really bother with the git history in any way that would be hindered by merging.

    Git has a ton of powerful features, but for most projects they don’t matter at all. You want a distributed consensus, that’s it. Bothering yourself with all those advanced features and trying to learn some esoteric commands is frankly just overhead. Yes, you can solve great problems with them, but these problems almost never occur, and if they do, using the stupid tools is faster overall.

    aniki ,

    We use history and blame a lot

    chamomile ,
    @chamomile@furry.engineer avatar

    @agressivelyPassive @technom That's a self-fulfilling prophecy, IMO. Well-structured commit histories with clear descriptions can be a godsend for spelunking through old code and trying to work out why a change was made. That is the actual point, after all - the Linux kernel project, which is what git was originally built to manage, is fastidious about this. Most projects don't need that level of hygiene, but they can still benefit from taking lessons from it.

    To that end, sure, git can be arcane at the best of times and a lot of the tools aren't strictly necessary, but they're very useful for managing that history.

    zalgotext ,

    Yup, once you can use git with good hygiene, it opens up the door to add in other tools like commitizen and semantic-release, which completely automates things like version number bumps and changelog generation.
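
    (For instance, with conventional-commit style messages the tooling can derive version bumps and changelog entries mechanically; a sketch:)

    ```sh
    git commit -m "fix(parser): handle empty input"        # -> patch bump
    git commit -m "feat(api): add bulk export endpoint"    # -> minor bump
    git commit -m "feat!: drop support for v1 tokens"      # -> major bump
    ```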

    xigoi ,
    @xigoi@lemmy.sdf.org avatar

    I fucking hate auto-generated changelogs, so I consider that a downside.

    agressivelyPassive ,

    I’d still argue that the overhead is not worth it most of the time.

    Linux is one of the largest single pieces of software in existence, of course it has different needs than the standard business crap the vast majority of us develop.

    To keep your analogy: not every room is an operating room, you might have some theoretical advantages from keeping your kitchen as clean as an OR, but it’s probably not worth the hassle.

    zalgotext ,

    To keep your analogy, most people’s git histories, when using a merge-based workflow, is the equivalent of never cleaning the kitchen, ever.

    agressivelyPassive ,

    No, it’s not. And you know that.

    Seriously, ask yourself, how often did the need arise to look into old commits and if it did, wasn’t the underlying issue caused by the processes around it? I’ve been in the industry for a few years now and I can literally count on one hand how often I had to actually look at commit history for more than maybe 10 commits back. And I spend maybe 10min per year on that on average, if at all.

    I honestly don’t see a use case that would justify the overhead. It’s always just “but what if X, then you’d save hours!” But X never happens or X is caused by a fucked up process somewhere else and git is just the hammer to nail down every problem.

    zalgotext ,

    Seriously, ask yourself, how often did the need arise to look into old commits

    Literally every single day. I have a git alias that prints out the commit graph for my repositories, and by looking at that I can instantly see what tasks my coworkers are working on, what their progress is, and what their work is based on. It’s way more useful than any stand-up meeting I’ve ever attended.
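
    (An alias along those lines; the name “lg” is hypothetical:)

    ```sh
    git config --global alias.lg "log --graph --oneline --decorate --all"
    git lg   # one line per commit, drawn as a branch graph
    ```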

    I’ve been in the industry for a few years now and I can literally count on one hand how often I had to actually look at commit history for more than maybe 10 commits back.

    I’ve been in the industry for nearly 15 years, but I can say that the last 3 years have been my most productive, and I attribute a lot of that to the fact that I’m on a team that cares about git history, knows how to use it, and keeps it readable. Like other people have been saying, this is a self fulfilling prophecy - most people don’t care to keep their git history readable, so they’ve only ever seen unreadable git histories, and so they think git history is useless.

    I honestly don’t see a use case that would justify the overhead.

    What overhead? The learning curve on rebasing isn’t that much steeper than that of merging or just using git itself. Take an hour to read the git docs, watch a tutorial or two, and you’re good to go. Understand that people actually read your commit messages and take 15 extra seconds to make them actually useful. Take an extra minute before opening an MR to rebase your personal branches interactively and squash down the “fixed a typo” and “ran isort” commits into something that’s actually useful. In the long run this saves time by making your code more easily reviewable, and giving reviewers useful context around your changes.

    It’s always just “but what if X, then you’d save hours!” But X never happens or X is caused by a fucked up process somewhere else and git is just the hammer to nail down every problem.

    No, having a clean, readable git history literally saves my team hours. I haven’t had to manually write or edit a changelog in three years because we generate it automatically from our commit messages. I haven’t had to think about a version number in three years because they’re automatically calculated from our commit messages. Those are the types of things teams sink weeks into, time absolutely wasted arguing over whether this thing or that is a patch bump or a minor bump, when no one can say for sure without looking at diffs or spinning up multiple versions of the code and poking at them manually, because the git log is a tangled mess of spaghetti with meatballs made of messages like “finally fixed the thing” and “please just work dammit”. My team can tell you those things instantly just by looking at the git log. Because we care about history, and we keep it clean and useable.

    thanks_shakey_snake ,

    I gotta say, I was with you for most of this thread, but looking through old commits is definitely something that I do on a regular basis… Like not even just because of problems, but because that’s part of how I figure out what’s going on.

    The whole reason I keep my git history clean and my commit messages thoughtful is so that future-me (or future-someone-else) will have an easier time walking through it later, because that happens all the time.

    I’ll still almost always choose merge instead of rebase, but not because I don’t care about the git history-- quite the opposite, it’s really important to me in a very practical way.

    chamomile ,
    @chamomile@furry.engineer avatar

    @agressivelyPassive You should still clean your kitchen though, that's my point.

    agressivelyPassive ,

    Did I say anything otherwise?

    technom ,

    Only users who don’t know rebasing and the advantages of a crafted history make statements like this. There are several projects that depend on clean commit history. You need it for conventional-commit tools (like commitizen), pre-commit hook tools, git blame, git bisect, etc.
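
    (git bisect in particular is where a clean history pays off: it binary-searches your commits for the one that introduced a bug, which is only pleasant when every commit builds. A sketch; the tag name is an example:)

    ```sh
    git bisect start
    git bisect bad HEAD          # current commit is broken
    git bisect good v1.4.0       # this tag was known-good
    # git now checks out midpoints; test each one and run
    #   git bisect good   or   git bisect bad
    # until it names the first bad commit, then:
    git bisect reset
    ```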

    agressivelyPassive ,

    Uuuh, am I no true Scotsman?

    Counter argument: why do you keep fucking up so bad you need these tools? Only users who are bad at programming need these. Makes about as much sense as your accusation.

    You keep iterating the same arguments as the rest here, and I still adhere to my statement above: hardly anybody needs those tools. I literally never used pre-commit hooks or bisect in any semi-professional context. And I don’t know a single project that uses them. And before you counter with another “well u stoopid then” comment: the projects I’ve been working on were with pretty reputable companies and handled literally billions of euros every year. I can honestly say that pretty much everyone living in Germany has had his/her data pushed through code that I wrote.

    technom ,

    Uuuh, am I no true Scotsman?

    That’s a terrible and disingenuous take. I’m saying that you won’t understand why it’s useful till you’ve used it. Spinning that as no true Scotsman fallacy is just indicative of that ignorance.

    You keep iterating the same arguments as the rest here, and I still adhere to my statement above: hardly anybody needs those tools.

    And you keep repeating that falsehood. Isn’t that the real no true Scotsman fallacy? How do you even pretend to know that nobody needs it? You can’t talk for everyone else. Those who use it find it useful in several other ways that I and others have explained. You can’t just judge it away from your position of ignorance.

    xigoi ,
    @xigoi@lemmy.sdf.org avatar

    Why would you want to edit your commit history? When I need to look at it for some reason, I want to see what actually happened, not a fictional story.

    technom ,

    You can have both. I’ll get to that later. But first, let me explain why edited history is useful.

    Unedited histories are very chaotic and often contain errors, commits with partial features, abandoned code, reverted code, out-of-sequence code, etc. These are useful for preserving the actual progress of your own thought, but such histories are a nightmare to review. Commits should be complete (a single commit contains a full feature) and in proper order. If you’re a reviewer, you also wouldn’t want to waste time reviewing someone else’s mistakes, experiments, reverted code, etc. Self-complete commits also have another advantage - users can choose to omit an entire feature by omitting a commit.

    Now the part about having both - the unedited and carefully crafted history. Rebasing doesn’t erase the original branch. You can preserve it by creating a new branch. Or, you can recover it from reflog. I use it to preserve the original development history. Then I submit the edited/crafted history/branch upstream.
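
    (A sketch of that workflow; the branch names are examples:)

    ```sh
    # Snapshot the unedited history before crafting the clean one:
    git branch feature-raw

    # Rewrite the working branch into reviewable commits:
    git rebase -i origin/main

    # If the rewrite goes wrong, the old tip is still reachable:
    git reflog
    git reset --hard feature-raw
    ```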

    Atemu ,
    @Atemu@lemmy.ml avatar

    Because when debugging, you typically don’t care about the details of “wip”, “some more stuff”, “Merge remote-tracking branch ‘origin/master’”, “almost working”, “Merge remote-tracking branch ‘origin/master’”, “fix some tests”, etc., and would rather follow logical steps being taken in order, with descriptive messages such as “component: refactor xyz in preparation for feature”, “component: add do_foo()”, “component: implement feature using do_foo()”, etc.

    magic_lobster_party ,

    How others are keeping their branches up to date is their problem. If you use GitLab, you can set up a squash policy for merge requests. All the abominations they’ve caused in their branch will turn into one nice commit on the main branch.

    trxxruraxvr ,

    In a small team at a small company it becomes my problem pretty quickly, since I’m the only one that actually has some clue about what git does.

    cyborganism ,

    This. When they get any sort of conflicts in their pull request, it becomes MY problem because they don’t know what to do.

    zalgotext ,

    Heaven forbid my teammates read any documentation or make any attempt to understand the tooling necessary to do their job.

    That being said, I taught my dumbass git-illiterate team members a rebase workflow, with the help of the git UI in Pycharm. Haven’t had any issues with merge conflicts yet, but that might just be because they’re too scared to ask me for help any more

    expr ,

    I don’t want squashed commits. It makes git tools worse (git bisect, git cherry-pick, etc.) and I work very hard to craft a meaningful set of commits for my work and I don’t want to throw all of that away.

    But yeah, I don’t actually give a shit what they are doing on their branches. I regularly rebase onto master anyway.

    RustyShackleford ,
    @RustyShackleford@programming.dev avatar

    ~~Git-illeterate~~ Git-illiterate

    cyborganism ,

    Ah thanks.

    Mikufan , in How do we tell him ?

    Well, first you change your search engine to DuckDuckGo and use Firefox, then you change the settings so no browser history is created.

    Then you continue without anyone noticing.

    Semi-Hemi-Demigod , in Company forgets why they exist after 11-week migration to Kubernetes
    @Semi-Hemi-Demigod@kbin.social avatar

    I’ve got 20+ years of professional experience at all different levels. I can take an idea and turn it into a Docker image with fully automated CI/CD on myriad cloud platforms.

    K8s is still black magic to me.

    muntedcrocodile ,
    @muntedcrocodile@lemmy.world avatar

    It’s black magic that takes docker images, so it’s actually pretty simple once u got all ya shit dockerified

    fruitycoder ,

    Just a system that deploys, injects configs, mounts disks, and handles the networking, based on configs and scheduling.

    It CAN get more complicated, since it enables more advanced deployment types, but it can be simple.

    I run k3s on every computer of mine as a single-node cluster now, as an alternative to running podman or docker.
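
    (Getting that going is about this involved; get.k3s.io is the official install script:)

    ```sh
    # Install k3s as a single-node cluster (server and agent on one box):
    curl -sfL https://get.k3s.io | sh -

    # It bundles kubectl:
    sudo k3s kubectl get nodes
    sudo k3s kubectl run nginx --image=nginx
    ```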

    LemmyRefugee ,

    Kubernthrees?

    acockworkorange ,

    I too am puzzled on why we changed subjects.

    kapitol ,

    kubernetes kloud klan - they ride around discriminating against other types of infrastructure

    Grappling7155 ,

    K3s is a distribution of Kubernetes that bundles in a few commonly used convenient tools. It’s fairly lightweight compared to vanilla k8s, and it’s simple to set up. It’s a great choice for experimenting and learning, and it’s also production-ready when you’re ready to push it farther.

    fruitycoder ,

    K3s is a k8s distribution built to be easy and lightweight

    Semi-Hemi-Demigod ,
    @Semi-Hemi-Demigod@kbin.social avatar

    I'd love to learn it, but my biggest hurdle has been getting a cluster actually running. Could you recommend a good tutorial?

    finkrat ,

    I don’t have a tutorial to recommend, but I’m starting to play around with Minikube myself; it should skip the need for an actual cluster
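
    (A sketch, assuming minikube and kubectl are already installed:)

    ```sh
    minikube start        # boots a local single-node cluster in a VM or container
    kubectl get pods -A   # then talk to it with the regular tooling
    minikube dashboard    # optional web UI
    minikube delete       # throw the whole cluster away when done
    ```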

    fruitycoder ,

    Rancher Desktop, if you want a dead simple way to spin up a k3s cluster with a GUI. All of the Kubernetes tooling works on it too. Works on Linux, Windows, and Mac (Intel and Apple silicon).

    Rancher.academy had, at one point, been a really good resource, but I honestly just haven’t watched a tutorial for k3s/rke2 in a while, so I would be lying if I said I knew one.

    state_electrician ,

    I enjoy K8s, even though it adds a lot of things that can (and will at some point) break. But at a certain scale it becomes worth it because some things become so, so easy.

    Semi-Hemi-Demigod ,
    @Semi-Hemi-Demigod@kbin.social avatar

    I can absolutely see the benefit for really huge deployments or complex, highly-available systems. I've even sort of used it in my job working with those things. But I'm still just running commands I don't understand that some sysadmin gave me.

    AngryCommieKender ,

    Re: your username,

    You’re a Godling of Semi Trucks that have a Hemi?

    Corbin ,

    Lucky 10,000: It’s a pun. A quaver is a duration of a musical note in the UK, equivalent to a US eighth note; a semidemihemiquaver is a sixty-fourth note, used to notate e.g. certain kinds of trumpet trills.

    Semi-Hemi-Demigod ,
    @Semi-Hemi-Demigod@kbin.social avatar

    It's both of those, and a reference to Moana where the shiny crab calls Maui a "semi-demi mini-god"

    somegeek , in What’s in a name?

    I love you so much NullPointerException

    astraeus ,
    @astraeus@programming.dev avatar

    Oh no, we’ve lost NaN again!
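
    (The joke checks out: NaN is the one value that never equals itself, so naive searches really do lose it:)

    ```typescript
    console.log(NaN === NaN);          // false
    console.log([NaN].indexOf(NaN));   // -1: indexOf uses === and can't find it
    console.log([NaN].includes(NaN));  // true: includes uses SameValueZero
    console.log(Number.isNaN(NaN));    // true: the reliable check
    ```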

    Michael717 ,

    “don’t throw your child please.” “Why? What’s wrong with it?”

    backhdlp , in Oopsi Woopsi
    @backhdlp@iusearchlinux.fyi avatar

    The weirder part is the issues they don’t do that in.

    kibiz0r , in Every Family Dinner Now

    Who do they think will be using the AI?

    AI threatens to harm a lot about programming, but not the existence/necessity of programmers.

    Particularly, AI may starve the development of open source libraries. Which, ironically, will probably increase the need for employed programmers as companies accrue giant piles of shoddy in-house code that needs maintaining.

    VoterFrog ,

    Why do you think AI will starve open source?

    kibiz0r ,

    The amount of code I’ve seen copy-pasted from StackOverflow to do things like “group an array by key XYZ”, “dispatch requests in parallel with limit”, etc. when the dev should’ve known there were libs to help with these common tasks makes me think those devs will just use Copilot instead of SO, and do it way more often.
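
    (A hand-rolled sketch of the kind of snippet in question; lodash’s groupBy already does this:)

    ```typescript
    // Group array items by a derived key (what lodash.groupBy provides):
    function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
      const groups: Record<string, T[]> = {};
      for (const item of items) {
        (groups[key(item)] ??= []).push(item);
      }
      return groups;
    }

    console.log(groupBy(["apple", "avocado", "banana"], (w) => w[0]));
    // { a: [ 'apple', 'avocado' ], b: [ 'banana' ] }
    ```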

    Daxtron2 ,

    Bad devs will continue being bad devs, shocker

    VoterFrog ,

    I think that undersells most of the compelling open source libraries, though. The one-line or one-function open source libraries could be starved, I guess. But entire frameworks are open source. We're not at the point yet where AI can develop software on that scale.

    kibiz0r ,

    I agree wholeheartedly, and I think I failed to drive my point all the way home because I was typing on my phone.

    I’m not worried that libs like left-pad will disappear. My comment that many devs will copy-paste stuff for “group by key” instead of bringing in e.g. lodash was meant to illustrate that devs often fail to find FOSS implementations even when the problem has an unambiguously correct solution with no transitive dependencies.

    Frameworks are, of course, the higher-value part of FOSS. But they also require some buy-in, so it’s hard to knock devs for not using them when they could’ve, because sometimes there are completely valid reasons for going without.

    But here’s the connection: Frameworks are made of many individual features, but they have some unifying abstractions that are shared across these features. If you treat every problem the way you treat “group by key”, and just copy-paste the SO answer for “How do I cache the result of a GET?” over and over again, you may end up with a decent approximation of those individual features, but you’ll lack any unifying abstraction.

    Doing that manually, you’ll quickly find it to be so painful that you can’t help but find a framework to help you (assuming it’s not too late to stop painting yourself into a corner). With AI helping you do this? You could probably get much, much farther in your hideous hoard of ad-hoc solutions without feeling the pain that makes you seek out a framework.

    r00ty Admin ,
    r00ty avatar

    I think there will be (and there already have been) significant downsizing over the next few years, as businesses leverage AI so that the same work can be done by fewer people, paid less.

    But the job cannot go away completely yet. It needs supervision by someone that can see the bullshit it often spits out and correct it.

    But, if I'm honest, software development seems to be targeted when I think design writers should be equally scared. Well, that is if businesses work out that AI isn't just chatgpt. A GPT or other LLM could be trained on a company's specific designs and documentation, and then yes designers and technical writers could be scaled right back too.

    Developers are the target because that's what they see chatgpt doing.

    In real terms a lot of the back office jobs and skilled writing and development jobs are on the line here.

    MagicShel ,

    The work can’t be done by someone paid less. The work can be done by highly skilled, experienced developers with fewer junior resources. The real death comes 60 years later when there are no more developers because there is no viable path to becoming a senior.

    Technical writers you may be correct about, because translating text is one of the primary use cases for AI.

    r00ty Admin ,
    r00ty avatar

    Here's the thing. Pay for work isn't based on skill alone. It's scarcity of a given demographic (skill makes up just part of that).

    If the number of people overall is cut for software development worldwide, then scarcity at all levels will reduce and I reckon that will reduce pay.

    I think our pay will start to diminish.

    kibiz0r , (edited )

    My pessimistic take is that everyone in society will get recast as the “human feedback” component of whichever flavor of ML takes over their domain.

    8 hours a day of doing your domain’s equivalent of captchas.

    r00ty Admin ,
    r00ty avatar

    That's a worst case. I think, at the moment at least, GPT-type AI isn't good enough yet to be more than a tool.

    But yeah with some improvements we'll end up being quality control for automated systems.

    wewbull ,

    Who do they think will be using the AI?

    Well that’ll be junior developers, until they get hauled over the coals for producing highly repetitive code rather than refactoring out common themes.

    MajorHavoc ,

    Ah, but the AI won’t know to haul them over the coals. Utopia achieved! /s

    whoisearth ,
    @whoisearth@lemmy.ca avatar

    I can’t wait for my future coworkers who will be coding with AI without actually understanding the fundamentals of the language they’re coding in. It’s gonna get scary.

    Patches ,

    I guarantee you have coworkers right now coding without understanding the fundamentals of the language they’re coding in. Reusing code you don’t understand doesn’t change whether you stole it from Stack Overflow or from Chat-GPT9.

    whoisearth ,
    @whoisearth@lemmy.ca avatar

    The code on SO is rarely specific to what the use case is IMHO. Any code I’ve gotten from there has had to be reworked to fit into what I’m doing. Plus I can’t post some stuff on SO because of legal reasons but can on an internal ChatGPT portal.

    Trust me, it’s gonna get a lot worse.

    Matter of fact, I look forward to the security breaches of developers posting company code into ChatGPT for help lol. We already had that issue with idiots posting company code into the public GitHub.

    aaaa ,

    Imagine programming a computer without understanding the machine code that tells the CPU what to do

    NikkiDimes ,
    blindsight ,

    I feel attacked.

    j/k. I’m happy in the education sector. The code I write won’t be seen by anybody but me.

    blackbirdbiryani ,

    You don’t have to wait, they’re doing it now.

    Emma_Gold_Man , in Why pay for an OpenAI subscription?

    (Assuming US jurisdiction) Because you don’t want to be the first test case under the Computer Fraud and Abuse Act where the prosecutor argues that circumventing restrictions on a company’s AI assistant constitutes

    [i]ntentionally … exceed[ing] authorized access, and thereby … obtain[ing] information from any protected computer

    Granted, the odds are low YOU will be the test case, but that case is coming.

    sibannac ,

    If the output of the chatbot is sensitive information from the dealership there might be a case. This is just the business using chatgpt straight out of the box as a mega chatbot.

    werefreeatlast ,

    Another case is also coming where an AI automatically resolves a case and delivers a quick judgment and verdict, as well as appropriate punishment, depending on how much money you have, which side of a wall you were born on, the color or contrast of your skin, etc. etc.

    ulterno ,
    @ulterno@lemmy.kde.social avatar

    color or contrast

    Then the AI will be called contrastist.

    preludeofme ,

    Would it stick if the company just never put any security on it? Like restricting non-sales related inquiries?

    15liam20 ,

    “Write me an opening statement defending against charges filed under the Computer Fraud and Abuse Act.”
