That solution is the worst. Ctrl-Shift-C does a shitload of different things in different programs, and in browsers it does different things per page.
Ctrl-Ins, Shift-Ins, Shift-Del for the win, but then some programs simply refuse to support those.
I have like 4 different copy-paste shortcuts because of this and it sucks.
I use the standard Linux dual clipboard (Ctrl-Ins/Shift-Ins, plus just select and middle-click), but most extra clipboard managers I’ve seen require a lot of extra clicking to get the work done. I want something simple, stupid fast.
I’m running Windows as my daily driver, but I’ve got Ditto and it works great. I have like 3 clipboards set up and could set up more; each just needs a different hotkey combination. It’s really simple.
Yes. Memory and storage were at a very high premium until the 1990s, and when C was first being developed, it wasn’t uncommon for computers to output to printers (that’s why print() and co. are named what they are), so every character came at a cost. In the latter case, you were literally paying in ink and paper by the character. Both contributed to the short-name convention that we’re still stuck with in C today.
Thanks for the insight! I think this kind of convention, one that once made some sense, is now exclusively harmful, but is still followed meticulously, is what’s usually called “tradition”, and it’s one of the high-speed engines that let humanity drive towards extinction.
I agree, and these conventions are being followed less over time. Since the 1990s, the Windows world, Objective-C, and C++ have been migrating away (to mixed results), and even most embedded projects have too. The main problem is that the standard library is already like that, and one of C’s biggest selling points is that you can still use source written >40 years ago and interact with it. So you can’t just change that; at that point, just use Go or something. I also want to say: shoutout to GNU for being so obstinate about changing nothing, except the parts of style they manage to make worse. Gotta be one of my top 5 ‘why can’t you just be good leaders, GNU?’ moments.
IIRC older DOS versions were also limited to 8.3 filenames, so even filenames had a max limit of 8 characters + 3 for the extension. Maybe it was a limitation of the file system; I can’t quite remember.
It was both, at different points. DOS internally added support for longer file names at some point, and a later version of the filesystem then also started supporting them. I think that on DOS and Windows (iirc even today), they never fully solved it, and paths on Windows and NTFS can only be 256 characters long in total or something (I don’t remember what the exact limit was/is).
I’ve heard arguments that back in ye old days each row only had 80 characters and variable names were shortened so you didn’t have to scroll the page back and forth
I’ve already felt like I should choose shorter names in a (shitty) project where the customer asked us to use an auto-formatter and a max line-width of 120 characters.
Because ultimately, I choose expressive variable names for readability. But an auto-formatter gladly fucks up your readability by breaking your line at some random-ass point, whenever your line exceeds the limit.
And so you start negotiating whether you really need certain information in a variable name for the price of badly broken lines.
Yeah, I meant it as an example, where I was still granted relatively luxurious conditions, but even those already caused me to compromise on variable names.
I’d say, 95% of my lines of code do fit into 120 characters easily. It’s those 5% that pained me.
They did, with core you could be paying for many dollars per bit of memory. They also often used teletypes, where you would pay in ink and time for every character.
That’s a super risky way to do it. It might stop giving you errors because you finally got the indentation right, or because you got the indentation “right” but not how you meant to organize the objects.
Ugh, there’s some parts of YAML I love, but ultimately it’s a terrible format. It’s just too easy to confuse people. At least it has comments though. It’s so dumb that JSON doesn’t officially have comments. I’ve often parsed “JSON” as YAML entirely for comments, without using a single other YAML feature.
YAML also supports not quoting your strings. Seems great at first, but it gets weird if you want a string that looks like a different type. IIRC, there’s even a major version difference in the handling of this case! I can’t remember the details, but I once had a bug happen because of this.
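For example, with the third-party PyYAML library (which follows YAML 1.1 scalar rules; YAML 1.2 dropped the no/yes/on/off booleans, which might be the version difference in question):

```python
import yaml  # third-party: pip install pyyaml

print(yaml.safe_load("country: no"))    # {'country': False} -- the "Norway problem"
print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}  -- quoting keeps it a string
print(yaml.safe_load("version: 1.20"))  # {'version': 1.2}   -- silently a float
```

Quoting the value, or using a YAML 1.2 parser, avoids the surprise.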
Performance-wise, both YAML and JSON suck. They’re fine for a config file that you just read on startup, but if you’re doing a ton of processing, the performance hit quickly shows. Binary formats work far better (for a generic one, protobuf has good tooling and library support while being blazing fast).
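As a rough, unscientific sketch of the text-format overhead, here is a stdlib-only Python comparison; pickle stands in for a generic binary format, since protobuf needs generated classes:

```python
# Rough sketch: compare round-trip cost of a text format vs. a binary one.
import json
import pickle
import timeit

data = {"values": list(range(10_000))}

t_json = timeit.timeit(lambda: json.loads(json.dumps(data)), number=100)
t_bin = timeit.timeit(lambda: pickle.loads(pickle.dumps(data)), number=100)
print(f"json round-trip:   {t_json:.3f}s")
print(f"pickle round-trip: {t_bin:.3f}s")
```

The exact numbers depend on the data shape, but the gap grows quickly with document size.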
JSON5 does support comments. Alternatively, YAML is a superset of JSON: any valid JSON is also valid YAML, but YAML also supports comments. So you can write JSON with comments and use a YAML parser on it instead of a standard JSON parser.
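For example (a sketch using the third-party PyYAML library; note the comments have to use YAML’s # syntax, not //):

```python
import yaml  # third-party: pip install pyyaml

# A JSON document with YAML-style '#' comments ('//' would not parse)
doc = """
{
  "retries": 3,   # how many times to retry
  "timeout": 30   # seconds
}
"""
print(yaml.safe_load(doc))  # {'retries': 3, 'timeout': 30}
```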
It’s so dumb that JSON doesn’t officially have comments.
So much this.
Used to work at a company where I sometimes had to manually edit the configuration of devices, which was written and read in JSON. Super inconvenient if you have to document all changes externally. As a “hack” I would sometimes add extra keys to store strings (the comments). But that’s super dicey, as you don’t know if it somehow breaks the parsing. You’re also not guaranteed the order of keys, so if the configuration gets read, edited, and rewritten, your comment might no longer be above/below the change you made.
Always found it baffling that such a basic feature is missing from a spec that is supposed to cover a broad range of use cases.
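The “comment key” hack described above can look something like this (the key name `_comment` and the filtering step are made up for illustration):

```python
import json

# Config carrying a human note in a made-up "_comment" key
raw = """
{
  "_comment": "timeout raised because the device is slow to respond",
  "timeout": 60
}
"""
# Strip the pseudo-comments before handing the config to the application
config = {k: v for k, v in json.loads(raw).items() if k != "_comment"}
print(config)  # {'timeout': 60}
```

This only works if the consuming software tolerates unknown keys, which is exactly the dicey part mentioned above.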
Before I do anything "risky" with forms I copy the text AND paste it somewhere else to confirm I really copied it. Only then do I take the next action, and still I get burned all the time by crap like this one way or another.
I usually just start from typing it up in emacs, then copy paste it to the fussy little form. Anything over six words, it probably saves me time, even if nothing was going to go wrong. And then… Just as you said.
So what language would you choose then? Java, PHP, JavaScript? None of the big languages were perfect from day one, and it doesn’t really matter, since day one is over already.
I personally never rebase. It always seems to have some problem. I’m sure there’s a place and time for rebasing, but I’ve never seen it in action, I guess.
If your cherry-pick doesn't run into conflicts why would your merge? You don't need to merge to master until you're done but you should merge from master to your feature branch regularly to keep it updated.
(I’m also a fan of rebasing; but I also like to land commits that perform a logical and separable chunk of work, because I like history to have decent narrative flow.)
That is absolutely not what rebasing does. Rebasing rewrites the commit history, cherry picking commits then doing a normal merge does not rewrite any history.
I’m sorry but that’s incorrect. “Rewriting the commit history” is not possible in git, since commits are immutable. What rebase actually does is reapply each commit between upstream and head on top of upstream, and then reset the current branch to the last commit applied (this is the default behavior, assuming no interactive rebase or other advanced uses). But don’t take my word for it, just read the manual. git-scm.com/docs/git-rebase
“Reapply” is rewriting it on the other branch. The branch you are rebasing onto now has one or more commits that do not represent real history. Only the very last commit on the branch is actually what the user rebasing has on their computer.
My biggest issue with GitHub is that it always squashes and merges. It’s really annoying, as it not only takes away from the commit history, but also puts the fork out of sync with the main branch, and I’ll often realize this after having implemented another feature, forcing me to cherry-pick just to fix it. Luckily LazyGit makes this process pretty painless, but still.
Seriously people, use FF-merge where you can.
Then again, if my feature branch has simply fallen behind upstream, I usually pull and rebase. If you’ve got good commits, it’s a really simple process and saves me a lot of future headaches.
There are obviously places not to use rebase (like when multiple people are working on a branch), but I consider it good practice to always rebase before merging. This way, we can always just FF-merge and avoid screwing with the Git history. We do this at my company and honestly, as long as you follow good practices, it never really gets too out of hand.
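A minimal sketch of that rebase-then-FF-merge flow, played out in a throwaway repo (branch names, file names, and identity are all made up; assumes git >= 2.28 for `init -b`):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo a > a.txt; git add a.txt; git commit -q -m "initial commit"
git checkout -q -b feature
echo b > b.txt; git add b.txt; git commit -q -m "add feature"
# meanwhile, main moves on
git checkout -q main
echo c > c.txt; git add c.txt; git commit -q -m "unrelated work on main"
# bring the feature branch up to date, then fast-forward merge
git checkout -q feature
git rebase -q main
git checkout -q main
git merge -q --ff-only feature
git rev-list --merges --count HEAD   # 0: the history stayed linear
```

Because the feature branch was rebased first, `--ff-only` always succeeds and no merge commit is ever created.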
Sounds like I just gotta get better with rebasing. But generally I do my merges clean from local changes. I’ll commit and push, then merge in, push. Then keep working. Not too hard to track but I’ve found it’s the diff at MR time that people really pay attention to. So individual commits haven’t been too crucial.
Yeah, I am. However GitHub, being the biggest Git hosting provider and all that, makes you use merge commits. FF-merges must be done manually from the command line. While this definitely isn’t a problem for me, many people out there just don’t care and merge without a second thought (which, as I said in my comment, results in having to create a new branch and cherry picking the commits onto there).
Always merge when you’re not sure. Rebasing rewrites your commit history, and merging with the squash flag discards history. In either case, you will not have a real log of what happened during development.
Why do you want that? Because it allows you to go back in time and search. For example, you could be looking for the exact commit that created a specific issue using git bisect. Rebasing all the commits in a feature branch makes it impossible to be sure they will even work, since they represent snapshots that never existed.
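A toy `git bisect run` in a throwaway repo, to illustrate the kind of history search meant here (everything is made up; the “bug” is simply the file’s value reaching 4):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
for i in 1 2 3 4 5; do
  echo "$i" > n.txt
  git add n.txt
  git commit -q -m "set n to $i"
done
# HEAD is bad, the root commit is known good; bisect the rest automatically
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run sh -c 'test "$(cat n.txt)" -lt 4'
# the culprit is left in refs/bisect/bad
git show -s --format=%s refs/bisect/bad   # prints: set n to 4
```

This only gives trustworthy answers if every commit it checks out actually builds and runs, which is the point being made above.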
I’ll never understand why people suggest you should default to rebasing. When prompted about why, it’s usually some story about how it went wrong and it was just easier to do it the wrong way.
I’m not saying never squash or rebase. It depends on the situation but if you had to pick a default, it should be to simply merge.
I try to structure my commits in a way that minimizes their blast radius, which usually means trying to reduce the number of files I touch per commit.
For example, my commit history would look like this:
Add new method to service class
Use new service class method in worker
And then as I continue working, all changes get git commit --fixuped to one of those two commits’ hashes, depending on where they occur.
And when it’s time to rebase in full, I can do a git rebase master --interactive --autosquash.
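Played out in a throwaway repo (file names and identity are made up; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list so no editor pops up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > service.txt; git add service.txt; git commit -q -m "initial commit"
git checkout -q -b feature
echo method > service.txt; git add service.txt
git commit -q -m "Add new method to service class"
target=$(git rev-parse HEAD)           # hash to --fixup later
echo worker > worker.txt; git add worker.txt
git commit -q -m "Use new service class method in worker"
# a later tweak that logically belongs to the first feature commit:
echo tweak >> service.txt; git add service.txt
git commit -q --fixup "$target"        # message becomes "fixup! Add new method..."
# autosquash folds the fixup commit into its target
GIT_SEQUENCE_EDITOR=true git rebase -q --interactive --autosquash main
git log --oneline main..HEAD           # two commits, fixup absorbed
```

The result is the same two-commit narrative as before, with the late tweak invisibly folded into the commit it belongs to.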
I’ve always merged. Rebase simplifies the history graph, but realistically I can’t think of a time where that has been important to me, or any of the teams I’ve worked with.
Maybe on some projects with a huge number of concurrent branches it becomes more important, probably less so for smaller teams.
It would be kinda dumb to force everyone to keep casting back to a double, no? If the output were positive, should it have returned an unsigned integer as well?
I think one of the main reasons to use floor/ceiling is to predictably cast a double to an int. This type signature kind of defeats that important purpose.
I don’t know the historical context of Java, but possibly at that time people saw types more as a burden than as a way to guarantee correctness? (Which is kind of still the case for many programmers, unfortunately.)
You wouldn’t need floor/ceil for that. Casting a double to an int is already predictable, as the Java language spec explicitly says how to do it, so any JVM will do it the exact same way.
The floor/ceil functions are simply primitive math operations and they are meant to be used when doing floating point math.
All math functions return the same type as their input parameters, which makes sense. The only exception are those that are explicitly meant for converting between types.
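A tiny demo of the distinction (class name made up): the plain `(int)` cast is the JLS-defined narrowing conversion and truncates toward zero, while floor/ceil stay in double so you can keep doing floating-point math:

```java
public class FloorCastDemo {
    public static void main(String[] args) {
        double x = -2.5;
        System.out.println((int) x);             // -2   : cast truncates toward zero
        System.out.println(Math.floor(x));       // -3.0 : still a double
        System.out.println((int) Math.floor(x)); // -3   : floor, then convert
        System.out.println(Math.ceil(x));        // -2.0
    }
}
```

For negative inputs, cast-only and floor-then-cast disagree, which is exactly why the two operations are kept separate.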
I work in our service department myself (not as a support tech though), but obviously, all tickets are supposed to go through 1st level. I don’t wanna be the dick skipping the queue, so I went through it the one time I had an issue.
There’s a unique feeling of satisfaction to submitting a ticket with basically all the 1st level troubleshooting in the notes, allowing the tech to immediately escalate it to a 2nd level team. One quick call, one check I didn’t know about, already prepared the escalation notes while it ran. Never have I heard our support sound so cheerful.
Still riding the high of RMAing my Index. Included all the steps I did and the reply was essentially, “Thanks for troubleshooting, confirm your address and we’ll ship your replacement.”
My favorite little story was from working short-term at a company. Had some issues, did my normal troubleshooting steps and Google searches, identified what I felt was the issue, and knew I wouldn’t have enough access to fix it. Reached out and got a response: “Blah blah blaaah schedule blah blah Remote-In.”
Later on he sends me a message and remotes into my computer. I take control quick, open up Notepad, and type out “Hi!”
To this day I swear that little show earned me more difficult fake phishing attempts. Which I mention because he specifically told me one day he had experience in the information security sector. Lo and behold!
Recently, I decided to install arch linux on an old laptop my sibling gave to me. I’m not new to Linux, I’ve been running a debian server for a year now and I have tried several VMs with different systems. But this was my first time installing arch without a script, and on bare metal.
Installing arch itself wasn’t that much of an issue, but there was a bigger problem: the PC didn’t recognize the pendrive for boot in UEFI mode. It seemed to work in legacy boot mode, but I didn’t want to use that. I made sure to deactivate Secure Boot and all that jazz, and sure enough, I got UEFI boot working.
I install arch, works fine, I reboot. Oops! I didn’t install dhcpcd and I don’t know how to use network manager! No internet, great!
In my infinite wisdom, instead of trying to get NM to work, I decided to chroot back into the system and install dhcpcd. But to my surprise… the boot menu didn’t recognize the USB again. I tried switching between UEFI and legacy boot modes in the BIOS and trying again; after all, it appeared last time after changing that, right?
“Oh it doesn’t appear… Wait, what’s this? No boot partition found? Oh crap…”
Turns out, by changing the setting in the BIOS I probably cleared the NVRAM, and with it the boot entries or whatever they’re called. I had deleted GRUB.
Alas, as if to repent for my sins, God gave me a nugget of inspiration. I swap the USB drive from the 3.0 port to one of the 2.0 ports on the other side and… It works, first try. The 3.0 port was just old and the connection bad. And I just deleted GRUB for no reason.
Usually, I would’ve installed everything from scratch again, but with newfound confidence, I managed to chroot into the system and regenerate the boot table or whatever (and install dhcpcd). And it worked! I had a working, bootable system, and an internet connection to download more packages.
I don’t know what the moral of the story is I just wanted to share it :)
I like to imagine an IT person telling someone that story to see whether they understand it or get a stroke, as a way to check if they were telling the truth about being good with computers and having tried everything, or something.
Man, Nvidia users are going to be stoked when they get explicit sync in their desktop environments in two years. 😂 There have been so many small improvements in the Nvidia drivers up until that point; I hope they actually update the Nvidia drivers on Debian. I understand some of those improvements aren’t going to work because of the kernel version and the desktop versions.
I’m using the MX Linux “AHS” version; it comes with KDE via their “AHS” repos, which support the latest hardware and graphics cards. You could also check the non-AHS version; there might be a meta-package for KDE Plasma, and that’s it…
programmer_humor