This prospect doesn’t bother me in the least. I’ve already been replaced 5 times in my life so far. The soul is a spook. Let my clone smother me in my sleep and deal with the IRS instead.
Makes me wonder how many times I’ve been replaced. Also makes me wonder if I just died yesterday and today I’m actually a new person. I have no evidence that yesterday happened except for a memory of it, and let’s face it, since it was a public holiday, that’s a pretty foggy memory.
I wonder about that. During the deepest part of sleep does your brain have enough activity to maintain a continuous stream of consciousness? If you go through two sleep cycles in a night does yesterday you die, and you from the first sleep cycle who only dreamed die, and you’re a new consciousness in the morning?
yeah, went down this rabbit hole recently: what if I’m the .001% that lives until <max age variable for my genome>? or what if ‘me’ is an amalgam of all the ones that die, and I get to live all those lives until the variable runs out.
Damn dude. Was each time a death? I think someone’s following me around and snuffing me out. Mandela Effects keep happening. Also I’m getting elf ears? Reality is weird.
I’m sorry, I understand those words but not in that order. Are you saying the soul is an olde timey anti-Black racial slur, or that it’s inherently scary?
Spook comes from the German “spuken”, which means to haunt. Its use in this context comes from the German philosopher Max Stirner, who is infamous for the memes where X is declared to be a spook.
Understanding what exactly spooks are is somewhat challenging, and plenty of people get the wrong understanding of what is meant by spooks. But at least in the meme way of using the word, a spook is anything you think is a fairy tale, or nonsense that you don’t care about.
Also, if this is an organizational setting, I’m extremely disappointed in your PR review process. If someone is committing vendor code to the repo, someone else should reject the pull.
Pretty sure they meant to not have review. Dropping peer review in favor of pair programming is a trendy idea these days. Heh, you might call it “pairs over peers”.

I don’t agree with it, though. Pair programming is great, but two people, heads together, can easily get on a wavelength and miss the same things. It’s always valuable to have people who have never seen the new changes take a look. Also, peer review helps keep the whole team up to date on their knowledge of the code base, a seriously underrated benefit.

But I will concede that trading peer review for pair programming is less wrong than giving up version control. Still wrong, but a lot less wrong.
Well, to share my perspective – sorry, I mean, to explain to you why you’re wrong and differing opinions are unacceptable:
I find that pairing works best for small teams, where everyone is in the loop on what everyone else is working on, and which don’t have a bottleneck in the form of a minority having much more skill or knowledge of the project.
In particular, pairing is far more efficient at exchanging information. Not only is actively talking to one another quicker at getting information across, there is also a ton of information about code that will never make it into the actual code.
While coding, you’ve tried two or three approaches, or you couldn’t write it the way you expected, or whatever. The final snippet of code looks as if you wrote it starting in the top-left and finishing bottom-right, with maybe one or two comments explaining a particularly weird workaround, but I’d wager more than 90% of the creation process is lost.
This means that if someone needs to touch your code, they will know practically nothing of how it came to be, and they will be scared of changing more about it than strictly necessary. As a result, all code that gets checked in needs to be as perfect as possible, right from the start.
Sharing all the information from the creation process by pairing empowers a team to write half-baked code, because enough people know how to finish baking it, or how to restructure it if a larger problem arises.
Pairing is fickle, though. A bad management decision can easily torpedo it. I’m currently in a project where we practically cannot pair, because it’s 4 juniors who are new to the project vs. 2 seniors who built up the project.
Not only would we need to pair in groups of three to make that work at all, it also means we need to use the seniors’ time as efficiently as possible and rather waste the juniors’ time, which is where a review process excels.
Yeah… Usually if you join a company with bad practices it’s because the people who already work there don’t want to do things properly. They tend to not react well to the new guy telling them what they’re doing wrong.
Only really feasible if you’re the boss, or you have an unreasonable amount of patience.
Usually, the boss (or people above the boss) are the ones stopping it. Engineers know what the solution is. They may still resent the new guy saying it, though, because they’ve been through this fight already and are tired.
Dude, put content warnings on this. I have trauma from shared drives and fucking Jared leaving the Important File open on his locked computer while he takes off for a week, locking everyone else out of it.
Correct me if I’m wrong, but it’s not enough to delete the files in a new commit, unless you’re OK with Git still tracking the large amount of data that was previously committed. Your git clones will be long, my friend.
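If I’ve got that right, actually scrubbing them means rewriting history with something like git-filter-repo (a separate tool, not part of core git; node_modules/ below is just an example path):

```
# rewrite history as if node_modules/ never existed; every commit hash
# from the first affected one onward changes, so coordinate with anyone
# who has a clone before force-pushing
git filter-repo --invert-paths --path node_modules/
```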
I don’t understand how we’re all using git and it’s not just some backend utility that we all use a sane wrapper for instead.
Every time you want to do anything with git it’s a weird series of arcane nonsense commands, and then someone cuts in saying “oh yeah but that will destroy x, y and z, you have to use this other arcane nonsense command that also sounds nothing like what you’re trying to do”, and you sit there having no idea why either of them even kind of accomplishes what you want.
There are tons of wrappers for git, but they all kinda suck. They either don’t let you do something the cli does, so you have to resort to the arcane magicks every now and then anyways. Or they just obfuscate things to the point where you have no idea what it’s doing, making it impossible to know how to fix things if (when) it fucks things up.
It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.
You think deleting an empty directory should be easy? CVS laughs at your effort, puny developer.
It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.
Yes, it is a complex tool that can solve complex problems, but as a typical developer I am not doing anything complex with it, and the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at, with the commands or naming I would expect.
I mean NPM is also a complex tool, but the CLI surface area of NPM is “npm install”.
So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.
the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at
What an interesting way to say “git has a steep learning curve”. Which is true: git takes time to learn and even more to master. You can get there solely by reading the man pages and online docs though, which isn’t something a lot of other complex tools can say (looking at you, kubernetes).
Also, I don’t know if a package manager really compares in complexity to git, which is not just a version control tool; it’s also a thin interface for manipulating a directed acyclic graph.
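You can even watch the graph through the porcelain (the hashes below are made up, but the object layout is real):

```
# a commit object is just a snapshot pointer plus edges to its parents
$ git cat-file -p HEAD
tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
parent 9fceb02d0ae598e95dc970b74767f19372d61af8
author ...

# and git log will happily draw the DAG in ASCII
$ git log --graph --oneline
```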
So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.
You mean: git add -A, git commit -m “xxx”, git push or git push --set-upstream origin main, etc. etc. etc. I get that there’s probably a reason for its complexity, but that doesn’t change the fact that it doesn’t just have a steep learning curve, it’s flat out remarkably user unfriendly sometimes.
git add with no arguments outputs a message telling you to specify a path.
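For the record, that message looks roughly like this (exact wording varies by git version):

```
$ git add
Nothing specified, nothing added.
hint: Maybe you wanted to say 'git add .'?
```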
Yes, but a more sensible default would be -A since that is how most developers use it most of the time.
git commit with no arguments drops you into a text editor with instructions on how to write a commit message.
Git commit with no arguments drops you into vim, less a text editor and more a cruel joke of figuring out how to exit it.
Again, I recognize that git has a steep learning curve, but you chose just about the worst possible examples to try and prove that point lol.
Git has a steep learning curve not because it’s necessary but because it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
It is great software and obviously better than most other version control systems, but it still has asinine defaults and its CLI surface is overcomplicated. When I worked at a MAANG company and had to learn their proprietary version control system, my first thought was “this is dumb, why wouldn’t you just use git like everyone else”. Then I went back to Git and realized how much easier and more sensible their system was.
No it wouldn’t. You’d have git beginners committing IDE configs and secrets left and right if -A was the default behavior.
vim, less a text editor and more a cruel joke of figuring out how to exit it.
Esc, :, q. Sure it’s a funny internet meme to say vim is impossible to quit out of, but any self-respecting software developer should know how, and if you don’t, you have google. If you think this is hard, no wonder you struggle with git.
it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
Just because you don’t like the defaults doesn’t mean they don’t make sense. It just means you don’t understand the (very good) reasons those defaults were chosen.
Git has a steep learning curve not because it’s necessary but because it chose defaults that made sense to the person programming it, not to the developer using it and interacting with it.
Git’s authors were the first users. The team that started the linux kernel project created it and used it because no other version control tool in existence at that time suited their needs. The subtle implication that you, as a user of git, know better than the authors, who were the original users, is laughable.
No it wouldn’t. You’d have git beginners committing IDE configs and secrets left and right if -A was the default behavior.
No, you wouldn’t, because no one is a git beginner; they’re a software developer beginner who needs to use git. In that scenario, you are almost always using repos that were created by someone else or by some framework with a pre-created .gitignore.
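To sketch what those generated ignore files typically contain (illustrative, not from any particular framework):

```
# typical generated .gitignore (illustrative)
node_modules/
dist/
.env
.idea/
```

With that in place, even an indiscriminate add -A won’t pick up secrets or IDE config.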
You know what else it could do? Say “hey, you’ve said add with no files selected, press enter to add all changed files”.
Esc, :, q. Sure it’s a funny internet meme to say vim is impossible to quit out of, but any self-respecting software developer should know how, and if you don’t, you have google. If you think this is hard, no wonder you struggle with git.
Dumping people into an archaic cli program that doesn’t follow the universal conventions for exiting a cli program, all for the goal of entering 150 characters of text that could be captured through the CLI with one prompt, is bad CLI design.
There is no reason to ever dump the user into an external editor unless they specifically request it, yet git does, knowing full well that that means VIM in many cases.
And no, a self-respecting software developer wouldn’t tolerate standards-breaking, user-unfriendly software and would change their default away from VIM.
Git’s authors were the first users. The team that started the linux kernel project created it and used it because no other version control tool in existence at that time suited their needs. The subtle implication that you, as a user of git, know better than the authors, who were the original users, is laughable.
Lmao, the idea that we should hero worship every decision Linus Torvalds ever made is the only thing laughable here.
I think in this case, “depth” was an inferior solution to achieve fast cloning that they could quickly implement. Partial clone (“--filter”) is the good solution that only came out recently-ish.
Lol if an employer can’t have an intelligent discussion about user friendly interface design I’m happy to not work for them.
Every interview I’ve ever been in there’s been some moment where I say ‘yeah I don’t remember that specific command, but conceptually you need to do this and that, if you want I can look up the command’ and they always say something along the lines of ‘oh no, yeah, that makes conceptual sense don’t worry about it, this isn’t a memory test’.
These things are not related. Git uses the system default editor, which is exactly what a cli program dropping you into an editor should use. If that’s Vim and you don’t like that, you need to configure your system or take it up with your distro maintainers.
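If that’s the situation you’re in, the fix is one line either way (nano here is just an example; substitute whatever editor you like):

```
# tell git specifically which editor to use
git config --global core.editor nano

# or set it for everything that respects $EDITOR
export EDITOR=nano
```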
No, it should prompt you to enter your one-sentence description in the CLI itself, and kick you out to an editor only if you provide a flag saying you like writing paragraph-long commit descriptions.
Git is complicated, but then again, it’s a tool with a lot of options. Could it be nicer and less abstract in its use? Sure!
However, if you compare what git does, and how it does it, to its competitors, then git is quite amazing. 5-10 years ago it was all svn, the dark times. A simpler tool and an actual headache to use.
What are you smoking? Shallow clones don’t modify commit hashes.
The only thing that you lose is history, but that usually isn’t a big deal.
--filter=blob:none probably also won’t help too much here since the problem with node_modules is more about millions of individual files rather than large files (although both can be annoying).
git clone --depth=1 <url> creates a shallow clone. These clones truncate the commit history to reduce the clone size. This creates some unexpected behavior issues, limiting which Git commands are possible. These clones also put undue stress on later fetches, so they are strongly discouraged for developer use. They are helpful for some build environments where the repository will be deleted after a single build.
Maybe the hashes aren’t different, but the important part is that comparisons beyond the fetched depth don’t work: git can’t know if a shallowly cloned repo has a common ancestor with some given commit outside the range, e.g. a tag.
Blobless clones don’t have that limitation. Git will download a hash+path for each file, but it won’t download the contents, so it still takes much less space and time.
If you want to skip all file data without any limitations, you can do git clone --filter=tree:0 which doesn’t even download the metadata
Yes, if you ask about a tag on a commit that you don’t have git won’t know about it. You would need to download that history. You also can’t in general say “commit A doesn’t contain commit B” as you don’t know all of the parents.
You are completely right that --depth=1 will omit some data. That is sort of the point, but it does have some downsides. Filters also omit some data, but often the data will be fetched on demand, which can be useful. (But will also cause other issues, like blame taking ridiculous amounts of time.)
Neither option is wrong, they just have different tradeoffs.
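Putting the flavors side by side (<url> being whatever repo you’re cloning):

```
# full clone: all history, all file contents
git clone <url>

# shallow clone: only the latest commit; anything needing older
# history breaks, and later fetches get more expensive
git clone --depth=1 <url>

# blobless clone: full commit/tree history, file contents fetched
# on demand (so things like git blame can crawl)
git clone --filter=blob:none <url>

# treeless clone: commit history only; trees and blobs on demand
git clone --filter=tree:0 <url>
```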
See, this is the kind of shit that bothers me with Git, and we just sort of accept it because it’s THE STANDARD. And then we tack these shitty LFS solutions on the side because it doesn’t really work.
What was perforce’s solution to this? If you delete a file in a new revision, it still kept the old data around, right? Otherwise there’d be no way to rollback.
Yes but Perforce is a (broadly) centralised system, so you don’t end up with the whole history on your local computer. Yes, that then has some challenges (local branches etc, which Perforce mitigates with Streams) and local development (which is mitigated in other ways).
For how most teams work, I’d choose Perforce any day. Git is specialised towards very large, often part time, hyper-distributed development (AKA Linux development), but the reality is that most teams do work with a main branch in a central location.
The joke is that there are some people who think that by uploading themselves into a machine “to live forever,” their consciousness will also be transferred, like when you travel by bus from one city to another. In reality, you “upload yourself,” but that yourself is not you, but a copy of you. So, once the copy is done, you will still be in your original body, and the copy will “think” it is you, but it’s not you. It’s a copy of you!

So, you continue to live in your body until you die, and, well, for you - that’s it. You’re dead. You’re not living. You’re finished. Everything is black. Void. Null. Done - unless you believe in the afterlife, so you’ll be in heaven, hell, purgatory or whatever, but the point is, you’re no longer on Earth “living forever.” That’s just some other entity who thinks it is you, but it’s not you (again, because you’re dead).
This is represented by the parameters being passed by value (a copy) instead of by reference (same data) in the poster’s image.
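In code terms, the distinction looks something like this (a minimal C sketch, not the actual code from the image):

```c
#include <stdio.h>
#include <string.h>

struct Mind { char memories[64]; };

/* pass by value: the function receives a COPY of the caller's Mind */
void upload_by_value(struct Mind copy) {
    strcpy(copy.memories, "I live in the machine now");
    /* only the local copy changed; the caller never notices */
}

/* pass by reference: the function works on the SAME Mind */
void upload_by_reference(struct Mind *original) {
    strcpy(original->memories, "I live in the machine now");
}

int main(void) {
    struct Mind me = { "meat-based memories" };

    upload_by_value(me);
    printf("%s\n", me.memories);   /* still "meat-based memories" */

    upload_by_reference(&me);
    printf("%s\n", me.memories);   /* now "I live in the machine now" */
    return 0;
}
```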
It wouldn’t be you, it would just be another person with the same memories that you had up until the point the copy was made.
When you transfer a file, for example, all you are really doing is sending a message telling the other machine what bits the file is made up of, and then that other machine creates a file that is just like the original - a copy, while the original still remains on the first machine. Nothing is ever actually transferred.
If we apply this logic to consciousness, then to “transfer” your brain to a machine you will have to make a copy, which exists simultaneously with the original you. At that point in time, there will be two different instances of “you”; and in fact, from that point forward, the two instances will begin to create different memories and experience different things, thereby becoming two different identities.
Kil’n People by David Brin - it’s a futuristic Murder Mystery Novel about a society where people copy their consciousnesses to temporary clay clones to do mundane tasks for them. Got some really interesting discussions about what constitutes personhood!