Reviewing large PRs is hard. Breaking a large PR whose changes are all related apart into smaller PRs is also hard.
If I submit a big one, I usually leave notes in the description explaining where the “core” changes are and what they are trying to accomplish. The goal is to give the reviewers a good starting point.
I also like to unit test the shit out of my code which helps a lot. The main issue there is getting management to embrace unit tests. Unit tests often double the effort up front but save tons of time in the long run. We’re going to spend the time one way or the other. Better to do it up front when it’s “cheaper” because charging it to the tech debt credit card racks up lots of expensive interest.
I am definitely guilty of that, but I find this approach really productive. We use small bug fixes as an opportunity to improve the code quality. Bigger PRs often introduce new features and take a lot of time, and you know the other person is tired and needs to move on, so we focus on the bigger picture, requesting changes only if there is a bug or an important structural issue.
I always try to review the code anyway. There’s no guarantee that what they wrote is doing what you want it to do. Sometimes I find the person was told to do something and didn’t realize it actually needs to do Y and not just X, or vice versa.
I like to shoot for the middle ground: skim for key functions and check those, run the code locally to see if it does roughly what I think it should, and if it does, merge it into dev and see what breaks.
Small PRs get nitpicked to death since they’re almost certainly around more important code.
So you’re always behind, patching up small bits of code that don’t comply with your guidelines, while letting big changes with, by deduction, worse code quality through?
JSON doesn’t have ints, it has Numbers, which are IEEE 754 floats. If you want to precisely store the full range of a 64-bit int (anything larger than 2^53 − 1), then string is indeed the correct type.
JSON doesn’t have ints, it has Numbers, which are IEEE 754 floats.
No. Numbers in JSON have arbitrary precision. The standard only specifies that implementations may impose restrictions on the allowed values:
This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.
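The practical consequence of the exchange above can be sketched in Python. The names here are just for illustration; the point is that the JSON *text* can carry arbitrary precision, but a consumer that decodes numbers as IEEE 754 binary64 doubles (as JavaScript does) loses exactness above 2^53, which is why large 64-bit IDs are often shipped as strings:

```python
import json

# 2^53 + 1 is the first integer a binary64 double cannot represent exactly.
big = 2**53 + 1  # 9007199254740993

# As a double, it rounds to the neighbouring representable integer.
assert float(big) == float(2**53)

# Python's json module parses integer literals with arbitrary precision,
# so the JSON text itself preserves the value...
assert json.loads(str(big)) == big

# ...but a double-based consumer would see 9007199254740992 instead.
# Encoding the value as a string sidesteps that entirely:
payload = json.dumps({"id": str(big)})
assert json.loads(payload)["id"] == "9007199254740993"
```

So both comments are right in their own frame: the grammar allows arbitrary precision, while interoperability in practice is bounded by binary64.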
What’s wrong with having some years-old software? Does it do what you need? Yes. Then what? I have all I need on Debian. Why should I care about new updates? Security? Yes, we have Debian security for that. Look, y’all had the xyz backdoor package in your systems because it was new. As a Debian stable user, I didn’t have to deal with it. Did I lose something by not having the latest software? No. Well, maybe fewer crashes.
Most proprietary software also gets weekly updates. Does that make it better? No. You may prefer that.
Also, I don’t get the point about the version numbering of Debian packages. Every team uses whatever versioning it wants.
In my experience, software that updates a lot tends to break old features a lot, too.
Debian supporting free-software projects or other stuff doesn’t seem like a relevant argument. I mean, if you prefer proprietary stuff, do whatever you like with your Google/Facebook/Apple friends.
But don’t come poison the community with this bullshit.