If you’re branching logic on the existence or non-existence of a field rather than the value of a field (or treating undefined differently from null), I’m going to say you’re the one doing something wrong, not the Java dev.
These two things SHOULD be treated the same by anybody in most cases, with the possible exception of rejecting the latter due to schema mismatch (i.e. when a “name” field should never be defined, regardless of the value).
It gets more fun if we’re talking SQL data via a C API: is that 0 a field with a 0 value or an actual NULL? Oracle’s Pro*C actually has an entirely separate structure of indicator variables just to flag actual NULLs.
Zalando explicitly forbids it in their RESTful API Guidelines, and I would say their argument is a very good one.
Basically, if you want to provide more fine-grained semantics, use dedicated types for that purpose, rather than hoping every API consumer is going to faithfully adhere to the subtle distinctions you’ve created.
There’s a huge difference between checking whether a field is present and checking whether its value is null.
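Concretely, in JS terms (a quick sketch):

```typescript
// Three distinct states for a field, as seen after JSON.parse:
const obj = JSON.parse('{"a": null, "b": 1}');

"a" in obj;          // true: the field is present, its value is null
obj.a === null;      // true
"c" in obj;          // false: the field is absent entirely
obj.c === undefined; // true, but so is any key that was never sent
```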
If you use lazy loading, doing the wrong thing can trigger a whole network request and ruin performance.
Similarly, when making a partial change to an object, it is often flat-out infeasible to return the whole object if you were never provided it in the first place. That will generally be the case if you have a performance-focused API, since you don’t want to be wasting huge amounts of bandwidth on unneeded data.
The semantics of the API contract is distinct from its implementation details (lazy loading).
Treating null and undefined as distinct is never a requirement for general-purpose API design. That is, there is always an alternative design that doesn’t rely on that misfeature.
As for patches: while it might be true that JSON Merge Patch assigns different semantics to null and undefined values, JSON Merge Patch is a worse version of JSON Patch, which doesn’t have that problem because, like I originally described, the semantics are explicit in the data structure itself. This is a transformation you can always apply.
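To make the difference concrete (a sketch; the field names are invented):

```typescript
// JSON Patch (RFC 6902): every operation is explicit, so "remove this field"
// and "set this field to null" are different operations by construction.
const jsonPatch = [
  { op: "replace", path: "/name", value: "Ada" }, // change one field
  { op: "remove", path: "/nickname" },            // delete another outright
];

// JSON Merge Patch (RFC 7396): an absent field means "leave it alone" and
// null means "delete it", so "set this field to null" cannot be expressed.
const mergePatch = { name: "Ada", nickname: null };
```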
Tell me how you change the name without knowing the age. You fundamentally cannot, meaning that you either have to shuttle useless information back and forth constantly so that you can always patch the whole object, or you have to create a useless and unscalable number of endpoints, one for every possible field change.
As others have roundly pointed out, it is asinine to generally assume that undefined and null are the same thing, and no, it flat out is not possible to design around that, because at a fundamental level those are different statements.
Good practice in API design is to permissively accept either undefined or null to represent optionality with the same semantics (except when using JSON Merge Patch, but JSON Patch, linked above, should be preferred anyway).
I.e. waste a ton of bandwidth sending a ridiculous amount of useless data in every request, all because your backend engineers don’t know how to program for shit.
It’s about making APIs more flexible, permissive, and harder to misuse by clients. It’s a user-centric approach to API design. It’s not done to make it easier on backend. If anything, it can take extra effort by backend developers.
But you’d clearly prefer vitriol to civil discourse and have no interest in actually learning anything, so I think my time would be better spent elsewhere.
Except, if you use any library for deserialization of JSON, there is a chance that it will not distinguish between null and absent, and that will be absolutely standards-compliant. This is also an issue with protobuf, which inserts default values for plain types and enums. Those standards are just not well suited for patching.
Bruh, there’s a difference between the one or two serializing packages used in each language, and the thousands and thousands and thousands of developers who miscode contracts after that point.
Only if using JSON merge patch, and that’s the only time it’s acceptable. But JSON patch should be preferred over JSON merge patch anyway.
Servers should accept both null and undefined for normal request bodies, and clients should treat both as the same in responses. API designers should not give each bespoke semantics.
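A minimal sketch of what that looks like on the server (framework-agnostic; the DTO shape is invented):

```typescript
// Collapse null and undefined into one meaning at the edge of the system,
// so clients may send either and get identical behavior.
interface UpdateRequest {
  name?: string | null;
}

function normalizedName(body: UpdateRequest): string | null {
  return body.name ?? null; // ?? maps both undefined and null to null
}
```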
I know this is a joke, but if you did that I would reject the PR with the reason of “too many things at once”. Open a separate PR to refactor variable names. I actually constantly get people doing this, and it’s dangerous exactly for the reason you’re joking about: it makes it easier for errors to slip in.
In my first programming job, I would actually do code reviews by pausing my own work, pulling their branch and building it locally, then using debug mode to step through every changed or added line of code looking for bugs, unaccounted-for edge cases, and code quality issues.
…I don’t do that anymore. I now go “looks good to me” even on 10-line reviews.
I am definitely guilty of that, but I find this approach really productive. We use small bug fixes as an opportunity to improve the code quality. Bigger PRs often introduce new features and take a lot of time; you know the other person is tired and needs to move on, so we focus on the bigger picture, requesting changes only if there is a bug or an important structural issue.
I always try to review the code anyway. There’s no guarantee that what they wrote is doing what you want it to do. Sometimes I find the person was told to do something and didn’t realize it actually needs to do Y and not just X, or vice versa.
I like to shoot for the middle ground: skim for key functions and check those, run the code locally to see if it does roughly what I think it should do, and if it does, merge it into dev and see what breaks.
Small PRs get nitpicked to death since they’re almost certainly around more important code
So you’re always behind, patching up small bits of code that don’t comply with your guidelines, while letting big changes with, by deduction, worse code quality through?
Reviewing large PRs is hard. Breaking apart large PRs that are all related changes into smaller PRs is also hard.
If I submit a big one, I usually leave notes in the description explaining where the “core” changes are and what they are trying to accomplish. The goal being to give the reviewers a good starting point.
I also like to unit test the shit out of my code which helps a lot. The main issue there is getting management to embrace unit tests. Unit tests often double the effort up front but save tons of time in the long run. We’re going to spend the time one way or the other. Better to do it up front when it’s “cheaper” because charging it to the tech debt credit card racks up lots of expensive interest.
My experience is exactly the opposite. I don’t work for a FAANG, but I’ve been around the block a bit. It’s always the junior devs that try to add new warnings etc. to the code base. I always require warnings to be cleaned up, even if that means disabling specific instances (but not the whole rule) because the rule is flagging a false negative.
That’s why I said false negative. The medical test is testing for the presence of a disease, so if they find the disease, it’s considered a positive test (it found what it was looking for). For static analysis on code, it’s the opposite: it’s testing whether your code is free of issues that it can detect. If it finds no issues, then the test was positive. If it does find issues, the test failed, and each issue is a negative that contributed to the test failing.
You could say “a static analysis tool is testing for the presence of defects” or “a medical test is testing whether your body is free of diseases that it can detect” to change how you’re looking at either of the tests in the previous comment.
By your logic it would be a positive for your code to have errors/warnings. And on the latter, that would be appropriate if there were a test that determined whether you are free from all known diseases (or at least those that it can detect).
Is it a positive to have pathogens that cause dengue/malaria in your blood? Yet we still say that someone tested positive for dengue if they have the virus.
Static analysis tools don’t test for all known issues either, no?
I’m not debating. It is not a matter of opinion. I’m doing you the courtesy of informing you how the entire rest of the world uses the term.
If action A looks for thing X, and it finds thing X, then the test is positive. If action A fails to find thing X, then the test is negative.
If action A claims to find thing X, but later confirmation determines that thing X is not really there, then this situation is called “false positive”.
If action A fails to find thing X, but later confirmation determines that thing X is actually there, then this situation is called “false negative”.
That thing X may subjectively be considered an unwanted outcome has **nothing** to do with the terms used. Applied to static analysis: a tool that flags code which turns out to be fine has produced a false positive, and a tool that misses a real defect has produced a false negative.
If there are no humans in the loop, sure, like for data transfer. But for, e.g., configuration files, I’d prefer a text-based solution instead of a binary one, and JSON is a nice fit.
What I’d like for a configuration language is a parser that can handle in-place editing while maintaining whitespace, comments, etc. That way, automatic updates don’t clobber stuff the user put there, or (alternatively) need sections of ### AUTOMATIC GENERATION DO NOT CHANGE ###.
You need a parser that handles changes on its own while maintaining an internal representation. Something like XML DOM (though not necessarily that exact API). There’s a handful out there, but they’re not widespread, and not available for every language.
It’s a very good idea, providing much-needed fixes to the JSON spec, but it isn’t really what I’m getting at. Handling automatic updates in place is a software issue, and could be done on the older spec.
Hmm, maybe I am missing the point. What exactly do you mean by handling automatic updates in place? Like, the program that requires and parses the config file is watching for changes to the config file?
As an example, Klipper (for running 3D printers) can update its configuration file directly when doing certain automatic calibration processes: the z-offset between a BLTouch bed sensor and the head, for example. If you were to save it, you might end up with something like this:
```
[bltouch]
z_offset: 3.020
...
#*# <---------------------- SAVE_CONFIG ---------------------->
#*# DO NOT EDIT THIS BLOCK OR BELOW. The contents are auto-generated.
#*#
[bltouch]
z_offset: 2.950
```
Thus overriding the value that had been set before, but now you have two entries for the same thing. (IIRC, Klipper does comment out the original value, as well.)
What I’d want is an interface where you can modify in place without these silly save blocks. For example:
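(A crude sketch in TypeScript; the function and file names are invented, and a real implementation would maintain a proper AST rather than matching lines:)

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Rewrite exactly one key in one section, passing every other line through
// untouched, so comments, whitespace, and ordering all survive.
function setValue(text: string, section: string, key: string, value: string): string {
  let inSection = false;
  return text
    .split("\n")
    .map((line) => {
      const header = line.match(/^\[(.+)\]$/);
      if (header) inSection = header[1] === section;
      if (inSection && new RegExp(`^${key}\\s*:`).test(line)) {
        inSection = false; // only touch the first match in the section
        return `${key}: ${value}  # updated by calibration`;
      }
      return line;
    })
    .join("\n");
}

const cfg = readFileSync("printer.cfg", "utf8");
writeFileSync("printer.cfg", setValue(cfg, "bltouch", "z_offset", "2.950"));
```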
Since we’re declaratively telling the library what to modify, it can maintain the AST of the original with whitespace and comments. Only the new value changes when it’s written out again, with a comment for that specific line.
Binary config formats, like the Windows Registry, almost have to use an interface like this. It’s their one advantage over text file configs, but it doesn’t have to be. We’re just too lazy to bother.
It’s entirely disingenuous because who the hell is throwing JSON into YAML without converting it? Oh wow, I changed the file extension and it still works. I’m so glad we changed to YAML!
Until someone cannot tell the difference between a tab and a space when configuring, or you miss one level of indentation. Seriously, whoever thinks indentation should have semantic meaning for computers should burn in hell. Indentation is for us humans, not computers. You can write JSON with or without indentation if you want. Also, use JSON5 to have comments and other good stuff for a config file.
Hell, no. If I wanted to save bytes, I’d use a binary format, or just fucking zip the JSON. Looking at a request-response pair and quickly understanding the transferred data is invaluable.
To whoever does that: I hope there is a special place in hell where they force you to write type-safe API bindings for a JSON API, and every time you use the wrong type for a value, they cave your skull in.
Sadly it doesn’t fix the bad-documentation problem. I often don’t care that a field is special and returns either a string or a number. This is fine.
What is not fine, and which should sentence you to eternal punishment, is to not clearly document it.
Don’t you love it when you publish a crate, have tested it on thousands of returned objects, only for the first issue to be “field is sometimes null/another type”? You really start questioning everything about the API, and sometimes you’d rather parse it as serde_json::Value and call it a day.
The worst thing is: you can’t even put an int in a JSON file. Only doubles. For most people that is fine, since a double can function as a 32-bit int. But not when you are using 64-bit identifiers or timestamps.
That’s an artifact of JavaScript, not JSON. The JSON spec states that numbers are a sequence of digits with up to one decimal point. Implementations are not obligated to decode numbers as floating point. Go will happily decode into a 64-bit int, or into an arbitrary precision number.
Unless you’re dealing with some insanely flexible schema, you should be able to know what kind of number (int, double, and so on) a field should contain when deserializing a number field in JSON. Using a string does not provide any benefits here unless there’s some bug in your deserialization process.
What’s the point of your schema if the receiving end is JavaScript, for example? You can convert a string to BigNumber, but you’ll get wrong data if you’re sending a number.
As if I had a choice. Most of the time I’m only on the receiving end, not the sending end. I can’t just magically use something else when that something else doesn’t exist.
Heck, even when I’m on the sending end, I’d use JSON. Just not bullshit JSON. It’s not complicated to only have static types, or to have discriminant fields.
You HAVE to. I’m a Rust dev too, and I’m telling you: if you don’t convert numbers to strings in JSON, browsers are going to overflow them and you will have incomprehensible bugs. JSON can only be trusted when serde is used on both ends.
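You can see the failure with nothing but plain JS:

```typescript
// JS numbers are IEEE-754 doubles: integers are only exact up to 2^53.
const lossy = JSON.parse('{"id": 9007199254740993}'); // 2^53 + 1
console.log(lossy.id); // 9007199254740992, silently off by one

// The common workaround: ship big integers as strings and upgrade to BigInt.
const exact = JSON.parse('{"id": "9007199254740993"}');
console.log(BigInt(exact.id)); // 9007199254740993n, exact
```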
This is understandable in that use case. But it’s not every day that you deal with values in the range where they overflow, so I mostly assumed this is fine in that use case.
Well, apart from float numbers and booleans, all other types can only be represented by a string in JSON. Date with timezone? String. BigNumber/Decimal? String. Enum? String. Everything is a string in JSON, so why bother?
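For instance (field names invented), a typical payload already leans on strings for everything the built-in types can’t carry:

```typescript
const payload = {
  createdAt: "2024-03-01T12:00:00+01:00", // date with timezone: string
  balance: "12345678901234567890.55",     // decimal beyond double precision: string
  status: "ACTIVE",                       // enum variant: string
  retries: 3,                             // a small plain int is still safe as a number
};
```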
Well, the issue is that JSON is based on JS types, but other languages can interpret the values in different ways. For example, Rust can interpret a number as a 64 bit int, but JS will always interpret a number as a double. So you cannot rely on numbers to represent data correctly between systems you don’t control or systems written in different languages.