To whoever does that, I hope there is a special place in hell where they force you to write type-safe API bindings for a JSON API, and every time you use the wrong type for a value, they cave your skull in.
Sadly it doesn’t fix the bad-documentation problem. I often don’t care that a field is special and may contain either a string or a number. That is fine.
What is not fine, and which should sentence you to eternal punishment, is to not clearly document it.
Don’t you love it when you publish a crate, have tested it on thousands of returned objects, only for the first issue to be “field is sometimes null/another type”? You really start questioning everything about the API, and sometimes you’d rather parse it as a serde_json::Value and call it a day.
The worst thing is: you can’t even put an int in a JSON file. Only doubles. For most people that is fine, since a double can represent a 32-bit int exactly. But not when you are using 64-bit identifiers or timestamps.
That’s an artifact of JavaScript, not JSON. The JSON spec defines a number as an optional sign, digits, an optional fraction, and an optional exponent, with no precision limit. Implementations are not obligated to decode numbers as floating point. Go will happily decode into a 64-bit int, or into an arbitrary-precision number.
Unless you’re dealing with some insanely flexible schema, you should be able to know what kind of number (int, double, and so on) a field should contain when deserializing a number field in JSON. Using a string does not provide any benefits here unless there’s some big in your deserialization process.
What’s the point of your schema if the receiving end is JavaScript, for example? You can convert a string to BigNumber, but you’ll get wrong data if you’re sending a number.
I’m not following your point so I think I might be misunderstanding it. If the types of numbers you want to express are literally incapable of being expressed using JSON numbers then yes, you should absolutely use string (or maybe even an object of multiple fields).
I am not sure what the example could be; my point was that the spec and the RFC are very abstract and never mention any limitations on the number content. Of course the implementations in each language will be more limited than that, and if the limitations differ, it creates a dissimilar experience for the user, like this: “Why does JSON.parse corrupt large numbers and how to solve this”
This is what I was getting at here programming.dev/comment/10849419 (although I had a typo and said big instead of bug). The problem is with the parser in those circumstances, not the serialization format or language.
I disagree a bit, in that the schema often doesn’t specify limits and operates in the JSON standard’s terms: it will say that you should get/send a number, but will not usually say at what point it will break.
This is the opposite of what the C language does, being so specific that it is not even Turing complete (in a theoretical sense; in practice it is).
Then the problem is the schema being underspecified. Take the classic pet store example: it says that the id is int64. petstore3.swagger.io/#/store/placeOrder
If some API is so underspecified that it just says “number” then I’d say the schema is wrong. If your JSON parser has no way of parsing numbers as arbitrary-precision number types (like BigDecimal in Java) then that’s a problem with your parser.
I don’t think the truly truly extreme edge case of things like C not technically being able to simulate a truly infinite tape in a Turing machine is the sort of thing we need to worry about. I’m sure if the JSON object you’re parsing is some astronomically large series of nested objects that specifications might begin to fall apart too (things like the maximum amount of memory any specific processor can have being a finite amount), but that doesn’t mean the format is wrong.
And simply choosing to “use string instead” won’t solve any of these crazy hypotheticals.
As if I had a choice. Most of the time I’m only on the receiving end, not the sending end. I can’t just magically use something else when that something else doesn’t exist.
Heck, even when I’m on the sending end, I’d use JSON. Just not bullshit ones. It’s not complicated to only have static types, or having discriminant fields
You HAVE to. I am a Rust dev too and I’m telling you: if you don’t convert numbers to strings in JSON, browsers are going to overflow them and you will get incomprehensible bugs. JSON can only be trusted when serde is used on both ends.
This is understandable in that use case. But it’s not everyday that you deal with values in the range of overflows. So I mostly assumed this is fine in that use case.
Well, apart from float numbers and booleans, all other types can only be represented by a string in JSON. Date with timezone? String. BigNumber/Decimal? String. Enum? String. Everything is a string in JSON, so why bother?
Well, the issue is that JSON is based on JS types, but other languages can interpret the values in different ways. For example, Rust can interpret a number as a 64 bit int, but JS will always interpret a number as a double. So you cannot rely on numbers to represent data correctly between systems you don’t control or systems written in different languages.
No problem with strings in JSON, until some smart developer you get JSON from decides to use string and number interchangeably, and maybe a boolean (but only false) to show that the value is not set, and of course null for a missing value that was supposed to be optional all along (but go figure).
What will probably happen is Trump will say a hundred stupid things no one bats an eye at. Biden will screw up one soundbite and conservative media will twist and beat that dead horse until it gets reincarnated.
Yes. They might follow you, but that’s mostly out of curiosity and the fact that you’re tall enough to be their leader. Sometimes they might even run at you, but that’s mostly just to catch up and/or get closer - They’re not charging at you. Stop, turn around, and T-pose, and they’ll stop as well, waiting to see what you’re up to.
Cows alone are pretty chill and playful. Think of them like huge dogs, but without the instinct for hunting. If there are young ones with them you wanna give them some extra space for obvious reasons.
The evidence isn’t even that strong; there just aren’t that many people willing to risk becoming a pariah to dispute it.
If you are a Christian, there is no doubt Jesus existed. Any oblique reference to a rabbi who was persecuted hundreds of years ago is considered evidence that Jesus existed. But no contemporaneous documentation exists.
If you’re not a Christian, debunking all of those vague references that might be proof of a Jewish leader named Jesus just isn’t particularly important, won’t persuade anyone who believes Jesus was(is) God, and will paint a target on your back for terrorists.
Wait… you mean to tell me there’s not a collective of atheist Wikipedia writers that have dedicated their lives to the absence of religion and citing themselves on refuting evidence on Wikipedia?!?
Wouldn’t it be weird if every Wikipedia article on the historical validity of Jesus was written by Christian scholars who have dedicated their lives to their religion? It would be wild if they were just citing themselves in these Wiki articles in order to sell some books, wouldn’t it?
It’s weird how many people in this thread are vaguely debating the validity of the historical research into this question when one person has posted a link to a well cited article on this very very heavily studied subject.
I don’t feel compelled to argue an interpretation. The facts are well documented and their interpretations by experts available. What anyone chooses to do with these are of no real concern to me.
Yeah there are plenty of historians who have done good work studying this and the academia is mostly settled. Not to say there’s no controversy, but there’s definitely an orthodox opinion.
I don’t feel compelled to argue an interpretation. The facts are well documented and their interpretations by experts available. What anyone chooses to do with these are of no real concern to me.
but then
It’s weird how many people in this thread are vaguely debating the validity of the historical research into this question when one person has posted a link to a well cited article on this very very heavily studied subject.
Well-cited articles aren’t proof of the existence of a man. Is Spider-Man real if enough people cite the comics? A group of influential people could gather and make their own circle of these myths and present it as fact. And it isn’t fucking new.
Religions and all their influence could force a lot of heavily studied subjects to be skewed for their benefit. Hell, there were studies treated as standard that made sugar and alcohol out to be heavily beneficial for human beings. And we’re talking about a person.
In my experience, when it comes to debating the validity of religion, people tend to get far more emotional than other topics. People who are normally level-headed and quite logical tend to completely lose their ability to think rationally. And I mean both the people who argue for religion and against it.
I shouldn’t bother responding to this, but I have to point out that this weird assumption that scholars of Christianity are all Christian partisans seems pretty similar to people who say that climatologists are all biased in favor of a global warming hoax.
You don’t think anyone goes into studying a field to challenge the orthodoxy? That’s the fastest way to get famous. Even if the rest of your field hates you, you can make an incredibly lucrative career out of being “the outsider”. I literally linked to a collection of experts who agree with you.
If you don’t believe the experts, I guess it’s fine. But it’s weird when people use expertise on a subject as proof of bias to discredit expertise. It’s just such a silly thing to do.
I think it’s weird to assume the wiki link that you posted is in support of the “Christ Myth Theory” (as they call it).
Read the contents of the wiki link you sent and check all of the citations, you’ll see that the Christian Scholars that contributed to writing the article aim to dismiss the theory by citing their own books.
There were a lot of people that shared that name, and a lot of people were crucified at that time.
That implies each source says: “A man called Jesus was crucified”. The article you provided (if you read it) should have told you otherwise.
Flavius Josephus: Antiquities of the Jews, year 93-94: “About this time there lived Jesus, a wise man, if indeed one ought to call him a man. For he was one who performed surprising deeds and was a teacher of such people as accept the truth gladly. He won over many Jews and many of the Greeks. He was the Christ. And when, upon the accusation of the principal men among us, Pilate had condemned him to a cross, those who had first come to love him did not cease. He appeared to them spending a third day restored to life, for the prophets of God had foretold these things and a thousand other marvels about him. And the tribe of the Christians, so called after him, has still to this day not disappeared.”
Tacitus’s Annals, year 117: Christus, from whom the name had its origin, suffered the extreme penalty during the reign of Tiberius at the hands of one of our procurators, Pontius Pilatus
I didn’t provide any article. I read the one you linked.
In this most recent response, you are citing sources from 93 and 117. Those years are notably (at minimum) 60 years after the supposed resurrection, and as such are not firsthand accounts.
There very likely was someone named Jesus, because there were many people with that name. There was very likely someone named Jesus who was crucified, because many people were crucified. There’s zero evidence or recorded documentation that a resurrection ever happened. That’s the big one.
You suck ass at reading. The title of this post is asking about “Jesus Christ,” which we all know to mean the son of God and the guy that resurrected after 3 days.
The title of this post is asking about “Jesus Christ,” which we all know to mean the son of God and the guy that resurrected after 3 days.
lol no… this thread is not talking about anything like that hahaha. Read it.
Obviously people don’t come back from the dead or transform into cheddar cheese; we don’t need historical research to tell us that.
His given name was יֵשׁוּעַ or Yeshua, which is Jesus in one speech-type, عيسى (ʿIsà) in another, as well as a lot of other variants.
‘Christus’ in Latin seems to refer to the same person; Tacitus wrote “called Christians by the populace. Christus, from whom the name had its origin, suffered the extreme penalty during the reign of Tiberius at the hands of one of our procurators, Pontius Pilatus”
What do you think of what Ehrman says here at 1h45m25s, that the mythicist theory isn’t taken seriously by the academy because it’s mostly pushed by people who seem eager to dunk on religion?
Toxicity is one thing for sure but I don’t like how the commercialization of MP has shaped it.
Indie games have a very different feel in their online gameplay compared to “commercial” games.
Even way back, HL1 online and those online experiences felt so different because they were designed to be about the group experience rather than “level up, get a skin, buy a weapon, our skill tree is massive.” Sure, technology was holding it back, but I wish I could see what it would’ve become without the massive push for $$$.
I only want to play single player games. I’m not a super big gamer, but I just want campaigns. I recently got a PS5 and I’ve been struggling to find newer games that have a great single player campaign. RDR2 is my style, it’s my favorite game. The gameplay itself is a little problematic, but it’s gorgeous and the story just gets me where I live. And that’s what I want.
Oh come on. It’s called AI, as in artificial intelligence. None of these companies have ever called it a text generator, even though that’s what it is.
I get that it’s cool to hate on how AI is being shoved in our faces everywhere and I agree with that sentiment, but the technology is better than what you’re giving it credit for.
You don’t have to diminish the accomplishments of the actual people who studied and built these impressive things to point out that businesses are bandwagoning and rushing to market to satisfy investors. Like with most technologies, it’s capitalism that’s the problem.
LLMs emulate neural structures and have incredible natural language parsing capabilities that we’ve never even come close to accomplishing before. The prompt hacks alone are an incredibly interesting glimpse at how close these things come to “understanding.” They’re more like social engineering than any other kind of hack.
The trouble with phrases like ‘neural structures’ and ‘language parsing’ is that these descriptions still play into the “AI” narrative that’s been used to oversell large language models.
Fundamentally, these are statistical weights randomly wired up to other statistical weights, tested and pruned against a huge database. That isn’t language parsing, it’s still just brute-force calculation. The understanding comes from us, from people assigning linguistic meaning to patterns in binary.
Brain structures aren’t so dissimilar; unless you believe there’s some metaphysical quality to consciousness, this kind of technology will be how we achieve general AI.
Living, growing, changing cells are pretty damn dissimilar to static circuitry. Neural networks are based on an oversimplified model of neuron cells. The model ignores the fact neurons are constantly growing, shifting, and breaking connections with one another, and flat out does not consider structures and interactions within the cells.
Metaphysics is not required to make the observation that computer programmes are magnitudes less complex than a brain.
Neural networks are based on an oversimplified model of neuron cells.
As a programmer who has studied neuroanatomy and the structure/function of neurons themselves, I remain astonished at how unlike real biological nervous systems computer neural networks still are. It’s like the whole field is based on one person’s poor understanding of the state of biological knowledge in the late 1970s. That doesn’t mean it’s not effective in some ways as it is, but you’d think there’d be more experimentation in neural networks based on current biological knowledge.
The one thing that stands out to me the most is that programmatic “neurons” are basically passive units that weigh inputs and decide to fire or not. The whole net is exposed to the input, the firing decisions are worked through the net, and then whatever output is triggered. In biological neural nets, most neurons are always firing at some rate and the inputs from pre-synaptic neurons affect that rate, so in a sense the passed information is coded as a change in rate rather than as an all-or-nothing decision to fire or not fire as is the case with (most) programmatic neurons. Implementing something like this in code would be more complicated, but it could produce something much more like a living organism which is always doing something rather than passively waiting for an input to produce some output.
And TBF there probably are a lot of people doing this kind of thing, but if so they don’t get much press.
Pretty much all artificial neural nets I have seen don’t do all-or-nothing activation. They have continuous activation values, typically floating-point numbers, which I think mimics the effect of variable firing rates.
The idea of a neural network doing stuff in the background is interesting though.
The fact that you believe software based neural networks are, as you put it, “static circuitry” betrays your apparent knowledge on the subject. I agree that many people overblow LLM tech, but many people like yourself grossly underestimate it as well.
Language parsing is a routine process that doesn’t require AI and it’s something we have been doing for decades. That phrase in no way plays into the hype of AI. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random. And after training, the weights are no longer random at all, so I don’t see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt because they would have to evaluate every possible response (even the nonsensical ones) before returning the best answer. They’re better described as a greedy algorithm than a brute force algorithm.
I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and that LLMs are encoded in binary (which is somehow related to the point you’re making in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as those who claim that LLMs are intelligent agents of free will complete with conscious experience - you just happen to land closer to the mark.
Gaming support is still very much a work in progress all up and down the software stack. Stable distros like Debian tend to ship older, proven versions of packages, so their packaged software can be up to 18 months behind current releases. The NTSync kernel code that should improve Windows game performance isn’t even scheduled for mainline merge until the 6.10 kernel window in a few weeks, and that’s not likely to be in a stable Debian release for 12-18 months.
TL;DR: Gaming work is very much ongoing and Arch moves faster than Debian does. Shipping 12-18 month old versions of core software on the Steam Deck would degrade performance.
It’s pretty common to use debian unstable as a base. stable is not the only release that debian offers, and despite their names they tend to be more dependable than other distros’ idea of stable.
stable is not the only release that debian offers,
Did you mean to say “branch” rather than “release”? Debian only releases stable. Everything else is part of the process of preparing and supporting stable.
Testing branch may work well or it may not. Its goal is to refine packages for the next stable release so it has an inherent strive towards quality, but it doesn’t have a commitment to “quality now” like stable does, just to “quality eventually”.
Testing’s quality is highest towards the start of each release cycle when it picks up from the previous stable release and towards the end when it’s getting ready to become the next stable. But the cycle is 2 years long.
In my experience, Debian unstable has been less stable than “pure” rolling release distributions. Basing on unstable also means you have to put up with or work around Debian’s freeze periods.
When I was a kid, for some reason I really wanted coal for Christmas and I was disappointed that only the bad kids got it. My parents decided to mess with me one year by hiding all my actual presents and only putting a piece of coal in my stocking. I was thrilled and thought it was so cool. I have no idea why I thought it was cool, I was a weird kid. My parents gave up on the joke before I even realized that none of the presents under the tree had my name on them. I was entirely happy with the piece of coal.
Ironically, it’s become one of my favorite Christmas memories and it’s one of few presents I still have as an adult.
“We think there is a fundamental misconception about piracy. Piracy is almost always a service problem and not a pricing problem. If a pirate offers a product anywhere in the world, 24 x 7, purchasable from the convenience of your personal computer, and the legal provider says the product is region-locked, will come to your country 3 months after the US release, and can only be purchased at a brick and mortar store, then the pirate’s service is more valuable.”
I agree fully. I basically never download music anymore, because I can get all the music I can think of on Spotify for a few bucks a month. And when everything was on Steam I just got everything from there. Now that all the games companies are bringing out their own stores and launchers, that’s starting to change again.
This is a lesson that the movie & TV industry seems hell-bent on not learning.
It’s at its worst, in my opinion, with streaming services like Netflix and Hulu. They’re all starting to get so fragmented that they’re not much better than just paying for cable anymore, and that was their whole appeal to begin with. Now you have to sub to like 3 or 4 different services to get all the content you want (sometimes more) and they all seem to be phasing out their ad-free tiers. It’s like they forgot what made them so popular to begin with.
Also, my pirated game runs without trackers and forced updates. The Steam version somehow insists on overlays and overnight updates, and likes to show me ads unless I google how to toggle them off.
I agree fully. I basically never download music anymore, because I can get all the music I can think of on Spotify for a few bucks a month.
I recently started music pirating because I listen to a lot of genres and I want to shuffle them. If I use Spotify, I am limited to their shitty shuffler, but if I download my music offline, I can shuffle however I want. My favorite algorithm to shuffle my huge bunch of music is to shuffle them by genre. Now I get to listen to interesting music with full control over the algorithm used.
Also, there are frequent power cuts in my area, so an offline library always proves useful. I also visit places where internet connections are not available.
Adding proper metadata to releases. Why are we still trying to decipher release titles? Why not add a little metadata JSON file to every release and make the info available to the search API?
Also keeping multiple different versions of a release in Arr apps, like ebook and audiobook in different languages. Right now I’d need 4 Readarr instances to get the English and German audiobook and ebook versions of a book, and don’t even think about letting them manage the same root folder!
and following proper naming conventions too. Why can’t releasers decide on one single naming convention together, so it makes our job of automating things easier?
I actually like the release titles. It’s encoded in the name that way, there’s a somewhat good standard for it, and it’s one file. I rarely need more info than what’s in the release title. And I would dislike having to carry a separate json with me.
The world does suck right now. All the more reason to find something like a cat or some other thing that makes you happy to help ignore all the bullshit.
I’d ask to switch with you, except I know very well that anyone’s life can be much more complicated than it seems on the surface, and happiness does not automatically come from any of that. Therapy doesn’t help everyone be happier, but it’s something worth trying or trying again.
I would imagine that anything that's in an airtight sealed container, such as chip bags, would be fine. That would also include cans. Your refrigerator and freezer, also, would probably count as a sealed container.
Smoke in a building fire can contain all sorts of weird chemicals from burning plastics and whatnot that could get deposited onto stuff, so even if you can't see any soot in your apartment I wouldn't dismiss all concerns. How tight is your budget?
If you can, perhaps talk with your boss about the situation. “I am hungry because my apartment building had a fire and all my food might be covered in toxins” is a one-off that gets some extra dispensation.
Edit: your response was 6 hours ago. You either ate the chips, or are at work.