My experience contributing to GitLab has actually not been as you describe: fairly fast responses, clearly targeted releases so I knew when to try to finish any MR adjustments, bots that provided excellent help, and even the ability to ask subsystem specialists for help when CI spat out confusing errors that appeared unrelated. Frankly, I was impressed. I understand not every feature or bug would go this way, but if you follow their guidelines and get product roadmap positioning, it works. The number of commits going into main is incredible. The number of MRs they handle is equally impressive.
All of that said, I’ve still got issues in GitLab that are seven-plus years old without any movement. But I get it: they have to prioritize, and contributions are a different story.
A long, long time ago in an internship far, far away, I encountered a user who did not need management. He remembered his passwords without writing them down, even as they changed. He could be trusted to apply software patches himself and return the media the same day. He needed nothing more from us than a friendly hello.
It has been over six hundred million seconds since then and I have yet to encounter another user such as this.
I mean, you aren’t wrong in your joke. Godot’s pattern is slightly but not much better. Unreal is far more structured and realistic but still lacking in a lot of ways.
Unreal is what I have the most experience in. It’s very strict and structured. Everything in Unreal is a UObject. There are Actors and Actor Components. Every feature has requirements; for example, some of the AI features require your AI to be Pawns, which are Actors that can have a Controller (an Actor that manages the connection to a player). From there, there is a PlayerController and an AIController, and an AIController can hold a behavior tree that tells the AI how to control its Pawn.
In Godot, things are far less structured. Godot bases everything on Nodes like Unreal, but it expects you to build out whatever you want. So it doesn’t come with behavior trees or the concept of a player; it expects you to build these things, mainly because it’s a budding engine that hasn’t had the time and maturity invested in it that Unreal has.
Unity is a mix of both. Unity has a hugely freeform nature to it. Again, Unity starts with a class: everything in Unity is an Object. It has the concept of Components that attach to a GameObject (and GameObject and Object are different classes). There is no Actor class and no defined way to move an actor across a floor; a controller object in Unity is simply seen as the place where all the logic to control the actor lives. So Unity has its own structure, but like Godot it’s also less built out. As such, the Asset Store has taken on the task of providing whatever the developer needs, so a behavior tree system in Unity differs from project to project, depending on what the team built or bought off the store. Unreal, of course, allows you to use something else for behavior trees, but no one does, because its base implementation works and works well. It’s standardized.
So overall, Unreal is standardized, strict, and gives you a ton of features. Unity is less strict, provides fewer standardized features, and forces developers to make their own things. Godot is even less strict, has very little built out, and the standardization it has attempted to create gets changed in the next major update because it’s so new.
Except that somewhere down that chain someone is almost certainly going to choose to kill people, so by passing the trolley on down to them you're responsible for killing a lot more than if you ended it right now.
And since every rational person down the line is going to think that, they'll all be itching to pull the "kill" lever first chance they get. So you know that you need to pull the kill lever immediately to minimize the number of deaths.
Only the person pulling the lever is responsible for his/her action, though. There is a difference between passively passing something on and actively murdering someone.
If I hand a machete to Jason Voorhees I think I'm at least partly responsible for the people he hits with it. I know what he's going to do with that thing.
I guess it comes down to the weight you give the word "possible" in your sentence. If possible means extremely likely (and there are logical reasons to believe so) then taking responsibility makes sense.
Except you're not passing a machete to Jason Voorhees. That would be "double it and pass it to the next person who you know is going to pull the lever."
You're passing a machete to the next person in line. You don't know who that is. They may or may not pass the machete down the line. Considering I would not expect a person chosen at random to kill someone when handed a machete, it seems unethical for me to kill someone with a machete just to prevent handing it to someone else.
Or it keeps doubling even well after it’s surpassed the human population, and we all have to keep hitting “pass” in turns forever, and if even a single person gives up then boom.
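For a sense of scale, here’s a quick back-of-the-envelope sketch in Python (assuming a round world population of 8 billion, which is my figure, not the thread’s):

```python
# Hypothetical sketch: start with 1 person on the track and double the
# count every time someone hits "pass". How many passes until the count
# exceeds the (assumed) world population of ~8 billion?
POPULATION = 8_000_000_000

tied_up = 1
passes = 0
while tied_up <= POPULATION:
    tied_up *= 2
    passes += 1

print(passes)   # 33 — barely three dozen passes and everyone alive is on the track
```

So the “keeps doubling forever” scenario kicks in almost immediately; the lever only stays interesting for the first 33 people or so.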
That’s only if he’s next in line though. If you pass a machete to someone who might one day eventually pass it on to him, is that as bad? I suppose at some point there’s an ethical cutoff lol
The farther away he is the worse it is because the more people he gets to kill. If for some reason I absolutely had to pass the machete down the line then the best case is for the very next person I hand it to to be Jason. But even better if it's me.
In this case it isn’t even a guarantee that anyone has to die as the problem is presented, the tram can just continue to be passed along. The default setting for the lever is “go to next” so to not pull the lever is easier both physically and morally.
The individual that pulls the lever is the same individual that would take action to harm others for no benefit, and even in real life I can’t morally take responsibility for a person who runs over a child on purpose after I let his/her car merge in front of me just before a school crossing.
I guess then the issue would be: do you ever find out the result of your actions? If no, then I guess it’s sort of a “glass half empty/full” kind of thing, because you could just pass it on and assume the best and just go live your life quite happily.
Although if you did find out the result, imagine being first, pulling the lever and then finding out nobody else would have.
If it’s infinite (up to the current human population), we’re all tied up on the tracks. Unless we’re leaving out the exact number of people that would bring it to approximately the full population, I guess.
As long as I’m not on the tracks, I’ll take the hit and kill one instead of risking a potential genocide.
JSON data within a database is perfectly fine and has completely justified use cases. JSON is just a way to structure data. If it’s bespoke data or something that doesn’t need to be structured in a table, a JSON string can keep all that organized.
We use it for intake questionnaire data. It’s something that needs to be on file for record purposes, but it doesn’t need to be queried aside from simply being loaded with the rest of the record.
Edit: and just to add, even MS SQL/Azure SQL has the ability to both query and even index within a JSON object. Of course Postgres’ JSONB data type is far better suited for that.
While I understand your point, there’s a mistake that I see far too often in the industry: using relational DBs where the data model is better suited to other sorts of DBs. For example, JSON documents are better stored in document DBs like Mongo. I realize that your use case doesn’t involve querying the JSON, in which case it can simply be stored as text. Similar mistakes are made for time-series data, key-value data, and directory-type data.
I’m not particularly angry at such (ab)uses of RDBs, but you’ll probably get better results with NoSQL DBs. Even in cases that involve multiple data models, you could combine multiple DB systems to achieve the best results. Or even better, there are adaptors for RDBMSes that make them behave like different DB types at the same time. For example, FerretDB makes Postgres behave like MongoDB, PostGIS makes it a geographic DB, etc.
Using Relational DBs where the data model is better suited to other sorts of DBs.
This is true if most or all of your data is such. But when you have only a few bits of data here and there, it’s still better to use the RDB.
For example, in a surveillance system (think Blue Iris, ZoneMinder, or Shinobi) you want to use an RDB, but you’re also going to have to store JSON data from alerts, as well as other objects within the frame when alerts come in. Something like this:
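(The original example didn’t survive here; purely as an illustration, with made-up field names rather than any particular NVR’s schema, alert data might look something like this:)

```json
{
  "camera": "driveway",
  "timestamp": "2023-04-01T12:34:56Z",
  "event": "motion",
  "objects": [
    { "label": "person", "confidence": 0.93, "box": [412, 110, 520, 380] },
    { "label": "car",    "confidence": 0.88, "box": [60, 200, 340, 420] }
  ]
}
```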
While it’s possible to store this in a flat format in a table, the question is why you would want to. Postgres’ JSONB data type will store the data as efficiently as anything else, while also making it queryable. This gives you the advantage of not having to rework the table structure if you need to expand the set of data points used in the detection software.
It definitely isn’t a solution for most things, but it’s 100% valid to use.
There’s also the consideration that you just want to store JSON data as it’s generated by whatever source, without translating it in any way: just store the actual data in its “raw” form. This allows you to do that too.
Edit: just to add to the example JSON, the other advantage is that it allows a variable number of objects within the array without having to accommodate it in the table. I can’t count how many times I’ve seen tables with “extra1, extra2, extra3, extra4, …” columns because they knew there would be extra data at some point but had no idea what it would be.
Reminds me of a scam call center person telling Kitboga “your IP address is tied to your house address. You don’t get a new one unless you move houses”
I’ve only recently branched out from router defaults… the only reason was that I wanted to VLAN off my home network, mostly just so [Home Assistant-controlled] smart devices can’t talk to the Internet at all.
Whenever I’m given the chance at work, I let my feelings be known about using a “consumer grade addressing schema” in production clusters. Sure, I use it at home, but anything beginning with “192.168” looks like my mom’s wifi and has no right being part of a production network.
This comment was sponsored by the 172.16.0.0/12 gang
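For anyone who wants to check which gang an address belongs to, the three RFC 1918 private ranges are easy to test with Python’s standard-library `ipaddress` module (a minimal sketch; the comments are my own editorializing):

```python
import ipaddress

# The three RFC 1918 private IPv4 ranges
ranges = [
    ipaddress.ip_network("10.0.0.0/8"),     # the "enterprise" block
    ipaddress.ip_network("172.16.0.0/12"),  # the sponsoring gang
    ipaddress.ip_network("192.168.0.0/16"), # mom's wifi
]

addr = ipaddress.ip_address("172.16.5.1")
print(addr.is_private)                     # True
print(any(addr in net for net in ranges))  # True
```

Note that 172.16.0.0/12 covers 172.16.0.0 through 172.31.255.255, so 172.32.x.x is already public space.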
Bro that reminds me when I was in university and I used to tutor fellow students with the goal of getting laid. As soon as I got laid I stopped tutoring. Now unfortunately I’m married and have kids because of that.
The original commenter @cumskin_genocide specified neither their own sex nor the gender of the people they tutored.
Multiple people under that comment simply assumed that OP is male and was tutoring girls. That is heteronormative. Yes, I formulated that with a bit of snark. But come on.
Pads, calipers, and new discs. That seemed like a lot of money for a 17-year-old kid. I worked there for a couple of months. I learned a few things, like that working on brakes is not for me.
Look, we already got rid of “Master/Slave” in favor of things like “Parent/Child”, “Active/Standby”, or “Primary/Secondary”. We’re not making more changes because right-wingers are afraid of everything.
tbh I think “master” terminology is only bad if paired with “slave”. The word itself has kinda just lost its original meaning.
But I don’t really care about git’s change; I’m only using master out of habit.
Oh yeah that was a shitshow. I made a point to keep “master” in my repos and configurations because it’s the other meaning of master - one of the many others. Words are allowed to mean different things, ya know? If I’m drinking some coke I’m certainly not drugging myself (…I hope).
After all, the command to attach to a master is not “git slave”, it’s “git pull”.
programmer_humor