Update: thank you for pointing out to me which community this was posted on.
I’m going to leave this post up as a cautionary tale for people like me who don’t pay enough attention!
But Linux is cool cuz it’s so fast and it doesn’t break.
For as long as I’ve been using it, anyway.
So now Linux is going to be much slower, going to break, and be more susceptible to security breaches?
I’m not a programmer, so is the upside supposed to be that, with so many more programmers able to work on the kernel, those issues will get fixed?
It’s not like there’s anything wrong with Linux right now.
This is why I try my damnedest not to write in weakly typed languages.
string + object makes no logical sense, but the language will be like “no biggie, you probably meant string + string, so let’s convert the object to a string!” And so all hell breaks loose when the language’s assumption is wrong.
That kind of thing. But the principle of least surprise definitely applies. If you get to the point where you’re adding two booleans and a string, I feel like the language should at least say something. At least until the technology exists for it to physically reach out of your screen and slap you.
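For anyone who hasn’t been bitten by this, here’s a minimal sketch of the coercions being described. These are JavaScript’s runtime rules, written as TypeScript (which only lets the boolean case through once you opt out with `any`):

```typescript
// string + object: the object is silently stringified via toString()
const user: any = { name: "alice" };
console.log("Current user: " + user); // "Current user: [object Object]"

// two booleans and a string: the booleans coerce to numbers (1 + 1),
// then the sum coerces to a string for the final concatenation
const flag: any = true;
console.log(flag + true + " problems"); // "2 problems"
```

No error, no warning, just quietly wrong output.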
Somehow I miss those days. Now you need weeks of training to understand the black magic behind all the build/deployment stuff in whatever cloud provider your company decided to use…
We got our own platform based on Kubernetes and CNCF stuff and we don’t have to care anymore about the metal underneath. AWS? OTC? Azure? That’s just a target parameter, the platform does the rest. It’s great.
How often do you switch cloud providers that this is even a real rather than a hypothetical benefit? (Compared to the cost of dealing with a much more complicated stack.)
I manage a stack like this: we have dedicated hardware running a steady state of backend processing, but we scale into AWS if there’s a surge in real-time processing and we don’t have the hardware for it. We also had an outage in our on-prem datacenter once, which was expensive for us (I assume an insurance claim was made), but scaling to AWS was almost automatic, and the impact was minimal for a full datacenter outage.
If we wanted to optimize even more, I’m sure we could scale into Azure depending on server costs when spot pricing is higher in AWS. The moral of the story is to not get too locked into any one provider and to use some of the abstraction layers, so that AWS, Azure, etc. are just targets that you can shop around for by default, without having to scramble.
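A hypothetical sketch of what that shopping around could look like at the code level; the provider names, the price feed, and the cost threshold below are all made up for illustration:

```typescript
type Provider = "aws" | "azure" | "on-prem";

interface SpotQuote {
  provider: Provider;
  pricePerHour: number; // USD per instance-hour for the burst capacity
}

// Pick the cheapest target for overflow work; fall back to on-prem
// when nothing beats our amortized hardware cost.
function pickBurstTarget(quotes: SpotQuote[], onPremCost: number): Provider {
  const cheapest = quotes.reduce((a, b) =>
    b.pricePerHour < a.pricePerHour ? b : a
  );
  return cheapest.pricePerHour < onPremCost ? cheapest.provider : "on-prem";
}

// "Shop around by default": Azure wins here because its spot price
// undercuts both AWS and our own hardware.
console.log(
  pickBurstTarget(
    [
      { provider: "aws", pricePerHour: 0.12 },
      { provider: "azure", pricePerHour: 0.09 },
    ],
    0.1
  )
); // "azure"
```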
Because 48 bits over 32 bits does not really solve the problems with IPv4. 128 bits basically gives one IPv4 address space to each square meter of earth. IPv6 also drops all the unused and silly parts of IPv4.
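Back-of-the-envelope, that square-meter claim holds with a huge margin; the only outside number assumed below is earth’s total surface area of roughly 5.1 × 10^14 m²:

```typescript
// How many full IPv4-sized address spaces fit into 128 bits?
const ipv4Spaces = 2n ** 128n / 2n ** 32n; // 2^96 of them

const earthSurfaceM2 = 510_000_000_000_000n; // ~5.1e14 square meters

// Roughly 1.5e14 complete IPv4 address spaces per square meter
console.log(ipv4Spaces / earthSurfaceM2);
```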
128 bits basically gives one IPv4 address space to each square meter of earth.
That sounds like terminal-stage capitalism to me. Why would we want every tree in the Amazon to be cyborgized with its own IP? I don’t know, Rick, 64 bits ought to be enough for everybody, and I’m already risking it.
Our network architecture tends to waste IP addresses. A subnet may have 10 devices but 256 IPs (e.g. a /24 network like 192.168.0.0 to 192.168.0.255): that’s 246 wasted addresses. This wastage is kinda unavoidable, since we need to keep our routing tables from getting too fragmented.
With that in mind it is entirely possible for 64-bit addressing space to not be enough, unless we revert to methods like NAT which come with their own disadvantages.
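A toy version of the arithmetic above, just to make the waste concrete:

```typescript
// A /24 allocates 2^(32-24) = 256 addresses, no matter how few
// devices actually sit in the subnet.
const subnetSize = 2 ** (32 - 24); // 256
const devices = 10;

console.log(subnetSize - devices); // 246 addresses allocated but unused
```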
We have already used up about one /11 block of the IPv6 internet. That’s 128-11=117 bits. If we replace the standardized /64 subnets of IPv6 with old /24 subnets typical in IPv4 networks, you get 61 bits. That’s dangerously close to the upper limit of a hypothetical 64-bit IPv5 internet.
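Spelled out, that arithmetic goes like this: a /11 holds 2^(64-11) = 2^53 standard /64 subnets, and giving each of those only a /24’s worth of addresses (8 host bits) still needs 2^53 × 2^8 = 2^61 addresses:

```typescript
// Subnet slots already handed out if the whole /11 is carved into /64s
const subnetsInSlash11 = 2n ** (64n - 11n); // 2^53

// Shrink every subnet to an IPv4 /24 (8 host bits) and count addresses
const addressesNeeded = subnetsInSlash11 * 2n ** 8n; // 2^61

console.log(addressesNeeded === 2n ** 61n); // true: 61 of 64 bits spent
```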
Because bits are not expensive anymore, and if we used 64 bits, we might run out faster than the time needed to convert to a new standard. (After all, IPv4 is still around 26 years after IPv6 was drafted.) Also see the other notes about how networks get segmented in non-optimal ways. It’s a good thing to not have to worry about address space when designing your network.
Adobe also recently snuck language into their ToS saying they could use whatever you made with their products for training AI, then gaslit everyone with “we never said that” and changed the ToS. You know where you can’t access my stuff? Offline.
If I see someone boasting about programming with AI, in 0.1% of cases they use it responsibly (as a tool to quickly get introduced to a topic and brainstorm ideas), and the rest of the time they’re probably a script kiddie letting ChatGPT do Advent of Code or smth and calling themselves a programmer.
Same thing with all the folks who took the “copy-pasting from Stack Overflow” joke literally.
I regularly have to find guidance online through code examples, but you need to understand what the code you’ve found actually does under the hood, for when it inevitably has issues because it wasn’t made for your specific use case.
I feel like there is almost no chance of Copilot-assisted code working as expected without an understanding of what it writes. It makes some hilariously bad choices at times, and frequently drops or changes code that was added previously.
As someone who has often been asked for help or advice by other programmers, I know with 100% certainty that I went to university and worked professionally with people who did this, for real.
“Hey, can you take a look at my code and help me find this bug?”
(Finding a chunk of code that has a sudden style-shift) “What is this section doing?”
“Oh that’s doing XYZ.”
“How does it work?”
“It calculates XYZ and (does whatever with the result).”
(Continuing to read and seeing that it actually doesn’t appear to do that) “Yes, but how is it calculating XYZ?”
“I’m not 100% sure. I found it in the textbook/this ‘teach yourself’ book/on the PQR website.”
I mean no, but also… yes? Like, having a one-person dev team is a little ridiculous for a game selling as well as Manor Lords. 50 people is a lot, but do you really think the game would have fewer features a year from now if the dev hired, like, 3 people to help?
Obviously development would slow down in the short term, but a one-person dev team is asking for disaster.
Ideally the solo dev and visionary would cease development and move into a product owner role. Bringing other devs up to speed on the code base while maintaining quality and vision, and cultivating a team, is no trivial task. Not to mention this particular dev may not want to do such things, or be able to.