Sometimes I think Go was made specifically for Google to impose its own preferences on the rest of us, like some kind of power play. It enforces a single style of programming far too much.
Ew, that’s awful. Go is not one of my programming languages but I had always held it in high esteem because Ken Thompson and Rob Pike were involved in it.
Honestly, it does not happen often that I have an unused variable that I want to keep. In my mind it is the same as wanting to call a function that does not exist. Also, my editor highlights errors long before I try to compile, so this is fine for me too.
Worse than not having an unused-variable check at all? Dunno, the underscore assignments are very visible to me and stand out on every code read and review.
Yes, worse, because now if you want to use the underscore assignment to indicate that you really want to discard that variable, it gets confused with underscore assignments that were put there “temporarily” for experimentation purposes.
Say I’m having some issue with a function. I comment out half the function to see if that’s where the weirdness is. Golang says “unused variable, I refuse to compile this dogshit!” I completely fool Golang by just using _ = foo. Yes, I was correct, that’s where the problem was. I rewrite that section of the code, and test it out, things work perfectly. Only now, it turns out I’m not using foo anymore, and Golang has no idea because I so cleverly fooled it with _ = foo.
Now, something that could be caught by a linter and expressed as a warning is missed by the language police entirely, and may make it into production code.
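A minimal Go sketch of the failure mode described above (the function names are made up for illustration): the blank assignment counts as a “use,” so once the debugging session is over and the real use of the variable is gone, the compiler stays silent forever.

```go
package main

import "fmt"

// expensiveLookup stands in for whatever computed foo originally.
func expensiveLookup() int { return 7 }

func process() int {
	foo := expensiveLookup()
	_ = foo // added while debugging to silence "declared and not used"

	// The rewritten section below no longer touches foo, but because of
	// the blank assignment above, the compiler will never complain again.
	return 42
}

func main() {
	fmt.Println(process())
}
```

A linter like staticcheck can flag this pattern as a warning; the compiler, by design, cannot distinguish it from an intentional discard.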
Police the code that people put into a repository / share with others. Don’t police the code that people just want to test on their own.
The underscore is used in production code too. It’s a legitimate way to tell the compiler to discard the object because you don’t intend to use the pointer/value.
From what I’ve heard from Google employees, Google is really stringent with its coding standards, and they usually limit what you can do with the language. For C++ they don’t even use half the fancy features the language offers, because they’re hard to reason about.
I guess that policy makes sense, but I feel like it takes all the fun out of the job.
Just about any place I know that uses C++ also does that, so it’s nothing unusual for C++ specifically. It’s too big a language to reason about very well if you don’t, so you’ve gotta find a subset that works.
Too many patterns. If you do not do this, every author will use the language differently and you will have to read a book’s worth of documentation each time you change files.
I think this is a good thing. The styles are just opinions anyway and forcing everyone to just follow a single style takes a lot of bikeshedding away, which I really like.
“No, wait, it’s not what you think! There’s a continuous integration system, a commit would’ve triggered a new build! It might have paged the oncall! Babe! The test suite has been flaky lately!”
My entire career is based on “yeah but you’re good with computers and programming!” I just wanted to do fine arts and paint, for fuck’s sake. And I could have made a career out of it, as history has since shown! Ah well. Maybe my kids will fare better, we’ll see.
Do you really think the reason people hate Java is because it uses an intermediate bytecode? There’s plenty of reasons to hate Java, but that’s not one of them.
.NET languages use intermediate bytecode and everyone’s fine with it.
Any complaints about Java being an intermediate language are due to the fact that the JVM is a poorly implemented dumpster fire. It’s had more major vulnerabilities than effing Adobe Flash, and runs like molasses while chewing up more memory than effing Chrome. It’s not what they did, it’s that they did it badly.
And WASM will absolutely never replace normal JS in the browser. It’s a completely different use case. It’s awesome and has a great niche, but it’s not really intended for normal web page management use cases.
While I overall agree that JS / TS isn’t likely to be replaced, Microsoft’s Blazor project is interesting conceptually … Write C# webpages and have it compile down to WASM for more performance than JS could offer.
What, you can write a website in C# and have it output as a website using WASM? I have never touched WASM. That might be an interesting way to try it though.
The problem with Blazor, as I understand it, is that no, it does not compile your C# into WASM. Instead, it compiles into a standard .NET module – with as much excising of unused code as possible – and distributes it with a CLR that is compiled to WASM. So effectively you’re running the .NET VM inside the WASM VM. That’s if you do client-side Blazor, which is not even MS’s push anymore, because they stand to make more money if you write server-side Blazor and deploy it to Azure.
Do look it up yourself tho. I could have a totally wrong understanding. I haven’t looked into it in some time because I’ve not been in a position to start a new frontend project from scratch. I would love to do my frontend stuff in C# though, don’t get me wrong.
Interesting, yeah. I inherited a Blazor project though and have nothing positive to say about it really. Some of it is probably implementation, but it’s a shining example of how much better it is to choose the right tool for the job, rather than reinventing the wheel. For a while I was joking about setting the whole project “ablazor” until we finally decided to go back to a React/C# ASP.NET stack. If you’re thinking of using Blazor still, though, I think two fun things to look into are “linting issues with Blazor” and “Blazor slow”. I’ve heard people praise it, but they tend to be those who consider themselves backend devs that occasionally get stuck making frontends.
But before I hire you, can you please build a small house or a shed or a trampoline to show me that you have the skills of an architect. The exact details of what to build will be given to you when the test assignment starts.
Or the technical challenge being ridiculous like a lot of them are. If you have that many people failing it, that tells me some or all of these things are true:
1. Management, or whoever they hire for handling candidates, is not screening them well
2. The challenge is needlessly complex
3. The challenge requirements are not clear
4. The company expects absolute prod-ready perfection but told the candidates “don’t spend more than 2-3 hours on this,” despite it taking one of their own engineers 6-8 hours
5. The salary is way too low and they’re not getting candidates that fit their demands (e.g. wanting “senior” while offering “junior” salaries)
Seriously, some tech companies think they shit gold and give ridiculous challenges that reflect that delusion.
Source: been in tech since 2005 and in a terminal since I was 12.
My own take as someone internal to that process is that it was a combination of 1 and 5.
I have no idea how candidates were screened. I do know that even before the “technical challenge” we had a large number of candidates completely faceplant on lowball questions asking what single line snippets of code did.
I can also say that I absolutely did not expect prod-ready results from the challenge. But I did expect things like not vomiting raw UUIDs on the screen instead of user-readable values when displaying results. Or not having your git log full of commits from the overseas dev contractors who did all the actual work.
The actual problem is that if you don’t add new features, there’s nothing for people to do beyond maintenance, and good developers aren’t going to stick around for only that.
So your options are new features or a new app entirely, but coming up with other good apps isn’t easy and is a huge risk.
So if you actually did good market research and spoke to users, you could find new features to add.
Beyond a tiny company or sole developer, it doesn’t really work.
You mean the one that no one asked for, makes it harder to do the primary thing the app is designed to do, and all the involved developers have told management it’s a bad idea with a detailed list of why?
I use quake-style terminals, and often start writing a file, completely forget about it, and turn off the computer, only remembering what I left behind when I find the random recovery files around, so hitting :w a lot is quite useful for me.
Is there any reason to use :w other than it being the default? I have mine mapped to CTRL-S and it makes sure to keep me in insert mode if I was in insert mode. Feels way faster and easier to spam than the 4 key presses it takes to execute “:w”.
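A minimal ~/.vimrc sketch of such a mapping (assuming stock Vim; `:update` writes only when the buffer is actually modified, and `<C-o>` runs one command then returns to insert mode):

```vim
" Normal mode: save with Ctrl-S
nnoremap <silent> <C-s> :update<CR>
" Insert mode: save and stay in insert mode
inoremap <silent> <C-s> <C-o>:update<CR>
```

One caveat: in many terminals Ctrl-S triggers XON/XOFF flow control and freezes output, so terminal Vim users may also need `stty -ixon` in their shell config for this mapping to fire at all.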
At the very least I’d try to clean up that fuzzy condition on behavior to anticipate any bad or inconsistent data entry.
WHERE UPPER(TRIM(behavior)) = 'NICE'
Depending on the possible values in behavior, adding a wildcard or two might be useful but would need to know more about that field to be certain. Personally I’d rather see if there was a methodology using code values or existing indicators instead of a string, but that’s often just wishful thinking.
Edit: Also, why dafuq we doing a select all? What is this, intro to compsci? List out the values you need, ya heathen ;)
Honest question: which ones wouldn’t it work with? Most add a semicolon to the end automatically, or have libraries and interfaces that have saved me a million times.
I’m not sure how including a final semicolon can protect against an injection attack. In fact, the “Bobby Tables” attack specifically adds in a semicolon, to be able to start a new command. If inputs are sanitized, or much better, passed as parameters rather than string concatenated, you should be fine - nothing can be injected, regardless of the semicolon. If you concatenate untrusted strings straight into your query, an injection can be crafted to take advantage, with or without a semicolon.
You need semicolons if it is a script with multiple commands to separate them. It is not needed for a single statement, like you would use in most language libraries.
Also, I created this repo to give myself a reproducible sec environment. I added other languages, but personally work mostly with Python. It is basically responsible for handling all the boilerplate:
For packaging in Docker I started to use the nix2container project, as it gives me greater control over layers. So for example when I package my Python app I typically use 3 layers:
Python and its dependencies
my application dependencies
my application, which is very tiny compared to the other two, so there is great reuse of the layers
The algorithm mentioned in the video also helps a lot with reuse, but the above is more optimized by frequency of how things typically change.
BTW: today I discovered github.com/astro/microvm.nix. I haven’t played with it yet, but in theory it would let me generate a microVM image (in similar fashion to generating a Docker container), which would let me run my app natively as a tiny VM on EC2, for example, using only the minimum necessary parts of a typical OS.
I have a love/hate relationship with Docker. On one side, it’s convenient to have a single-line start for your services. On the other side, as a self-hoster, it has made some developers rely only on Docker, meaning that deploying the stack from source is just an undocumented mess.
Also, following the log4j vulnerability, I tend to prioritize building from source, as some Docker packages were updated far later than the source code was.
The Dockerfile is essentially the instructions for deploying from scratch. Sure, they most likely only exist for one distro but adapting isn’t a huge chore.
You can also clone the repo and build the container yourself. If you want to update say, log4j, and then attempt to build it, that’s still entirely possible and easier than from scratch considering the build environment is consistent.
If I’m updating the source code already I might as well build my service from it, I really don’t see how building a docker container afterward makes it easier considering the update can also break compatibility with the docker environment.
Also, adapting can be a pita when the package is built around a really specific environment. Like, if I see that the dockerfile installs a MySQL database, can I instead connect it to my PostgreSQL database, or is it completely incompatible? That’s not really something the dockerfile would tell me.
I really don’t see how building a docker container afterward makes it easier
What it’s supposed to make easier is both sandboxing and reuse / deployment. For example, Docker + Traefik makes some tasks so incredibly easy and secure compared to running them on bare metal. Or if you need to spin up multiple instances, they can be created and destroyed in seconds. Without the container, this just isn’t feasible.
The dockerfile uses MySQL because it works. If you want to know if the core service works with PostgreSQL, that’s not really on the guy who wrote the dockerfile, that’s on the application maintainer. Read the docs, do some testing, create your own container using its own PostgreSQL or connecting to an external database if that suits your needs better.
Once again the flexibility of bind mounts means you could often drop that external database right on top of the one in the container. That’s the real beauty of Docker IMO, being able to slot the containers into your system seamlessly due to the mount system.
adapting can be a pita when the package is built around a really specific environment
That’s the great thing about Docker, it lets you bring that really specific environment anywhere and in an incredibly lightweight manner compared to the old days of heavyweight VMs. I’ve even got Docker containers running on a Raspberry Pi B+ that otherwise is so old that it would be nearly impossible to install the libraries required to run modern software.
I love Docker because it is the only sane method to selfhost shit with my Synology NAS, and I love my Synology NAS because it is the only Linux interaction that I have (from my old MacBook Pro).
Had someone verify that wifi was working because he could see his neighbors’ networks. Airplane Mode was enabled. Dunno what he thought he saw.
Same thing with a colleague. The guy told him that he was definitely connected to wifi. It took a lot of probing to confirm that wasn’t true.
Some people just can’t provide valid feedback or follow simple instructions. I kinda feel like those individuals shouldn’t be allowed to use computers to do their jobs. If you can’t even pass the basics, sorry. Here’s a pencil and a pad of paper. You can either work the longer way or you can consciously put in the effort to learn this stuff well enough for us to help you when you need it.
My own father, who had a doctorate in mechanical engineering: “Now click the Apple menu.” “What’s that?” “It’s the menu that’s an Apple logo in the top left corner of the screen.” “I don’t have that.” “Yes, you definitely have that.” “No, I don’… oh there it is.”
I’m not calling anyone stupid. More that I’m saying people convince themselves that they can’t learn and then shut down.
I mean in fairness to the first one, on most systems it is possible to turn wifi back on without turning off airplane mode (there is in-flight wifi after all)
I think that’s the trick, right? 1% of a perfectly normal person’s attention looks a lot like a really dumb person. This certainly goes for tech, but also for any number of other fields.
Insightful. I was commenting about a VIP wrt a power dialog on a mobile device and posited that the reason they didn’t understand a thing must be that they don’t read before dismissing it. I would even say that’s half of 1% of their attention and that makes complete sense. The other 99.5% is focused on the things they consider more important.
Had they read the message, it would have saved them a lot of time waiting for the solution that would have been near instantaneous otherwise. But their 0.5% is more important to them than your 99.5%. Hopefully they’re really good at bringing money into the company, because their ability to save labor money for the company is abysmal.
Hopefully they’re really good at bringing money into the company, because their ability to save labor money for the company is abysmal.
I was asked to drive 80m to reboot a device when I’d said the previous day that rebooting would fix it (it was a phone; there’s almost no real troubleshooting on the platform). I kept quiet about how financially irresponsible the request was. When I got there, the phone was already turned off for other reasons. At least I got to listen to podcasts while I drove there and back.
I’m aware that he probably meant miles, but he still used the wrong abbreviation (should have been mi). Gotta be careful about that kind of thing, although I’m not sure what the tech anecdote equivalent of the Mars Climate Orbiter would be. Someone taking it too seriously, like I’m doing here, probably. 😅
kinda feel like those individuals shouldn’t be allowed to use computers to do their jobs. If you can’t even pass the basics, sorry. Here’s a pencil and a pad of paper.
My wife had her HR rep get pissed at her just yesterday for sending an email to her boss and other higher-ups asking why assistant managers at her company can’t properly use the computers they’re on all day. She had asked for a screenshot of something so she could see what the other person was seeing, and they replied with “I can’t do that idk how” and thought that was acceptable.
Luckily the other higher-ups told HR to shut up, and that she was only mad because it’s her job to ensure basic computer literacy and she clearly hadn’t.
People 100% get into the mindset that “well, I already know the basics, so anything I don’t know is advanced user shit so I can’t learn it” and it’s infuriating
Wow. It’s so easy to get that info from a web search that I’d argue that the response is evidence of the person not doing their job. Good on your wife for calling bullshit and the same for the higher ups who defended her position.
A tech-illiterate old friend of mine in his 60s got tasked with changing his SIM card for a new one. But the network just didn’t appear. Long story short, after 3 hrs of headbashing I asked him to send me a photo of the SIM card itself.
I used to work at a phone repair shop. The number of people that put SIM cards in their brand-new phones without the tray… We would have to take the phones apart to get their SIM cards out.
Isn’t the “standard” SIM card in your pic actually a mini-SIM? While the standard one is credit-card-sized? I think I have a phone somewhere that takes a credit-card-sized SIM, actually.