Use a hand-modified-to-ESM version of SQL.js, which is SQLite in JavaScript.
Get a database ready that SQL.js can query.
Build a Houdini PaintWorklet that executes queries in JavaScript and paints the results back to the screen in that `<canvas>`-y way that PaintWorklets do.
Pass the query you want to run into the worklet by way of a CSS custom property.
For me it shows as step 5, in Firefox on Android using the web interface. I can also view your source, which shows simply “5. Go…”, so it is definitely your app.
It’s not the best UI, but you can also view your comment from a standard web browser, just to see how it looks. The advantage to the web browser is that it is always by definition maximally up-to-date:-) - though its baseline functionality may still be lower than an app if the latter is done well.
Lemmy is fine; it depends on the markdown parser/renderer. Markdown allows you to use any numbers for numbered lists, and the renderer is supposed to display them correctly renumbered.
This is a Markdown issue really. Starting a line with a number and then a dot turns that line into an item in an ordered list. The most common behaviour (that I’ve seen) is to start that list from 1, regardless of what number is used. The intent is to make it easy to add items later without renumbering everything, for living documents at least.
That’s not advertising, that’s proselytizing. Advertising has to be for goods and services, and just telling someone about something isn’t advertising. It’s when you spend money or resources to bring attention to that thing that it counts as advertising. People keep saying that putting a sign on your building stating the company name is advertising. It’s not.
Yes, this isn’t advertising. It’s proselytizing. Advertising has to be for goods and services and it has to be explicit. Me telling you about the company I work for isn’t advertising.
With that, you could sue Google and other companies pushing anti-adblocker tech for discrimination; just bring up all the cases where the Supreme Court sided with Christofascists to oppress other people.
In general, it translates instructions into something readable by what’s accessing it. A popular translation layer on Lemmy is Proton. It’s how the Steam Deck can play all those Windows games.
CUDA is an Nvidia-specific method for using a graphics card to do computation (not just graphics), like physics simulations.
Translation layers would let software designed for other graphics cards work with CUDA, or let CUDA software work on other graphics cards.
It’s less that they don’t want other companies using it, and more that they don’t want other companies translating it into something they can use.
Basically, translating an instruction manual from German to Spanish.
No one is breaking any copyright laws or IP to do this. It’s the same as how Valve created Proton to run Windows games on Linux. It’s translating code from one language to another that’s readable.
If Linux becomes the dominant gaming platform (not gonna happen, wish it would tho), there is no reason a “Proton for Windows” couldn’t or shouldn’t emerge.
Hey now. That all depends on how popular Steam Deck handhelds keep getting, and whether future versions of Windows keep getting worse and more ad-intrusive like Windows 11 has. Gaming on Linux has gotten much easier, and at some point the share of people on Linux will be high enough (it’s gone from 1.6% in 2019 to 4% now) that devs will decide it’s worth it to make Linux-compatible games. I have a desktop at home that still works as a pretty good gaming rig, but Win 11 isn’t supported by my processor. Once Win 10 stops getting support it will be running Linux only. A lot of what’s preventing a full switch-over now is the anti-cheat software some major studios use on their online games that won’t run on Linux.
Oh, I drive Linux only. I have Windows 10 running the Atlas playbook on standby, but it hasn’t been booted in months.
I think the entry barrier of installation/setup is what will stop Linux from fully taking over. If OEMs start loading a very user-friendly Linux on their “normal” desktops/laptops (Best Buy, Amazon, etc.), then I can see Linux becoming the majority.
With all that said, I want Linux to be the majority and running on everyone’s computer. I’m just being a realist at this point in time.
CUDA was there first and has established itself as the standard for GPGPU (“general purpose GPU” aka calculating non-graphics stuff on a graphics card). There are many software packages out there that only support CUDA, especially in the lucrative high-performance computing market.
Most software vendors have no intention of supporting more than one API since CUDA works and the market isn’t competitive enough for someone to need to distinguish themselves though better API support.
Thus Nvidia have a lock on a market that regularly needs to buy expensive high-margin hardware, and they don’t want to share. So they made up a rule that nobody else is allowed to write or use something that makes CUDA software work with non-Nvidia GPUs.
That’s anticompetitive but it remains to be seen if it’s anticompetitive enough for the EU to step in.
I guess I’m missing who owns/developed CUDA, then. Like, why does Nvidia think they can disallow anyone else from using CUDA if CUDA was made and broadly used as the API before Nvidia made up this rule?
The company had avoided certain destruction after firing the previous CEO and putting a new one in its place. The new CEO had managed to bring a newfound calm to the company and its ranks, and brought an air of meditative discipline to board room meetings.
Some said it was crazy, but making the LectoFan EVO the new CEO was the best decision the company board had ever made.
I find it very hard to believe that AI will ever get to the point of being able to solve novel problems without a fundamental change to the nature of “AI”. LLMs are powerful, but ultimately they (and every other kind of “AI”) are advanced pattern matching systems. Pattern matching is not capable of solving problems that haven’t been solved before.
In internet terms: it’s just a soyjak holding a box with data, pointing at another soyjak holding a box with data, pointing at another {insert N−3 of the same soyjaks} soyjak with a box with data, without an arm to point with.
Each commit points to the one before. A commit stores a full snapshot of the files, not a diff; which lines changed compared to the previous commit is computed on demand. A branch is just a pointer to a particular commit.
There’s a guy out there who made a reversible NES emulator, meaning it can run games backwards and come to the correct state. He made a brilliant post on Reddit /r/programming linking his ideas for the emulator to quantum mechanics.
Then he was asked why he didn’t distribute his program in git. He said that he didn’t know git.
To me, that’s a pretty good example of the difference between computer science and software engineering.
Correcting the reviewer.
Notes: “should of” isn’t valid; “should” must be followed by a verb, and “of” isn’t a verb. I expect you meant “should have”. Please recall this in future submissions.
A question mark does not fit the sentence, which is a statement (“they should.” rather than “should they?”). While question marks are commonly used to demonstrate a rising tone at the end of a sentence, its not considered correct for formal writing.
A-ha, but this is most decidedly not formal writing! UNO REVERSE CARD.
But on a more serious note, I did intend it as a sort of question because I’m not 100% sure, because the rules for quote use might well be different in English than my native language. I actually also don’t know the rule for question mark usage in English; is it generally considered a crime against orthography to plonk a question mark on something that’s a statement, or is it valid in some cases?
It’s totally valid in most cases. It’s technically only supposed to be used for a question, but language is based on how it’s most commonly used, with those “rules” only applying in extremely formal situations. With the prevalence of informal text-based communication, many people use it to indicate being unsure, like how you used it. I just wanted to continue the chain of grammar corrections (which is why I used the wrong “its”/“it’s” at one point). Also, you were right about the quotes.
It’s technically only supposed to be used for a question, but language is based on how it’s most commonly used
Ah, I see you’re also a descriptivist 😀
But yeah I know you were just continuing the joke; I’m a language nerd (well, general nerd really) and I just got curious about what the rule actually is. While English orthography rules related to punctuation usually seem to be pretty much the same as with Finnish, the rule for question marks seems to be more relaxed in Finnish because it can “officially” be used to mark any expression as a question. The rules for commas are also different, ours are closer to German and we tend to spray commas everywhere
They should of course keep that in mind, but it’s not that “should” should always be followed by a verb directly. The problem is that “of” in this context is a mishearing/spelling of “have”, so they should in this case have written it like that instead.
I would argue that “should of” is just a naive written rendition of the spoken contraction “should’ve”. They are homophones, so it’s a completely understandable error among those without the relevant education or background. I know only English and was in Grade 9 at a different school before someone corrected me.
In that spirit, I will call attention to your first sentence, specifically the comma. In my opinion, that can be improved. One of three other constructions would be more appropriate:
I am really happy when people are quite strict in code reviews. It makes me feel safer and I get to learn more.
I am really happy when people are quite strict in code reviews, because it makes me feel safer and I get to learn more.
I am really happy when people are quite strict in code reviews; it makes me feel safer and I get to learn more.
The first of my suggested changes is favoured by those who follow the school of thought that argues that written sentences should be kept short and uncomplicated to make processing easier for those less fluent. To me, it sounds choppy, or as though you’ve omitted someone asking “Why?” after the first sentence.
Personally, I prefer the middle one, because it is the full expression of a complete state of mind. You have a feeling and a reason for that feeling. There is a sense in which they are inseparable, so not splitting them up seems like a good idea. The “because” explicitly links the feeling and reason.
The semicolon construction was favoured by my grade school teachers in the 1960s, but, as with the first suggestion, it just feels choppy. I tend to overuse semicolons, so I try to go back and either replace them with periods or restructure the sentences to eliminate them. In this particular case, I think the semicolon is preferable to both comma and period, but still inferior to the “because” construction.
I’ve clearly spent too much time hashing stuff out in writers’ groups. :)
I agree with most of that. In formal settings, I prefer full sentences with conjunctions; however, choppy sentences are the ones that often end up in my Lemmy comments.
Reviews have to be balanced to circumstance. There is a big difference between putting out the sales brochure and the notice on the bulletin board. Likewise in coding a cryptographic framework for general consumption and that little script to create personal slideshows based on how you’ve tagged your photos.
As a general rule, wider distributions, public distributions, and long-lived distributions need more ambitious reviews. If the distribution is wide, public, and permanent, then everything needs very detailed scrutiny.
I have found some success in starting with and occasionally revisiting review goals. This helps create and maintain some consistency in a process that is scaled to the task at hand.
Notably, a good code review should also bring up the good parts of the submission, and not just concentrate on the errors. Not only does it make the recipient feel better to get positive feedback among the negative, but it helps them learn about good practices too. Just concentrating on the errors doesn’t really tell them which things they’re doing well.
Many reviewers concentrate on just finding mistakes, and while it’s useful it’s sort of the bare minimum; a good code review should be educational. Especially if the submitter’s a more junior coder, in which case it’d also be a good idea to not just outright tell them how you’d fix some problem, but sort of lead them to a solution by asking them questions and pointing things out and letting them do the thinking themselves. But still, experienced coders will also benefit from well-structured feedback, it’s not like we’re “finished” and stopped learning.
Yes, I tend to do that, and thankfully some of my colleagues do too. Clever but readable solutions, following good and relevant practices, clear documentation, making a good MR description that makes it easier to review, and more.
That’s great to hear. It’s thankfully becoming more common in general, and we can all do our part in spreading these practices.
I tended to actively evangelize for it when I was managing coders or teams. Unfortunately it’s still not all that uncommon for coders to be downright offensive when giving feedback, like not necessarily quite Linus-level rants but things like “this is idiotic, this is stupid, that’s shit, why would you do that” etc etc. The usual explanation I’ve gotten is that they’re just being “honest” and saying what they think, and it’s not their problem if the reviewee (is that even a word‽ I can’t English today) gets offended. Some even get all huffy about it, like “oh we’re just supposed to coddle them and never say anything negative so their little feefees don’t get hurt?” And I mean, yeah, getting honest feedback is definitely a good way to learn, but it’s not like the only way to point out errors or problems is to be a cunt about it.
Assuming you have competent leadership, then it wouldn’t be merged if you missed something obvious. I guess you’re saying that you want more positive reinforcement.
Yeah, I learn so much from code reviews, and they’ve saved me so much time from dumb mistakes I missed. I’ve also caught no shortage of bugs in other people’s code that saved us all a stressful headache. It’s just vastly easier to fix a bug before it merges than after it breaks things for a bunch of people.
The good news is, based on the diagram looking like it’s straight from the AWS docs, there’s a CloudFormation template for all that.
Bad news: good luck troubleshooting any of it if something breaks.
As a fellow German, I luckily have an answer for smaller projects: my non-techy mother-in-law has hosted her own business WordPress site for years without any issues. It’s just a simple web hoster with SSH login.
I’ve been with DigitalOcean for more years than I can remember. I love DigitalOcean. Their core product is great, with a great UI and API, and their new products have been great as well. I’ve been using their managed K8s for a year or so now on a product with no issues.
I believe they have 1-click installs for WordPress.
Here’s a referral code for $200 over 2 months if anyone wants to try it:
I adore DO. They offer so many good products beyond VMs these days. Their K8s is cheap and their App Platform stuff is like baby Fargate, sort of. They even offer serverless as well. S3, RDS, NLBs, it’s all there 😎
I’m a huge fan of Fly.io. I deal with Kubernetes on AWS all day at work; Fly gives me the power of Kubernetes without the configuration hell that comes along with it. I just Dockerize my app and push it up.
Huge bonus points for multi-region support and Anycast IP addresses too. And they support IPv6, the lack of which is always a dealbreaker for me.
To me, a Linux user, Apple is more of a jail or a pusher. I don’t want to use it because of lock-in. Oh, you have an iPod? It’s much better with a Mac. An iPhone? All your friends should also have it, and now we have this special app you can only use properly with other Apple users.
Was going to say that I don't have the energy to be passionate about anything these days, but then I realised I'm quite happy - almost passionate, you might say - to turn that dispassion towards large organisations like Microsoft.
Main reason I started using Linux on my computers a few years ago. I also learned some shocking things about privacy that made me wanna switch. Linux runs most stuff anyone not in some weird niche could ever want nowadays anyway.
The funny thing about apple is just how far they’ve moved away from Jobs’ “vision”.
It was clearly evident when they released the Apple Pencil. Jobs hated styluses; that’s one of the reasons he killed the Newton, and a reason the iPhone/iPad never originally had one. He was quoted as saying “why would I need a stylus? I have 5 of them on my hand”.
I mean, Apple is like a cult that worships a dead god who would burn current Apple to the ground and start over if he came back.
Steve jobs also hated keyboards, or at least all the F1-12 keys because “nobody needs them”
About the “5 of them on my hand” quote: it really feels like he only ever cared about the lowest common denominator when it came to usability and function. Yes, you have 5 fingers, but to this day fingers lack precision on touch screens, while a pencil stylus is as precise as it can get.
The function keys let you access extra features or shortcuts in programs; most people never use them, or don’t know they might make them slightly quicker if they use the program a lot.
Steve Jobs only seemed to believe in supporting input methods he thought seemed most convenient for most people. Anything else was needlessly complicated and a waste of space. Some of his ideas about that come across as unusual, especially when things like space aren’t as limited.
Jobs also believed that 3.5" was the perfect touchscreen size for the human hand, neglecting the fact that (a) human hand sizes vary drastically and (b) people are willing to trade ergonomic perfection for more screen real estate because it’s more usable that way.
E2: of course that’s not why I told her. I explained how fastboot sometimes takes over and doesn’t actually restart the device, only “refreshes” the experience. I recommended she restart at least once a week. We’ll see what happens.
Idk how that person’s IT works, but in mine, that would probably warrant a lot of paperwork. The techs would have to pitch the change to client management, client management would have to pitch it to change management and provide test results to show it has no side effects, then deal with the techs complaining about the uptick in tickets about slow boot times or people justifying never shutting down or restarting with it taking so long to boot.
Not that they’re actually slow, our users are just super entitled. I got to observe the rollout of automatic screen lock for security reasons, and the ensuing pushback. The audacity of having to reenter your password if you’ve spent more than ten minutes doing nothing!
Security even managed to push for reducing it to five minutes after some unfortunate incident… but it got reverted for reasons you can probably guess. Hint: shit always flows downward.
I recommend looking into Windows Hello for Business to reduce the usage of passwords in the first place. It’s so much nicer to use your fingerprint, face, or even a PIN.
I would never consider fingerprints or face scans to be secure, even for personal devices. I guess it’s fine if there’s literally nothing to protect, if that’s possible.
I do understand the point that biometrics usually replace very short PINs, oftentimes only 4 digits, but I don’t quite see how that makes the passcodes worse than the biometrics.
I’d say even a 6-digit passcode with a randomized number pad, alongside an emergency wipe PIN, would do better than biometrics, which also need to have a passcode set up as backup anyhow.
Maybe you could play out a few scenarios that illustrate your point?
Why exactly do you think biometrics are so terrible? Is it because you could theoretically access someone’s computer when they are sleeping or something?
As far as I’m aware that is not the consensus in the industry. I even need biometrics (in combination with a card and a PIN) to enter a specific datacenter.
I do think that bringing up specialised and uncommon hardware like randomised number pads is out of scope. Are you talking about highly sensitive and restricted systems? I’m talking about normal user computers.
Randomized keypads are for touchscreens, although like you said they’re not common for desktop workstations.
Just comparing a password to biometrics on, say, a laptop or desktop: there is the major drawback that you can be forced, knowingly or unknowingly, to provide a biometric to unlock a device. It would be easier to circumvent than a standard password (at my company and the clients we work with, 16 characters is standard) with an encrypted hard drive.
This is all deduction I’ve made from other things I know to be true, though. If you happen to know of a resource that explains both methods of securing a workstation and the associated risks, I’d love to read it.
I also agree overall that passwordless makes the most sense now, as people are never going to get better at making secure passwords and remembering them.
Windows doesn’t actually shut down; it’s some kind of hybrid hibernation now. It only really reboots if you actually reboot, so they may actually be “shutting down” every day.
They have successfully circumvented the reboot. I just always turn that setting off. SSDs are ubiquitous, nobody needs a fake shutdown. It just causes more issues.
Nah, hackthebox and many other red team simulation type sites have strict rules of engagement. You’re there to solve a puzzle as defined by hackthebox, not get around the puzzle by hacking hackthebox.
Oh no, just like if you were actually hired to do a red team simulation for a business! They would have strict rules of engagement and certain systems would potentially be defined as off-limits.
How terrible of Hackthebox to *checks notes… promote industry standard Red Team practices.
Fucking awesome writing style there - and a lot of salient points. The only weakness is that it’s preaching to the choir - the use of jargon and technical references probably makes it inaccessible to anyone who doesn’t agree with its conclusion.
Right‽ This was seriously the best rant I’ve read in ages; not only was it spot on, it was fucking hilarious.
This has to be the best way I’ve seen anyone describe what the problem with the current AI woo-woo is:
And then some absolute son of a bitch created ChatGPT, and now look at us. Look at us, resplendent in our pauper’s robes, stitched from corpulent greed and breathless credulity, spending half of the planet’s engineering efforts to add chatbot support to every application under the sun when half of the industry hasn’t worked out how to test database backups regularly. This is why I have to visit untold violence upon the next moron to propose that AI is the future of the business - not because this is impossible in principle, but because they are now indistinguishable from a hundred million willful fucking idiots.
Can you tell that a known human being is not an ‘AI’ chatbot, based on text correspondence?
Apparently we are now just going to have AI simulacra of ourselves date each other on dating apps and meet with each other on zoom.
The meeting thing in particular is so fucking insane.
Problem: Meetings waste time and accomplish nothing!
Solution: Don’t hire or train competent people, instead, automate meetings, the transcripts of which will presumably still have to be read, and will likely not make any sense, thus necessitating more meetings.
The goal of technological civilization apparently truly is to create maximum misery via maximizing meetings.
Ok, so here is OpenAI wanting to make… well basically it seems to want to have not only an AI agent in a text support chatbox telling you how to fix a problem…
…but give it the ability to completely take over your computer and just do it for you, presumably via Remote Assistance and whatever the Mac equivalent is.
No way this could go wrong and lead to fake support sites just fucking writing a batch file and executing it in the blink of an eye.
Then we’ve got both Zoom and Otter who, yes, straight up want to build AI-powered avatars based on each employee/user and replace the human entirely in meetings.
Could AI personas attend your work meetings for you? One tech CEO says yes
One tech CEO has drain bamage, I take it. To paraphrase Charles Babbage, I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a statement.
Like, what the fuck is the point of this? If you think meetings are a problem and AI is the solution, there is a countably infinite number of ideas you could come up with that aren’t this idiotic.
Yeeeaaah you’re supposed to regularly test that you can actually restore your backups, because boy do a lot of companies find out they can’t only after shit goes sideways and to their horror they then realize that they can’t restore some system’s backups because reasons.
Not sure I’ve worked in a company that did that, and frankly even when I was CTO in a startup we didn’t have automated backup tests – mostly because it was still early days and I just manually tested restoring our in-house service when a change was made that would warrant it. N + 1 other things to do besides automating backup tests so I deemed that Good Enough™.
Python is NameError: name ‘term_to_describe_python’ is not defined
JavaScript is [object Object]
Ruby is TypeError: Int can’t be coerced into String
C is segmentation fault
C++
Java is Exception in thread "main" java.lang.NullPointerException: Cannot read the termToDescribeJava because it is null at ThrowNullExcep.main(ThrowNullExcep.java:7)
CSS j ust # sucks
<HTML />
Kotlin is type inference failed. The value of the type parameter K should be mentioned in input types
Go is unused variable
Rust is Compiling term v0.1.0 (/home/james/projects/Term)
I’ll happily download 63928 depends so long as it continues to work. And it does, unlike python projects that also download 2352 depends but in the process brick every other python program on your system
Crates aren’t exactly runtime dependencies, so I think that’s fine as long as the 1500+ dependencies actually help prevent reinventing the wheel 1500+ times
I once forgot to put curly braces around the thing I was adding into a hashmap. If I remember correctly it was ~300 lines of errors, none of which said “Wrong shit inside the function call, ma dude”.
This graph cuts off early. Once you learn that pointers are a trap for noobs that you should avoid outside really specific circumstances the line crosses zero and goes back into normal land.
C++ is unironically my favorite language. Coding in Python especially feels so ambiguous, and you need to take care of so many special cases that just wouldn’t even exist in C++.
You can absolutely read my code. The ability (similar to functional languages) to override operators like crazy can create extremely expressive code - making everything an operator is another noob trap… but using the feature sparingly is extremely powerful.
Typically, I can read an “average” open source programmer’s code. One of the issues I have with C++ is that the standard library source seems to be completely incomprehensible.
I recently started learning rust, and the idea of being able to look at the standard library source to understand something without having to travel through 10 layers of abstraction was incredible to me.
I wonder what went into their minds when they decided on coding conventions for C++ standard library. Like, what’s up with that weird ass indentation scheme?
One of the issues I have with C++ is that the standard library source seems to be completely incomprehensible.
AAAAAAhhh. I once read a Stroustrup quote essentially going “if you understand vectors you understand C++”, thought about that for a second, came to the conclusion “surely he didn’t mean using them, but implementing them”, then had a quick google. People said llvm’s libc++ was clean, so I had a look, and noped out of that abomination instantly. For comparison, Rust’s vectors: about the same LOC, yes, but the Rust one is like 80% docs and comments.
I think some of those abominational constructs were for compile-time errors. The inline visibility macro is for reducing binary size, allowing additional optimizations and improving performance and load time.
In my projects I set default visibility to hidden.
I’ve been using C++ almost daily for the past 7 years and I haven’t found a use for shared_ptr, unique_ptr, etc. At what point does one stop being a noob?
Given that you probably are using pointers, and occasionally you are allocating memory, smart pointers handle deallocation for you. And yes, you can do it yourself, but it is prone to errors: maybe sometimes you forget a case, memory doesn’t get deallocated, and suddenly there is a leak in the program.
From there, shared_ptr is used when you want to store the pointer in multiple locations, and unique_ptr when you only want to have one instance of the pointer (you can move it around though).
Smart pointers are really really nice, I do recommend getting used to them (and all other features from c++11 forward).
I would have said the same thing a few years ago, but after writing C++ professionally for a while I have to grudgingly admit that most of the new features are very useful for writing simpler code.
A few are still infuriating though, and I still consider the language an abomination. It has too many awful legacy problems that can never be fixed.
well, if I have an object on the heap and I want a lot of things to use it at the same time, a shared_ptr is the first thing I reach for. If I have an object on the heap and I want to enforce that no one else but the current scope can use it, I always reach for a unique_ptr. Of course, I know you know all of this, you have used it almost daily for 7 years.
In my vision, I could use a raw pointer, but I would have to worry about the lifetime of every object that uses it and make sure that it is safe. I would rather be confident that those bugs probably won’t happen, and focus my thinking time on fixing other bugs. Not to mention that with raw pointers the code might get more confusing, when I could instead explicitly specify what I want the object lifetime to be just by using a smart pointer.
Of course, I don’t really care how you code your stuff, if you are comfortable in it. Though I am interested in your point of view in this. I don’t think I’ve come across many people that actually prefer using raw pointer on modern C++.
Shared pointers are useful when multithreading. Imagine that you have a process controller that starts and manages several threads, which then run their own processes.
Some workflows might demand that an object is instantiated from the controller and then shared with one or several processes, or one of the processes might create the object and then send it back via callback, which then might get sent to several other processes.
If you do this with a raw pointer, you might end up in a race condition over when to free that pointer, and you will end up creating some sort of controller or wrapper around the pointer to manage which process is using the object and when it is time to free it. That’s a shared pointer; they made the wrapper for you. It manages an internal counter for every instance of the pointer: when an instance goes out of scope the counter goes down, and when it reaches zero the object gets deleted.
A unique pointer is for when, for whatever reason, you want processes to have exclusive access to the object. You might want the security that only a single process is interacting with the object, because it doesn’t handle being manipulated from several processes at once. With a raw pointer you would need to code a wrapper that ensures ownership of the pointer and provides ways to transfer it, so that you know which process has access to it at every moment.
In the example project I mentioned we used both shared and unique pointers, and that was in the first year of the job where I worked with C++. What was your job like, for you not to see the point of smart pointers after 7 years? All single-threaded programs? Maybe you use some framework that makes the abstractions for you, like Qt?
I hope these examples and explanations helped you see valid use cases.
First year programming in the late 90s… segmentation fault? I put printfs everywhere. Heh. You’d still get faults before the prints happened; such a pain to debug while learning. Though we weren’t really taught the point your comment makes at the time.
At least that was my experience on an AIX system; not sure if that was general or not. The crash before a print, I mean.
Yea, pointer arithmetic is cute, but at this point the compiler can do it better: just type everything correctly and use []… and, whenever possible, pass by reference!
Your graph also cuts out early. Eventually you want to get performance gains with multi-threading and concurrency, and then the line drops all the way into hell.
I’m not saying you can’t do multi-threading or concurrency in C++. The problem is that it’s far too easy to get data races or deadlocks by making subtle syntactical mistakes that the compiler doesn’t catch. pthreads does nothing to help with that.
If you don’t need to share any data across threads then sure, everything is easy, but I’ve never seen such a simple use case in my entire professional career.
All these people talking about “C++ is easy, just don’t use pointers!” must be writing the easiest applications of all time and also producing code that’s so inefficient they’d probably get performance gains by switching to Python.
That’s the problem of most general-use languages out there, including “safe” ones like Java or Go. They all require manual synchronization for shared mutable state.
There’s a difference between “You have to decide when to synchronize your state” and “If you make any very small mistake that appears to be perfectly fine in the absence of extremely rigorous scrutiny then this code block will cause a crash or some other incomprehensible undefined behavior 1/10000 times that it gets run, leaving you with no indication of what went wrong or where the problem is.”
I use thread sanitizer and address sanitizer in my CI, and they have certainly helped in some cases, but they don’t catch everything. In fact it’s the cases that they miss which are by far the most subtle instances of undefined behavior of all.
They also slow down execution so severely that I can’t use them when trying to recreate issues that occur in production.
They caught a lock inversion, which helped fix obscure hangs that I couldn’t reproduce on my machine but were constantly happening on machines with more cores.
programmer_humor