Years ago, older C programmers told me you don’t know C unless you use dynamic memory management. I ended up rarely writing any C, but when I do, it’s usually on microcontrollers where dynamic memory management isn’t even supported out of the box.
Though as a non-embedded dev who has interviewed embedded candidates, I like to ask them to talk through the issues around C vs C++ for embedded, and the first point 8 out of 10 of them make is that C++ is bad because dynamic allocation is bad. While they could expand on that to almost make the point make sense, they generally can't, and they stumble when I point out that dynamic allocation is just as optional in both languages.
Can you give some examples of what you consider to be the issues?
My professor said that C++ embedded compilers used to be very buggy but matured quite a lot starting around 10 years ago, while C compilers had been stable for much longer.
Another thing I could think of is the language complexity causing higher resource usage, e.g. by pulling in large libraries, though I'm not sure about that, since most of the unused stuff should theoretically get optimized out by the compiler and linker.
I guess if you don't know roughly how the internals of some C++ data types work, you could accidentally use dynamic memory allocation when using strings or vectors.
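For example (a minimal hypothetical sketch, not from anyone's actual code), the allocations hide inside the standard containers rather than in the code you write:

```cpp
#include <array>
#include <cstdio>
#include <string>
#include <vector>

void log_reading(int value) {
    std::vector<int> history;   // allocates from the heap as soon as
    history.push_back(value);   // the first element is inserted

    // std::string heap-allocates once the text outgrows its small-string buffer
    std::string msg = "sensor reading: " + std::to_string(value);

    // allocation-free alternatives often preferred on microcontrollers:
    std::array<int, 16> fixed_history{};  // storage lives inside the object itself
    char buf[32];
    std::snprintf(buf, sizeof buf, "sensor reading: %d", value);
}
```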
On the other side, C++-style casts provide more safety than C-style casts, and C++ lets you use references instead of raw pointers, which makes the code generally safer.
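A quick illustration of both points (the types here are made up):

```cpp
struct Sensor { int id; };
struct Motor  { int rpm; };

void example(Sensor* s) {
    Motor* a = (Motor*)s;   // C-style cast: compiles silently, reinterpreting
    (void)a;                // a pointer to a completely unrelated type

    // Motor* b = static_cast<Motor*>(s);  // C++ cast: compile error, caught early
}

// A reference parameter can't be null, so the callee needs no null check:
int rpm_of(const Motor& m) { return m.rpm; }
```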
Yeah, I get where they're coming from. In typical use cases, C is often paired with static allocation (correlated with minimal/embedded devices), while C++ is often paired with dynamic allocation (correlated with enterprise/GUI applications).
Of course you can use either for either purpose, but that pattern seems more common. That being said, I’d be concerned with applicants who don’t understand that.
On November 12th, 2012, YouTuber LifeAccordingToJimmy posted a video titled “Don’t Stop the Music,” a skit based on the awkward moments caused when the music stops at a party and a story one is telling is overheard by others. In the sketch, the music stops as the main character says something particularly strange, causing the partygoers to stare at him. The video gained over 4.2 million views (shown below).
Edit: Okay, I clicked the link before you edited it and just watched the whole video. That was actually kind of funny, but still not as cool as I hoped.
Is that just like the shared memory model of parallel computing, or are there added complications? Have you done this before? Please do share your experiences if so, because now I'm interested :p
It’s similar, but the general idea of a hypervisor is to separate resources and avoid this exact situation (it’s nuanced and there are some exceptions, but that’s the general use case).
The added complication is that when you compile a binary for one virtual machine, the compiler may optimize things, blissfully unaware that other players could be touching that memory. In a typical multithreaded program, the compiler at least sees the whole program, and the language gives you tools to declare shared access; with a hypervisor, each guest binary is compiled in isolation, so that sharing has to be declared manually. If you configure your hypervisor to share resources, you have to be even more vigilant in configuring the individual compilers to play nice.
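Concretely (a hypothetical sketch; the address is made up), if a guest polls a flag in a memory region the hypervisor shares with another guest, the compiler is free to read it once and spin forever, unless you tell it the value can change behind its back:

```cpp
#include <cstdint>

// hypothetical address of a region the hypervisor maps into two guests;
// nothing in *this* guest's code ever writes to it
constexpr std::uintptr_t SHARED_FLAG_ADDR = 0x80000000;

void wait_plain() {
    auto* flag = reinterpret_cast<std::uint32_t*>(SHARED_FLAG_ADDR);
    while (*flag == 0) { }  // load may be hoisted out of the loop: spins forever
}

void wait_volatile() {
    auto* flag = reinterpret_cast<volatile std::uint32_t*>(SHARED_FLAG_ADDR);
    while (*flag == 0) { }  // volatile forces a fresh read every iteration
}
```

(volatile only forces the re-read; it gives no ordering guarantees, so real producer/consumer signaling would still want atomics or memory barriers.)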
I don’t have a ton of experience with embedded hypervisors, though. And it’s worth noting that there are lots of “hypervisors” out there, and some work very differently from others.