I genuinely believe something like this is what some of my professors wanted me to submit back in school. I once got a couple points off a project for not having a clarifying comment on every single line of code. I got points off once for not comment-clarifying a fucking iterator variable. I wish I could see what they would have said if I turned in something like this. I have a weird feeling that this file would have received full marks.
Did you have my professor for intro to C? This guy was well known for failing people for plagiarism on projects where the task was basically “hello world”. And he disallowed using if/else for the first month of class.
Reminds me of an early Uni project where we had to operate on data in an array of 5 elements, but because “I didn’t teach it to everyone yet” we couldn’t use loops. It was going to be a tedious amount of copy-paste.
I think I got around it by making a function called “not_loop” that applied a functor argument to each element of the array in serial. Professor forgot to ban that.
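A rough sketch of that workaround, in Python rather than the original C for brevity (the names and the 5-element constraint are taken from the anecdote, not real course code): a `not_loop` that applies a function to each of the 5 elements serially, with no loop keyword in sight.

```python
def not_loop(func, arr):
    """Apply func to each of exactly 5 elements, serially.
    Definitely not a loop -- the professor never banned
    passing a function around and calling it by hand."""
    func(arr[0])
    func(arr[1])
    func(arr[2])
    func(arr[3])
    func(arr[4])

results = []
not_loop(lambda x: results.append(x * 2), [1, 2, 3, 4, 5])
print(results)  # [2, 4, 6, 8, 10]
```

In the original C version this would presumably have been a function taking a function pointer (the "functor argument"); the trick is the same either way.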
but because “I didn’t teach it to everyone yet” we couldn’t use loops.
That is aggravating. “I didn’t teach the class the proper way to do this task, so you have to use the tedious way.” What is the logic behind that other than wasting everyone’s time?
Teaching someone the wrong way to do something frequently makes the right way make much more sense. Someone who has copy/pasted 99 near-identical if statements understands on a fundamental level when, why, and where to use a for loop, much more than someone who just read in the textbook that "a for loop is used to iterate elements in a collection".
And if I know the right way of doing it I already understand why it’s better because I want to use it in this situation. Making the students who already understand the lesson do it the wrong way is just a waste of their time.
Before I block you I’ll be kind and make one genuine attempt to help you learn:
Just like nobody is required to invite someone into their home, nobody is required to listen to someone either. And nobody is required to let them loiter on their property (the server) and act like a douche to their guests (the users).
They are facing the consequences of their actions. People don’t like them and instead of considering why that might be and adjusting, you simply complain that people are kicking them out of the party.
They are not owed an audience. They do not have a right to be heard.
I literally do not care what you think, lemmyshitpost loser. People are mad about hexbear, which I just ignore; you can too. Most of all though, I just think you are a whiny bitch. Please block me and spare me having to read your stupid bullshit ever again.
The karens want to defederate from lemmy.ml for being a "proxy" of hexbear. It is the most ridiculous thing I have ever read, and so painfully reddit that I had to make a shitpost.
I really don't understand the calls for defederation. Lemmy added per-user instance blocking so people could stop whining and block the instance themselves. There's no need for the entire instance to defederate instead of users just blocking it.
Just think about a user making a post to whatever admin community for their instance. That takes so much more work than just blocking the instance.
Because deplatforming works. Because tolerating intolerance eventually results in the tolerant being extinguished.
If I’m hosting a party and there’s a Nazi on my front lawn, I don’t care if I and my guests can mute them, block them, whatever. I’m going to get rid of them. I don’t want new guests seeing them when they arrive. I don’t want every single person to have to be exposed to the Nazis first before they can then block them out. I don’t want the Nazis to exist at all. Nazis don’t deserve to exist. We went to war to kill Nazis and I’d vote to do it again if I could.
It’s our house. Our community. No. Fucking. Nazis. No toxicity. We don’t have to suffer them to exist.
I try to stick with libvirt/virsh (https://manpages.debian.org/bookworm/libvirt-clients/virsh.1.en.html) when I don't need any graphical interface (it integrates beautifully with ansible [1]), or when I don't need clustering/HA (libvirt does support "clustering" in at least some capacity: you can live migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc.). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice GUI added. I've heard that cockpit can be used as a web interface but have never tried it.
Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).
Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.
In my experience and for my mostly basic needs, major differences between libvirt and proxmox:
The "clustering" in libvirt is very limited (no HA, automatic fencing, Ceph integration, etc., at least out of the box). I basically use it to 1. administer multiple libvirt hypervisors from a single libvirt/virt-manager instance, and 2. migrate VMs between hosts (they need to be using shared storage for disks, etc.), but it covers 90% of my use cases.
On proxmox hosts I let proxmox manage the firewall, on libvirt hosts I manage it through firewalld like any other server (+ libvirt/qemu hooks for port forwarding).
On proxmox I use the built-in template feature to provision new VMs from a template, on libvirt I do a mix of virt-clone and virt-sysprep.
On libvirt I use virt-install and a Debian preseed.cfg to provision new templates, on proxmox I do it… well… manually. But both support cloud-init based provisioning so I might standardize to that in the future (and ditch templates)
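For reference, cloud-init provisioning on either platform boils down to feeding the VM a small user-data file. A minimal sketch (all values here are placeholders, not taken from any real setup):

```yaml
#cloud-config
# Hypothetical example values; adapt hostname, user and key to your setup.
hostname: template-vm
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@workstation
packages:
  - qemu-guest-agent
```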
LXD/Incus provides a management and automation layer that really makes things work smoothly, essentially replacing Proxmox. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Those are just a few things you can do with it and not with pure KVM/libvirt. It also has a web UI for those interested.
A big advantage of LXD is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
Incus isn't about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it is about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, let's say, Proxmox, and I can guarantee you that most people running that today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no holding back critical fixes for paying users, and way less overhead.
I should RTFM again… manpages.debian.org/bookworm/…/virsh.1.en.html has options for virsh migrate such as --copy-storage-all… Not sure how it would work for actual live migrations, but I will definitely check it out. Thanks for the hint
Pretty darn well. I actually needed to do some maintenance on the server earlier today, so I just migrated all of the VMs over to my desktop, did the server maintenance, and then moved the VMs back over to the server, all while live and functioning. A ping running in the background looks like it missed a handful of packets while the switches figured their life out, and then everything was right back where it was; not even long enough for uptime-kuma to notice.
Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.
Maybe you should consider consolidating into Incus. You're already running LXC containers, so why keep using and dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?
Hey look, it's the Incus guy. Every time this topic comes up, you chime in and roast Proxmox and its potential issues, with a link to a previous comment roasting Proxmox and its potential issues, and at no point do you go into what those potential issues are, outside of the broad catch-all term "bloat".
I respect your data center experience, but I wish you were more forward with your issues instead of broad, generalized terms.
As someone with much less enterprise experience, but with small-business IT administration experience: how does Incus replace ESXi for virtual machines, coming from the "containerization is the new hotness but doesn't work for me" angle?
Now, I'm on my phone so I can't write that much, but I'll say that the post I linked to isn't about potential issues; it goes over specific situations where it failed (ZFS, OVPN, etc.), though I obviously won't provide anyone with crash logs and kernel panics.
About ESXi: Incus provides you with a CLI and web interface to create, manage and migrate VMs. It also provides basic clustering features. It isn't as feature-complete as ESXi, but it gets the job done for most people who just want a couple of VMs. At the end of the day it is more in line with what Proxmox offers than with ESXi, BUT it's effectively free, so it won't hold back important updates from users running on free licenses.
If you list what you really need in terms of features, I can point you to documentation or give my opinion on how they compare and what to expect.
The migration is bound to happen in the next few months, and I can’t recommend moving to incus yet since it’s not in stable/LTS repositories for Debian/Ubuntu, and I really don’t want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of a major clusterfuck that is upgrade management (or the lack of). I really want to standardize on official distro repositories. On the other hand the current LXD packages are provided by snap (…) so that would still be an improvement, I guess.
Management is already sold to the idea of Proxmox (not by me), so I think I’ll take the path of least resistance. I’ve had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with… do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I’d still like to put a word of caution about that.
DO NOT migrate / upgrade anything to the snap package. That package is from Canonical and post-dates the Incus fork; this means that if you go for it, you may never be able to migrate to Incus afterwards, and/or you'll become a hostage of Canonical.
About the rest: if you don't want to add repositories, you should migrate to the LXD LTS from the Debian 12 repositories. That version is and will remain compatible with Incus; both the Incus and Debian teams have said so multiple times and are working on a migration path. For instance, the LXD from Debian will still be able to access the Incus image server, while the Canonical one won't.
DO NOT migrate / upgrade anything to the snap package
It was already in place when I came in (made me roll my eyes), and it’s a mess. As you said, there’s no proper upgrade path to anything else. So anyway…
you should migrate into LXD LTS from Debian 12 repositories
The LXD version in Debian 12 is buggy as fuck; this patch has not even been backported (github.com/canonical/lxd/issues/11902) and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and doesn't inspire confidence in the bug testing and patching process for this particular package. On top of it, it will be hard to convince the other guys that we should ditch Ubuntu and its shenanigans and migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I'm starting to see there's strong resistance to change, so as I said, path of least resistance.
Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?
So you say it is "buggy as fuck" because there's a bug that makes it so you can't easily run it if your locale is different from English? 😂 Anyways, you can create the bridge yourself and get around that.
"buggy as fuck" because there's a bug that makes it so you can't easily run it if your locale is different from English?
It sends pretty bad signals when it causes a crash on the first lxd init (sure, I could make the case that there are workarounds: switch locales, create the bridge… but that doesn't help make it appear as a better solution than Proxmox). Whatever you call it, it's a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one. So for now, hard pass on the Debian package (I might file a bug on the BTS later).
About the link, Proxmox kernel is based on Ubuntu, not Debian…
Thanks for the link, mate. Proxmox kernels are based on Ubuntu's, which are in turn based on Debian's; I'm not arguing about that - but I was specifically referring to this comment:
having to wait months for fixes already available upstream or so they would fix their own shit
any example/link to bug reports for such fixes not being applied to proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.
Well what I can say is that since my team migrated everything to LXD/Incus the amount of tickets that are somehow related to the virtualization solution we used dropped really fast. Side note: we started messing around with LXD from Snap (but running under Debian) and moved to Debian 12 version as soon as it was made available.
About the kernel things: my upstream-fix comment was about how Canonical / Ubuntu does things. They usually come up with some "clever" idea to hack something and implement it, then upstream actually solves the issue after proper evaluation, and Ubuntu just takes that and replaces their quick hack. This happens quite frequently, and it's not always a few lines of code; for instance, it happened with the mess that shiftfs was, and then the kernel guys came up with a real solution (idmapped mounts), and now you see Canonical going for it. Proxmox inherits the typical Canonical mess.
We use cockpit at work. It’s OK, but it definitely feels limited compared to Proxmox or Xen Orchestra.
Red Hat's focus is really on OpenStack, but that's more of a cloud virtualization platform, so not all that well suited for home use. It's a shame, because I really like Cockpit as a platform. It just needs a little love in terms of things like the graphical console and editing virtual machine resources.
The "clustering" in libvirt is limited to remote-controlling multiple nodes and migrating VMs between them. To get the high-availability part you need to set it up through other means, e.g. Pacemaker and a bunch of scripts.
Type 1 runs on bare metal: you install it directly onto server hardware. Type 2 is an application (not an OS) that lives inside an OS; regardless of whether that OS is itself a guest or a host, the hypervisor is a guest of that OS, and the VMs inside it are guests of that hypervisor.
The previous comment is an excellent summary. It is worth noting that there are some type 1 hypervisors that can look like type 2s. Specifically, KVM in Linux (which sometimes gets referred to as Virt-manager, Virtual Machine Manager, or VMM, after the program typically used to manage it) and Hyper-V in Windows.
These get mistaken for type 2 hypervisors because they run inside of your normal OS, rather than being a dedicated platform that you install in place of it. But the key here is that the hypervisor itself (that is, the software that actually runs the VM) is directly integrated into the underlying operating system. You were installing a hypervisor OS the whole time, you just didn’t realise it.
The reason this matters is that type 1 hypervisors can operate at the kernel level, meaning they can directly manage resources like your memory, CPU and graphics. Type 2 hypervisors have to queue with all the other pleb software to request access to these resources from the OS. This means that type 1 hypervisors will generally offer better performance.
With hypervisor platforms like Proxmox, ESXi, Hyper-V Server Core, or XCP-ng, what you get is a type 1 hypervisor with an absolutely minimal OS built around it. Basically, just enough software to do the job of running VMs, and nothing else. Like a drag racer.
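One concrete consequence of KVM being baked into the kernel: from inside a running Linux system you can check whether the type 1 hypervisor is available just by looking for its device node. A minimal sketch (the `/dev/kvm` path is standard; everything else here is illustrative):

```python
import os

def kvm_available() -> bool:
    """True if the running kernel exposes the KVM device node,
    i.e. the 'type 1' hypervisor was part of the OS you booted
    all along."""
    return os.path.exists("/dev/kvm")

print(kvm_available())
```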
It's like a first try at a hypervisor. Terrible UI, with machine config scattered around. Some stuff can only be done on the command line after you search the web for how to do it (basic stuff, like running headless by default). Enigmatic error messages.
I wouldn’t say you HAVE to know the original but it’s pretty common in memes for text to be replaced and the person doing the replacing to not know the original font.
He spent a bunch of dough on that hair, so I believe he is likely aware of his appearance. He is pale and overweight because he spends a bunch of time playing video games, being a Nazi on his failing social media site, and (based on this picture) eating french fries.
Maybe at some point his vanity will compel him to get liposuction or just go full orange man with spray tan and wear baggy suits with ties that are too long. I don’t see him being like Bezos or Zuck and getting into shape.
Lol, that's just what happens if you follow Orion. I quit due to that, but left before the rollout of the camera. I assume it was delivered and he just had to take a picture to complete the stop? Maybe he didn't realize it until the next stop.
They don’t have “priorities”. They just don’t want to govern. They don’t want there to be a government. Every action they take is consistent with “undermine the concept of government”.
Any action that actually helped people would be governing. They don’t want to do that. It’s not priorities; if it were priorities, the Republican party would occasionally run the pool of abuse dry and be left with positive changes they could make, and then they’d work on those briefly until the bullshit-laws queue filled back up.
But that never happens, because it’s not priorities. It’s just being the opposite of whatever you think government is.
This country is full of people being governed who want a government, but they don’t understand that there’s an entire party that doesn’t want that.
Yes. They want paychecks without work, responsibility, or blame.
They don’t want there to be a government.
No. I see no evidence of that. Every chance people get to raise military or police spending or make up new laws to restrict people’s choices, they take it.
The problem with that train of thought is that they were already rich when they got the job. Many of them actually spend more to get the job than the job even pays. They aren't there for the pay; they are there for the power plays. Once you get in, you know people, and people mainly help those they are close to. Money is a means to an end (and integral to the storage process), but it's all about power and connections. Why else would someone pay $1M of their own money for a 50/50 shot at a job that pays $1.2M over 4 years?
I don’t really know how people can even use YouTube without ad blockers. Sitting through minutes of advertisement is not going to make me want to buy your product if I start mentally associating your product with frustration and annoyance. If these video ads are going to be repetitive and annoying, at least make them funny.
It seems like there is nowhere on the Internet to get away from ads currently, even here, where you thought you are safe, you are now reading an ad for my newest movie (you know the one), now also available on streaming!
I would like to imagine a world where site advertising was reasonable: ad blockers don't exist, sites run 1 or 2 banners at $3-5 CPM, and everyone gets paid and consumes content in harmony. It won't happen. Advertising is set up around ad-blocking audiences and iOS cookieless environments; everyone else subsidizes them by viewing the myriad of placements splattered all over the page.
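Back-of-the-envelope, with entirely made-up numbers, that model looks like this: a modest site could cover its costs with just a couple of banners at those rates.

```python
# Hypothetical figures: 100k monthly pageviews, 2 banners, $4 CPM
pageviews = 100_000
banners = 2
cpm_usd = 4.0  # CPM = cost per 1000 impressions

impressions = pageviews * banners
revenue_usd = impressions / 1000 * cpm_usd
print(revenue_usd)  # 800.0
```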
people say “yeah i stayed up too late last night, I went down a youtube rabbithole you know?” and I’m like No! I actually don’t know! How the fuck are you watching crap on youtube for hours??
When people tell me YouTube is one of their most used sites I’m just like “WHY”. It’s such a waste of time. I’d rather be on Reddit(using boost because I made my own sub just to continue using it) or Lemmy than YouTube.
At least I get useful information from forums/whatever you would call reddit/lemmy.
What I don’t get, is that people whine and bitch on every post about YouTube doing stupid shit to fuck over the content creators, and then when it’s suggested that they stop using YouTube, it’s:
”NOOOOOOOOOOOOOoooooooooooo! Not my precious YouToobes!”
Sitting through minutes of advertisement is not going to make me want to buy your product if I start mentally associating your product with frustration and annoyance.
Thing is, it’s actually very easy to quantify whether or not these ads produce enough sales to justify their spend.
They piss off people, but they also work and drive sales that more than make up for the ad spend.
There are places like that, even with YouTube, but you usually have to pay rather than use the ad-supported free product. (Assuming ad blockers don’t work well any longer)
Sitting through minutes of advertisement is not going to make me want to buy your product if I start mentally associating your product with frustration and annoyance.
The thing is, we all like to think we’ll do this, but our frustration just gets mapped on to YouTube itself. In 6 months, after the specific frustration has long passed, the influence of that ad will still be there, and who you’ll remain wary of is YouTube.
Agreed. Also, it’s not YouTube’s content to sell. YouTube doesn’t create content. It only exists because we (the users) upload the content to make that possible.
They want you to remember their products. For example, you know that Grammarly is a text correction service. If the ads didn't exist you wouldn't know that, so now if you want a service like this, instead of searching "top autocorrect tools" or something similar, you'll search "grammarly download".
Paying Google to stop shoving ads in my face doesn't feel like a good purchase, and I don't want to support that kind of behavior; besides, I'm smart enough to use uBlock Origin and ReVanced (a little bit of a struggle, though).
@MargotRobbie As a creative on strike I would have thought you would have felt some solidarity with the content creators who make 50% of the ad revenue you are withholding from them by blocking ads.
I pay for premium because I can't stand ads but I do want to support creators with a share of my subscription, even though I know it is less than they would have made if I watched the ads. I thought maybe you would feel the same given you aren't hurting for money either.
I know it is not a perfect system, but I do appreciate the content creators I watch enough to want them to get paid. I subscribe to Nebula too for this reason, though I admit I should use it more.
Can I discourage rolling your own password manager (like using a text doc or spreadsheet) and instead recommend what you hopefully meant, self-hosting your own password manager?
The only annoying part about the modern world is that you want that keepass file synchronized between devices, at which point you either go down the path of something like Syncthing (not mainstream user friendly) or you just end up asking yourself "fine, what cloud service do I trust not to go looking at my files?"
I always synced my database manually either directly over usb, or wifi (KDE Connect). I have to admit that it’s not really user friendly, but once I got used to it, it’s no problem at all.
And uploading it to any cloud service should be fine as long as it’s encrypted with a strong password. But that kind of defeats the point of an offline password-manager in my opinion.
Good advice, but only for tech-savvy people who are interested in self-hosting. There are so many things that can go wrong, like improper backups and accidental networking problems.
Well, you can. But you have to be PERSONALLY hacked. At which point you’re at a level of risk equal to “will my house burn and my notebook full of passwords get lost?”
And here's a reminder that trusting a centralized service with high-security access control is usually a bad idea.
I stay away from LastPass for the same reasons I stay away from TeamViewer. Security through obscurity, on top of decoupling my security interests from everyone else's, means that other people being attacked is much less likely to cause me harm at the same time.
Offline password managers like KeePassXC are a thing, and self-hosted remote storage like Nextcloud means you're not worried about any third-party interference.
I use Pleasant Password Manager, which is keepass-compatible. Big fan of the offline cache with online sync: access anywhere with an internet connection, plus offline access on my phone.
And at least for LastPass, no passwords were compromised. Saying they "were hacked" and leaving out the extent of the hack implies something worse, IMO; it's misleading. The vaults themselves are E2E encrypted, so they also don't have your password.
That said, my vote goes to Bitwarden, as it's open source and allows self-hosting if you think you're a more reliable admin than they are. Open source plus more choice is always better.
This is true, but they have your encrypted vault, and all the technical data needed to make unlimited informed attempts at cracking it. If you used LastPass, you definitely need to be changing passwords for your critical services, at a minimum.
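For context on why "unlimited informed attempts" matters: vault keys are typically derived from the master password with an iterated KDF such as PBKDF2, and the iteration count is the main thing slowing offline guessing down. A minimal sketch with illustrative parameters (this is the general technique, not LastPass's actual scheme):

```python
import hashlib

def derive_vault_key(master_password: str, salt: bytes, iterations: int) -> bytes:
    """Stretch a master password into a 32-byte vault key.
    Higher iteration counts make every offline guess cost more CPU time,
    which is the only defense once an attacker holds the encrypted vault."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

key = derive_vault_key("correct horse battery staple", b"per-user-salt", 600_000)
print(len(key))  # 32
```

Older vaults configured with very low iteration counts are proportionally cheaper to attack, which is why the iteration settings came up repeatedly in post-breach discussion.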
Just this month a link was made between $35 million in crypto being stolen and the 150 victims being LastPass users.
In 2022, LastPass was compromised through a developer's laptop and had customer data stolen: emails, names, addresses, partial credit cards, website URLs, and most importantly vaults. And given they're closed source, have no independent audits, and don't release white papers, we have no idea how good their encryption schemes actually are, nor whether they have any obvious vulnerabilities.
In 2021, users were warned their master passwords were compromised.
In 2020 they had an issue with the browser extension not using the Windows Data Protection API and just saving the master password to a local file.
What will 2024 bring for LastPass? They were hacked, and there's no reason to think they won't see more breaches of confidential customer information, and even passwords, in the future. This is a repeated pattern, and I'd sooner trust a Post-it note on my monitor for security than LastPass at this point.
The only problem with their SSH agent: if you store, say, 6 keys and the server is set to accept a maximum of 5 key attempts before booting you, and the correct key happens to be key number 6, you can end up being IP-banned.
This happened to me on my own server :P
That being said, my experience was using the very first GA release of their SSH Agent, so it’s possible the problem has been sorted by now.
It's extremely easy to get your password out of Firefox from behind the *** if it autofills. It requires physical access, but literally takes seconds: right-click the field, choose Inspect, and change the field type from password to text.
On mobile, I'm assuming. I personally don't know a way to bypass the fingerprint locks. And if you're also having Firefox create random, difficult passwords, it's significantly better than reusing the same one, so you're probably a much harder target than the majority of people. I'd have to double-check, but I think even on desktop, if you have a master password for Firefox and don't just have logins autofilled, you're probably good there too.