TCB13 , to selfhosted in virtualizing PFSense. What else works besides ESXi for virtual networking?

Yes, it does run, but BSD-based VMs running on Linux come with their usual quirks. This might be what you’re looking for: discuss.linuxcontainers.org/t/…/15799

Since you want to run a firewall/router, you can ignore LXD’s networking configuration and use your opnsense to assign addresses and whatnot to your other containers. You can create whatever bridges / VLAN-based interfaces you need on your base system and then assign them to profiles/containers/VMs.

For example: create a cbr0 network bridge using systemd-networkd and then run lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0. This will use cbr0 as the default bridge for all machines, and LXD won’t provide any addressing or touch the network; it will just create an eth0 interface on those machines attached to the bridge. Then your opnsense can sit on the same bridge and do DHCP, routing, etc. Obviously you can also pass through entire PCI devices to VMs and containers if required.
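A minimal sketch of that setup, assuming systemd-networkd manages the host’s interfaces; the file names are illustrative, and the bridge deliberately gets no host-side addressing since opnsense will handle DHCP:

# /etc/systemd/network/cbr0.netdev: define the bridge device
[NetDev]
Name=cbr0
Kind=bridge

# /etc/systemd/network/cbr0.network: bring it up without any addressing
[Match]
Name=cbr0

[Network]
DHCP=no
LinkLocalAddressing=no

# reload networking, then attach all LXD/Incus machines to cbr0 by default
systemctl restart systemd-networkd
lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0

Opnsense then sits as a VM with a NIC on the same cbr0 (plus, if you want, a second NIC or a passed-through PCI device for the WAN side).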

When you’re searching around for help, instead of “Incus” you can search for “LXD”, as it tends to give you better results. Not sure if you’re aware, but LXD was the original project run by Canonical; recently it was forked into Incus (maintained by the same people who created LXD at Canonical) to keep the project open under the Linux Containers initiative.

TCB13 , (edited ) to selfhosted in Broadcom yanks ESXi Free version, effective immediately

LXD uses QEMU/KVM for its VMs, so the performance is at least on par with any other QEMU-based solution like Proxmox; the real difference is that LXD has a much smaller footprint and doesn’t depend on 400+ daemons, so it boots and runs management operations much faster. The virtualization tech is the same and the virtualization performance is the same.
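To illustrate the point (a minimal sketch; assumes the images: remote is available, which is the default on Incus): containers and VMs are driven by the exact same tooling, and only the --vm flag puts QEMU/KVM underneath.

# system container: shares the host kernel, near-zero overhead
lxc launch images:debian/12 ct1

# virtual machine: same CLI and profiles, QEMU/KVM under the hood
lxc launch images:debian/12 vm1 --vm

# both appear side by side; the TYPE column tells them apart
lxc list -c nst4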

Here’s one of my older LXD nodes running HA:

https://lemmy.world/pictrs/image/266e723e-62f9-4ca5-86eb-b0ce45a7f342.png

It’s “so hard” to run HA under LXD… you just have to download the official HA OS image and import it into LXD.
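Roughly, the import looks like this (a sketch rather than my exact steps: the release version, pool name and disk path are placeholders, and the path assumes a plain dir-backed storage pool on a non-snap install; adjust for ZFS/LVM):

# grab the official HAOS VM image (qcow2) from the releases page
wget https://github.com/home-assistant/operating-system/releases/download/<version>/haos_ova-<version>.qcow2.xz
unxz haos_ova-<version>.qcow2.xz

# create an empty VM shell; HAOS ships its own bootloader, so disable secure boot
lxc init haos --empty --vm -c security.secureboot=false
lxc config device override haos root size=32GiB

# convert the qcow2 to raw and write it over the VM's root disk
qemu-img convert -O raw haos_ova-<version>.qcow2 /var/lib/lxd/storage-pools/default/virtual-machines/haos/root.img

lxc start haos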

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

proxmox

LXD/Incus

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

He should really switch to LXD/Incus, not Proxmox, as the latter will end up like ESXi one day.

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

Or… LXD/Incus.

TCB13 , to piracy in Best website for downloading a pirated copy of Windows?

You don’t download pirated copies, for your own safety.

Download official ISOs from massgrave.dev (preferably the business editions and install Enterprise,… or Pro) and then use the provided activator on the same website.

Side note: Enterprise is the same as Pro but with some of the telemetry / garbage disabled by default (and the rest can be further disabled), so you can safely use it.

TCB13 , to memes in 2024 is going to be the beginning of the end of us all

I’m not against anything you’ve just said.

TCB13 , to selfhosted in New home server: what hypervisor/OS?

My bad, typo.

TCB13 , to selfhosted in virtualizing PFSense. What else works besides ESXi for virtual networking?

Ahahaha, that’s up to you. All the best for your shoulder!

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

It’s not just the level of distrust, it’s the fact that we eventually moved all those nodes to LXD/Incus and the amount of random issues in day-to-day operations dropped to almost zero. LXD/Incus covers the same ground feature-wise (with a very few exceptions that frankly didn’t work properly under Proxmox either), is free, more auditable and performs better under the continuous high loads you expect in a datacenter.

When it performs that well in the extreme case, why not use it for self-hosting as well? :)

I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it.

Well, you can always virtualize it under a Proxmox node to get familiar with it ahaha

TCB13 , to selfhosted in virtualizing PFSense. What else works besides ESXi for virtual networking?

And in about 2 years you’ll switch to LXD/Incus. :P

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

What makes you think that can’t happen to something just because it’s open source? And of all companies, it’s from Canonical.

You better review your facts.

It was originally made mostly at Canonical, however I was NOT ever suggesting you run LXC/LXD from Canonical’s repos. The solution is available in Debian’s repositories, and besides, LXD was forked into Incus by the people who originally made LXC/LXD while working at Canonical; they now work full time on the Incus project, away from Canonical, keeping the solution truly open.
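For what it’s worth, a minimal sketch of what “available in Debian’s repositories” means in practice (assuming Debian 12 or later; exact package availability depends on your release, and “youruser” is a placeholder):

# LXD as packaged by Debian itself, no snap and no Canonical repos involved
apt install lxd

# or, on newer releases / backports, the Incus fork
apt install incus
adduser youruser incus-admin   # allow a non-root user to manage Incus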

It’s “Selfhosted” not “SelfHostedOpenSourceFreeAsInFreedom/GNU”. Not everyone has drank the entire open source punch bowl.

Dude, I use Windows and a ton of proprietary software, I’m certainly not Richard Stallman. I simply used Proxmox for a VERY LONG time professionally and at home, migrated everything gradually to LXD/Incus, and it performs a LOT better. Being truly open source and not a potential CentOS/ESXi also helps.

TCB13 , (edited ) to selfhosted in Broadcom yanks ESXi Free version, effective immediately

comment history keeps taking aim at Proxmox. What did you find questionable about them?

Here’s the thing: I ran Proxmox from 2009 until the end of last year, professionally, in datacenters, with multiple clusters of around 10-15 nodes each. I’ve been around for all the wins and fails of Proxmox: I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it is built upon Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top of it. I’ve been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

At some point not even simple things such as OpenVPN worked fine under Proxmox’s kernel. Realtek networking was probably broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you would get a half-broken system that is able to boot and pass a few tests but will randomly fail a few days later. Their startup is slow, slower than any other solution; it even includes daemons that are there just to ensure that other things are running (because most of them don’t even start properly with the system on the first try).

Proxmox is considerably cheaper than ESXi, so people use it in some businesses, like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a container solution that was very good and much better than OpenVZ and co. was born. LXC got stable and widely used, and LXD came along with the higher-level hypervisor management, networking, clustering, etc.; since the fork we now have all that code truly open source, with its creators working on the project without Canonical’s influence.

There’s no reason to keep using Proxmox now that LXC/LXD has got really good in the last few years. Once you’re already running on LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated and free?

I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

Well, if you have some time to spare on testing stuff, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you’ll run a mixed environment like I did for a long time.

TCB13 , to selfhosted in Broadcom yanks ESXi Free version, effective immediately

Because you don’t care about it being open source?

If you’re okay with the risk of one day ending up like the people running ESXi now, then you should be fine. Let’s say that not “ending up with your d* in your hand” when you least expect it is also a pretty big motivating factor to move away from Proxmox.

Now, I don’t see how, in a self-hosting community on Lemmy, someone would bluntly state what you just did.
