Fucking good. They should go down in flames for what Broadcom is doing to VMware. Our company switched off of it too. Not as large, but we have a couple thousand servers, and they are all now slowly moving to Hyper-V.
I’m convinced VMware started downhill when they dropped the thick Windows client for the web-based admin panel.
They claimed it was for multi-OS compatibility… but they wrote the thing using ActiveX. For the youngsters, ActiveX shit was Internet Explorer and Microsoft only. So the idiots wrote a UI that still only worked on Windows, and was now five times slower than the thick client.
BTW, I run Proxmox clusters in my garage. It’s awesome.
I’m honestly glad he got slapped with such a huge bill. Maybe it will prompt other corporations to start putting real money into the open source projects all their billion-dollar businesses are built off of.
We rent our servers from Ionos, and the price hike came as a complete surprise. Luckily Ionos absorbed some of the increase themselves, but had I been ready with a different provider, I’d have switched in a blink. It seems the price hike was a surprise to Ionos as well, and I’m sure as hell hoping they are working on adding another hypervisor.
Really looking forward to seeing more Rancher Harvester clusters out there.
VMware stuff is a pain to work with, and open source and more modern systems are needed anyway. Really want to see all of the crazy powerful stuff people will do when VMs are just another type of container.
I don’t understand diddly about the specifics of this article (I’m a member of the normie minority on this site who is neither working in IT, nor interested in the field), but I gotta say, I loved how it was structured and written. In a sea of AI-generated crap, or simply parroting talking heads and calling it news, I found the way they laid out the article in two parts (“this is what happened”, followed by “this is our subjective opinion on those events based on the wider context”) to be very refreshing.
In large scale computing, a server will have VERY powerful hardware. You can run multiple VMs on that one machine, giving a slice of that power to each VM, so that you basically end up with multiple individual computers running on one very powerful set of hardware instead of building a ton of individual machines.
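As a rough sketch of that slicing idea (the host specs and VM names here are made up for illustration, not from any real hypervisor API):

```python
# Hypothetical example: carving one powerful host into several VM-sized slices.
# All numbers and VM names are invented for illustration.

HOST = {"cores": 64, "ram_gb": 512}

vms = {
    "web-01":   {"cores": 8,  "ram_gb": 32},
    "db-01":    {"cores": 16, "ram_gb": 128},
    "build-01": {"cores": 24, "ram_gb": 64},
}

# Add up what has been handed out so far.
used_cores = sum(v["cores"] for v in vms.values())
used_ram = sum(v["ram_gb"] for v in vms.values())

print(f"cores: {used_cores}/{HOST['cores']} allocated")
print(f"ram:   {used_ram}/{HOST['ram_gb']} GB allocated")
```

Each VM behaves like its own computer, but all three slices above come out of the same 64-core, 512 GB box.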
The other key feature being cost. A VDI terminal is much cheaper than actual PCs for employees. When I was working IT for a large company, we were able to get them in bulk for about $100 each. A PC cost us at least $800.
Similar to docker, but the technical differences matter a lot. VMs have a lot of capabilities containers don’t have, while missing some of the value of being lightweight.
However, a more direct (if longer) answer would be: all cloud providers ultimately offer you VMs. You can run docker on those VMs, but you have to start with a VM. Self-hosted stuff (my homelab, for example) will also generally end up as a mix of VMs and docker containers. So no matter what project you’re working on at scale, you’ve probably got some VMs around.
Whether you then use containers inside them is a more nuanced and subtle question.
Running a virtual server allows you to run a server application on its own virtual machine; this eliminates the chance that (when running multiple applications on one server) the underlying requirements for each application conflict.
In comparison to docker, the full server can offer more native capabilities for some applications, while other applications simply only run on a full OS.
So by virtualizing the servers, one large piece of hardware can be used to run multiple servers, and you can (sometimes dynamically) allocate resources as needed.
The backups can consume all computing power out of office hours, while the other applications share it during office hours as needed… sometimes a bit more for VM A and sometimes a bit more for VM B.
Of course, monitoring overallocation is a thing, as you might end up with bottlenecks caused by peak loads that occur at the same time… the issue would be bigger when running on dedicated hardware.
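The overallocation point can be sketched as a simple overcommit check (the core counts and the warning threshold below are illustrative assumptions, not from any real monitoring tool):

```python
# Hypothetical overcommit check: compare the total vCPUs handed out to VMs
# against the host's physical cores. A ratio above 1.0 is common and fine,
# but the higher it goes, the more that simultaneous peak loads can cause
# bottlenecks.

physical_cores = 32
vm_vcpus = [8, 8, 4, 16, 8, 4]  # made-up per-VM allocations

overcommit = sum(vm_vcpus) / physical_cores
print(f"vCPU overcommit ratio: {overcommit:.2f}")

# Arbitrary illustrative threshold; real environments tune this to workload.
if overcommit > 3.0:
    print("warning: heavy overcommit; watch for co-occurring peak loads")
```

The same idea applies to RAM: the sum of what the VMs *could* use can safely exceed the physical total, as long as they don’t all peak at once.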
And of course, having multiple hardware platforms interconnected allows a VM to be moved from one hardware platform to another without interruption (license required), meaning you can perform hardware maintenance without an outage.
In my workplace, we worked tirelessly to get rid of all VMware VMs as fast as possible when the new pricing became clear. Thousands migrated. What a huge fuckup by Broadcom.
We were very *very* close to replacing our ~700-office Cisco SD-WAN environment with VeloCloud, which is owned by VMware. The Broadcom merger put the brakes on the project completely; they missed out on a few million dollars on that effort alone. The Velo guys were totally in the dark about what was coming down the pipe for them. Broadcom forced them to change hardware vendors on day one, for example.
What solution are you looking towards? I work in a massive organization with 20,000+ VMs, and we’ve been having weekly virtual working groups across the country (our overseas depts have been doing their own) to try to find other solutions. We haven’t been very successful, as the biggest pitfall we’ve seen is that no one offers lifetime licenses, so if we don’t renew a yearly maintenance contract, our VMs will stop functioning properly. That’s one of the main reasons we’re looking to off-board from VMware.
Fuck Broadcom. I liked VMware and their products and actually paid for them as a consumer. Broadcom is a ham-fisted money grabber and cares little about anything else. This will not end well for any of the businesses they sell to. Why? As Maya Angelou said: *“When someone shows you who they are, believe them the first time.”* They’re focused on milking the cow dry, not spending money on anything (despite their R&D claims). They have a history, have straight up said who they are before, and have said who they’re planning to continue to be. Flee while you can.