
jellyfish,

This sounds like a fun project! I recently ripped out and redid the network segmentation on my 3-node Proxmox cluster too.

Originally I had everything in a /16, but that was causing routing problems because I needed to static-route a /24 inside that /16 to a VM for VPN. Anyway, I’m going to try to dig through your post and give some advice. This is all just personal opinion on how I’d set things up after over a decade of homelabbing/home infra, so ya know, take and leave what you want.
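As a side note on why that static route works at all, here’s a tiny sketch with Python’s `ipaddress` module. The prefixes are made up for illustration, not my actual ones:

```python
import ipaddress

# Hypothetical version of the situation above: a /24 carved out of a /16.
lan = ipaddress.ip_network("10.0.0.0/16")
vpn = ipaddress.ip_network("10.0.50.0/24")  # static-routed to the VPN VM

# The /24 sits inside the /16...
print(vpn.subnet_of(lan))  # True

# ...so the router needs a more-specific route for it. Longest-prefix
# match means the /24 route wins over the /16 for any destination
# inside 10.0.50.0/24.
dest = ipaddress.ip_address("10.0.50.7")
routes = [lan, vpn]
best = max((n for n in routes if dest in n), key=lambda n: n.prefixlen)
print(best)  # 10.0.50.0/24
```

That longest-prefix-match behavior is why a /24 static route can “carve out” part of a larger /16 without touching anything else.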

It sounds like you want to use one of your Proxmox nodes with a VM running OPNsense as your router? I’d highly discourage this. I know you call your setup a lab, but it’s running the *arrs and probably a streaming server, and there’s nothing worse than planning a movie night and having your networking be down. Dedicated hardware also makes it easier to recover from a power outage or hardware failure, keeps your network config much simpler, and provides a physical boundary between machines, which improves security.

So, I’d say unless you’re fine with the possibility of extended outages, use dedicated hardware for your network. I’m partial to pfSense’s Netgate appliances: a good price, a lot of bang for your buck, and they come from an awesome open-source project. I use UniFi myself, though I wouldn’t necessarily recommend it due to some shady stuff the company has done/said over the last few years.

OPNsense looks neat, but the only reason I see to use it over pfSense is the integrated IDS/IPS, which is just a nice GUI over Suricata and a Proofpoint ruleset subscription. Personally I’d run Suricata in a VM and mirror WAN traffic to it via pfSense. That way no VM is in your critical network path, but IDS is still available and easy to manage.

Don’t forget: when you separate stuff into VLANs, inter-VLAN traffic is forced up to the router and back down to the switch. That means it’s capped at your router’s port speed, typically 1 Gbit/s. So if you ever upgrade your servers with 10 Gbit NICs and your VLANs are set up wrong, you won’t see that performance. And if you just have a lot of traffic, you’ll start seeing TCP slow starts and retransmissions, which can play havoc on your network. That’s why many people just don’t bother with VLANs: you get network isolation, but at the cost of extra routing.
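Some rough arithmetic for why that hairpin matters. Idealized numbers with no protocol overhead, and the 50 GB figure is made up, just to show the scale:

```python
# Back-of-the-envelope transfer times for the inter-VLAN hairpin
# described above. Assumes ideal line-rate throughput (no TCP/IP
# overhead), purely to illustrate the order of magnitude.

def transfer_seconds(size_gigabytes: float, link_gigabits_per_s: float) -> float:
    """Time to move size_gigabytes over a link of link_gigabits_per_s."""
    return size_gigabytes * 8 / link_gigabits_per_s

movie = 50  # GB, a hypothetical large file moving between hosts

# Hairpinned between VLANs through a 1 Gbit router port:
print(transfer_seconds(movie, 1))   # 400.0 seconds

# Staying on one VLAN over 10 Gbit NICs, switch-local:
print(transfer_seconds(movie, 10))  # 40.0 seconds
```

Same file, 10x the wall-clock time, purely because the traffic had to cross a VLAN boundary through a slower router port.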

As for routing, all VLANs will route between each other automatically. Obvious as it is, just think of two VLANs as two separate physical switches plugged into the same router. By default those two switches can communicate with each other through the router, but they can’t communicate directly with one another (which would have higher throughput/bandwidth).
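To make the switch-vs-router distinction concrete, here’s a tiny sketch of the decision a host makes with every packet. The addresses and the `via_router` helper are made up for illustration:

```python
import ipaddress

# A host's own VLAN, modeled as a /24 (hypothetical addresses).
services = ipaddress.ip_network("10.99.3.0/24")
desktop = ipaddress.ip_address("10.0.1.20")   # on a different VLAN
neighbor = ipaddress.ip_address("10.99.3.10")  # on the same VLAN

def via_router(src_net: ipaddress.IPv4Network,
               dst: ipaddress.IPv4Address) -> bool:
    """A host compares the destination against its own network mask:
    anything outside the local network goes to the default gateway."""
    return dst not in src_net

print(via_router(services, desktop))   # True: crosses VLANs, hairpins via router
print(via_router(services, neighbor))  # False: same segment, stays on the switch
```

That mask comparison is the whole mechanism: same subnet means switch-local delivery, different subnet means the packet goes to the gateway, which is why inter-VLAN traffic always pays the router toll.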

DMZs are interesting; in my mind they come from a time when networks had a hard shell and a soft interior (security-wise). Basically, a DMZ (demilitarized zone) is a VLAN where you’d put stuff like a mail server, DNS servers, and maybe an HTTP server: things you want to expose to the internet as well as to your local network. The idea is that if one of those servers were compromised, you wouldn’t want it to have full access to your local network, so you split them off into a DMZ where a compromised host doesn’t give attackers a good base to pivot from.

I don’t bother with a DMZ because I have host-level firewalls and network firewalls doing LAN segmentation, but that isn’t to say it’s a bad idea if you’re up for it. The only service I expose to the internet is a VPN, and a VPN by definition needs a lot of access to my local network to be useful, so I don’t partition it off into its own DMZ. I’m not a network admin, though, so that’s just my interpretation of it.

As for structure, this is where I ended up:

  • 10.0.0.0/24 - LAN management - Stuff like UniFi/pfSense admin panels
  • 10.0.2.0/23 - LAN - Where most of my normal stuff goes: desktops, laptops, phones, etc.
  • 10.99.0.0/24 - OOB Administration - Things like IPMI and BMCs end up here
  • 10.99.1.0/24 - Administration - Things like Proxmox VMs end up here
  • 10.99.2.0/24 - Core network - Things like VPN, DNS, backups, basically important network services.
  • 10.99.3.0/24 - Services - Things like the *arrs end up here. I actually run K8s via Kubespray in Proxmox, so for me this is my MetalLB service IP range.
  • 10.99.100.0/24 - VPN IP pool - I give my VPN clients static IPs instead of masquerading them: each client gets an IP out of this pool, the VPN instance acts as a router, and I static-route from my main router back to the VPN instance.

And I have a separate /24 for my 10 Gbit network for Ceph.
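If you end up with a plan like this, it’s worth sanity-checking that none of the ranges overlap, since overlapping prefixes make routing ambiguous. A quick sketch with Python’s `ipaddress` module using the ranges above — the Ceph /24 is a made-up placeholder, and the LAN block is written as 10.0.2.0/23 because a /23 has to start on an even third octet:

```python
import ipaddress
from itertools import combinations

# The subnet plan above, with one placeholder for the Ceph network.
plan = {
    "LAN management":     "10.0.0.0/24",
    "LAN":                "10.0.2.0/23",
    "OOB administration": "10.99.0.0/24",
    "Administration":     "10.99.1.0/24",
    "Core network":       "10.99.2.0/24",
    "Services":           "10.99.3.0/24",
    "VPN IP pool":        "10.99.100.0/24",
    "Ceph (placeholder)": "10.10.0.0/24",
}

# ip_network() is strict by default: it rejects CIDRs that don't start
# on a valid network boundary (e.g. 10.0.1.0/23 would raise ValueError).
nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}

# Every pair of networks should be disjoint.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"
print("no overlaps")
```

Handy to re-run whenever you add a subnet; it catches both overlaps and invalid network boundaries before they become routing headaches.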

So yeah! I don’t know if that helps at all; feel free to ask questions to clarify. If you still really want to run OPNsense in a VM, I can give you some tips on that as well.
