

TCB13 ,
@TCB13@lemmy.world avatar

It depends on what you’re self-hosting and if you want / need it exposed to the Internet or not. When it comes to software, the current hype is to set up a minimal Linux box (old computer, NAS, Raspberry Pi) and then install everything using Docker containers. I don’t like this Docker trend because it 1) leads you towards a dependence on proprietary repositories and 2) robs you of the experience of learning Linux (more here), but it does lower the bar for newcomers and lets you set something up really fast. In my opinion you should be very skeptical about everything that is “sold to the masses”: just go with a simple Debian system (command line only), SSH into it and install what you really need, take your time to learn Linux and whatnot.

Strictly speaking about security: if we’re talking about LAN-only, things are easy and you don’t have much to worry about, as everything will be inside your network and thus protected by your router’s NAT/firewall.

For internet facing services your basic requirements are:

  • Some kind of domain / subdomain, paid or free;
  • Preferably a home ISP that provides public IP addresses - no CGNAT BS;
  • Ideally a static IP at home, but you can do just fine with a dynamic DNS service such as freedns.afraid.org.
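For illustration, the dynamic DNS part can be as simple as a cron job hitting your account’s update URL (the token below is a placeholder you get from your freedns.afraid.org account page):

```
# /etc/cron.d/freedns - refresh the dynamic DNS record every 5 minutes
*/5 * * * * root curl -s "https://freedns.afraid.org/dynamic/update.php?YOUR_UPDATE_TOKEN" >/dev/null
```

The update daemon mentioned in the checklist below does the same job with change detection, so it only fires when the IP actually changes.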

Quick setup guide and checklist:

  1. Create your subdomain on the dynamic DNS service freedns.afraid.org and install the update daemon on the server - it will update your domain with your dynamic IP when it changes;
  2. List what ports you need remote access to;
  3. Isolate the server from your main network as much as possible. If possible, have it on a different public IP, either using a VLAN or, better yet, an entire physical network just for that - this avoids VLAN hopping attacks and DDoS attacks on the server that would also take your internet down;
  4. If you’re using VLANs, configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this will make sure that if your server is hacked, the attacker won’t be able to access the switch’s UI and reconfigure their own port to access the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
  5. Configure your ISP router to assign a static local IP to the server and port forward what’s supposed to be exposed to the internet to the server;
  6. Only expose required services (nginx, game server, program X) to the Internet. Everything else, such as SSH, configuration interfaces and whatnot, can be moved to another private network and/or a WireGuard VPN you connect to when you want to manage the server;
  7. Use custom ports with 5 digits for everything - something like 23901 (up to 65535) to make your service(s) harder to find;
  8. Disable IPv6? Might be easier than dealing with a dual stack firewall and/or other complexities;
  9. Use nftables / iptables / another firewall and set it to drop everything but those ports you need for services and management VPN access to work - 10 minute guide;
  10. Configure nftables to only allow traffic coming from public IP addresses (IPs outside your home network / VPN range) to the WireGuard or required service ports - this will protect your server if by some mistake the router starts forwarding more traffic from the internet to the server than it should;
  11. Configure nftables to restrict what countries are allowed to access your server. Most likely you only need to allow incoming connections from your country; more details here.
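As a rough sketch of steps 9-11 (the interface, ports and ranges below are examples you’d adapt to your own setup):

```
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept
        iif "lo" accept

        udp dport 23901 accept          # WireGuard on a custom 5-digit port
        tcp dport { 80, 443 } accept    # only the services you really expose
        icmp type echo-request accept   # keep basic diagnostics working
    }
}
```

Country filtering (step 11) is typically done by loading per-country IP ranges into an nft set and matching on it.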

Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. Here’s a decent setup guide, and you might use this GUI to add/remove clients easily.

Don’t be afraid to expose the WireGuard port: if someone tries to connect and doesn’t authenticate with the right key, the server will silently drop the packets.
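For the record, a minimal server-side WireGuard config for the “few friends” scenario might look like this (keys, addresses and the port are placeholders):

```
# /etc/wireguard/wg0.conf on the server
[Interface]
Address = 10.8.0.1/24
ListenPort = 23901          # custom 5-digit port, as suggested above
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per friend
PublicKey = <friend-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0` and hand each friend a matching client config.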

Now, if your ISP doesn’t provide you with a public IP / port forwarding abilities, you may want to read this in order to find out why you should avoid Cloudflare Tunnels and how to set up an alternative / more private solution.

TCB13 ,
@TCB13@lemmy.world avatar

Oh well, if you think you’re good with Docker, go ahead and use it; it does work, but it has its own dark side.

cause its like a micro Linux you can reliably bring up and take down on demand

If that’s what you’re looking for, maybe a look at Incus/LXD/LXC or systemd-nspawn will be interesting for you.

I hope the rest can help you have a more secure setup. :)

Another thing you can consider: instead of exposing your services directly to the internet, use a VPS as a tunnel / reverse proxy for your local services. This way only the VPS IP will be exposed to the public (and it will be a static and stable IP) and nobody can access the services directly.

client —> VPS —> local server

The TL;DR is: install a WireGuard “server” on the VPS and have your local server connect to it. Then set up something like nginx on the VPS to accept traffic on ports 80/443 and forward it to whatever you’re running on the home server through the tunnel.
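Assuming the home server is reachable as 10.0.0.2 through the tunnel (addresses, names and ports here are just examples), the VPS side can be as simple as:

```
# /etc/nginx/sites-available/home - on the VPS
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://10.0.0.2:8080;   # home server through the WireGuard tunnel
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```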

I personally don’t think there’s much risk in exposing your home IP as part of your self-hosting, but some people do. It also depends on what protection your ISP may offer and how likely you think a DDoS attack is. If your ISP provides you with a dynamic IP it may not even matter, as a simple router reboot should give you a new IP.

TCB13 ,
@TCB13@lemmy.world avatar

That looks like a DDoS. For instance, that doesn’t ever happen on my ISP as they have some kind of DDoS protection running, akin to what we would see on a decent cloud provider. Not sure what tech they’re using, but there’s certainly some kind of rate limiting there.

  1. Isolate the server from your main network as much as possible. If possible, have it on a different public IP, either using a VLAN or, better yet, an entire physical network just for that - this avoids VLAN hopping attacks and DDoS attacks on the server that would also take your internet down;

In my case I can simply have a bridged setup where my internet router gets one public IP and the exposed services get another / different public IP. If there’s ever a DDoS, the server might be hammered with requests and go down, but unless they exhaust my full bandwidth my home network won’t be affected.

Another advantage of having a bridged setup with multiple IPs is that when there’s a DDoS / brute-force attack your router won’t have to process all the incoming requests; they’ll get dispatched directly to your server without wasting your router’s CPU.

As we can see, this thing about exposing IPs depends on very specific implementation details of your ISP or your setup, so… it may or may not be dangerous.

TCB13 ,
@TCB13@lemmy.world avatar

Is there a use case for CrowdStrike on any platform? No, there isn’t. Anything that messes with the kernel at that level should be considered a security threat on the basis of potential service disruption / threat to business continuity. Do you really want to run a closed source piece of malware as a kernel module?

They completely fuck over their customers in the business continuity aspect; they become the problem, and I bet most companies would never suffer a catastrophic failure this bad if they didn’t run their software at all. No hacker would be able to take down so many systems so fast and so hard.

TCB13 ,
@TCB13@lemmy.world avatar

While I don’t totally disagree with you, this has mostly nothing to do with Windows and everything to do with a piece of corporate spyware garbage that some IT manager decided to install. If tools like that existed for Linux, doing what they do to the OS, trust me, we would be seeing kernel panics as well.

TCB13 ,
@TCB13@lemmy.world avatar

Yeah, you’re right.

TCB13 ,
@TCB13@lemmy.world avatar

Fair enough.

Still, this fiasco proved once again that the biggest threat to IT is sometimes on the inside. At the end of the day a bunch of people decided to buy CrowdStrike and got screwed over. Some of them actually had good reasons to use a product like that; for others it was just paranoia and FOMO.

TCB13 ,
@TCB13@lemmy.world avatar

I believe most regulated ccTLDs (not the ones sold to the highest bidder) actually do that.

TCB13 ,
@TCB13@lemmy.world avatar

After some time, the domain fully expired and GoDaddy decided to buy it as soon as it did,

Oh yeah, that’s what happens when you pick scammy domain registrars. It is very possible that Epik auctioned your domain (after all, they kept it past the expiry date and paid fees) and then GoDaddy snatched it. This is what usually happens.

TCB13 ,
@TCB13@lemmy.world avatar

You’re missing the point, it wasn’t bought by GoDaddy. Epik auctioned the domain to GoDaddy after it expired; it’s common for registrars to sell domains to each other so they don’t get a bad reputation and make people think what you’re thinking.

TCB13 ,
@TCB13@lemmy.world avatar

The good part: two garbage apps will be gone from windows 😂

PSA: GoDaddy gated their own API. DDNS users warned (loudwhisper.me)

GoDaddy really lived up to its bad reputation and recently changed their API rules. The rules are simple: either you own 10 (or 50) domains, you pay $20/month, or you don’t get the API. I personally didn’t get any communication, and this broke my DDNS setup. I am clearly not the only one judging from what I found online. A...

TCB13 ,
@TCB13@lemmy.world avatar

Andrew complains, Microsoft makes a root mode so Andrew can have his way. Andrew breaks his computer the next second by deleting a system file and proceeds to call Microsoft support. :)

TCB13 ,
@TCB13@lemmy.world avatar

Most of the annoying stuff that Linux users hate about Windows are because Windows has to cater to even the least technologically knowledgeable users.

Isn’t that the whole idea of GNOME? Always considering users as stupid and lowering the bar?

TCB13 ,
@TCB13@lemmy.world avatar

Yes, LetsEncrypt with DNS-01 challenge is the easiest way to go. Be it a single wildcard for all hosts or not.

Running a CA is cool; however, just be aware of the risks involved with running your own CA.

You’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If your private key gets stolen somehow and you don’t notice it, someone might be issuing certificates that are valid for those machines. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate is revoked and no longer trusted by anyone after the fact.

TCB13 ,
@TCB13@lemmy.world avatar

Just be aware of the risks involved with running your own CA.

You’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If your private key gets stolen somehow and you don’t notice it, someone might be issuing certificates that are valid for those machines. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate is revoked and no longer trusted by anyone after the fact.

For what it’s worth, Let’s Encrypt with the DNS-01 challenge is way easier to deploy and maintain on your internal hosts than adding a CA and dealing with all the devices that might not like custom CAs. It’s also more secure.
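For example, with certbot and a DNS provider plugin (Cloudflare here purely as an example - any supported provider works, and the domain is a placeholder), a wildcard certificate is a single command:

```
apt install certbot python3-certbot-dns-cloudflare
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'internal.example.com' -d '*.internal.example.com'
```

Since the challenge is answered via a DNS TXT record, none of the internal hosts need to be reachable from the internet.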

TCB13 ,
@TCB13@lemmy.world avatar

While I agree with you, an attacker may not need to go to such lengths in order to get the private key. The admin might misplace it or have a backup somewhere in plain text. People also aren’t prone to looking at logs, and it might be too late when they actually notice that the CA was compromised.

Managing an entire CA safely and deploying certificates > complex; getting Let’s Encrypt certificates using DNS challenges > easy.

TCB13 ,
@TCB13@lemmy.world avatar

SMTP with good deliverability and whatnot is entirely possible; it just takes an IP with a good reputation and enough patience to read and understand the ISPmail guide and a few other details. Running a CA is a security vulnerability and a major pain if you plan to deploy it to the devices of your entire family.

TCB13 ,
@TCB13@lemmy.world avatar

I want the WAN coming in from the router from the Pi’s Ethernet port, and the LAN coming out as Wi-Fi. I may also stick an additional Ethernet adapter to it in the future.

Can you try to explain this a bit more?

TCB13 ,
@TCB13@lemmy.world avatar

Anything with GNOME is visually appealing but unfortunately the usability is pure garbage. KDE is the exact opposite, and Xfce is quick but sits in an awkward place.

TCB13 ,
@TCB13@lemmy.world avatar

What do you do if you want to find the IP address of an instance, but incus list does not give you one?

If that’s the case then it means there’s no networking configured for the container or inside it. The image you’re using may not come with DHCP enabled or networking at all.

I often just find the IP of the container and then ssh in as that feels natural, but perhaps I am cutting against the grain here.

You are. You aren’t supposed to SSH into a container… it’s just a waste of time. Simply run:


```
lxc exec container-name bash  # or sh depending on the distro
```

And you’ll be inside the container much faster and without wasting resources.

TCB13 ,
@TCB13@lemmy.world avatar

Well, it’s a container; in most situations you would be running as root, because the root inside the container is an unprivileged user outside of it. So in effect the root inside the container will only be able to act as root inside that container and nowhere else. Most people simply do it that way and don’t bother with it.

If you really want, there are ways to specify the user… but again, there’s little to no point:


```
lxc exec container-name --user 1000 bash
lxc exec container-name -- su --shell /bin/bash --login user-name
```

For your convenience you can alias that in your host’s ~/.bashrc with something like:


```
lxcbash() { lxc exec "$1" -- su --shell /bin/bash --login "$2"; }
```

And then run like:


```
lxcbash container-name user-name
```
TCB13 ,
@TCB13@lemmy.world avatar

You can run full GUI apps inside LXC containers and have X11 deal with the rest. Guides here and here.

Why do you still hate Windows?

I realize this is a Linux community, but I was wondering why you still hate Windows. I mean, I love Linux, but I will not argue that it’s more convenient to the average person in most use cases to use Windows, I recently had to switch back to Windows and I realized how convenient it all was and how I was missing so many things...

TCB13 ,
@TCB13@lemmy.world avatar

The ads in win10 pushed me to the limit

Never seen them. But Microsoft does document how to disable everything you would like to.

I just don’t get why the same people who bitch a lot about Windows (not you) are unable to install Windows 10 Enterprise and read the manual, BUT are able to jump between 30 different Linux distros and spend 100x more time customizing their DE and dealing with Wine / virtualization crap. Ironic.

TCB13 ,
@TCB13@lemmy.world avatar

Linux is great, and does a lot of stuff right… however…

I just don’t get the people around there sometimes. They’re okay with spending 1000+ hours jumping between 30 different Linux distros, customizing their DE and dealing with Wine / virtualization crap, BUT they aren’t able to install Windows 10 Enterprise and read the manual to get a clean, usable system in 1/1000 of the time and effort.

How ironic.

TCB13 ,
@TCB13@lemmy.world avatar

Never seen that guide. Does it actually work?

Yes, best results with Enterprise.

It won’t implode, and it becomes a zero maintenance OS.

Windows out of the box is full of crap, but we all know that a lot of large companies use it, and Microsoft is kinda forced into making it feasible enough for those companies. If you’re managing, let’s say, 500+ machines you can’t deal with the bullshit that comes with Windows 10 Home / Pro and systems that break every week.

There are also a lot of govt agencies and private companies with very strict security policies that can’t just allow Windows to connect to MS and leak information around. If you simply disable what you don’t need by following that manual things will really work out.

In the corporate world those changes are typically applied using AD; however, if you apply them manually in group policy they’ll stick and you won’t be bothered. Don’t forget to check the link every time there’s a major version, because they usually add stuff.

I installed Windows 10 Enterprise 1709 on my main desktop in 2018 and applied the stuff documented there… I’ve been upgrading since then and it’s currently running 22H2 just fine. No policy regressions like some people claim.

Microsoft is forced to provide ways for big customers to make Windows usable and those aren’t going away anytime soon, they’ve a financial incentive to do so.

TCB13 ,
@TCB13@lemmy.world avatar

I am assuming that is on purpose?

Most likely; “normies” don’t even know Enterprise exists…

With that said, you may find links here:

massgrave.dev/windows_10_links

Business ISO includes both Pro and Enterprise versions. On the same website you can find activation tools including HWID that will give you a valid digital license for your hardware that will survive a reinstallation of windows.

Just as a note, if you have any Windows 10 Pro machines around you can upgrade them to Enterprise by just changing the key to a generic one under Settings. A clean install of Enterprise would be better, but you can still do it that way if you don’t want the trouble / to spend more time on it.

TCB13 ,
@TCB13@lemmy.world avatar

No. It means that if you upgrade a system from 21H2 to 22H2, Microsoft may have added new stuff that you have to review, because if you connect it to the internet right away those new “features” may connect to Microsoft.

Consider this example: Windows 11 before and after the Copilot shit. You can completely disable Copilot and other AI features using group policy; however, if you’re on the “before” version you can’t disable the feature because it isn’t there yet. If you upgrade, the features will be there with defaults, and on first boot it might greet you with a “welcome to Copilot” that will connect to Microsoft.

TCB13 ,
@TCB13@lemmy.world avatar

And that’s okay; however, those same people are the ones saying Windows is unusable because it would take a very long time to disable analytics. This is the thing: people aren’t consistent.

TCB13 ,
@TCB13@lemmy.world avatar

“oh but Debian only has old stuff” , yeah sure. :P

TCB13 ,
@TCB13@lemmy.world avatar

I know, I know, but trust me that a lot of people believe that they don’t issue security patches fast.

TCB13 ,
@TCB13@lemmy.world avatar

So I want to get back into self hosting, but every time I have stopped is because I have lack of documentation to fix things that break. So I pose a question, how do you all go about keeping your setup documented? What programs do you use?

Joplin or Obsidian? Or… plain markdown files with your favorite text editor.

TCB13 ,
@TCB13@lemmy.world avatar

If you want a git “server” quick and low maintenance then gitolite is most likely the best choice. gitolite.com/gitolite/index.html

It simply acts as a server that you can clone with any git client, and the coolest part is that you use git commits to create repositories and manage users as well. Very low or no maintenance at all. I’ve been using it personally for years, but I’ve also seen it used at some large companies, because it simply gets the job done and doesn’t bother anyone.
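For example, creating a repository and granting access is just a commit to conf/gitolite.conf in the gitolite-admin repository (the repo and user names below are examples):

```
repo my-project
    RW+     =   alice       # push, rewind and delete branches
    RW      =   bob         # push only
    R       =   @all        # anyone with a registered key can clone
```

Push that commit and gitolite creates the repo and applies the permissions.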

TCB13 ,
@TCB13@lemmy.world avatar

Maybe the NextCloud guys will follow… oh wait that would just be yet another perpetually half-finished NC thing.

TCB13 , (edited )
@TCB13@lemmy.world avatar

Wait, is there a SA Mod Manager type of thing / installer / recommended mods for Sonic Adventure 2?? Is the Steam version any good (compared to the Dreamcast version)? Or does it have all the colors wrong and audio messed up?

TCB13 ,
@TCB13@lemmy.world avatar

+1 for this. This is kinda the same issue with encoding, just UTF-8 everything and move on.

Pros and cons of Proxmox in a home lab?

Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab set up. It seems like in most home lab setups it’s overkill. But I feel like there may be something I’m missing. Let’s say I run my home lab on two or three different SBCs. Main server is an x86 i5 machine with 16gigs memory and the others...

TCB13 , (edited )
@TCB13@lemmy.world avatar

I like the web UI as well, but since i use an iPhone i wasn’t really able to be able to set up the browser with the cert

One thing you can do (that I have in a corporate setting) is to set up a reverse proxy in front of the WebUI and have it manage user authentication. Essentially nginx authenticates users against the company Keycloak IdP, which provides SSO and whatnot. You can also do it with simple HTTP basic auth or some simpler solution like phpAuthRequest.
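A minimal sketch of the HTTP basic auth variant (the hostname, upstream port and file paths are examples):

```
server {
    listen 443 ssl;
    server_name ui.example.com;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd -c

    location / {
        proxy_pass https://127.0.0.1:8443;       # the WebUI, not exposed directly
        proxy_set_header Host $host;
    }
}
```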

thanks again for the recommendation.

You’re welcome, enjoy.

TCB13 OP ,
@TCB13@lemmy.world avatar

To be honest I felt a bit lost on MacOs Catalina and felt like everything was difficult compared to Gnome.

Just because you aren’t used to the macOS workflow, it doesn’t mean it is bad - that’s the same argument you GNOME fanboys make about Windows users ;)

But I guess Gnome is taking a lot of inspiration from the MacOs aesthetic, and it’s okay with me because it looks great.

Yes, it’s okay, and that was never an issue in this discussion. The issue is that they didn’t take enough inspiration from basic UX patterns.

TCB13 OP ,
@TCB13@lemmy.world avatar

it is trivial to disable all animations

Yeah, you can go into settings and toggle a switch, however it doesn’t disable everything.

When you go into Settings > Accessibility > Enable Animations and disable it, one would expect ALL animations to be disabled, while in fact they aren’t. It should behave like Xfce, that is: click on something and get the instant result - no delay, no tiny animation / fade like GNOME still does.

Bottom line: that option in GNOME is misleading and doesn’t do what it advertises.

TCB13 OP ,
@TCB13@lemmy.world avatar

My point is: if you want to copy / be inspired by others at least do it right.

TCB13 OP ,
@TCB13@lemmy.world avatar

But something that’s different would rationally be called not copying, whereas you categorize it as poor copying. Interesting.

I would categorize it as poor copying because the copy doesn’t conform to the design / UX patterns that were present in the “original” work.

TCB13 OP ,
@TCB13@lemmy.world avatar

That’s what the paper was about; the rule also applies in reverse: adding the space protects the user because it makes it harder to accidentally click on the hitbox.
