
TCB13 ,

It depends on what you’re self-hosting and if you want / need it exposed to the Internet or not. When it comes to software, the current hype is to set up a minimal Linux box (old computer, NAS, Raspberry Pi) and then install everything using Docker containers. I don’t like this Docker trend because it 1) leads you towards a dependence on proprietary repositories and 2) robs you of the experience of learning Linux (more here), but it does lower the bar for newcomers and lets you set something up really fast. In my opinion you should be very skeptical about everything that is “sold to the masses”: just go with a simple Debian system (command line only), SSH into it and install what you really need, and take your time to learn Linux and whatnot. A few notable tools you may want to self-host include Syncthing, FileBrowser, FreshRSS, Samba shares, Nginx, etc., but it all depends on your needs.

Strictly speaking about security: if we’re talking LAN-only, things are easy and you don’t have much to worry about, as everything stays inside your network and is thus protected by your router’s NAT/firewall.

For internet facing services your basic requirements are:

  • Some kind of domain / subdomain, paid or free;
  • Preferably a home ISP that provides public IP addresses - no CGNAT BS;
  • Ideally a static IP at home, but you can do just fine with a dynamic DNS service such as freedns.afraid.org.

Quick setup guide and checklist:

  1. Create your subdomain at the dynamic DNS service freedns.afraid.org and install the update daemon on the server - it will update your domain whenever your dynamic IP changes (see the cron sketch after this list);
  2. List what ports you need remote access to;
  3. Isolate the server from your main network as much as possible. If possible, have it on a different public IP, either using a VLAN or, better yet, an entire physical network just for that - this avoids VLAN hopping attacks and stops a DDoS against the server from also taking your internet down;
  4. If you’re using VLANs, configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this makes sure that if your server is hacked, the attacker won’t be able to access the switch’s UI and reconfigure their own port to reach the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
  5. Configure your ISP router to assign a static local IP to the server and port forward what’s supposed to be exposed to the internet to the server;
  6. Only expose required services (nginx, game server, program X) to the Internet. Everything else, such as SSH, configuration interfaces and whatnot, can be moved to another private network and/or a WireGuard VPN you can connect to when you want to manage the server;
  7. Use custom ports with 5 digits for everything - something like 23901 (up to 65535) to make your service(s) harder to find;
  8. Disable IPv6? Might be easier than dealing with a dual stack firewall and/or other complexities;
  9. Use nftables / iptables / another firewall and set it to drop everything but the ports you need for services and management VPN access to work - 10 minute guide (see the nftables sketch after this list);
  10. Configure nftables so that traffic coming from public IP addresses (IPs outside your home network / VPN range) can only reach the WireGuard port and the required service ports - this will protect your server if by some mistake the router starts forwarding more traffic from the internet to the server than it should;
  11. Configure nftables to restrict what countries are allowed to access your server. Most likely you only need to allow incoming connections from your own country; more details here.
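
For item 1, freedns.afraid.org gives you a per-host update URL, so a cron entry is all the “daemon” you really need - a sketch, with the token being a placeholder for your own update URL:

```
*/5 * * * * curl -s "https://freedns.afraid.org/dynamic/update.php?YOUR_UPDATE_TOKEN" >/dev/null
```

For items 9 and 10, a minimal nftables ruleset sketch - the exposed ports (80/443 for nginx, 51820 for WireGuard) are placeholders for whatever you actually run:

```
# /etc/nftables.conf - drop everything except the few exposed ports
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept   # replies to outbound traffic
        iif "lo" accept                       # loopback

        tcp dport { 80, 443 } accept          # nginx (exposed service)
        udp dport 51820 accept                # WireGuard management VPN

        # everything else, including SSH from the internet, hits the drop policy
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```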

Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. Here’s a decent setup guide, and you might use this GUI to add/remove clients easily.

Don’t be afraid to expose the WireGuard port: if someone tries to connect without authenticating with the right key, the server will silently drop the packets.
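
For reference, the server side of such a WireGuard setup is a single config file; a minimal sketch where the keys, addresses and port are placeholders (generate real keys with wg genkey / wg pubkey):

```
# /etc/wireguard/wg0.conf - bring it up with: wg-quick up wg0
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per friend
PublicKey = <friend-public-key>
AllowedIPs = 10.8.0.2/32
```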

Now, if your ISP doesn’t provide you with a public IP / port-forwarding abilities, you may want to read this to find out why you should avoid Cloudflare Tunnels and how to set up an alternative / more private solution.

TCB13 ,

Totally agree. :) Here’s a quick and nice guide: digitalocean.com/…/how-to-secure-nginx-with-let-s…
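
For the record, the gist of that guide on Debian is roughly this - a sketch assuming the certbot nginx plugin and a domain already pointing at the box:

```
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.org
```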

Planning on moving over from Windows 10 to Linux for my Personal Work Station. Can't decide which OS I should switch to.

Windows has been a thorn in my side for years. But ever since I started moving to Linux on my laptop and swapping my professional software for cross-platform alternatives, I’ve been dreaming of removing it from my SSD....

TCB13 ,

You already know why you should pick Debian:

Pro: The most stable OS I’ve used

As for the “ancient packages” complaint, that’s an easy fix: just install your software using Flatpak/Flathub and you’ll get the latest releases on top of your rock-solid base system.
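
On Debian that boils down to something like this (Firefox is just an example app here):

```
sudo apt install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.mozilla.firefox
```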

TCB13 ,

turns out i can do that with debian just fine.

Exactly, and unlike the others, Debian simply doesn’t fail.

TCB13 ,

You’ve clearly never listened to stock Firefox’s traffic with Wireshark.

People speak very highly of Firefox but like to hide and avoid the shady stuff. Let me give you the uncensored version of what Firefox really is. Firefox is better than most, no doubt there, but at the same time Mozilla has some shady finances and they also do things like adding unique IDs to each installation.

Firefox does a LOT of calling home. Just fire up Wireshark alongside it and see how much phoning home, and even calling of 3rd parties, it does. From basic OCSP requests to calls to Firefox’s own servers and to a 3rd-party analytics company, they do it all, even after disabling most stuff in Settings and config like the OP did.
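
If you want to reproduce this, you don’t even need the Wireshark GUI; a quick sketch with tshark (the interface name is a placeholder for yours):

```
# log every DNS name a freshly started browser looks up
sudo tshark -i eth0 -f "udp port 53" -T fields -e dns.qry.name
```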

I know other browsers do it as well - except for Ungoogled Chromium, and because of that I’m sticking with it. I’d rather avoid programs that snitch on me whenever I open them: ungoogled-chromium + uBlock Origin + Decentraleyes + ClearURLs and a few others.

Now you’re free to go ahead and downvote this post as much as you’d like. I’m sorry for the trouble and the mental breakdown I may have caused with the sudden realization that Firefox isn’t as good and private as you thought after all.

TCB13 ,

I have no idea why people use . looks so much better,

Reason #1: Firefox’s font rendering sucks. Reason #2: Chrome’s dev tools are better and far better supported by whatever ecosystem you develop in.

TCB13 ,

That’s interesting… it makes a difference indeed.

TCB13 ,

That’s fair, but I still wouldn’t trade the amazing font rendering that chromium offers.

TCB13 ,

I’m not ever going to use Mullvad Browser; I’d rather use stock Firefox than that. I have LibreWolf installed as a second browser and I like it as that, but I don’t see myself moving away from ungoogled-chromium anytime soon.

TCB13 ,

Firefox dev tools are much better for typical web development.

Not true, not even close. That was true some 15-20 years ago, but not nowadays, especially when I’m debugging Angular (yes, the Chrome extension is better) and developing things that will be used by people who go for Chrome.

TCB13 ,

That’s all true, but why take a modified chromium instead of a modified Firefox?

Because Chromium’s rendering is better than Firefox’s, I personally like its dev tools better, and my usual target audience uses Chrome. I have LibreWolf as a secondary browser, but I don’t see myself ever liking the way Firefox renders the web.

TCB13 ,

LibreWolf is my second browser, but I don’t see myself using it every day. I like Chromium’s rendering more, and the dev tools.

TCB13 ,

Go ahead then.

TCB13 ,

and Ungoogled Chromium has no fingerprint protection

More or less, but you know as well as I do that there are extensions for that… and Ungoogled Chromium doesn’t snitch on me, so…

TCB13 ,

Usually it’s not about entire websites, it’s the small details, like font rendering smoothness, and a few others.

TCB13 ,

Let me ask you, how much do you use the dev tools and for what?

TCB13 ,

So… you don’t trust Google but you trust some shady VPN company? You aren’t wrong about the quick Wireshark tests - it does seem cleaner - but long-term trust and VPN companies are not things that go in the same sentence.

TCB13 ,

I’ve got to work with what I’ve got :P Either way, even if I were doing jQuery or Vue (like I did in the past) I still wouldn’t use Firefox, because even without the Angular extension, for plain JS/CSS debugging I like Chromium’s dev tools more.

Besides, my target users are always Chrome users, and when I used Firefox for development in the past I ran into issues where specific features would work in Firefox but not in Chrome and vice-versa, or some piece of CSS would render differently. Chromium offers a level of polish on the small details that Firefox was never close to. Firefox’s dev tools are always playing catch-up with Chromium’s, that’s what I see.

Maybe I’m biased like you seem to be, but in the opposite way :P

I want to bring some attention to Slidge XMPP Bridges (git.sr.ht)

It seems like an awesome project that fulfills a lot of the requirements for bridging many popular messaging platforms (like FB messenger, WhatsApp, discord, signal, and more). I wanted to share because I know a lot of us have friends and family who still use antiquated/proprietary communication platforms. Fair warning, I have...

TCB13 ,

PHP -> Problem -> Replace the developer -> Solution.

Yes, PHP was bad in 5.x; in 8.x, if things go bad it’s just the developer who’s bad.

Self hosted syncthing relay with keepass, how secure is it?

I already have my keepassxc and syncthing setup on my phone and computers and it’s great, I’d like to go a step further and have my password database sync when I’m not on my home network. From my understanding I can use relays set up by other users and they are encrypted, but if I do not trust syncing personal (encrypted)...

TCB13 ,

Is setting up a personal relay a lot of work or a potential security risk for my home network?

Where are you planning to host the relay?

Either way, here’s how to set up a relay and here’s how to configure clients to use your relay.
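
For reference, running your own relay is basically a single binary; a sketch from memory (check strelaysrv --help for the exact flags - the empty pools flag should keep it out of the public relay pool):

```
# listen on the default relay port and don't announce to the public pool
strelaysrv -listen ":22067" -pools ""
```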

TCB13 , (edited )

If you can create a port forward in your router and run stuff at your house, what’s the point of a relay? Just expose the ports that Syncthing uses and configure your clients to connect using your dynamic DNS name. No public or private relays required.

  1. Port forward the following in your router to the local Syncthing host; any client will be able to connect to it directly:
  • Port 22000/TCP: TCP-based sync protocol traffic
  • Port 22000/UDP: QUIC-based sync protocol traffic
  2. Go into the client and edit the home device. Set it to connect using the dynamic DNS name directly:

https://lemmy.world/pictrs/image/194983e8-bbab-450c-869b-3189d164b4ae.jpeg

For extra security you may change the Syncthing port, or run the entire thing over a Wireguard VPN like I also do.

Note that even without the VPN all traffic is TLS protected.
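
For reference, the device address field accepts a comma-separated list; something like this, where the hostname is a placeholder for your dynamic DNS name:

```
tcp://myhome.example.org:22000, quic://myhome.example.org:22000
```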

TCB13 ,

The thing with Docker is that people don’t want to learn how to use Linux and are buying into an overhyped solution that makes their life easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security, and that’s mostly BS, because 1) systemd can provide as much isolation as Docker containers (see the unit sketch below) and 2) there are other container solutions that are at least as safe as Docker, and nobody cares about them.
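
To make the systemd point concrete, this is the kind of isolation I mean - a sketch of a hardened unit, where the service name and binary are placeholders:

```
# /etc/systemd/system/myapp.service
[Unit]
Description=Example sandboxed service

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes                        # throwaway UID, no real user account
PrivateTmp=yes                         # private /tmp, invisible to other services
ProtectSystem=strict                   # the whole filesystem is read-only to the service
ProtectHome=yes
NoNewPrivileges=yes
CapabilityBoundingSet=                 # drop every capability
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

[Install]
WantedBy=multi-user.target
```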

Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone becomes hostage to their platforms. We see this in everything now; Docker/DockerHub/Kubernetes and GitHub Actions were the first signs of this cancer. We now have a generation that doesn’t understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn’t use some Docker BS or some 3rd-party deploy-from-GitHub cloud service.

Before anyone comments that Docker isn’t totally proprietary and that there’s Podman, consider the following: it doesn’t really matter that there are truly open-source and open ecosystems of containerization technologies. In the end, people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term.

Docker may make development and deployment very easy and lower the bar for newcomers, but it has the dark side of being designed to reshape and envelop the way development gets done so someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don’t have truly open tools like we did. There’s a LOT of money in transitioning everyone to the “deploy-from-GitHub-to-cloud-X-with-hooks” model, so those companies will keep pushing for it.

Note that technologies such as Docker keep commoditizing development - it’s a negative feedback loop that never ends. Yes, I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and instead of hiring developers for their knowledge and ability to develop, companies are just hiring “cheap monkeys” that are able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that other companies can buy with a click.

TCB13 ,

Docker and containerization resolve this by running each app in its own mini virtual machine

While what you’ve written is technically wrong, I get why you made the comparison that way. There are tons of other containerization solutions that can do exactly what you’re describing without the dark side of Docker.

TCB13 ,

Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere

I don’t disagree with you, but that also shows that most modern software is poorly written - usually a bunch of components that barely work, whose setup nobody is able to reproduce in a quick, sane and secure way.

There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are fully implemented with native Linux features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).

Yes, that’s exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).

In systemd you need to use 30 different options to get what containers achieve almost instantly and with much less hassle.

Yes… maybe we just need some automation/orchestration tool for that. This is like saying it’s way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but that doesn’t mean we can’t do the same for systemd.
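
To make the comparison concrete, the “hard way” is roughly this - a sketch, with Debian as the example distro:

```
# fetch and unpack a minimal Debian rootfs
sudo debootstrap stable ./rootfs http://deb.debian.org/debian

# launch a shell inside fresh namespaces, with the rootfs as /
sudo unshare --mount --uts --ipc --net --pid --fork chroot ./rootfs /bin/bash

# or let systemd do the heavy lifting instead:
sudo systemd-nspawn -D ./rootfs
```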

but I want to simply remind you that containers are the successors of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors

Completely proprietary… like QEMU/libvirt? :P

TCB13 ,

I’ve been using Ansible as well and it’s great.

TCB13 ,
  • FileBrowser
  • Joplin

I’ve been using those two and they’re way faster and more reliable than NextCloud.

I’ve already found alternatives for all services, except for the calendar.

I’m using Baikal for Contacts & Calendar; it provides a generic CardDAV and CalDAV solution that can be accessed from iOS/Android or from a web client like the RoundCube plugins. Thunderbird also now has native support for CardDAV and CalDAV, and it works just fine with Baikal.

TCB13 ,

Joplin, and what ultimately pushed me away from it was the portability of the data within it—I didn’t love that I wasn’t ultimately just working with a folder of Markdown

I believe you did miss something: Joplin “stores notes in Markdown format. Markdown is a simple way to format text that looks great on any device and, while it’s formatted text, it still looks perfectly readable in a plain text editor.” Source: joplinapp.org/help/apps/rich_text_editor/

You have a bunch of options when it comes to synchronization:

https://lemmy.world/pictrs/image/161dde7a-cca5-4c21-b8a6-98d4f713eba9.png

You can just point it at some folder and it will store the files there and then sync them with any 3rd-party solution you’d like. I personally use WebDAV because it’s more convenient (iOS support) and it’s very easy to get an Nginx instance to serve what it needs:


```
server {
    listen 443 ssl http2;
    server_name  xyz.example.org;
    ssl_certificate ....;
    ssl_certificate_key ...;
    root /mnt/SSD1/web/root;

    # Set your WebDAV password with: htpasswd -c /etc/nginx/.credentials-dav.list YOUR_USERNAME
    location /dav/notes {
        alias /mnt/SSD1/web/dav/notes;
        auth_basic              realm_name;
        auth_basic_user_file    /etc/nginx/.credentials-dav.list;
        dav_methods     PUT DELETE MKCOL COPY MOVE;
        dav_ext_methods PROPFIND OPTIONS;
        dav_access      user:rw;
        client_max_body_size    0;
        create_full_put_path    on;
    }
}
```

I was already using Nginx as a reverse proxy / SSL termination for FileBrowser, so it took just a couple of lines to get it serving a WebDAV share for Joplin.

Is FileBrowser doing any cross-device syncing at all, or is it as it appears on the surface

FileBrowser doesn’t do cross-device syncing, and that’s the point: I don’t ever want it to. For sync I use Syncthing; I just run both on my NAS and have them pointed at the same folder. All of my devices run Syncthing and sync their data with the NAS, so the NAS works as a central repository and everything is available through FileBrowser.
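
The FileBrowser side of that is basically a one-liner - a sketch, with the path and port as placeholders:

```
# serve the same folder Syncthing keeps in sync, WebUI behind the reverse proxy
filebrowser -r /mnt/SSD1/data -a 127.0.0.1 -p 8080
```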

TCB13 ,

The point is… actually there are multiple points:

TCB13 , (edited )

ETA: I cannot use this version of the application with this version of OS X (I have 10.10.5, the application requires 10.12 or later)

That has nothing to do with the root certificates. The root certificates just make it so that any browser or other software on the system can establish trusted connections to the internet. If you want a working browser, use github.com/blueboxd/chromium-legacy

TCB13 , (edited )

Not so sure; to be fair, Apple is in compliance with everything right now :)

What Apple did isn’t illegal as per the DMA; they carefully studied the requirements and made sure they were in compliance without losing control. There’s no law-breaking here, there’s simply a poorly written DMA. Now the right question to ask is: was the DMA poorly written by mistake, or… did Apple and others make sure it was written the way it is?

[Rant] A few days ago, I asked if Mint would run okay on a Lenovo T460 (I appreciate all the advice). I got it working, but the installation was a big pain and I totally blame Lenovo.

I got the T460 refurbished and I really didn’t want to run Windows 10 on it. I last used Linux for any real length of time a good 20 years ago, so I’m pretty inexperienced with it at this point and I had to figure out how to install it myself....

TCB13 , (edited )

That’s what you get when you buy Lenovo. It can’t even run Windows properly most of the time, so how did you expect it to run Linux?

Seriously, if you go into any large company and ask why they don’t use Lenovo, they’ll simply tell you that the failure rate of those machines is way too high to be worth it. Like ordering 50 and only 10 being in working condition after 2 years… or a simple USB 3 cable running along the computer making it slow, because there isn’t enough shielding on the machine and the high frequencies of those cables interfere with your storage controller / NVMe.

TCB13 ,

Hardware recommendations are really hard: brands do a lot of shit, and there are a LOT of small details that mean a small revision of the same model can make or break compatibility with any OS… even worse for Linux. Windows has tons of specific hacks to work on specific hardware, and they aren’t pretty.

For me personally, I always got the best results with HP EliteBooks from 2 or 3 generations below the current one and the latest Debian. But again, that’s just personal experience; nobody can guarantee you won’t pick a very specific EliteBook with some awkward detail and things will fail.

TCB13 ,

Depends on the line… the high-end enterprise products (EliteBooks) are very solid and good, and they provide very good servicing; every piece of those machines is replaceable, even the screws have serial numbers and can be ordered from HP.

ProBooks are a mixed batch: some are decent, others are total garbage. Consumer-grade HP mostly follows the same trend; if you go for machines that are “Apple-priced” they’re good, otherwise crap. Still not as crap as Lenovo became after China.

European Union: Designated gatekeepers must now comply with all obligations under the Digital Markets Act (ec.europa.eu)

As of today, Apple, Alphabet, Meta, Amazon, Microsoft and ByteDance, the six gatekeepers designated by the Commission in September 2023, have to fully comply with all obligations in the Digital Markets Act (DMA)....

TCB13 ,

Makes rules that are enforceable across member states (admittedly by proxy mechanisms)

Those “proxy mechanisms” make things very different from a typical government. Also, not everything the parliament says is required to be enforced in member states. A lot of the proposals are recommendations, and even the ones that are actually about regulation have to be transposed into member-state law in whatever way those countries see fit - and there’s a lot of margin there.

and like three Presidents. (…) It has elections.

There aren’t direct elections by the people like in individual countries; things are a bit more complex: elections.europa.eu/en/how-elections-work/

Even a shared army.

No, there isn’t. The founding treaties of the EU don’t allow for the creation of a European army, as the EU is about peaceful economic cooperation - and for a bunch of other reasons.

TCB13 ,

Run it on a “normal” server and everything is smooth.

Sure, until you try it with a high-end 12-core CPU, NVMe storage, all kinds of caching, Redis, etc., and you find out it doesn’t perform particularly better.

TCB13 ,

Dropbox is faster.

Dropbox is A LOT faster than NC ever was. But if you want to talk about speed and reliability, then use Syncthing. Add FileBrowser if you want a WebUI on a central “server” to access all your files and you’ll be 100x better off than with the garbage that NC offers.

TCB13 ,

Seafile, that’s a name I haven’t heard in a very long time. How does it work in terms of self-hosting limitations, mobile clients and sync? Do you have any experience with Syncthing, for instance? How does it compare performance-wise?

TCB13 ,

There are a lot more complaints; besides, the Mail “app” is one of its big advertised features and is developed by the core team.

TCB13 ,

Or one of the few people who actually tested the thing and spent time taking screenshots and pointing out issues, unlike most others…

TCB13 ,

Great to know. Last time I tried it, I was running it on a very weak ARM platform and, even then, it worked way better than NC; I was impressed with the performance. Thanks.

TCB13 ,

Well, I only saw problems with about 1TB of small files. I’m not sure if they were actually caused by the volume of the data or because there were multiple users syncing parts of that data as well.

TCB13 ,

I’ve posted screenshots and a lot of detailed information about it failing. It’s just not a question of personal preference; it’s more that it has old bugs that never get fixed, and things keep piling up.

TCB13 ,

I know exactly how Syncthing works. The point is not its P2P nature; the point is that Nextcloud’s sync performance and reliability aren’t even comparable, because its desktop clients, sync algorithm and server-side tech (PHP) won’t ever be as performant at dealing with files as Go is.

The way Nextcloud implemented sync is totally their decision and their fault. Syncthing can be used in a more “client > server” architecture, and there are professional deployments of that, provided by Kastelo for enterprise customers, with SSO integrations, web interfaces, user management and whatnot.

Nextcloud could’ve implemented all their web UI and then relied on the Syncthing code for the desktop / mobile sync clients. Without even changing Syncthing’s code, one way to achieve this would be to launch a single Syncthing instance per NC user and then build a GUI around it that communicates with the NC API to handle key exchanges with the core Syncthing process. Then add a few share options to the context menu and done.

This situation illustrates very clearly the issue with Nextcloud’s development and their decisions: they could’ve just leveraged the great engine that Syncthing has as a sync backend, but instead, stubborn as they are, they came up with a half-assed solution that, in true Nextcloud fashion, never delivers as promised.

TCB13 ,

You may want to read my post again, as there’s currently no user management in Syncthing. I just said that Kastelo provides a paid and very proprietary solution with user management for enterprise customers.

Anyway, for anyone who wants to code a solution like that, it isn’t impossible; I outlined what Kastelo does in their solution and what Nextcloud could’ve done.

TCB13 ,

But I do need replication and multi-user access.

You can set up one Syncthing instance per user for that; that’s the way it was designed to work. See the sketch below.
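
The Syncthing packages even ship a systemd template unit for exactly this; roughly (the usernames are examples, the unit name comes from the standard syncthing@.service that distro packages include):

```
# one isolated Syncthing instance per user, each with its own keys and config
sudo systemctl enable --now syncthing@alice.service
sudo systemctl enable --now syncthing@bob.service
```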
