
European Commission opens non-compliance investigations against Alphabet, Apple and Meta under the Digital Markets Act (ec.europa.eu)

Today, the Commission has opened non-compliance investigations under the Digital Markets Act (DMA) into Alphabet’s rules on steering in Google Play and self-preferencing on Google Search, Apple’s rules on steering in the App Store and the choice screen for Safari and Meta’s “pay or consent model”....

TCB13 ,

This is what happens when the EU decides that your phone should be more open and you get around it with clever lawyers and tactics instead of actually doing what’s right.

TCB13 , (edited )

How much wifi and open-source do you really want?

If you are willing to go with commercial hardware + open-source firmware (OpenWrt) you might want to check OpenWrt’s table of hardware at openwrt.org/toh/…/toh_available_16128_ax-wifi and openwrt.org/toh/views/toh_available_864_ac-wifi. One solid pick for the future might be the Netgear WAX2* line or the GL.iNet GL-MT6000. One of those models is now fully supported; the others are on the way. If you don’t mind having older WiFi, a Netgear R7800 is solid.

For a full open-source hardware and software experience you need a more exotic brand like this: www.banana-pi.org/en/bananapi-router/. The BananaPi BPi-R3 is a very good option, with a 4-core CPU, 2 GB of RAM, WiFi 6 and two 2.5G SFP ports besides the 4 Ethernet ports. There’s also an upcoming board, the BPI-R4, with optional WiFi 7 and 10G SFP.

Both solutions will lead to OpenWrt when it comes to software. It is better than any commercial firmware, but be aware that it only supports WiFi hardware with open-source drivers, such as MediaTek. While MediaTek is good and performs very well, we can’t forget that the best-performing WiFi chips are Broadcom’s, and they use hacks that go beyond the published WiFi standards to squeeze out a few extra megabytes/second and/or improve the range a bit.

DD-WRT is another “open-source” firmware; it has a specific agreement with Broadcom that allows it to use their proprietary drivers and distribute them as blobs with the firmware. While it works, don’t expect compatibility with newer hardware, nor a bug-free experience like OpenWrt delivers.

There are also alternatives like OPNsense and pfSense that may make sense in some cases, but you most likely don’t require them. You have a small network, and OpenWrt will provide you with a much cleaner open-source experience while still allowing for all the customization you would like. Another great advantage of OpenWrt is the ability to install 3rd-party stuff on your router; you may even use QEMU to virtualize things like your Pi-hole on it, or simply run Docker containers.

TCB13 ,

Just throwing in the usual over-complication. The OP can do this with a simple OpenWrt router by setting a few firewall rules. To be fair, there are even some commercial routers from Asus and Netgear whose stock firmware will allow you to block a device from accessing the internet.

TCB13 ,

What’s your router? Can you install OpenWrt on it? OpenWrt provides a GUI for the firewall where, in a few clicks, you can block a specific device from accessing the internet.
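
For reference, the same thing can also be done from the command line. A minimal sketch using OpenWrt’s UCI, assuming the device is identified by a placeholder MAC address:

    # Block one LAN device (placeholder MAC) from reaching the WAN
    uci add firewall rule
    uci set firewall.@rule[-1].name='Block-device-from-WAN'
    uci set firewall.@rule[-1].src='lan'
    uci set firewall.@rule[-1].src_mac='AA:BB:CC:DD:EE:FF'
    uci set firewall.@rule[-1].dest='wan'
    uci set firewall.@rule[-1].target='REJECT'
    uci commit firewall
    /etc/init.d/firewall restart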

TCB13 , (edited )

Just a few notes:

  1. What you’re describing is not what the OP is asking for. He simply wants a quick solution to block a couple of devices from accessing the internet.
  2. I don’t get your “note”, as that’s precisely what I suggested the OP do. And if you actually read the manual and pick a recommended model, it can be as simple as uploading the firmware through the router’s firmware-upgrade feature.
  3. The scenario you described can be done with OpenWrt on a consumer router, and it isn’t that complex to set up. Even older hardware like the Netgear R7800 will be able to handle it.
TCB13 ,

I would advise you to get Debian + GNOME and install all software via Flatpak/Flathub. This way you’ll have a very solid and stable base system, and all the latest software can be installed, updated and removed without polluting it. The other option, obviously, is to go with one of those hipster systems like Pop, Mint and X-ubuntu that are perpetually “half made” and fail often.
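
As a sketch of that workflow, assuming Flathub as the remote and Firefox as an example application:

    # Add the Flathub remote once, then install/update apps from it
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.mozilla.firefox
    flatpak update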

Now I’m gonna tell you what nobody talks about when moving to Linux:

  1. The “whatever you go for, it’s entirely your choice” mantra when it comes to DEs is total BS. What happens is that you’ll find out that, while you can use any DE, in fact GNOME will provide a better experience because most applications on Linux are designed around / depend on its components. Using KDE/XFCE is fun until you run into some GTK/libadwaita application and small issues start to pop up here and there: windows that don’t pick up your theme, or you’ve just created a Frankenstein of a system composed of KDE + a bunch of GTK components;
  2. I hope you don’t require “professional” software such as MS Office, Adobe apps, Autodesk, NI Circuit Design and whatnot. The alternatives won’t cut it if you require serious collaboration, and virtualization / emulation (Wine) may work but won’t be nice. Going for Linux kind of adds the same pains as going macOS, but 10x. Once you open the virtualization door your productivity suffers greatly, your CPU/RAM requirements are higher and suddenly you’ve to deal with issues in two operating systems instead of just one. And… let’s face it, nothing with GPU acceleration will ever run decently unless big companies start fixing things - GPU passthrough and getting video back into the main system are a pain and add delays;
  3. Proprietary/non-Linux apps provide good features, support and tons of hours of dev time and continuous updates that the FOSS alternatives just can’t match;
  4. Linux has the worst track record ever for supporting old software, even worse than Apple;
  5. Half of the success of Windows and macOS is the fact that they provide solid and stable APIs and development tools that “make it easy” to develop for those platforms, and Linux is very bad at that. The major pieces of Linux are constantly changing, requiring large and frequent re-works of apps. There aren’t distribution-“sponsored” IDEs (like Visual Studio or Xcode), userland API documentation, frameworks, etc.;
  6. The beautiful desktops you see online are bullshit, with very few exceptions. Most are just carefully designed screenshots, but once you install the theme you’ll find visual inconsistencies all over the place, missing icons and all kinds of crap that makes Microsoft look good;
  7. Be ready to spend A LOT of time to make basic things work. Have coffee and alcohol (preferably strong) at your disposal at all times.

(Wine, for all the greatness it delivers, still sucks, and it hurts because it’s true.)

TCB13 ,

Ubuntu will work just fine with a minimum of “getting basic things to work.” (…) I suggest Ubuntu.

I don’t disagree with you, Ubuntu may make things easier, but with it you get the worst of both worlds - no “good” and “solid” proprietary apps + questionable open source, potential spyware and the other shenanigans Canonical is known for. In that case I would rather keep using Windows and have everything working out of the box.

If one lives in a bubble and doesn’t need to collaborate, then native Linux apps might deliver a decent workflow. Once collaboration with Windows/Mac users is required, it’s game over – the “alternatives” just aren’t up to it. Proprietary applications provide good and complex features, support, development time and continuous updates that FOSS alternatives just can’t match.

Windows licenses are cheap and things work out of the box. Software runs fine, all vendors support whatever you’re trying to do and you’re productive from day zero. Sure, there are annoyances from time to time, but they’re far fewer and simpler to deal with than the hoops you have to jump through to get a minimal, viable/productive Linux desktop experience. It all comes down to how much time (days? months?) you want to spend fixing things on Linux that simply work out of the box under Windows for a minimal fee. Buy a Windows license and spend the time you would’ve spent dealing with Linux issues doing your actual job, and you’ll most likely get a better ROI.

You can buy a second-hand computer with a decent 8th-generation CPU for around 200 €, and that includes a valid Windows license. Computers sold in retail stores also include a Windows license, students can get them for free, etc. What else?

TCB13 ,

Fair enough, but I don’t want to have to battle my computer every single time I want to get anything done… and most of the Linux community forgets that the general public kind of shares that opinion.

TCB13 ,

Not sure where you got the idea it was slow… but okay, keep using your perpetually half-made Mint/Pop/Arch whatever. :P

TCB13 ,

Enlighten me then…

TCB13 ,

Just install your software using Flatpak and you get the latest releases on top of a reliable OS.

TCB13 ,

The “whatever you go for, it’s entirely your choice” mantra when it comes to DEs is total BS. What happens is that you’ll find out that, while you can use any DE, in fact GNOME will provide a better experience because most applications on Linux are designed around / depend on its components. Using KDE/XFCE is fun until you run into some GTK/libadwaita application and small issues start to pop up here and there: windows that don’t pick up your theme, or you’ve just created a Frankenstein of a system composed of KDE + a bunch of GTK components.

TCB13 ,

The truth is that you have to chase wayland, because it is the future.

Ahahaha nice one.

TCB13 ,

Easy, replace it with systemd-boot: blog.bofh.it/debian/id_465

systemd-boot is simpler to configure and keep up to date. On my PC I only needed to create 5 lines of config for my Linux drive, and it automatically configures the boot option for my Windows drive.
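
For illustration, a loader entry really is just a handful of lines; a minimal sketch (kernel paths and root UUID are placeholders for your own install):

    # /boot/loader/entries/debian.conf
    title   Debian
    linux   /vmlinuz
    initrd  /initrd.img
    options root=UUID=0000-0000 rw quiet

Running bootctl install once sets up systemd-boot on the ESP; after that it picks up any entries in that directory.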

Should I learn Docker or Podman?

Hi, I’ve been thinking for a few days whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it’s better to start with Docker, for which there are a lot more tutorials. On the other hand, maybe it’s better to straight up learn Podman when I don’t know any of the...

TCB13 ,

Also I’m not sure about your claim that Podman is more FOSS than docker

The issue with Docker isn’t the core product itself, it’s the ecosystem: the DockerHub, Kubernetes, etc.

TCB13 ,

First of all they make the user dumber. Instead of learning something new, you blindly “compose pull & up” your way. Easy, but it’s dumbifier and that’s not a good thing

I don’t like this Docker trend because, besides what you’ve said, it 1) leads you towards a dependence on proprietary repositories and 2) robs you of the experience of learning Linux (more on that later on), but it does lower the bar for newcomers and lets you set up something really fast. In my opinion you should be very skeptical about everything that is “sold to the masses”; just go with a simple Debian system (command line only), SSH into it and install what you really need, and take your time to learn Linux and whatnot.

there is a dangerous trend where projects only release containers, and that’s bad for freedom of choice (bare metal install, as complex as it might be, needs to always be possible) and while I am aware that you can download an image and extract the files inside, that’s more of a hack than a solution

And the second danger there is that when developers don’t have to consider the setup of their solution, the code tends to be worse. Why bother with single binaries, code that is easy to understand and proper documentation when you can just pull 100 dependencies and compose files? :) This is the unfortunate reality of modern software.

Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don’t want 10 postgres instances one for each service, or maybe I already have my nginx reverse proxy or so

See? Poorly written software. Not designed to be sane and reasonable and integrate with existing stuff.

But be aware that containers are not the solution to selfhosting-made-easy and, specifically, containers have been created to solve different issues than self-hosting!

Your article said it all and is very well written. Let me expand a bit on the “different issues”:

The thing with Docker is that people don’t want to learn how to use Linux and are buying into an overhyped solution that makes their lives easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security and reproducibility, and that’s mostly BS because 1) systemd can provide as much isolation as Docker containers and 2) there are other container solutions and nobody cares about them.

Companies such as Microsoft and GitHub are all about re-creating and re-configuring the way people develop software so everyone will be hostage to their platforms - that’s why nowadays everything and everyone is pushing for Docker/DockerHub/Kubernetes, GitHub Actions and whatnot. We now have a generation that doesn’t understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn’t use some Docker BS or isn’t a 3rd-party cloud xyz deploy-from-github service.

Before anyone comments that Docker isn’t totally proprietary and there’s Podman, consider the following: it doesn’t really matter that truly open-source and open ecosystems of containerization technologies exist. In the end people/companies will pick the proprietary/closed option just because “it’s easier to use” or some other specific thing that will be good in the short term and very bad in the long term.

Docker may make development and deployment very easy and may have lowered the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don’t have truly open tools like we did. There’s a LOT of money in transitioning everyone to the “deploy-from-github-to-cloud-x-with-hooks” model, so those companies will keep pushing for it.

At the end of the day, technologies like Docker are about commoditizing development and about creating a negative feedback loop around it that never ends. Yes, I say commoditizing development because, if you look at it, those techs only make things easier for the entry-level developer, and instead of hiring developers for their knowledge and ability to develop, companies are just hiring “cheap monkeys” able to configure those technologies and cloud platforms to deliver something.

Successful cloud companies are no longer about selling infrastructure, we’re past that - the profit is now in transforming developer knowledge into products/services that can be bought with a click.

TCB13 ,

Definitely not. :P

TCB13 ,

Are you aware that all those isolation, networking, firewall etc. issues can be solved by simply learning how to write proper systemd units for your services? Start by reading this: www.redhat.com/sysadmin/mastering-systemd
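
As a taste of what that looks like, a minimal sketch of a hardened unit (service name and binary path are placeholders):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=Example self-hosted service

    [Service]
    ExecStart=/usr/local/bin/myapp
    # Sandboxing: throwaway user, read-only OS, no device or home access
    DynamicUser=yes
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

    [Install]
    WantedBy=multi-user.target

You can then check how well a unit is sandboxed with systemd-analyze security myapp.service.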

TCB13 , (edited )

You’re using LXC… so you may want to learn about Incus/LXD, which was made by the same people who made LXC and can work as a full replacement for Proxmox in most scenarios. Here are a few reasons:

  • It lives under the Linux Containers project and is open-source;
  • It is available in Debian 12’s repositories;
  • Unlike Proxmox, it won’t withhold important fixes behind subscription (paid) repositories;
  • It is way, way lighter;
  • LXC was hacked into Proxmox: they simply removed OpenVZ from their product and added LXC, and it will never be as compatible and smooth as Incus;
  • It also has a WebUI;

Why not try it? :)
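
Getting a first container running is a short session; a sketch (the container name is just an example, and on Debian 12 the incus package comes via backports):

    apt install incus
    incus admin init                        # interactive first-time setup
    incus launch images:debian/12 mycontainer
    incus exec mycontainer -- bash          # shell inside the container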

TCB13 ,

I see your point and would usually think the same way / agree with it; however, the issue with Docker is that you’re kind of forced and coerced into using those proprietary solutions around it. It also pushes people into a situation where it’s really hard not to depend on constant internet services to use it.

TCB13 ,

At least let’s use podman and I will keep fighting for containers being at least optional.

Well, systemd can also provide as much isolation and security. It’s another option… :) as well as LXC.

TCB13 ,

But you’re not likely to get any of that by default when you just install from the package manager as it’s the discussion here,

This is changing… Fedora is planning to enable the various systemd service-hardening flags by default, and so is Debian.

We’re talking about ease of setting things up, anything you can do in docker you can do withou

Yes, but at what cost? At the cost of being overly dependent on some cloud service / proprietary solution like DockerHub / Kubernetes? Remember that the alternative is packages from your Linux repository that can be easily mirrored, archived offline and whatnot.

TCB13 ,

Yet people chose to use those proprietary solutions and platforms because its easier. This is just like chrome, there are other browser, yet people go for chrome.

It’s significantly harder to archive things and have functional offline setups with Docker than it is with an APT repository. It’s like a hack, not something it was designed for.

TCB13 ,

It’s definitely much easier to do that on docker than with apt packages,

What a joke.

Most people will use whatever docker compose file a project shows as default, if the project hosts the images on dockerhub that’s their choice

Yes, and they point the market in a direction that affects everyone.

GitHub is also proprietary and no one cares that a project is hosted there.

People do care, and that’s why there are public alternatives such as Codeberg and the base project, Gitea.

TCB13 ,

Yes, I can, but this is not about what I or you can do. This is about what they actually do, the direction technology is taking and the lack of freedoms that follows. Distribution is important.

TCB13 ,

I’m not sure what you’re talking about. Most people self-hosting don’t need anything special, just a docker compose file

Yes, and they proceed to pull their software from DockerHub (closed, and it sometimes decides to delete things), and most of them lack the basic Linux knowledge to do it any other way. This is a real problem.

TCB13 ,

I never said people shouldn’t use those platforms. What I said, countless times, is that while they make the lives of newcomers easier, they pose risks, and the current state of things / general direction doesn’t seem very good.

TCB13 ,

Look, this isn’t even about “drawing lines in the sand”. I do understand why people use containers, and I use them in certain circumstances, usually not Docker, but that’s more due to the requirements in said circumstances than a personal decision.

Do you object to software repositories that install dependencies precompiled? (…) but then claim that using systems that package all the dependencies into a single runnable unit is too much and cedes too much freedom?

No, and I never claimed that. I’m perfectly happy to use single-binary, statically linked applications; in fact I use quite a few, such as FileBrowser and Syncthing, and they’re very good and reasonable software. Docker, however, isn’t one of those cases or, at least, isn’t just that.

I agree that containers are allowing software projects to push release engineering and testing down stream and cut corners a bit

Docker is being used and abused for cutting corners, and now we have developers who are just unable to deploy any piece of software without it. They’ve zero understanding of infrastructure and anything related to it, and this has a big negative impact on the way they develop software. I’m not just talking about FOSS projects; we see this in the enterprise and in bootcamps as well.

Docker is a powerful thing, so powerful it opens the door for poorly put together software to exist and succeed, as it spares people from having to understand architectures and from manually installing and configuring dependencies - things that anyone sane would be able to do in a lifetime. This is why we sometimes see “solutions” that run 10 instances of some database or some other abnormality.

Besides all that, it adds the half-open repository situation on top. While we can host repositories and use open ones, the most common thing is to see everything on Docker Hub, and that might turn into a CentOS-style situation at any time.

TCB13 ,

It depends on your needs. How much do you value your data? Can you re-create / re-download it in case of a disk failure?

In some cases, like a typical home user with a few writes per day or even per week, simply having a second disk that is updated every day with rsync may be a better choice. Consider that if you have two mechanical disks spinning 24/7, they’ll most likely fail around the same time (or during a RAID rebuild) and you’ll end up losing all your data. Simply having one active disk (shared on the network and spinning) and the other spun down and only turned on once a day for a cron rsync job means your second disk will last a LOT longer and you’ll be safer.
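
A sketch of that daily job (mount point, source path and device are placeholders):

    # /etc/cron.d/backup-mirror — at 03:00: mount the cold disk, mirror, spin it down
    0 3 * * * root mount /mnt/backup && rsync -a --delete /srv/data/ /mnt/backup/ && umount /mnt/backup && hdparm -y /dev/sdb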

TCB13 ,

I am not sure if a disk that is spun up daily will outlast one that mostly idles 24/7. Maybe if you do it only weekly?

Well, I do it weekly in a specific case but I also have other systems running daily. I guess it also depends on the use case / amount of data written and how damaging it can be if the “hot” drive breaks between the syncs.

TCB13 , (edited )

I’m referring you to my quick “self-hosting guide” for security and whatnot: lemmy.world/comment/7126969

With that said,

A) A second-hand HP Mini. Low power with the “T” CPU models; some have 2 NVMe slots that can be used for extra storage with a cheap adapter like this + a power supply for the hard drives. If you don’t want to DIY it so much, some also have USB-C ports (and Thunderbolt) that you can use to connect to an external drive enclosure, or this one.

B) SSD for the boot drive, running VMs etc.; HDDs for long-term storage.

C) Debian as the base system, no GUI. LXD/LXC as the hypervisor to run all your stuff in containers and VMs. Or run everything directly on the machine.

Other recommendations:

  • Use BTRFS as the filesystem as much as possible (see the sketch after this list);
  • Aside from the big brands like HP and Dell there are other alternatives, such as the trendy MINISFORUM; however, their BIOSes come out of the factory with weird bugs and the hardware isn’t as reliable - missing ESD protection on USB ports in some models and whatnot.
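
On the BTRFS point, a minimal sketch of formatting a data disk and taking a read-only snapshot (device and paths are placeholders):

    mkfs.btrfs /dev/sdb
    mount /dev/sdb /srv/data
    btrfs subvolume create /srv/data/files
    btrfs subvolume snapshot -r /srv/data/files /srv/data/snap-$(date +%F)
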
TCB13 ,

Adding nofail will most likely fix this. However, switching from fstab mounts to systemd mount units could be cleaner, as you would be able to create a systemd target that gets activated whenever you’re on your main network and then triggers a mount of the share / unmounts it when you leave.
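
For the quick fix, the fstab line might look like this (server, share, credentials file and mount point are placeholders); x-systemd.automount additionally defers mounting until first access, which also helps when you’re away from home:

    //192.168.1.10/share  /mnt/share  cifs  credentials=/etc/smb-cred,nofail,_netdev,x-systemd.automount,x-systemd.idle-timeout=60  0  0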

TCB13 ,

Debian.

TCB13 ,

Debian is hard mode and is an advanced distro. There are a ton of tools that are unique to Debian. It is used mostly for people running their own servers and custom purpose machines from home or work. It is also the primary distro for hacking hardware and reverse engineering stuff that has no other way to create Linux kernel support.

While I get it, I don’t agree with the first part. If you install Debian out of the box with GNOME, it will work out just fine for the majority of people; usually it will work out better than Mint, Arch and whatnot, because it is a finished and very reliable OS, not something targeted at experimentation.

TCB13 ,

because depending on their hardware, wifi might not work out of the box, and maybe even not ethernet either

I’ve never experienced this across tons of machines; besides, Debian now comes with the proprietary blobs for that kind of hardware out of the box as well.

. If it’s the iso version with the proprietary firmware already in it’s maybe…

That ISO no longer exists. It’s all now in the base image.

UPDATE 10 Jun 2023: As of Debian 12 (Bookworm), firmware is included in the normal Debian installer images. Source: cdimage.debian.org/…/cd-including-firmware/

“The Debian official media may include firmware that is otherwise not part of the Debian system to enable use of Debian with hardware that requires such firmware.” Source: tomshardware.com/…/debian-includes-proprietary-co…

TCB13 ,

It is not really a complete experience. It is ugly, and for the type of person that wants to play in the weeds

Wtf are you even talking about? Set up Debian with all the defaults; it’s easier than Windows and you’ll get GNOME out of the box. Ugly?

or figuring out flatpaks

Running 2 commands to get all the Flatpak software into the GNOME GUI store is very hard :P
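
For the record, on Debian those two commands are roughly:

    apt install gnome-software-plugin-flatpak
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo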

Debian provides a solid out-of-the-box experience: a system that won’t break and will be compatible with most of the decent hardware out there. It won’t complain and bitch, and it won’t be a half-finished product like Arch. If it’s too complicated, just get Ubuntu and enjoy its mangled kernel.

Arch / Gentoo are the real “base installs” here; nobody can run those things out of the box without tweaks. Arch doesn’t even have an installer, just a bunch of scripts and 3rd-party attempts at making something usable, and you’re recommending it over Debian, which has a full GUI installer with sane defaults?

TCB13 ,

Yes, Debian + Flatpak is a good way to have a very reliable system with all the latest software.

TCB13 ,

I’m referring you to my quick “self-hosting guide”: lemmy.world/comment/7126969

TCB13 ,

Do you use bash? Yes, because it is everywhere and available by default.

What's the real world connection speed from your residential IP to your Server?

I’m using contabo and the VPS I got is advertised as 1 Gigabit. When I do a speedtest or use iperf3 to connect to public servers I get pretty close to 1 Gigabit. But from my residential IP the speed drops down to 100-250 Mbit/s. My home internet connection can handle 500 Mbit just fine....

TCB13 ,

But from my residential IP the speed drops down to 100-250 Mbit/s. My home internet connection can handle 500 Mbit just fine.

Maybe the issue is that your ISP has bad peering with some networks, including the one where your VPS is?
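
One way to check that from home (the VPS address is a placeholder, and iperf3 -s must be running on the VPS):

    mtr -rwz 203.0.113.10          # per-hop report including AS numbers
    iperf3 -c 203.0.113.10         # throughput towards the VPS
    iperf3 -c 203.0.113.10 -R      # reverse: VPS towards you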

TCB13 ,

Maybe the IPv6 tunnel from Hurricane Electric?

TCB13 ,

My front page is 613KB with Wordpress. Moral of the story: you don’t have to use a static website generator to have light things.

https://lemmy.world/pictrs/image/c8528ce8-ba0e-4fb6-9845-b67b05267936.png

TCB13 ,

And how do you plan to manage your posts, database etc. and render stuff in those? You still need some backend solution like Wordpress; you can use Vue as a frontend library for it… or vanilla JS, or jQuery…

TCB13 ,

So… you are aware that FastAPI and Flask will always be significantly slower than Wordpress… because of Python, always-running processes, etc.?

You’re building a simple website / blog: just use Wordpress. It will output most of the pages as plain, simple and fast HTML; then add a few pieces of vanilla JS or Vue (if you’re into that) to make things “fluffier”. Why bother with constant XHR requests when you’re just serving simple text pages?

With Wordpress you’ll also get all the management, roles, permissions and backend for “free”, and you can always, like sane people, cache the output of the most visited pages. Wordpress also provides a RESTful API if required.
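
That REST API is available out of the box; for example (domain is a placeholder):

    curl https://example.com/wp-json/wp/v2/posts    # latest posts as JSON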

TCB13 ,

This might be caused by your I/O reaching its limits while doing random reads. Reducing the number of peers you connect to may fix the problem.
