There can be ways if you’re using IPv6, basically turning your cloud host into a router, but with IPv4 you would have to have a 1:1 mapping and set up the routing carefully to make it work.
No. That would defeat the purpose of me installing Linux on (old) laptops. Windows feels sluggish enough with a sea of bad things after your minimum wage, and Windows Defender prevents some of it, but not all of it, obviously.
I put all my attention into prevention and set strict rules on the router. It can be as simple as setting the DNS to something like dnsforge.de, or you can DIY it with Pi-hole with hosts lists to your heart’s content that update themselves weekly; I do the latter. Nothing beats a cross-platform solution that protects every device on the network if you’re after 100% performance. Of course you can still catch bad things, such as the social engineering by email that happened over at Linus Tech Tips. Better stay vigilant no matter what solution you use, and don’t sleep on making backups, which can be simple and automated if you use Syncthing, for example.
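For reference, the weekly list refresh is Pi-hole’s built-in “gravity” update; a stock install typically schedules it via a cron entry already, and you can trigger it by hand too (a sketch):

```shell
# Re-download and recompile all configured blocklists ("gravity")
pihole -g
```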
You’re right. Syncthing isn’t a backup tool per se, and the devs even tell you that in the FAQ. But forgive me if I did preach about it anyway, because you can enable file versioning (keep old and deleted files on each host), which kind of makes it a backup in case something bad happens? Anyway, it is my set-and-forget solution for Linux, Android and Windows. If you could recommend me an alternative that ticks these boxes I’d appreciate that. :)
Some are pretty legitimate, like the lack of Adobe or Autodesk support on Linux, which means a lot of people just 100% cannot participate in their industry using Linux. It’s borderline illegal to use Linux if you’re a mechanical or civil engineer; Solidworks and MATLAB are pretty much regulatory requirements; you’d probably lose your engineering license if you turned in a drawing made in FreeCAD. In the art space, tell a publisher you drew something in Inkscape and watch their personality leak out their ears. Everyone hates Adobe, but glory to Adobe.
There are also legitimate culture shocks. There’s this LTT video where they had iJustine on, and Linus and Justine swapped platforms, he on a Mac, she on a PC, and they were given basic tasks like “install Slack. Take a screenshot. Paste that screenshot in a word processing document. Save it as a PDF. Send that PDF to James in a Slack message. Uninstall Slack.” Justine immediately started looking around the back of the monitor for USB ports, rapidly found that a fresh install of vanilla Windows doesn’t (or didn’t at the time) come with a word processor that could save documents as a PDF, and Linus immediately went to the web browser instead of the app store… They did similar stunts with their Linux challenge later on, though I’d kinda argue about the tasks they were set (such as “sign” a document, which Linus started to do cryptographically but didn’t have any keys enrolled because who the fuck does, and Luke just… copy/pasted an image of his handwriting?). But anyway. Linux is different from Windows to use, and even a VERY Windows-like DE like Cinnamon is going to have differences that will feel foreign. I remember tripping over “shortcuts” being “links or launchers depending on what you want to do.”
There’s also the fact that Microsoft has done a world class job at making the average normie hate and fear the command line interface. Because universally, when you see a cmd prompt appear in Windows, it is a bad thing. That hate gets transferred to Linux, where we do routinely use the terminal because while it can be a little arcane, with a little bit of learning you can do some powerful stuff. But, because people have been so conditioned to hate the CLI by Microsoft, you get exchanges like this:
“Hey I’m trying out Pop!_OS because you nerds keep saying it’s good, and my laptop can connect to the internet with ethernet but not Wi-Fi, what’s up with that?” “Well let’s see, could you open a terminal and type sudo lshw -C network, and then copy-paste what it says here for me to look at?” “NO!!!11!! NEVAR!!! How DARE you suggest I use a computer by doing anything other than pointing at little pictures?! The indignity! It’s current year!!”
Finally, before I hit the character limit for this post, there’s just a reputation around Linux. I’ve had this happen more than once, someone will ask to use my computer to look something up on the internet. “Sure.” They find the Firefox icon on the quicklaunch bar just fine, it pops open, they’re doing fine, then they notice the color scheme and icons are a little different and they ask “uhh, what version of Windows is this?” And I say “It’s Linux Mint.” And they lift their hands off the keyboard with the same gesture as if I just told them my cute furry pet in their lap is actually a tarantula. They have it in their head that Linux is deliberately hard to use because it’s for computer nerds–they think all Linux is Suckless–and because they’re not computer nerds, they can’t use Linux. So the second they know it’s Linux, they “can’t” use it.
League of Legends is toxic in the way of people getting too emotionally invested in a game, but Counterstrike (in the old days, pre Source and GO) was toxic in a casually bigoted way almost completely detached from the state of the current match, which I think is worse.
I like to etch circuits, mess around, and learn. I don’t care to mess with SDR stuff or buying anything. The art of building is far more interesting than the end goal IMO.
Left 4 Dead 2 versus. I dare you to join a random match online and last longer than 10 minutes without getting kicked. Or just search for “left 4 dead 2 versus kicked” and you will find countless examples of people complaining about it.
It’s become a meme at this point, and I’m pretty sure people kick for fun, although some claim you get kicked for not being good enough, or for being too good. Just play with friends instead, or play campaign; people are nice there.
As someone who has tried both and went back to Pi-hole for no reason other than “why not?” – it works as intended, does everything it should, and I have 0 issues running it, plus 2x Unbound DNS servers.
In the same boat as you here. Tried both and went back to Pi-hole because “why not?”
AdGuard does have a Home Assistant setup, which was nice and easy, but I like to compartmentalize my setup: if Home Assistant goes offline, I don’t want AdGuard going down with it and taking my internet out.
Since I started running pfSense on a custom PC with a dedicated NIC, Unbound has been my go-to choice for DNS and ad blocking. I use Pi-hole on specific subnets now.
Is there any solution (program/Docker image) that will take a port and forward it to another host (or maybe another program listening on the host), modifying the traffic so it contains the real source IP? The whole idea is that in the server logs I want to see people’s real IP addresses, not the cloud server’s private VPN IP.
Not that I’m aware of. Most methods require some kind of out-of-band way to send the client’s real IP to the server. e.g. X-Forwarded-For headers, Proxy Protocol, etc.
If your backend app supports proxy protocol, you may be able to use HAProxy in front on the VPS and use proxy protocol from there to the backend. Nginx may also support this for streams (I don’t recall if it does or not since I mainly use HAProxy for that).
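As a sketch of the HAProxy approach, a TCP frontend on the VPS could tag connections with the PROXY protocol header so the backend sees the real client IP (the port and backend address below are made up, and the backend service must be configured to expect the header):

```
frontend tcp_in
    mode tcp
    bind :8080
    default_backend app_servers

backend app_servers
    mode tcp
    # send-proxy prepends a PROXY protocol header carrying the real client IP
    server app1 10.8.0.2:8080 send-proxy
```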
Barring that, there is one more way, but it’s less clean.
You can use iptables on the VPS to do a prerouting DNAT port forward. The only catch to this is that the VPN endpoint that hosts the service must have its default gateway set to the VPN IP of the VPS, and you have to have a MASQUERADE rule so traffic from the VPN can route out of the VPS. I run two services in this configuration, and it works well.
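A minimal sketch of those two rules, assuming the service listens on TCP 8080 and the VPN endpoint’s tunnel IP is 10.8.0.2 (both made up):

```shell
# Forward traffic arriving on eth0:8080 to the VPN endpoint,
# keeping the original source IP intact
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.8.0.2:8080

# Let traffic coming from the VPN route out of the VPS's internet-facing interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```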
Where eth0 is the internet-facing interface of your VPS.
Edit: One more catch to the port forward method. This forward happens before the traffic hits your firewall chain on the VPS, so you’d need to implement any firewalls on the backend server.
You may need to move the logic from netplan to a script that gets executed when the VPN is brought up. Otherwise, it will likely fail since it won’t have the VPN tunnel interface up to route traffic to.
Forgot to ask: Is your server a VPN client to the VPS or a VPN server with the VPS as a client? In my config, the VPS is the VPN server.
Not sure about the netplan config (all my stuff is Debian and uses oldschool /etc/network/interfaces), but you’d need logic like this:
Server is VPN client of the VPS:
```yaml
routes:
  # Ensure your VPS is reachable via your default gateway
  - to: <vps public ip>
    via: <your local gateway>
  # Route all other traffic via the VPS's VPN IP
  - to: 0.0.0.0/0
    via: <vps vpn ip>
```
You may also need to explicitly add a route to your local subnet via your eth0 IP/dev. If the VPS is a client to the server at home, then I’m not sure if this would work or not.
Sorry this is so vague. I have this setup for 2 services, and they’re both inside Docker with their own networks and routing tables; I don’t have to make any accommodations on the host.
Everything I use is in Docker too; I’d much rather use Docker than mess around with host files, but I don’t mind it just to try this out. If you have an image you could share, I’d appreciate it.
Anyway, neither is client or server, as I just used ZeroTier for a quick setup. On my other infra I use WireGuard with the VPS as the server (that setup works well, but I only reverse proxy HTTP stuff there, so X-Forwarded-For covers it).
I’ve no experience with ZeroTier, but I use a combo of WG and OpenVPN. I use OpenVPN inside the Docker containers since it’s easier to containerize than WG.
Inside the Docker container, I have the following logic:
supervisord starts openvpn along with the other services in the container (yeah, yeah, it’s not “the docker way” and I don’t care)
OpenVPN is configured with an “up” and “down” script
When OpenVPN completes the tunnel setup, it runs the up script which does the following:
```shell
# Get the current default route / Docker gateway IP
export DOCKER_GW=$(ip route | grep default | cut -d' ' -f 3)

# Delete the default route so the VPN can replace it
ip route del default via "$DOCKER_GW"

# Add static routes through the Docker gateway only for the VPN server
# and the local LAN; "|| true" keeps the script going if a route exists
ip route add "$VPN_SERVER_IP" via "$DOCKER_GW" || true
ip route add "$LAN_SUBNET" via "$DOCKER_GW" || true
```
LAN_SUBNET is my local network (e.g. 192.168.0.0/24) and VPN_SERVER_IP is the public IP of the VPS (1.2.3.4/32). I pass those in as environment variables via docker-compose.
The VPN server pushes the default routes to the client (0.0.0.0/1 via <VPS VPN IP> and 128.0.0.0/1 via <VPS VPN IP>).
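For reference, that 0.0.0.0/1 + 128.0.0.0/1 pair is OpenVPN’s “def1” trick; on the server side it comes from a single directive:

```
# server.conf: push a default route as two /1 routes so it overrides
# the client's existing default route without deleting it
push "redirect-gateway def1"
```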
Again, sorry this is all generic, but since you’re using different mechanisms, you’ll need to adapt the basic logic.
Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic comes in? I’ve set up a quick WireGuard VPN with Docker and configured the client so that it routes ALL traffic through the VPN. Doing something like curl ifconfig.me now shows the public IP of the VPS… this is good. But it seems like the iptables commands aren’t working for me.
I usually end up needing to do a fresh install whenever it breaks again.
That’s common when you start adding random PPAs, running commands without understanding them (we all do 👀), and whatnot. You can save yourself from reinstalling over and over by using an immutable distribution: at any point you’ll know exactly what changed in your system, and if it breaks you can just roll back to the previous working point and either fix your mistake or wait for a fix from upstream when the issue is there (this year there were a few kinda major hiccups on Fedora, for example).
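On the Fedora immutable spins that rollback is basically one command, since the system is managed as atomic deployments by rpm-ostree (a sketch):

```shell
rpm-ostree status    # lists the current and previous deployments
rpm-ostree rollback  # makes the previous working deployment the boot default
# then reboot into it
```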
I suggest you try one of the Fedora immutable spins (Silverblue, Kinoite, Sericea) or Vanilla OS, though I would hold off from it until Orchid comes out.
If you want to go all in you can use NixOS, but it takes a lot of reading.
As someone who has needed to use random PPAs and inevitably wound up needing to reinstall many times, I think this is good advice. I’ll do this if I ever get the nerve to try again.
If Flatpak doesn’t cover your needs, you can already use Distrobox on your current distro for that purpose: make an Ubuntu container and add the PPAs to it. If/when it breaks, your system will still remain intact.
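A sketch of that workflow (the container name and PPA below are made up; Distrobox needs podman or docker underneath):

```shell
# Create and enter a disposable Ubuntu container that shares your $HOME
distrobox create --name ubuntu-ppas --image ubuntu:22.04
distrobox enter ubuntu-ppas

# Inside the container, PPAs only touch the container, never the host
sudo add-apt-repository ppa:example/some-ppa
sudo apt update && sudo apt install some-package
```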
RPis and RPi-compatibles got co-opted by a huge number of commercial and industrial control systems companies, used as cheap full-fat embedded systems where a simple microcontroller wasn’t enough but industrial PLCs were overkill or not sourceable. Everything they produce, which is not a lot given the covid supply chain whiplash, has been going towards those customers’ contracts, and fuck the little-guy consumer they were meant for.
If you want to get into the SBC ecosystem, leave RPi in the dust; they’re dead to the enthusiasts and won’t be coming back. There are much better options. See the Linus Tech Tips video on them.
Finally someone mentions a product name. I am so sick of these “uh duuuh, there are better alternatives out there, hurhur” commenters who name not a single one.
I’d recommend one if I had tried any of them. The only one I’ve bought is the Orange Pi 5, which costs significantly more than the basic RPi’s $35, and I figured it was outside the power envelope OP really needed.
In their defence, the Pi was never intended to be a powerhouse. The focus was on getting good software support for a low-cost system, and that stable foundation is what built the turnkey reliability.
A lot of the other board providers have a habit of just creating a powerful little board, and throwing it out there to fend for itself. This is great for competent geeks, but less good for those still learning.
Meh, I don’t know if they need defense. It’s just kind of how it is.
They got big and popular and that means momentum. Momentum is good for adoption and momentum is good for support, but it’s not great for huge jumps in technological sophistication.
I still LOVE the 2040, pico, etc, but there are just better options when you go bigger than that.
The Potato, Rock Pis.
This creator is great for when you want to SBC shop
The Raspberry Pi was never meant to be a powerhouse. Its whole goal was to make support and learning easy: a few very well maintained models, with the same core chips. That last bit is the cause of the shortage; they can’t easily redesign without fragmenting the support base, and that is completely against their ethos.
I’ve also found, once you hit a Pi’s limit, that it’s best to go to something more specialist. My go-to options are NUCs for general computing, or the Nvidia Jetson series, for portable brute power. Anything that saturates a pi will quickly saturate the smaller SBCs soon after, as well. They suffer from many of the same bottlenecks.
Have you ever checked out Orange Pi? I was considering them before picking up a Jetson Nano. It’s crazy to think that an RPi 4B is going for the same price from resellers as a Jetson with CUDA and TensorFlow support.
I’ve heard of them but haven’t seen any. The other piece I was looking for was CMs for the Turing Pi that I got. It’s been sitting in a box ever since I got it because… no compute modules anywhere.
4K on my P2000 or using Intel QSV isn’t a great experience. I can totally see it not being a good experience on a P4000 either.
That being said, HDR at 1080p works with both QSV and the P2000, so it should work like a champ on the P4000. I don’t really have any HDR displays, so I don’t grab that many things in HDR; YMMV.
The best advice I can offer: if the content is transcoded into an mp4 container with the moov atom up front (aka fast start / web optimized) and you’re not using subtitles, it will work okay-ish as long as you don’t pause it. Using the mkv container is just asking for sadness, in my experience. Though at this point, if I need to do that, I just transcode into AV1, burn the subs in, and pass the audio through.
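Roughly what that transcode looks like with ffmpeg (filenames made up; assumes a build with the SVT-AV1 encoder):

```shell
# Burn the first subtitle track in, encode video to AV1, pass the audio
# through untouched, and put the moov atom up front for fast start
ffmpeg -i input.mkv -vf "subtitles=input.mkv" \
       -c:v libsvtav1 -c:a copy \
       -movflags +faststart output.mp4
```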