
lal309 OP ,

I believe you are referencing the same post that got me curious about Incus and started me playing around with it.

My biggest gripe is the manual installation of all the services, which I'll do if it's worth it. So far I'm not sure it is, hence this post to get more opinions.

There is a GUI you can install for Incus, but it's optional and not preinstalled.

I appreciate your input.

lal309 OP ,

You are probably right. Judging by their GitHub repo, their first release was in October 2023. If I understand correctly, Incus is a fork of Canonical's LXD, which is not so new, so I don't know. Their documentation is quite good, but there aren't a lot of guides out there yet.

lal309 OP ,

Thank you I appreciate your input!

lal309 OP ,

Haven't really looked into Podman, as I read somewhere (if I remember correctly) that it takes quite a bit of rewriting (from Docker Compose to Podman). Again, I might be speaking out of turn here.

lal309 OP ,

Very true! Thanks.

lal309 OP ,

Strictly from a container perspective, wouldn't this workflow create more overhead? For example, an Incus cluster for me would be Debian hosts (layer 1), Incus (layer 2), LXC container (layer 3), Docker (layer 4), app/service (layer 5). A Docker Swarm cluster (for me) would be Debian hosts (layer 1), Docker (layer 2), app/service (layer 3).

Granted, a Docker Swarm cluster would negate the possibility of VMs without installing something else on the hosts, but I'm asking since I'm trying to keep my services in containers.

lal309 ,

This video helped me most. I'm a visual learner, so it was easier for me to follow this than a written guide. Just be careful when following along with tutorials (especially those written more than ~9 months ago), because the majority use OpenSSL 1.1.1 syntax and that version is now EOL. You will need to use OpenSSL 3.x syntax, as that is the currently supported version of OpenSSL.
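
One concrete example of the drift (a sketch, not from the video): 1.1.1-era guides use -nodes to skip key encryption, which OpenSSL 3.x deprecates in favor of -noenc, so generating a self-signed cert now looks like:

```sh
# OpenSSL 3.x: self-signed cert; -noenc replaces the deprecated
# 1.1.1-era -nodes flag (private key is written unencrypted)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 \
  -keyout key.pem -out cert.pem -noenc \
  -subj "/CN=example.local"
```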

lal309 ,

Honestly, I had trouble understanding the inner workings of Wine and how I would need to use it to install Windows software, so I just watched a few videos on YouTube, downloaded a sample exe (I think I used Notepad++), and tried it out. Ten minutes later I was running software through Wine, no problem. I wrote myself a quick documentation guide for my future self and was good to go.

lal309 OP ,

And you’ve used this with a game (or Elite)?

Looks like this is the engine, but you'd still have to put the "rest of the car" together on your own. Am I reading that right?

lal309 OP ,

Oh, sorry, I thought you had used it before. I'll take a closer look and see what I can do. Thanks for looking, though.

lal309 OP ,

It’s a shame really. It’s not critical for gameplay but it would have been a great immersive experience.

lal309 ,

This is the only thing keeping me from moving everything from PhotoPrism to Immich. I have over 40,000 objects (photos and videos) on my server already. If I could somehow get Immich up and running and have it "recognize" this giant directory of objects without having to reupload everything, I would switch tonight.

lal309 ,

It's not working because it is against Cloudflare's ToS, unfortunately.

First I would ask, do you really have to make Jellyfin publicly accessible?

If yes, are you able to set up a VPN (e.g. WireGuard) and access Jellyfin through that instead?

If you don't want the VPN route, then isolate the NPM and Jellyfin instances from the rest of your server infrastructure and run the setup you described (open ports directly to the NPM instance). That is how most people who don't want to use Cloudflare run public access to self-hosted services. But first, ask yourself the questions above.
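
If you go the VPN route, a minimal WireGuard server config is short (a sketch; every key, IP, and port here is a placeholder):

```
# /etc/wireguard/wg0.conf on the server — all values are placeholders
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# one [Peer] block per client device
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with wg-quick up wg0, give each client a matching config with the server as its peer, and reach Jellyfin at its LAN address through the tunnel.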

lal309 ,

Personal opinion: if you successfully booted Debian, stick with it. No need to try out a bunch of distros. Debian is well known and well supported, has tons of resources, AND everything works out of the box with your POS systems. Sold!

lal309 ,

Glad you liked it, fellow interwebs person!

Micro***t Word on Linux and alternatives

Are there good Microsoft Word alternatives that support Linux (I don't mind closed source)? LibreOffice is meh and OnlyOffice is quite good, but are there any better ones? Also, is there a way to install Word on Linux using Wine? When I do that, my laptop just overheats and loses its internet connection.

lal309 ,

OnlyOffice is the only one I've used that has a good-looking UI, works out of the box, and has very good compatibility (with Microsoft and other document standards). The install is just one flatpak away. Highly recommend.
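
For reference, that one flatpak (a sketch assuming Flathub is already set up; double-check the app ID on flathub.org):

```sh
# install OnlyOffice Desktop Editors from Flathub
flatpak install flathub org.onlyoffice.desktopeditors
```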

lal309 ,

Lots of answers in the comments about this particular storage type/vendor. Regardless, to answer your original question: rclone, hands down. If you spend 30-60 minutes actually reading its documentation, you're set, and you'll understand much more of what's going on under the hood.
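
To give a taste (a sketch with placeholder names): once you've defined a remote with rclone config, a one-way sync to the provider is a single command:

```sh
# mirror a local folder to a remote defined earlier via `rclone config`;
# "myremote" and both paths are placeholders
rclone sync /srv/data myremote:my-bucket/data --progress
```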

lal309 ,

Can't speak for those, but I tried Kopia and it did the job okay. Ultimately, though, I landed on rclone.

lal309 ,

I’m currently using Backblaze. I also researched Wasabi and AWS.

lal309 ,

Well, here's my very abbreviated conclusion (provided I remember the details correctly) from the research I did about three months ago.

Wasabi - okay pricing, reliable, S3 compatible, no charges to retrieve your data, but you pay in 1 TB blocks (wasn't a fan of that one), and there's a penalty for retrieving data before a "vesting" period (if I remember correctly, you had to leave the data there for 90 days before you could retrieve it at no cost; also not a big fan of that one).

AWS - I'm very familiar with it due to my job; pricing is largely driven by access requirements (how often and how fast I want to retrieve my data); very reliable; native S3; but it charges for everything (list, read, retrieve, etc.). That is the real killer and the largely unaccounted-for cost of AWS.

Backblaze - okay pricing, reliable, S3 compliant, free retrieval of data up to the same amount that you store with them (read below), and you pay by the gig (much more flexible than Wasabi). My heartburn with Backblaze was that retrieval stipulation; however, they recently increased it to free retrieval of up to 3x what you store with them, which is super awesome and made my heartburn go away really quickly.

I actually chose Backblaze before the retrieval policy change, and it has been rock solid from the start. It works seamlessly with the vast majority of utilities that can leverage S3-compliant storage. Pricing-wise, I honestly don't think it's that bad.
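
As an example of that tooling compatibility (a sketch; the remote name and credentials are placeholders), an rclone remote for Backblaze B2 is only a few lines of config:

```
# ~/.config/rclone/rclone.conf — placeholder credentials;
# `rclone config` generates this interactively
[b2remote]
type = b2
account = <keyID>
key = <applicationKey>
```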

Hope this helps

lal309 ,

Honestly, what really matters (IMO) is that you do offsite storage. Cloud, a friend's house, your parents', your buddy's NAS, whatever. Just get your data away from your "production/main" site.

For me, I chose cloud for two main reasons. First, convenience: I could use a tool to automate moving data offsite in a reliable manner, keeping my offsite backups almost identical to my main array and making retrieval easy should I need it. Second, I don't really have family or friends nearby and/or with the hardware to support my need for offsite storage.

There are lots of pros and cons of each, let alone add your specific needs and circumstances on top of it.

If you can use the additional drives later on in your main array, in some other server, or for a different purpose, then it may be worthwhile exploring the drives (my concern would be the ease of keeping offsite data up to par with the main data). If you don't like it for one reason or another, you can always repurpose the drives and give cloud storage a try. Again, the important thing is to do it in the first place (and encrypt it client-side).
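
On the client-side encryption point, one way to do it (a sketch assuming rclone, with placeholder names) is to wrap the cloud remote in rclone's crypt type, so data is encrypted before it ever leaves your machine:

```
# ~/.config/rclone/rclone.conf — "b2remote" is a placeholder cloud remote;
# syncing to [backup-crypt] encrypts locally before upload
[backup-crypt]
type = crypt
remote = b2remote:my-bucket/backups
password = <obscured via `rclone obscure`>
```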

lal309 ,

In my opinion it really comes down to support, price (first year and renewal) and ethics.

For the ethics piece, if you think Google is an evil company then avoid Google Domains, as an example.

lal309 ,

Fair point. I failed to mention features in my previous comment. Things like WHOIS privacy are essential to me, and I imagine they are for most of us (self-hosters).

lal309 ,

Absolutely agree! Just pointing it out in case OP runs into a registrar that doesn't offer this.

lal309 ,

Didn’t even know

lal309 OP ,

PvP only?

lal309 OP ,

What do you mean by micro?

lal309 OP ,

Noted!

lal309 OP ,

Makes much more sense now

lal309 OP ,

Any thoughts on AoE 4?

lal309 OP , (edited )

I'm almost positive I have 40k somewhere, but the last time I looked at some gameplay it seemed very overwhelming.

Edit: My apologies, it looks like I confused Total War: Warhammer with Warhammer 40k. Total War: Warhammer 3 is the one I have, and it seemed huge in scale and very overwhelming.

lal309 OP ,

Looks interesting. Do you use the AppImage or the Flatpak install method?

lal309 OP ,

I would like to take this for a spin, though I see two install methods, Flatpak and AppImage. Any recommendations here? It seems like both are on par as far as versions go.

lal309 OP ,

Awesome write up! Sounds like an interesting contender!

lal309 ,

Did not know about this one! Just added it to my Pi-hole instance. Thank you!

lal309 ,

When you created your containers, did you create "frontend" and "backend" Docker networks? Typically I create those two networks (or whatever names you want) and connect all my service containers (GitLab, WordPress, etc.) to the "backend" network, then connect nginx to that same "backend" network (so it can talk to the service containers). I also add nginx to the "frontend" network (the one whose ports get published on the host).

What this does is let you map container ports to host ports for the nginx container ONLY. Since nginx is on the network that can talk to the other containers, you don't have to forward or expose any ports that aren't required (like 3000 for GitLab) from the outside world into your services. Your containers will still talk to each other on their native ports, but only within that "backend" network (which has no forwarded/mapped ports).

You would want to set up your proxy hosts exactly like you have them in your post, except that in your Forward Hostname you would use the container name (gitlab, for example) instead of the IP.

So basically it goes like this

Internet > gitlab.domain.com > DNS points to your VPS > Nginx receives the request (frontend network with mapped ports like 443:443 or 80:80) > Nginx checks the proxy hosts list > forwards the request to the gitlab container on port 3000 (because nginx and gitlab are both in the same "backend" network) > Log in to GitLab > Code until your fingers smoke! > Drink coffee
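
In plain docker commands, that two-network layout looks roughly like this (a sketch; image tags, network names, and container names are just examples):

```sh
# two user-defined bridge networks
docker network create frontend
docker network create backend

# nginx proxy manager: the ONLY container with ports published on the host
docker run -d --name npm --network frontend \
  -p 80:80 -p 443:443 -p 81:81 \
  jc21/nginx-proxy-manager:latest
docker network connect backend npm   # lets it reach the service containers

# gitlab: backend network only, no published ports
docker run -d --name gitlab --network backend gitlab/gitlab-ce:latest
```

The proxy host's Forward Hostname is then just the container name, as described above.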

Hope this helps!

Edit: Fix typos

lal309 ,

You got it! As long as nginx can reach that service container, it will forward the request to it.

service1.example.com is configured in nginx with a proxy host of service1:1234, service2.example.com is proxied to service2:8080 and so on.

lal309 ,

I’ve been toying with the idea of standing it up. Any recommendations for the GUI side?

lal309 ,

I got ya. Took a quick look at that link, and it looks like the client is Windows-specific, which is frowned upon and permanently blacklisted in this house!!!

Still, I appreciate the reply

lal309 ,

Well…. you just blocked off my calendar for the weekend!

lal309 ,

Have you taken a look at CloudBeaver? I'm not sure I understand what an ERD is, but I've used it to manage and work with databases before. It's pretty easy, the UI is not bad at all, and it's self-hostable (through Docker). I don't know if it meets your criteria 100%, but it's worth checking out.
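
Spinning it up to try costs one docker command (a sketch from memory; verify the image, port, and workspace path against the CloudBeaver docs):

```sh
# CloudBeaver's web UI defaults to port 8978; the volume persists its workspace
docker run -d --name cloudbeaver \
  -p 8978:8978 \
  -v /opt/cloudbeaver/workspace:/opt/cloudbeaver/workspace \
  dbeaver/cloudbeaver:latest
```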

lal309 ,

I do remember being a bit lost with the initial connection to a Postgres database when I first spun up the app. I clicked around for a few minutes, but after that it has been very handy. My use case was extremely basic, as I just needed to manipulate some records I didn't know the right query for and to visualize the rows I needed.

lal309 ,

Sad indeed. Maybe raise an issue on GitHub? Even if you don't end up using CloudBeaver, it's worth reporting; maybe they don't know there's a problem with this component of their app.

lal309 ,

I went with the OpenSSL CA because cryptography has been a weakness of mine and I needed to tackle it. Glad I did; I learned a lot throughout the process.

Importing certs is a bit of a pain at first, but I just made my public root CA cert valid for 3 years (maybe 5, I can't remember) and put that public cert on a file share accessible to all my home devices. From each device I go to the file share once, import the public root CA cert, and I'm done. It's a one-time pain per device, so it's manageable in my opinion.

Each service gets a 90-day cert signed by the root CA and imported into Nginx Proxy Manager, which serves it up for the service (wikijs.mydomain.io).

Anything externally exposed uses Let's Encrypt for cert generation (within NPM); internally I use the OpenSSL setup.

If you document your process, once you've done it a few times it gets quicker and easier.
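
For anyone wanting to try the same kind of setup, the core of that workflow condenses to a few OpenSSL 3.x commands (a sketch; filenames, subjects, and validity periods are placeholders, reusing the wikijs.mydomain.io example from above):

```sh
# 1) root CA: key + self-signed cert, valid ~3 years
openssl req -x509 -newkey rsa:4096 -sha256 -days 1095 -noenc \
  -keyout rootCA.key -out rootCA.crt -subj "/CN=Home Root CA"

# 2) per-service key + certificate signing request
openssl req -newkey rsa:2048 -sha256 -noenc \
  -keyout wikijs.key -out wikijs.csr -subj "/CN=wikijs.mydomain.io"

# 3) 90-day cert signed by the root CA, with a SAN (bash process substitution)
openssl x509 -req -in wikijs.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 90 -sha256 -out wikijs.crt \
  -extfile <(printf "subjectAltName=DNS:wikijs.mydomain.io")
```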

lal309 ,

Didn’t know you could do this. Interesting!
