zecg , in New Mozilla Logo Spotted
@zecg@lemmy.world avatar

Fuck you, Mozilla, it’s uglier and says less about your core product, which is also the only thing you have that makes people tolerate your other stupid decisions.

zecg ,
@zecg@lemmy.world avatar

Also, fuck your “a” in that font. I’d punch that fucking glyph in the face if it spoke to me on the street.

xryx ,

I’m right there with you. Although I might not wait for that glyph to speak 😜

Guenther_Amanita , in Successful move over after years of trial and error

Just FYI: While Arch isn’t “for experienced users only”, it still might require some more work after your install.

It usually comes pretty minimal by default, and then you might wonder why, for example, printing doesn’t work out of the box.

It also makes it very easy for an inexperienced user to bork the system, which you then have to fix.
I often hear from other users that sometimes this just happens out of the blue, too.

If Arch works perfectly for you, then congratulations! Keep using it.
But if you notice that you have to fight against the OS too often, consider a different distro that is supposed to just work.

One of those might be Bazzite (if you game) or Aurora. Both are almost the same, but Bazzite is more for gaming, while Aurora is more for general, non-gaming use. But you can use them interchangeably.
They belong to the uBlue project, which is a customized Fedora Atomic.
They are already set up for you with everything you want and need, are zero-maintenance, and basically indestructible.

So, if you’re done with Arch, consider them.

gregor ,

Honestly, I just use Kubuntu. It just works, out of the box, with no fuckery involved, and I can customize it as I want. You can read more about what I think about immutable distros in my blog post.

Guenther_Amanita ,

While your blogpost isn’t completely right, it’s also not completely wrong.

You can absolutely customize image-based distros, just as much as package-manager-based ones. You just need to do it from upstream, by modifying the image itself, rather than bottom-up as usual.

uBlue is the best example. There are already hundreds of customized images available, including for Hyprland, Deepin, and much more.
That’s why immutable is often considered the wrong term for it. Image-based, or atomic, fits much better.

One of the biggest pros, apart from the lack of maintenance needed (updating, etc.) is the reproducibility.
It’s very similar to Android, where every phone is the same.
Therefore, every bug is the same too, which is why the devs can roll out patches that fix everyone’s install at once.
Also, every update is basically a “reinstall lite”, so no package drift occurs.
This makes them way less buggy in my experience.

I used the normal Fedora KDE spin, for example, and after a few months weird bugs often appeared that only affected my install.
Since I switched to Atomic, none of those problems have come back.

Even if you decide to use BTRFS snapshots via Snapper, as you suggested, the underlying system drifts apart from the original install.


Also, instead of Kubuntu, I would recommend the Fedora KDE spin or just Debian with KDE, if you really want to use something traditional.
I just don’t see any reason to not run Kinoite compared to a non-atomic distro, and it will only get better in the future.

Quik ,

(In reply to the post) Actually, I’ve found my immutable distro of choice (Silverblue) to be a lot of fun to tinker in (not with), but you just have to accept that tinkering works a bit differently here: most of the stuff you want to try gets installed in toolbox/containers instead of on your actual host system.

possiblylinux127 ,

Honestly, I use X, Y and Z

It works out of the box (basically everyone here)

Broken_Monitor ,

This is by far the most confusing part when I consider switching over - which one to get? Primarily Steam gaming, but I saw someone mention the Nvidia cards I have might not play nice.

MyNameIsRichard ,
@MyNameIsRichard@lemmy.ml avatar

This is by far the most confusing part when I consider switching over

It’s the same process as when buying a car. Try a few out and see which one you like.

heythatsprettygood OP ,

If you aren’t trying to run anything too crazy (like AMD HIP compute, HDR, or really bleeding-edge hardware), I would recommend giving openSUSE Tumbleweed, Fedora (only the regular GNOME version; for some reason the KDE spin was buggy in my experience), and Pop OS a test drive off live USB drives. Each has its own merits, so it’s worth trying all of them. In terms of NVIDIA support, I personally don’t have much experience with NVIDIA cards, but when I was helping a friend format an iPod, Fedora booted off a live USB on an RTX 4050 laptop with little fuss, and on install it gives options for installing the full proprietary NVIDIA drivers. I know there is also an NVIDIA installer option in YaST’s software manager on openSUSE, and Pop even has an ISO with the drivers baked right in for full compatibility. Your mileage may vary, but I have heard the whole NVIDIA situation is pretty good right now as long as you have the proprietary drivers installed.

possiblylinux127 ,
  • Arch is too complicated
  • proceeds to recommend immutable distros
atocci ,

What does this mean?

Agility0971 ,
@Agility0971@lemmy.world avatar

To be fair, the best standard would be to send off new users to immutable distros

possiblylinux127 ,

Maybe sometime in the future. However, not today. It is still complex and requires some knowledge of how file systems work

Guenther_Amanita ,

Image based distros are only complicated if you come from traditional distros, because they’re different.

If you come from Windows or another OS, then having “The whole OS is one thing” instead of “A huge collection of packages and directories” makes everything simpler to understand, because you don’t mess with anything except /home/. You don’t have to care about anything else.

And if you want to do something more fancy, like using a CLI tool, then having to enter a Distrobox container isn’t complicated.

For casual use, like gaming, browsing or image editing, everything is just as usual. Nobody, except us Linux nerds, actually cares about the underlying system. Casual users just want the OS to be a tool for their programs they use, and for that, it’s ideal, because it just works and doesn’t bork itself.

possiblylinux127 ,

Until something goes wrong or they want to customize the system. It will backfire quickly.

Guenther_Amanita ,

Then you can always roll back in case you end up with a non-working image.

I had to do that once. On a non-atomic install, this would have meant a completely broken system. In my case, this was one reboot away and it worked again.

And in case you don’t like the direction your image project is going, you can also always rebase to another one in less than 5 minutes, download time and reboot included.

uBlue for example starts with a very basic Fedora Silverblue image, which you can fork easily yourself. I have zero experience in coding or other stuff, and even I managed to get my own custom image working.

There are already a couple of people around who started with Aurora, Secureblue or Bazzite, but then found them too opinionated, and went back to Vanilla Kinoite for example.
It’s extremely simple to switch out the base OS to something almost completely different.

And, you don’t lose any customisability. You can still do everything you want; take a look at Bazzite or Secureblue. Completely different kernel, additional modifications and packages, and much, much more. They feel completely different from Vanilla Kinoite, for example.

Hellmo_Luciferrari ,

I tried using Bazzite since I didn’t want to fuss with Wayland on Nvidia with Arch.

I had more gripes and more issues with an immutable distro than I ever did with my Arch install.

Stuck it out with Arch. It has taught me a lot.

The problem many folks have with Arch is that they don’t want to read or learn; well, newsflash: if you read and learn, Arch isn’t exactly all that hard to use, set up, or maintain. It has better documentation than Bazzite and other newer distros. In fact, the Arch Wiki has saved me hassle on other distros.

Your mileage may vary. However, I wouldn’t necessarily recommend an immutable distribution to someone coming from Windows unless they want to shift from one paradigm to another.

Switching from Windows to something with such a vastly different approach will, in many cases, turn users away from Linux. A bad experience can make them switch away for lack of knowledge, proceed to conflate every distro into just one “Linux” experience, and never want to look back.

I still stand by one thing you will always hear me say: use the right tool for the job.

Coelacanthus , in Why Wayland adoption to have official support in programs is so slow?
@Coelacanthus@lemmy.kde.social avatar

In my opinion, that’s because X11 lacks proper abstractions for many things like screenshots, screencasting, color management, etc., so applications have to use many X11 implementation details to implement these features. That leads to code tightly coupled to X11, so moving it to Wayland while ensuring it works correctly and stays consistent with the old behavior is difficult.

Max_P , (edited ) in Java uses double ram.
@Max_P@lemmy.max-p.me avatar

When you set the memory allocation for Minecraft, you are really only configuring the JVM’s garbage collector to use that much memory. That doesn’t include any shared resources outside of the JVM, such as Java itself, OpenGL resources, and everything else that involves native code, system libraries, and drivers.

If you have an integrated GPU, all the textures that normally get sent to a GPU may also live in your regular RAM, since those use unified memory. That can inflate the amount of memory Java appears to use.

A browser for example, might not have a whole lot of JavaScript memory used, couple MBs maybe. But the tab itself uses a ton more because of the renderer and assets and CSS effects.

UnRelatedBurner OP ,

This is interesting and infuriating, but I don’t think it’s quite right in my scenario, as I also observe the over-usage when running a server from the console. There shouldn’t be any GPU shenanigans with that, I hope.

DaPorkchop_ ,

There are still plenty of native libraries, plus the JVM itself. For instance, the networking library (Netty) uses off-heap memory, which it preallocates in fairly large blocks. The server will spawn quite a few threads, both for networking and for handling async chunk loading and generation, each of which will likely add multiple megabytes of off-heap memory for stack space, thread-locals, GC state, system memory allocator state, and I/O buffers. And none of this accounts for the memory used by the JVM itself, which includes up to a few hundred megabytes of space for JIT-compiled code; JIT compiler state such as code profiling information (in practice a good chunk of opcodes need to track this); method signatures, field layouts, and superclass+superinterface information for every single loaded class (for modern Minecraft, this is well into the tens of thousands); and the full uncompressed bytecode for every single method in every single loaded class. If you’re using G1 or Shenandoah (you almost certainly are), add the GC card table, which IIRC is one byte per alignment unit of heap space (so by default, one byte per 8 bytes of JVM heap; I don’t recall if this is bitpacked, I don’t think it is, for performance reasons). I could go on, but you get the picture.

Joelk111 , in What is the largest file transfer you have ever done?

When I was moving from a Windows NAS (God, fuck windows and its permissions management) on an old laptop to a Linux NAS I had to copy about 10TB from some drives to some other drives so I could re-format the drives as a Linux friendly format, then copy the data back to the original drives.

I was also doing all of this via the terminal, so I had to learn how to copy in the background, then write a script to check and display the progress every few seconds. I’m shocked I didn’t lose any data, to be completely honest. Doing shit like that makes me marvel at modern GUIs.

Took about 3 days of copying files alone. Combined with all the other NAS setup stuff, it ended up taking me about a week just waiting for stuff to happen.

I cannot reiterate enough how fucking difficult it was to set up the Windows NAS vs the Ubuntu Server NAS. I had constant issues with permissions on the Windows NAS. I’ve had about 1 issue in 4 months on the Linux NAS, and it was much more easily solved.

The reason the laptop wasn’t a Linux NAS is due to my existing Plex server instance. It’s always been on Windows and I haven’t yet had a chance to try to migrate it to Linux. Some day I’ll get around to it, but if it ain’t broke… Now the laptop is just a dedicated Plex server and serves files from the NAS instead of local. It has much better hardware than my NAS, otherwise the NAS would be the Plex server.
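The background-copy-and-progress approach described above can be sketched roughly like this (all paths and sizes are made up for illustration; on a real transfer, `rsync -a --info=progress2` or a terminal multiplexer would give the same visibility with less ceremony):

```shell
# Create a stand-in source file (8 MiB of zeros; substitute your real data).
mkdir -p /tmp/src /tmp/dst
dd if=/dev/zero of=/tmp/src/big.bin bs=1M count=8 status=none

# Start the copy in the background and remember its PID.
cp /tmp/src/big.bin /tmp/dst/ &
CP_PID=$!

# Poll the destination size every second until the copy finishes.
while kill -0 "$CP_PID" 2>/dev/null; do
    du -sh /tmp/dst
    sleep 1
done
wait "$CP_PID"
echo "copy done"
```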

clmbmb ,

so I had to learn how to copy in the background, then write a script to check and display the progress every few seconds

I hope you learned about terminal multiplexers in the meantime… They make your life much easier in cases like this.

corsicanguppy ,

30 years with Linux and I know I still haven’t. Maybe this year? :-D

jrgd , in Java uses double ram.

Depending on the version, and whether it’s modded with content mods, you can easily expect Minecraft to use significantly more memory than what you give its heap. Java processes have a statically/dynamically (with bounds) allocated heap from system memory, as well as memory used in the stack of the process. Additionally, Minecraft might show more memory in some process monitors due to external shared libraries being used by the application.

My recommendation: don’t allocate more memory to the game than you need to run it without noticeable stutters from garbage collection. If you are running modded Minecraft, one or more mods might be causing stack-related memory leaks (or might just be large and complex enough to genuinely require large amounts of memory). We might be able to get a better picture if you shared your launch arguments, game version, total system memory, memory used by the game in the process monitor you are using (and modlist, if applicable).

In general, it’s also a good idea to set up and enable ZRAM, and to disable swap if it’s in use.

taaz , (edited )

Big modpacks that add a lot of different blocks will also always explode the memory usage, as at startup Minecraft pre-bakes all the 3D models of the blocks.

UnRelatedBurner OP , (edited )

launch arguments [-Xms512m, -Xmx1096m, -Duser.language=en] (it’s this little so that the difference shows clearly. I have a modpack that I give 8 GB to, and it uses way more as well; IIRC around 12)

game version 1.18.2

total system memory 32gb

memory used by the game: https://i.imgur.com/loRZxqu.png (I’m using KDE’s default system monitor, but here’s btop as well: https://i.imgur.com/JP3I9MX.png)

also: this test was at max render distance. With 1 GB of RAM it crashed, of course, but it crashed at almost 4 GB. What the hell! That’s 4 times as much

jrgd ,

For clarification: is this vanilla, a performance-mod Fabric pack, a Fabric content modpack, a Forge modpack, etc. that you are launching? If it’s the modpack you describe needing 8 GB of heap memory, I wouldn’t be surprised at the Java stack memory taking ~2.7 GiB. If it’s plain vanilla, that memory usage does seem excessive.

UnRelatedBurner OP ,

This was Vanilla.

jrgd , (edited )

Running the same memory constraints on a 1.18 vanilla instance, most of the stack memory allocation comes from ramping the render distance from 12 chunks to 32 chunks. The game only uses ~0.7 GiB of non-heap memory at a sane render distance in vanilla, versus ~2.0 GiB at 32 chunks. I forgot that the render distance no longer caps out at 16 chunks in vanilla. Far render distances like 32 chunks will naturally balloon the stack memory size.

UnRelatedBurner OP ,

And here you’d think random game objects aren’t stored on the stack. Well, thanks for the info. Guess there isn’t anything to do, as others have said as well.

Max_P ,
@Max_P@lemmy.max-p.me avatar

It looks like you’re looking at the entire PolyMC process group so in this case memory usage also includes PolyMC itself, which buffers a chunk of the logs. It shouldn’t be using that much, but it will add a hundred MB or two to your total here as well.

Taleya , in What is the largest file transfer you have ever done?

I work in cinema content so hysterical laughter

potajito ,

Interesting! Could you give some numbers? And what do you use to move the files? If you can disclose obvs

Taleya , (edited )

A small DCP is around 500 GB. But that’s like basic film shizz: 2D, 5.1 audio. For comparison, the 3D Deadpool 2 teaser was 10 GB.

Aspera’s commonly used for transmission due to the way it multiplexes. It’s the same protocol behind Netflix and other streamers, although we don’t have to worry about preloading chunks.

My laughter is mostly because we’re transmitting to a couple thousand clients at once, so even with a small DCP that’s around a PB dropped without blinking

MoonMelon ,

In the early 2000s I worked on an animated film. The studio was in the southern part of Orange County CA, and the final color grading / print (still not totally digital then) was done in LA. It was faster to courier a box of hard drives than to transfer electronically. We had to do it a bunch of times because of various notes/changes/fuck ups. Then the results got courier’d back because the director couldn’t be bothered to travel for the fucking million dollars he was making.

CrabAndBroom ,

Oh yeah I worked in animation for a bit too. Those 4K master files are no joke lol

Taleya ,

Fucking hell the raws woulda been gigantic

WldFyre ,

You legally have to tell us if that movie was Shrek.

MoonMelon ,

Hah, nope. Shrek was made in Glendale, so they probably had everything on site or right next door.

Azzk1kr ,

Eh, what’s a dcp?

Dlayknee ,

Digital Cinema Package; basically the movie file you’re watching when you’re in a movie theater.

CrabAndBroom ,
Taleya ,

That article was a weird mix of insider info and wild inaccuracies

CrabAndBroom ,

Oh sorry! Here ya go!

Taleya ,

Digital Cinema Package. Films come out in a buncha files that rather resemble a dvd rip. You got your video files (still called reels!) and your audio files, maybe some subtitle files and other bits and pieces and your assetmap (list of files) all in a big fat folder collectively called a DCP

daq ,

I used to work in the same industry. We transferred several PBs from the western US to Australia using Aspera via thick AWS pipes. Awesome software.

Taleya ,

Hahahah did you enjoy Australian Internet? It’s wonderfully archaic

(MPS, Delux, Gofilex or Qubewire?)

potajito ,

Ahhh, thanks for the reply! Makes sense! We also use Aspera here at work (videogames), but we don’t move that amount, not even close.

therealjcdenton , in Troubleshooting a desktop that does not go into sleep mode/suspend

Do you have steam running in the background? Or do you have gamemoded enabled?

independantiste OP ,
@independantiste@sh.itjust.works avatar

Good question, I will check after work if steam starts in the background or something, I’ve had some issues with steam in the past so what you’re saying could make sense…

taaz , (edited ) in Java uses double ram.

As a side note and a little PSA: if you need to squeeze more overall performance out of MC (and you are playing vanilla or a Fabric modpack), I very much recommend these Fabric mods: Sodium, Lithium, FerriteCore, and optionally Krypton (server-only), LazyDFU, Entity Culling, and ImmediatelyFast.

UnRelatedBurner OP ,

haha, thanks! But I already knew about most of them :D

taaz ,

You could also (hard) limit the total (virtual) memory the process will use (though the system will hard-kill it if it tries to get more) with this:
systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=0 prismlauncher

You would have to experiment with how many gigabytes to specify as the max so that it does not get outright killed. If you remove MemorySwapMax, the system will not kill the process but will start aggressively swapping its memory, so if you do have swap it will work (and, depending on how slow the swap disk is, start lagging).

In my case I have a small swap partition on an M.2 disk (which might not be recommended?), so I didn’t notice any lagging or stutters once it overflowed the max memory.
So in theory, if you are memory-starved and have swap on a fast disk, you could instead use the MemoryHigh flag to set a limit at which systemd starts swapping, without any of the OOM killing (or use both; Max has to be higher than High, obviously).

UnRelatedBurner OP ,

Terminating it once it exceeds X is a very bad idea. I wouldn’t fancy losing progress and corrupting my world

boredsquirrel ,
@boredsquirrel@slrpnk.net avatar

I only know Optifine. What is fabric?

taaz ,

Fabric is one of many mod loaders, à la Forge. It’s newer and less bulky than Forge (but AFAIK it already had its own drama, so now we also have a fork called Quilt; the same goes for Forge and NeoForge).

The mods I’ve specified above can be considered as a suite replacement for the (old) OptiFine.

E: For example, this is all the mod loaders that Modrinth (mod hosting website, CurseForge alternative) currently lists:

https://biglemmowski.win/pictrs/image/9191dc35-8ac5-46e2-974c-0ecb5eeef741.webp

LazaroFilm , (edited )
@LazaroFilm@lemmy.world avatar

Look for the modpack Additive. It’s based on Fabric with some great mods oriented towards speed and QOL, which replace OptiFine in one package.

boredsquirrel ,
@boredsquirrel@slrpnk.net avatar

Thanks!

Nibodhika , in What is the largest file transfer you have ever done?

Why would dd have a limit on the amount of data it can copy? AFAIK dd doesn’t check nor do anything fancy; if it can copy one bit, it can copy infinitely many.

Even if it did any sort of validation, if it can do anything larger than RAM it needs to be able to do it in chunks.
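The chunking is visible right in dd’s interface: `bs=` sets the block size it buffers in memory and `count=` the number of blocks, so only one block ever needs to fit in RAM regardless of the total size. A minimal sketch with arbitrary sizes:

```shell
# Copy 16 MiB in 1 MiB chunks; dd only ever holds one bs-sized buffer in RAM.
dd if=/dev/zero of=/tmp/chunked.bin bs=1M count=16 status=none

# The result is exactly bs * count = 16777216 bytes.
stat -c%s /tmp/chunked.bin
```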

nik9000 ,

Not looking at the man page, but I expect you can limit it if you want and the parser for the parameter knows about these names. If it were me it’d be one parser for byte size values and it’d work for chunk size and limit and sync interval and whatever else dd does.

Also probably limited by the size of the number tracking. I think dd reports the number of bytes copied at the end even in unlimited mode.

FooBarrington ,

No, it can’t copy infinite bits, because it has to store the current address somewhere. If they implement unbounded integers for this, they are still limited by your RAM, as that number can’t infinitely grow without infinite memory.

CrabAndBroom ,

Well, they do nickname it disk destroyer, so if it were unlimited and someone messed it up, it could delete the entire simulation that we live in. So it’s for our own good, really.

data1701d OP ,
@data1701d@startrek.website avatar

It’s less about dd’s limits and more me laughing at the fact that it supports units so large it might take decades or more before we ever read data of that size.

dukatos , in Windows 11 vs. Ubuntu 24.04 Linux Performance For The AMD Ryzen 9 9950X Review

The best thing here is that Michael tested it with kernel 6.8.0, which was released in March, so there is no special optimization for the new CPU in it. And it still runs faster.

TGS , in What distro do you use for your servers?
@TGS@lemmy.world avatar

I use proxmox and run Debian containers and VMs

Lemmchen , in Java uses double ram.
@Lemmchen@feddit.org avatar
Guenther_Amanita , in 2GB Raspberry Pi 5 on sale now at $50

I don’t see any reason to use a Raspi instead of a used thin client for selfhosting.
They use about the same energy, but the mini PC has x86, which has better software support, has more ports, and runs more stably.

I have an RPi for my 3D printer (OctoPrint), and I will soon replace it with a “proper” PC, because it always crashes.

Raspberry Pis are good for very small appliances, but for anything more, they suck imo

nerdschleife ,

What’s a thin client?

dinckelman ,

A low-power computer typically used just to remotely connect to a proper server

PhictiveHomeRowing ,

Think of a browser and nothing else. Computation happens somewhere else (except JS)

Guenther_Amanita ,

A small form factor PC. Think of a Mac Mini. Small, often not-high-performance, low-powered PCs that are often used in business environments.

I use one as my home server.

nerdschleife ,

Ah, okay. I thought OP was referring to a thinkpad/thinkcentre

ghurab ,

That’s not what a thin client is; that’s just a mini PC. A thin client is a computer that connects to remote sessions, and since that’s their main function, they don’t need more computing power than it takes to connect to a remote desktop environment.

pbjamm ,
@pbjamm@beehaw.org avatar

that is not a thin client in the traditional sense, just a small form factor (1-liter) PC. Thin clients were minimal-spec machines made to connect to a much more powerful server somewhere on the network that did all the work. The thin client handled the display and I/O.

Mini PCs are generally a far better deal than a Pi and much more powerful for any kind general computing use.

dinckelman ,

They are what you make of them. I have three 3b+ units sitting upstairs, one of which runs my entire media stack, and the second is mostly just for Pihole, and the last is for general tinkering I might need. The pin array is awesome to have.

No one’s arguing they are low performance (although a 5 is practically 5x the performance of a 3b+ unit), but they definitely don’t suck

pastermil ,

At least your 3B+ doesn’t cost $50.

Guenther_Amanita ,

I don’t even mean performance in terms of computing power.

RPIs are, imo, not meant as a server. It might (and will) work fine, but one of the main problems I have is the power supply. As soon as I send a more advanced print job to my RPI, it crashes, even though I have the official power cord.

If it works for you - fine! I don’t want to talk badly about them. They are great.

It’s just that they are very inflexible.

atzanteol ,

RPIs are, imo, not meant as a server.

That’s not just your opinion, it’s a fact.

spaghettiwestern ,

Could be a bad board. I have a Pi 3B+ that intermittently crashes and shows insufficient voltage no matter what power supply is used.

Nisaea ,
@Nisaea@lemmy.sdf.org avatar

NGL, changing the computer instead of the power supply seems a bit expensive if that’s the problem

scarilog ,

Could also be your sd card btw.

Thebeardedsinglemalt ,

I bought a couple a few years ago; the only one I still use is the Pi-hole, which has been phenomenal. I did try to use one as a media server, but it turned out to be more of a pain than it was worth.

dinckelman ,

I handle everything through docker, and a Portainer agent on top of that, so it’s actually been quite painless. Would definitely recommend

azimir ,

We used a RPi 4 for a Plex server for a while. It was fine except it couldn’t do any live transcoding or handle h265 worth beans.

I upgraded to an OrangePi 5. I’m on a SATA drive for the OS and an external USB disk for media. The thing is amazing!

No, it’s not a $50 computer. Yes, it works great.

I love RPi boards, but their hardware limitations are quick to be found as you move past simple hobbyist projects.

Stizzah ,
@Stizzah@lemmygrad.ml avatar

Which mini PC? I have an Intel NUC with an i5 and I’m looking for something smaller that can run a dev desktop (xfce, vscode, node, docker).

azimir ,

I use Intel NUC boards for desktop systems. The form factor is nice and compact. The only limiting factor is that the small volume limits the GPU, but that’s not a requirement for me.

31337 ,

They’re good for media centers, since they support 4K HDR. You can also use Moonlight to stream games from a PC. GPIO is useful, but I guess the Pi is overpowered for most GPIO use cases at this point.

narc0tic_bird ,

I agree: once you factor in a power supply (or PoE HAT), case, and storage, a Raspberry Pi really isn’t all that cheap nowadays. Unless you have a project that specifically benefits from the GPIO pins or the form factor, just get a cheap barebones mini PC, or a used one with RAM and SSD already included.

This will get you a system that’s way more powerful even if it’s a couple of years old (the Pi’s SoC is fairly weak) and I/O throughput is no contest, normally with at least a dozen PCIe lanes to use for NVMe storage or 10 gigabit network cards, if you so desire.

Thebeardedsinglemalt ,

I’ve actually been considering getting a mini PC. My old setup at home was my main PC hooked up to the TV in the living room with a wireless keyboard. I’d do some low-end gaming on it and mostly streaming. I’m in the process of selling that house and looking to go back to a more traditional setup, with my main PC in a den with actual monitors, but I still want the option of a mini PC on the living room TV for occasional PC needs and for running lower-end party games from Steam like Jackbox.

jol , in New Mozilla Logo Spotted

I feel like we were just getting used to the old new logo?

It’s possible this new logo is only for using next to other brands.
