I never bought CDs after about 1999, so this never affected me. However, if I’m remembering correctly, you could get past that nonsense by running a black Sharpie along the outer edge of the CD, effectively making that portion unreadable. Unless I’m thinking of something else. Please correct me if this was about a different nonsense DRM.
I think it would be better if we waited for these discussion posts to hit just as the movie releases. By the time it comes out this post will be over a week old.
Maybe if it’s something really big we could have prerelease discussion and then the released discussion.
Right then, how about posting 24 hours before the movie premiere? How’s that? Some people get really geeked about certain movies (or sewer-dwelling talking turtles and sensei rats, for that matter). Or do more than five people want it the day of the premiere? Let me know! We’re still in trial phase here with the pinned Discussion Megapost thang.
I think for all movies the main discussion threads should open morning of release or maybe the day before.
For the super hot movies it would be good to do a prerelease thread (1 week ahead), a regular discussion (day of), and a post-release thread (1–2 weeks after release).
Also, wanted to be sure I wasn’t being shitty with my comment. This is a great format and I’m loving that we’re getting good discussions like this going.
I think another cool idea would be to do a throwback Thursday thing where a thread is opened on old movies. They could be left open forever to recreate the old IMDb forum days.
What’s the benefit to the average end user to modernizing NTFS?
Sure, I love having btrfs on my NAS for all the features it brings, but I’m not a normal person. What significant changes that would affect your average user does NTFS require to modernize it?
I just see it as an “if it’s not broken” type thing. I can’t say I’ve ever given the slightest care about what filesystem my computer was running until I got into NAS/backups, which itself was a good 10 years after I got into building PCs. The way I see it, it doesn’t really matter when I’m reinstalling every few years and have backups elsewhere.
At the very least, better filesystem-level compression support. A somewhat common use case might be people who use emulators. Both the Wii U and PS3 are consoles whose major emulators just use a folder on your filesystem. I know a lot of emulator users who are non-technical to the point that they don’t have “show hidden files and folders” enabled.
Also, your average person wouldn’t necessarily need checksums, but having them built into the filesystem would make things more reliable overall.
Near instantaneous snapshots and rollback (would help with system restore etc)
Compression that uses a modern algorithm
Checking for silent corruption, so users know if their files are no longer correct
I’d add better built-in multi-device support and recovery (think RAID and drive pooling), but that might be beyond the “average” user (always a vague term; I feel there are many types of users within that average). E.g. users who mod their games can benefit from snapshots and/or reflink copies, letting them back up their game dirs without taking up any additional space beyond the changes the mods add.
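To make the silent-corruption point above concrete, here’s a toy sketch (in Python; the store and the helper names are mine, not any real filesystem API) of what a checksumming filesystem does under the hood: keep a hash alongside the data, and verify it on every read so bit rot fails loudly instead of silently.

```python
import hashlib

def write_with_checksum(store, name, data):
    """Store data together with its SHA-256, like a checksumming FS does per block."""
    store[name] = (data, hashlib.sha256(data).hexdigest())

def read_verified(store, name):
    """Return the data, or raise if it no longer matches its checksum (bit rot)."""
    data, checksum = store[name]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"silent corruption detected in {name!r}")
    return data

store = {}
write_with_checksum(store, "save.dat", b"emulator save state")
read_verified(store, "save.dat")  # data matches its checksum, returns fine

# Simulate a flipped bit on disk: the next read would now raise
# instead of silently handing back bad data.
store["save.dat"] = (b"emulator sAve state", store["save.dat"][1])
```

Real filesystems like btrfs and ZFS do this per extent/block and, with redundant copies, can even repair the bad copy automatically.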
I agree all those are nice things to have, and things I’d want to see in an update. Now how can you sell those features to management? How do these improve the experience for the everyday end user?
I’d say the snapshots feature could be a major selling point. Windows needs a good backup/restore solution.
It just seems like potentially a ton of work to satisfy the needs of “people who think about filesystems”, which is an extremely small subset of users. I can see how it might be hard to get the manpower and resources needed to rework the Windows default filesystem.
I really have no clue how much work it takes though, so it’s just speculation on my end. I’m just curious; on one hand, I do see where NTFS is way behind, but on the other… who cares? I’ve somehow made it past 20 years of building Windows PCs without really caring what filesystem I’ve used, from 95 all the way to 11.
I’m not sure you need to sell it to actual users. A lot of the benefits of an advanced filesystem could be delivered by the OS itself, almost transparently. All of the features I mentioned could be managed by Windows with only minimal changes to the UI. Even reflink copies could just be a control panel option that Explorer then uses by default (the equivalent of cp --reflink=auto on Linux). And from the OS side, deduplication would help a lot on Windows given all the DLL bundling and the weird shit they have to do to maintain legacy compatibility, and that’s no small thing given how space-inefficient modern Windows installs have become.
It would be some work to upgrade it (maybe a lot, given how ancient Windows is and how much legacy-compatibility cruft it carries), but it would eventually make the system more reliable and more space efficient.
But yeah, there are challenges. I’m mainly speaking in terms of btrfs, which would take some time to port to Windows (there is a third-party driver, although I suspect they’d want to handle it themselves), but they’ll probably want to use their own ReFS, and I haven’t really investigated it seriously, so I can’t say how ready it is for prime time. But given that it’s being included as an option in some enterprise/server editions of Windows, maybe it will be in consumer editions soon anyway (as much as I’d prefer something more open and widely supported, at least it’d be a step forward on Windows).
I heard this commercial distribution “Windows” still uses it. But that thing only recently got a (very limited) package manager, so they seem to be very late in adapting to current technology.
It forces you to update, then works at “something something” for 5 minutes to 5 hours, reboots, and does the same thing again. But after logging in, none of your applications are updated, and nothing about the system seems to have changed either. You don’t even get proper status information during updates.
Of course it doesn’t destroy itself when it doesn’t change anything …
Oof, this is only a thing if you have the OS on an HDD. I’ve had similar behavior on *buntu running off of an HDD.
On an SSD or NVMe drive you’ll never have stuff like this happen.
There is an argument to be made that it’s better UX to not have programs update without telling you. Winget isn’t perfect, but it can auto-update your stuff if need be.
I think they mean the full path length. As in you can't nest folders too deep or the total path length hits a limit. Not individual folder name limits.
File paths. Not just the filename: the entire directory path, including the filename. It’s way too easy to run up against the limit if you’re actually organized.
You like diving 12 folders deep to find the file you’re after? I feel like there’s better, more efficient ways to be organized using metadata, but maybe I’m wrong.
Not OP, but I occasionally come across this issue at work, where some user complains that they are unable to access a file/folder because of the limit. You often find this in medium-large organisations with many regions and divisions and departments etc. Usually they would create a shortcut to their team/project’s folder space so they don’t have to manually navigate to it each time. The folder structure might be quite nested, but it’s organized logically, it makes sense. Better than dumping millions of files into a single folder.
Anyways, this isn’t actually an NTFS limit, but a Windows API limit. There’s even a registry value[1] you can change to lift the limit, but the problem is that it can crash legacy programs or lead to unexpected behavior, so large organisations (like ours) shy away from the change.
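As a rough illustration (Python; the helper is mine): the classic MAX_PATH budget of 260 characters covers the drive letter, every separator, and the filename, plus a terminating NUL, so a deeply nested but perfectly logical corporate structure eats it up fast.

```python
import ntpath  # Windows path semantics, usable on any OS

MAX_PATH = 260  # classic Windows limit: drive + separators + filename + NUL

def fits_classic_limit(*components):
    """Check whether a joined Windows path stays within MAX_PATH."""
    path = ntpath.join(*components)
    return len(path) + 1 <= MAX_PATH  # +1 for the terminating NUL

# Each level is reasonable on its own; the total still creeps up fast.
folders = ["N:\\", "Regions", "Europe-Middle-East-Africa", "Divisions",
           "Consumer-Products-And-Services", "Departments",
           "Marketing-And-Communications", "Projects",
           "2024-Q3-Brand-Refresh-Campaign-Materials",
           "Final-Approved-Versions-For-Distribution",
           "Presentation-Decks-And-Supporting-Documents"]

fits_classic_limit(*folders)                                     # still squeaks by
fits_classic_limit(*folders, "Campaign-Overview-Final-v3.pptx")  # one filename breaks it
```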
I would be pissed if they made me use such a ridiculously long login name at work. Mine is twelve characters and that’s already a pain in the ass (but it’s a huge company and I have a really common name, so I guess all the shorter variations were already taken).
Edit: Also, I checked, and it’s really very simple to enable 32k-character paths in recent versions of Windows.
No, you don’t need to change any settings, that’s the thing! Windows, unlike other OSes, has several APIs. Old apps (and dumb apps) use the old API and are limited to 260 characters. New apps use the new API and are limited to roughly 32,767 characters. This “new API” has been available since NT4, btw.
I remember I had to change a setting when using Windows. And it even showed me an “Are you sure?” dialog. It wasn’t that long ago. Is that not a thing anymore?
I’ve run into it at work where I don’t get to choose many elements. Thanks “My Name - OneDrive” and people who insist on embedding file information into filenames.
The limit was 260. The OS and the filesystem support more. You have to enable a registry key, and apps need to have a manifest saying they understand file paths longer than 260 characters. So while it hasn’t been a hard limitation for a while, as long as apps were coded against the shorter path lengths it will continue to be a problem. There needs to be a conversion mechanism like Windows 95 had, so that apps could continue to use short file names: internally the app could use short path names while the rest of the OS was no longer held back.
Furthermore, apps using the Unicode versions of functions (which all apps should have been doing for a couple of decades now) get a maximum path length of roughly 32,767 characters.
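For reference, the long-path escape hatch those Unicode APIs use looks like this (a sketch: the `\\?\` prefix is documented Windows behavior, but the helper function is mine):

```python
def to_extended_path(path):
    r"""Prefix a fully-qualified Windows path so the wide-char (W) APIs
    skip MAX_PATH parsing and accept up to ~32,767 characters.
    UNC paths need the special \\?\UNC\server\share form."""
    if path.startswith("\\\\?\\"):
        return path                      # already extended-length
    if path.startswith("\\\\"):
        return "\\\\?\\UNC" + path[1:]   # \\server\share -> \\?\UNC\server\share
    return "\\\\?\\" + path

to_extended_path("C:\\very\\deep\\tree\\file.txt")  # -> \\?\C:\very\deep\tree\file.txt
to_extended_path("\\\\server\\share\\file.txt")     # -> \\?\UNC\server\share\file.txt
```

A caveat worth knowing: with the `\\?\` prefix the system passes the path through almost verbatim, so relative components like `..` are no longer resolved for you.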
I’m confused… are you talking about ext4, XFS, ZFS…? Because those are the filesystems Linux people talk about, and they’re also the filesystems that run the world’s databases and data storage systems.