Atom feeds are widely supported (it's how I found this post!) and there are many libraries/apps/plugins for aggregation. Robust old tech. And no need to limit feeds to Git activity if you don't want to :) Good luck!
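And if you ever want to roll your own aggregation, Atom is simple enough to handle with a standard-library XML parser. A minimal sketch in Python (the feed content below is made up purely for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny hand-written Atom feed, just for illustration.
ATOM = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example activity feed</title>
  <entry>
    <title>Pushed 3 commits to main</title>
    <link href="https://example.org/commit/abc123"/>
    <updated>2024-05-01T12:00:00Z</updated>
  </entry>
</feed>"""

# Atom elements live in this namespace, so lookups must be qualified.
NS = {"atom": "http://www.w3.org/2005/Atom"}

def entry_titles(xml_text):
    """Return the title of every <entry> in an Atom feed."""
    root = ET.fromstring(xml_text)
    return [e.findtext("atom:title", namespaces=NS)
            for e in root.findall("atom:entry", NS)]

print(entry_titles(ATOM))
```

In practice you'd fetch the XML over HTTP on a schedule; the parsing itself is the easy part.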
Probably not, since Git is federated and decentralized. There are no Git "accounts". Git asks for your name and email, but those are basically meaningless unless the repository hosting platform does something with them, like tying them to an account identity.
You could maybe use the GitHub activity view by also mirroring your projects from elsewhere onto GH.
theuppermostinlife - It’s just one person scooping up all of the music they like and curating a bunch of compilations. I’ve found lots of music I listen to regularly thanks to their efforts.
Worst is that they dangled (not sure if they still do) a lower-fee tier for accounts in good standing for a certain amount of time. I had a spotless account, but every time I was close to the supposed tier I'd get some psycho customer who would find nonsense to complain about and stop me from qualifying.
In truth it is corruption at every level. Taxes are stupidly high for a modern business model: if a business can exist on 25% margins, what the hell am I paying 10% more for? I can't pass that on, and it's taxing used goods where the government has done absolutely nothing to justify the cost; as far as I'm concerned it's just a criminal tariff to destroy social mobility. Shipping prices are ridiculous for what they're doing, and largely inefficient.

Then eBay doesn't do very much at all in terms of actual support. It's like Reddit, where almost everything they're working on is absolute trash while the real core service is neglected and misunderstood. They should cut all of their fees to a third of what they are and push all the corporate airhead plans and projects into their own business spaces, as independent business-like operations funded by foolish investors. Screw the design prettiness and seasonal junk; just make it fiercely efficient and bulletproof, so that selling on eBay is not even noticeable on the bottom line. PayPal also charges something like 3 times as much as a bad terminal transaction processor.
I download to my HDD and anything truly worth keeping gets burned to a BD-R disc for long-term cold storage. An HDD is more likely to fail in 10+ years than a BD-R.
Building a NAS is a large upfront cost but it’s worth it IMO. Giant HDDs are fairly cheap now and you can use cool filesystems like Btrfs to combat bit flips from cosmic rays and the like. I’m not sure I’d trust dye-based optical media, but there are apparently some archive-quality 100-year BD-Rs. Most have a drastically shorter lifespan, though.
According to the Canadian Conservation Institute, which publishes a paper on media longevity, BD-R discs are expected to last between 5 and 20 years, depending on the material they are made out of. BD-RE, which is erasable Blu-ray, is estimated for 20 to 50 years while DVD-R and CD-R, which hold a lot less data, can last 50 to 100 years.
Building a NAS is a large upfront cost but it’s worth it IMO.
Too much of a hassle. Discs can be transported far more easily than a NAS plus drives, and they can be compartmentalized and distributed to other people more easily than a NAS can.
I’m not sure I’d trust dye-based optical media, but there are apparently some archive-quality 100-year BD-Rs.
I wouldn’t trust dye-based optical media either. The BD-R discs I use incorporate an inorganic writable layer that’s rated for 100+ years of storage under ideal conditions. BD-R discs are WORM (write once, read many), so they cannot be rewritten-- another massive benefit for archival purposes.
The author of this article did a very poor job of researching the subject matter. There’s zero mention of things like the difference between HTL and LTH, or of Verbatim’s MABL layers. There’s a good reason BD-R is one of the preferred media types archivists use. Let’s take the 100+ year ratings with a grain of salt and assume, say… 50 years. The average hard drive can be relied on for about 10 years. You can see where I’m going with this, which is why I’m far more comfortable using BD-R discs with HTL/MABL for long-term data storage instead of hard drives, which would have to be replaced every 10 years or so.
BD-R discs are expected to last between 5 and 20 years, depending on the material they are made out of. BD-RE, which is erasable Blu-ray, is estimated for 20 to 50 years while DVD-R and CD-R, which hold a lot less data, can last 50 to 100 years.
I’ve seen that Canadian govt link passed around on other forums and I’d remind people of how painfully outdated that info is. Again, no mention of HTL, which is the big factor that significantly improves longevity and reliability. What I’ve always found really bizarre is that the single academic paper the Canadian govt page relies on for BD-R’s lifespan (Iraci 2018) is hardly adequate. If you read Iraci 2018, you’ll see it… really isn’t based on good data or testing practices at all. I think the problem is people see a scientific citation and (understandably) assume the info is legit, but in this case scratching the surface reveals an incredibly bad research paper written by an author who appears to have very little other experience in the field.
Testing involved the exposure of samples to conditions of 80 °C and 85 % relative humidity for intervals up to 84 days
^ That’s from Iraci 2018. Testing the reliability of a product should involve realistic conditions. I’d ask anyone who supports Iraci’s paper to answer this: in what remotely plausible situation would you find yourself where conditions are 80 °C with 85% RH? Further, do you trust a paper that claims these conditions are suitable for testing the longevity of optical media? To me, this is like testing various panes of glass by throwing them off a high-rise building. Iraci’s paper is ridiculous, IMO-- and there’s a good reason it’s been cited like 2 times in the last 6 years.
For me, it’s just too much risk. I don’t want to have to worry about counterfeit discs or a silent downgrade from the manufacturer. Those inorganic discs are rated to last a long time, but who really knows? A set of HDDs in RAID with a 3-2-1 backup strategy is the gold standard. HDDs do fail, and I’ve already planned for that.
You do raise some good points about BD-R that I hadn’t considered before, but for me, it’s NAS and sneakernet with flash drives for the homies. Hardly anyone I know has an optical drive anymore, much less a Blu-ray drive in their PC.
This is just Reddit falling for misinformation. That thread has been debunked so many times. There’s a bunch of good YouTube videos covering it but long story short, redditors noticed something odd and immediately assumed it was some huge conspiracy when it wasn’t.
And again, that 2018 paper… I encourage people to read it and see just how silly the methodology was.
Regarding the testing: short of waiting 100 years, how else would you accelerate the degradation of the discs to simulate aging?
Not totally surprised about Reddit falling for some misunderstood labeling. Just curious about that, mainly.
However, even if they are perfect, they still wouldn’t meet my needs. I couldn’t use them to share data with anyone I know, as nobody has a data Blu-ray drive. I can’t access the data on them at a whim, and they’re slower than a RAID array. I can’t easily perform automated routine data scrubbing to ensure corruption hasn’t occurred. Speaking of which: how often do you verify the data on your discs, and how do you do it?
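For comparison, my own verification amounts to keeping a checksum manifest and re-hashing on a schedule. A simplified sketch of the idea (paths and manifest layout are illustrative, not any particular tool's format):

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Hash a file in chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map every file under root to its SHA-256 digest."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            manifest[os.path.relpath(p, root)] = sha256_of(p)
    return manifest

def verify(root, manifest):
    """Return the files whose current hash no longer matches the manifest."""
    return [rel for rel, digest in manifest.items()
            if sha256_of(os.path.join(root, rel)) != digest]

# Demo on a throwaway directory standing in for a mounted disc.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "photo.raw"), "wb") as f:
        f.write(b"original data")
    manifest = build_manifest(d)
    print(verify(d, manifest))   # nothing has rotted yet
    with open(os.path.join(d, "photo.raw"), "wb") as f:
        f.write(b"bit-rotted data")
    print(verify(d, manifest))   # the changed file is flagged
```

You'd store the manifest somewhere separate from the data it describes, and a cron job can run the verify step however often you like.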
I can see its usefulness in some scenarios (cold storage), but I’m quite happy with my NAS.
I’m not sure, but I can say with certainty that increasing temps to 80C with 85% RH isn’t any kind of demonstrable way of accurately predicting longevity under realistic conditions.
If I wanted to safety test a car, it would make sense to run a series of conventional car crashes. It wouldn’t make sense to drive the car off a cliff and then claim that during testing, the car was proven to be unsafe.
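To be clear about what such studies claim to be doing: accelerated aging normally stresses samples at elevated temperature and then extrapolates back to use conditions with an Arrhenius-type model, rather than treating the oven conditions as realistic in themselves. A rough sketch of that math (the activation energy here is an illustrative assumption, not a value from Iraci 2018, and humidity, which needs an Eyring-type extension, is ignored):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# Assumed activation energy of 0.8 eV (illustrative only), 25 C use vs 80 C oven.
af = acceleration_factor(0.8, 25.0, 80.0)
days_in_oven = 84
print(f"{af:.0f}x acceleration, ~{days_in_oven * af / 365:.0f} simulated years")
```

The catch is that the whole extrapolation hinges on the assumed activation energy and on the failure mechanism being the same at 80 °C as at room temperature, which is exactly where a weak paper falls apart.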
I agree with a lot of other points. Personally, I just find it works better in my brain to have all media (TV shows, eBooks, movies, and music) organized on discs. Same goes with personal photos and videos. For certain things, I keep copies on my PC like photos and music, but for other things that I don’t access frequently, I prefer to have them on discs. That said, I do have a HDD backup of everything. I’d love to get another large HDD but just can’t justify the $$$.
I thought ebooks were cool for a while, but the smell, the memories, the sound of paper books: that's my childhood. Life is the whole experience, and nowadays I don't even watch movies until I have a proper setup first, which sometimes also means stimulants. It's why cinemas are still a thing.
We can strip it down to just clean, raw information but the noise enriches the taste.
@xnx PieFed won’t have an app any time soon due to the way it’s implemented. It’s still awesome without a native app because it’s fast and doesn’t really need direct access to hardware to do its thing.
Tech detail: PieFed is a Python app using Flask and server-side rendered HTML templates. It is super fast as there’s no heavy Javascript framework being used. The maintainer has written about how PieFed is developed with poor internet connections in mind: https://piefed.social/post/6102
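For anyone unfamiliar with the pattern: the server builds complete HTML, so the browser just displays it with no client-side framework involved. A bare-bones sketch of the idea in Flask (not actual PieFed code; the route and data are invented):

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# In a real app these would come from a database; hard-coded here.
POSTS = [{"title": "Hello fediverse"}, {"title": "Server-side rendering"}]

# A Jinja2 template, rendered entirely on the server.
PAGE = """<!doctype html>
<ul>
{% for post in posts %}  <li>{{ post.title }}</li>
{% endfor %}</ul>"""

@app.route("/")
def index():
    # The full page is assembled server-side; the client gets plain HTML.
    return render_template_string(PAGE, posts=POSTS)

if __name__ == "__main__":
    app.run()
```

Because the response is finished HTML, there's no multi-megabyte JS bundle to download before anything appears, which is exactly why this holds up on poor connections.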
There isn't anything stopping this; it's just that no one is working on an app. There also isn't any API implemented (yet) for an app to hook into and fetch posts and comments. Both could be programmed. Someone could also copy the Lemmy API and use arbitrary Lemmy apps with PieFed. I think the developer is open to any of that, and I'm pretty sure I read a feature request about it; the focus is currently just on other things.

PieFed also works well as a progressive web app. You can open it in your browser, click "Add to home screen", and you'll get an icon and a browser window that pretty much feels like an app. I'm using that and don't see much benefit in putting in the effort to maintain a native app when it works well as is.
Both could be programmed. Someone could also copy the Lemmy API and use arbitrary Lemmy apps with Piefed.
This seems like an interesting idea. On one hand, I could see how it could hamper development, but on the other hand, it would be nice if all of the threadiverse platforms (Lemmy, Piefed, Sublinks, Mbin?) were standardized enough that the apps could be interoperable. I think giving users multiple options for how to access and interact with the content would be good for the fediverse as a whole.
That would be nice. In practice, not even ActivityPub, the underlying protocol, is standardized enough to ensure interoperability between microblogging, threaded conversations, videos, etc. As far as I understand, it's pretty minimal, and even voting isn't as standardized as it needs to be. So I don't have much hope for another protocol being that well-defined and agreed upon, if we don't even have that.
That being said... ActivityPub defines both server-to-server and client-to-server communication. I think a good way to tackle this is to do away with extra Lemmy, PieFed, Mastodon, and PeerTube clients/apps, and have all the apps speak ActivityPub with the servers/instances. That's already implemented on the server side. It'd do away with implementing any extra APIs, and make any app compatible with any Fediverse project. But we'd need a new ActivityPub protocol revision for that: well-defined, and with quite a few extras compared to what we have now. And everyone would need to agree on it and implement it. But in my eyes that would solve a lot of the issues that are currently slowing down the Fediverse.
@threelonmusketeers@hendrik This is how many Fediverse microblogging systems currently work; they serve the Mastodon API for client to server (e.g. app to server) interactions. GoToSocial doesn't even provide any user interface; you use it from some app originally designed for Mastodon. Why? I think because Mastodon's HTTP API is simpler, better documented and well-tested compared to something like ActivityPub's Client-To-Server API.
@skullgiver Good Q. Some thoughts... a standard Python, Flask, PostgreSQL app can handle hundreds of requests per second on a single machine. Any bottlenecks - Lemmy or PieFed - would probably not be at the language level yet. For example, Lemmy's poor performance when I looked ~1 year ago came from a bizarre disregard for things like relational DB query optimisation, HTTP caching, and how the stock frontend lemmy-ui fetched data. Yet Lemmy is written in Rust, which is known for speed.
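As a concrete example of the kind of query-level win I mean (schema invented purely for illustration): fetching each post's comment count one query at a time, versus letting the database do it in a single grouped query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comment (id INTEGER PRIMARY KEY, post_id INTEGER);
    INSERT INTO post VALUES (1, 'a'), (2, 'b');
    INSERT INTO comment (post_id) VALUES (1), (1), (2);
""")

def counts_n_plus_one():
    """One query per post -- the 'N+1' pattern that kills throughput at scale."""
    out = {}
    for (post_id,) in conn.execute("SELECT id FROM post"):
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM comment WHERE post_id = ?", (post_id,)
        ).fetchone()
        out[post_id] = n
    return out

def counts_single_query():
    """Same answer in one round trip via GROUP BY."""
    rows = conn.execute("""
        SELECT p.id, COUNT(c.id)
        FROM post p LEFT JOIN comment c ON c.post_id = p.id
        GROUP BY p.id
    """)
    return dict(rows)

print(counts_n_plus_one())     # {1: 2, 2: 1}
print(counts_single_query())   # {1: 2, 2: 1}
```

With 2 posts the difference is invisible; with a front page of 50 posts on a busy instance it's 51 round trips versus 1, and no amount of Rust makes up for that.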