Seriously, that title is worded like a straight-up attack. Such a question, while open-ended about who would consider what truth, still leads to the same outcome: engagement based purely on outrage and “proving the other side wrong.”
I sometimes wonder if people post things like this with the intention of filtering through the comments to block people who post their political viewpoints in response. If that’s the case, I would consider this a very effective and intelligent post. However, I don’t think that this is the case.
On one hand, people often don't like to hear bad news, or an idea that means they have to do something or face a problem. On the other hand, how a person is told is a big part of any negative reaction. Often there is no reason to tell them at all.
I'll be straightforward if someone asks, but I'm not "brutally honest". OP sounds like the "brutally honest" without anyone asking type.
I’ve been editing OSM for years. (896,339 edits in 3,427 changesets, apparently!) For me, it’s all about the free data. I once got a thank you note from someone who worked for a city with a particularly large municipal park. I’d added almost all the trails to the park and other information, and they’d used it to produce a printed map for the general public. Exactly the kind of thing I’d hoped for!
Personally, I do a lot of dualsport motorcycling and most backcountry maps around here are subpar. I map tons of trails and 2track and put them on the Garmin so I know where I’m going.
OSM is also great across much of Europe, with tons of detail.
JOSM is great.
Someone just recommended Organic Maps for the phone; it’s way snappier than Google Maps, but still not great at finding addresses.
I’ve had the best luck with BBBike and OpenMapChest for getting pre-built map files.
Basically you have to get one of these files with all the data you want in it and then stick it on your SD card on the GPS. (The GPS should mount like a thumb drive. If you already have a gmapsupp.img file on there, you might want to back it up in case things go sideways.) Some GPSes support multiple gmapsupp.img files, but a lot don’t. Here’s a thread on merging.
When I needed super fresh data, I’d download raw OSM data from Overpass and use mkgmap to build the gmapsupp.img.
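The Overpass-to-mkgmap pipeline can be sketched roughly like this. The bounding box, filenames, and highway-only filter below are made-up examples; the only mkgmap flag relied on is `--gmapsupp`, which produces the combined gmapsupp.img. This sketch only builds the query string and the command line rather than actually downloading or running anything:

```python
# Sketch of the Overpass -> mkgmap pipeline: build an Overpass QL query
# for the data you want, then the mkgmap invocation that turns the
# downloaded OSM XML into a Garmin gmapsupp.img.
# The bbox, filenames, and highway filter are invented examples.

def overpass_query(south, west, north, east):
    """Overpass QL for all highways (and their nodes) in a bounding box."""
    bbox = f"{south},{west},{north},{east}"
    return f'[out:xml][timeout:90];(way["highway"]({bbox});>;);out meta;'

def mkgmap_command(osm_file, out_dir="garmin"):
    """Command line for mkgmap; --gmapsupp combines output into gmapsupp.img."""
    return ["java", "-jar", "mkgmap.jar",
            "--gmapsupp",
            "--output-dir=" + out_dir,
            osm_file]

query = overpass_query(45.0, -122.9, 45.7, -122.3)
cmd = mkgmap_command("trails.osm")
print(query)
print(" ".join(cmd))
```

You would POST the query to an Overpass API endpoint, save the response as the input file, then run the printed command.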
That’s really cool to hear about the parks. Most of the parks around here are pretty well mapped out. Presumably the local community is pretty strong.
I really want to produce something for my city’s NET and BEECN emergency response programs. They already have a few different maps, but not one unified map. My ideal is a map that could be taken offline or printed to spec.
The international sponsors list is a list maintained by the Ukrainian government of all the companies still doing business in Russia. A totally understandable move from their side, of course. By “actively sponsoring the war effort”, I think they are referring to the fact that these companies are paying taxes there.
The title and the article are a bit misleading in some sense.
No, it’s not just the taxes, though that’s obviously a component of it. A quick Google search will show that they are obligated by law to directly contribute to Russia’s military efforts: registering their draft-eligible staff, turning over information relevant to the war, assisting in the delivery of military equipment, and providing physical infrastructure, among other things.
Ukrainian or not, this isn’t just “oh well you’re kind of indirectly supporting the war by funding the government”. It is a very direct form of involvement.
Can you give me an article? I could not find anything by googling; maybe I didn’t use the right search terms. I found a website under the domain boycottrussia.info, but I can hardly consider a website like that objective. Keep in mind there is a lot of disinformation on the internet, and one should be careful using references from both the Russian and Ukrainian governments and their allies.
Applying the same standard, should we also boycott all companies doing business in Saudi Arabia, the USA, and other countries that are involved in war efforts?
A lot of community types just simply don’t work without a minimum critical mass of members.
Imagine asking a programming question on a software development community of just 5 people. You end up with 3 people who aren’t active enough to see the question, 1 person sees but doesn’t have an answer and doesn’t respond (classic lurker), and one person sees it and responds that they don’t know the answer. Now imagine a community of 5 thousand people…it’s suddenly much more feasible to even bother asking the question.
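The intuition above can be put in rough numbers. Assuming (purely invented figure) that each member independently both sees and answers a given question with some small probability, the chance of getting at least one answer scales like this:

```python
# Rough model of the critical-mass effect: if each member independently
# sees-and-answers a question with probability p, the chance of at least
# one answer in a community of n members is 1 - (1 - p)^n.
# p = 0.001 is an illustrative assumption, not real data.

def chance_of_answer(n_members, p=0.001):
    return 1 - (1 - p) ** n_members

for n in (5, 5_000):
    print(n, round(chance_of_answer(n), 3))
```

With those made-up numbers, a 5-person community gives you roughly half a percent chance of an answer, while 5,000 people push it above 99% — which is the whole point about critical mass.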
Sure, fediverse could exist with just 5 people, but it would be worthless and pointless.
Yeah, the reason to want more users is niche communities. I don’t need a billion people just for memes or news, but once you subdivide your users into niche communities, suddenly you’ll want more.
I wish there were more people on Lemmy talking about Deus Ex, The 7th Guest, DOS games, Randomizers, or specific TV shows that I’m currently watching (Reddit always had a pretty active sub for each and every show)
One could make the argument that 5000 users is still not mass adoption. If that is enough activity, then mass adoption is not a requirement for the fediverse to be a nice social place to be.
5,000 users in a niche community would need hundreds of millions in the wider network.
This is how bulletin boards used to work. The most successful were focused on a niche and one with 5,000 users would be big enough to be of use to people interested in that niche. But when your niche is part of a much larger community covering all niches, that community needs to be vast to get 5,000 subscribing to any given niche.
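The arithmetic behind “needs to be vast” is simple; the niche fraction here is an invented assumption for illustration:

```python
# Back-of-envelope: how big must the wider network be for a niche to
# reach a target subscriber count? The 1-in-50,000 niche fraction is a
# made-up example, not a measured figure.

def network_size_needed(target_subscribers, niche_fraction):
    return target_subscribers / niche_fraction

print(network_size_needed(5_000, 1 / 50_000))  # on the order of 250 million
```

So if only one user in 50,000 subscribes to a given niche, 5,000 subscribers implies a quarter-billion users overall.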
I think users should be able to filter entire instances. There is at least one that comes to mind that I know for sure doesn’t host content that I want to see, but I’d rather not be forced to hunt for that perfect list of federated instances or run my own server, nor should I have to block each community one by one.
It should be really easy to implement, too. Thus, I consider it a missing feature.
I’m already working on just running my own instance lol. But no users, just me. That way I can federate/defederate with whoever I want and call it a day.
It’s kinda tricky because you can get a VPS set up really cheap. Like 5 dollars a month cheap or a bit more if you want something more feature rich. But the more instances you federate with, the more storage you’ll need since you’re basically mirroring all of the content you can see. So your cost for storage will go up every month pretty much forever.
Most of the aspects have already been covered, but I want to add one:
This was always the plan, it just wasn’t as highly prioritised as growth.
I work as a developer at a big tech company. We (the company) had our roadmap, and it was mostly about getting more users. The more users you have the day the economy turns, the better off you are (… if you manage to turn a profit).
So when the economy went to shit and we (and other tech companies) could no longer borrow money for free to cover our running expenses, the priorities shifted. Working towards attracting more users is only going to increase your costs at that point, and you don’t want to run out of money. So all the roadmaps changed, and cost-saving efforts became the highest priority all of a sudden.
Gain a monopoly, get users addicted and reliant, then change the rules of the game and hope they stick with you. It’s happening now because of the economy for sure, but it’s not like it’s surprising.
Honestly, as a newbie to Linux I think the ratio of well documented processes vs. “draw the rest of the fucking owl” is too damn high.
The rule seems to be that CLI familiarity is treated as though it’s self-evident. The exception is a ground-up documented process with no assumptions about end-user knowledge.
If that could be resolved I think it would make the Linux desktop much more appealing to wider demographics.
That said, I’m proud to say that I’ve migrated my entire home studio over to Linux and have not nuked my system yet. Yet… Fortunately I have backups set up.
Linux on the desktop almost never needs CLI interaction though. Maybe you’ll need to copy/paste a command from the internet to fix some sketchy hardware, but almost everything works OOTB these days.
However, self-hosting isn’t a desktop Linux thing, it’s a server Linux thing. You can host it on your desktop, but as soon as you do anything remotely server-related, CLI familiarity is pretty much essential.
That depends on your use case for desktop Linux, of course. For me, yabridge is the tool I needed to run VSTs on Linux. It’s CLI-only as far as I know.
Don’t get me wrong; I’m not afraid of the CLI. It’s just that some tools assume the end user is a server admin, or someone with more than surface-level knowledge of how Linux works.
Don’t forget the situations where you find a good blog post or article that you can actually follow along until halfway through you get an error that the documentation doesn’t address. So you do some research and find out that they updated the commands for one of the dependency apps, so you try to piece together the updated documents with the original post, until something else breaks and you just end up giving up out of frustration.
That sounds an awful lot like modifying an ESP32 script I’ve been trying to follow from a YouTube tutorial published a while back. Research hasn’t uncovered anything for me to troubleshoot the issue so it’s a really shit experience.
That shouldn’t be too bad if you understand systemd though, right? Or is there something weird I’m missing? Do you have an example guide that illustrates the problem?
CLI familiarity is fine. cd, nano, mkdir, rm: I am proficient with those. But I am not necessarily proficient with Docker (I went with it because it worked nicely for another thing that was well documented and very straightforward). It’s just that I’m trying to self-host stuff. Some things, like WordPress and Immich, are straightforward. Some things, like Matrix and Mastodon, aren’t. Lemmy is also notoriously bad.
I think if you’re talking wider demographics your model OSs are (obviously) Windows and macOS. People buy into that because CLI familiarity isn’t required. Especially with Apple products everything revolves around simplicity.
I do dream of a day when Linux can (at least somewhat) rival that. I love Linux because I am (or consider myself) intricately familiar with it and I can (theoretically) change every aspect about it. But mutability and limitless possibilities are not what makes an OS lovable to the average user. I think the advent of immutable Linux distros is a step in the right direction for mass adoption. Stuff just needs to work. Googling for StackOverflow or AskUbuntu postings shouldn’t ever be necessary when people just want to do whatever they were doing on Windows with limited technical knowledge.
However, on another note: if you’re talking a home studio migration, I’m not sure what that entails, but it sounds rather technical. I don’t want to be the guy to tell you that CLI familiarity is simply par for the course. Maybe your work shouldn’t require terminal interaction. Maybe there is a certain gap between the absolutely basic Linux tutorials and the more advanced ones, like you suggest.

Yet what I do want to say is that if you want to do repair work on your own car, it’s not exactly supposed to be an accessible skill to acquire. Even if there are videos explaining step by step what you need to do, eventually you still need to get your own practice in. Stuff will break. We make mistakes and we learn from them. That is the point I’m trying to get at: not all knowledge can be bestowed from without. Some of it just needs to grow organically from within.
Existentialism doesn’t necessarily claim that nothing matters, so yours sounds more like optimistic nihilism, which is very similar if not identical to absurdism.
Being rude to service staff. It’s an immediate indicator of who they are as a person. There is zero reason to ever be mad at someone making near minimum wage whose job it is to grab you a drink or check you out or something. It also shows that they’ve never worked service themselves, which is a rite of passage.
“Why should I tip?! I’m already paying for the service!”
Immediately leave without even telling them to take a cab.
Edit: 1, I am in the US, yes.
2: The wording sounded like it implied the behavior of a date while out dining. I was answering based on how I’d respond if a prospective mate treated underpaid US staff like shit.
Not necessarily. I’m not in the US, but we’ve imported a lot of their less savoury customs, and tipping culture is one of them. It is customary to tip 18% where I’m from.
We send out a monthly internal newsletter to management summarizing what we did that month in layman’s terms.
We also include info about major security breaches, hacks or system failures that affected other companies along with a short explanation about why it didn’t affect us.
It still goes over the head of management, but it gives them the feeling we’re smart, on top of things, and important.
It’s the kind of IT PR that good managers do to sell the team and keep it off the chopping block when layoffs happen because of, say, poor investment decisions like commercial real estate as that market plummets.
What kind of prompt does your company’s 2FA provide? Using openconnect with networkmangler, I get a pop-up to input my PIN+TOTP. I haven’t done the script way in the last few years, but the connection script is plain shell and I was able to handle the 2FA from there too.
It's been a while since I dug into what was happening, but openconnect was getting a different response from the server than it expected and just failed because of that.
I use a little one-liner with tofi (rofi/wofi would also work) to select the current output and avoid pavucontrol. It’s mapped to a sway binding but would probably work in any wm/de:
pactl set-default-sink $(pactl list short sinks | awk '{print $2}' | tofi $tofi_args)
I’m using pipewire, so the functionality of pactl is actually provided through pipewire-pulse, I think.
Does set-default-sink change an already-playing stream? Or do you need move-sink-input?
I’ve looked at the manpages but was a bit overwhelmed and didn’t try to make my own script. Your solution gives me motivation to do so. I also use sway and pipewire. Though I use fuzzel for my launcher.
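For what it’s worth, the same plumbing could be scripted in Python. The sample sink listing below is invented, and on stock PulseAudio moving already-playing streams needs move-sink-input in addition to set-default-sink; this sketch only parses the listing and builds the commands, it doesn’t run pactl:

```python
# Parse `pactl list short sinks` output and build the pactl commands to
# switch the default sink and move live streams over to it.
# SAMPLE is an invented example of that output; real sink names differ.

SAMPLE = (
    "0\talsa_output.pci-0000_00_1f.3.analog-stereo\tmodule-alsa-card.c\ts16le 2ch 44100Hz\tRUNNING\n"
    "1\tbluez_output.AA_BB.a2dp-sink\tmodule-bluez5-device.c\ts16le 2ch 48000Hz\tIDLE\n"
)

def sink_names(short_sinks_output):
    """The second tab-separated column of each line is the sink name."""
    return [line.split("\t")[1] for line in short_sinks_output.splitlines() if line]

def switch_commands(sink, stream_ids):
    """set-default-sink covers new streams; move-sink-input moves live ones."""
    cmds = [["pactl", "set-default-sink", sink]]
    cmds += [["pactl", "move-sink-input", str(s), sink] for s in stream_ids]
    return cmds

print(sink_names(SAMPLE))
```

In a real script you would feed `pactl list short sinks` output into `sink_names`, pick one via your launcher, then run the built commands with subprocess.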
I love programs like FreeCAD despite the really hard/unintuitive GUI. 95% of all the modelling I need to do (as an amateur) can be done easily in a Python script.
The finishing touches, like adding fillets and chamfers, are the annoying part where the GUI is easier, due to the way edges are referenced.
Likewise at work: we have to produce a lot of regular reports in Excel. All done via Python/SQL.
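That report pattern can be sketched with just the standard library; here sqlite3 and csv stand in for the real database and the Excel output (a real setup might use openpyxl for .xlsx), and the table and column names are invented:

```python
import csv
import io
import sqlite3

# Throwaway in-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.5), ("north", 40.0)])

# The actual report query: totals per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

# Write the result out as CSV.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["region", "total"])
writer.writerows(rows)
print(buf.getvalue())
```

The win is that the query and the output step are both plain code, so the monthly run is one script instead of manual spreadsheet work.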
It’s been years since I had to admin Windows servers, but I was quite impressed with the number of MS products where the install and configuration tools would output the Powershell commands to carry out the changes you’d asked for. It made it quite a lot easier to automate. I’d love to see that paradigm catch on more widely, with the GUI and CLI having the same functionality and the GUI giving you the commands to run.
I want to preserve and archive information I used because it’s a reflection of the things I did, learned and studied throughout life.
So my use cases are:
Orientation about “events”: places to visit on day trips or holidays (museums, nature, parks, campsites), looking for practical information and background as well.
Gathering a “dossier”: info to help make a decision (buying expensive things, how to do home improvement, etc.)
Building a personal knowledge database: interesting articles and blogs.
My current workflow:
Browse
Bookmark extensively
Download pdf or other content (maps, routes, images) when provided.
Open bookmarks.
Fireshot every webpage to PDF and PNG
Save everything with a consistent filename (YYYYMMDD - Source - Title)
I would like to automate the last 3 steps of my workflow.
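The renaming step, at least, is easy to script. A sketch of the naming convention above (the sanitizing rules and example inputs are my assumptions, not part of the original workflow):

```python
import datetime
import re

def archive_name(source, title, when=None, ext="pdf"):
    """Build a 'YYYYMMDD - Source - Title.ext' filename, with characters
    that are unsafe in filenames stripped out."""
    when = when or datetime.date.today()
    safe = lambda s: re.sub(r'[\\/:*?"<>|]+', "", s).strip()
    return f"{when:%Y%m%d} - {safe(source)} - {safe(title)}.{ext}"

print(archive_name("Lemmy", "Self-hosting: tips?", datetime.date(2023, 7, 1)))
```

Combined with a headless-browser screenshot tool, this would cover the last saving step of the workflow automatically.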
This one doesn’t actually seem to load new network requests, but the way the scrolling works seems to break any other screenshot application I’ve tried.
Can confirm. Tested it with the Signal forum, also Discourse: Fireshot stops at the end of the currently loaded messages (20 of 94) and doesn’t scroll further by itself.
Yes it is hard, and that was their damn fault. I can’t believe they expected developers to have to program which processors take which loads with such granularity. Unbelievably stupid.
It is, and it does provide improved performance at the expense of complexity. Both India and the US Air Force actually used clusters of PS3s to create supercomputers.
Nah, that’s still a bunch of bull. They designed it and have all the documentation. They know all of its functionality, hidden or otherwise, its “undocumented” functions, its quirks, the very ins and outs of it. They probably still have original designers on staff. They have far more knowledge and experience of their own design than any game developers.
And yet RPCS3, an open-source PS3 emulator based on reverse-engineered research, is able to achieve decent playability on most games.
Not to mention, they’re a multi-billion dollar company, don’t make excuses for them.
Most of the games I’ve played on RPCS3 look way better and run much smoother than they did on the console itself. And no long wait times to load into the console OS save menu; saving was nearly instant. So good.
I’ve worked at companies where the documentation was either non-existent, not digitized, or very poor in quality. Add 10+ years to that when nobody is left at the company who worked on the original project and it can cause this exact level of frustration.
AFAIK, the documentation isn’t the main problem. I’m pretty sure PS3 is quite well understood.
The problem is how to translate the code to a typical x86 architecture. The PS3 uses a very different architecture with a big focus on its own special way of doing parallelism. It’s not an easy translation, and it must be done at great speed.
The work on RPCS3 is incredible, but it took them more than a decade of optimizations to get where they are now. Wii U emulation got figured out relatively quickly in comparison, even though it has similar specs to the PS3.
There can be a lot of subtle changes going from one uarch to another.
Eg, C/C++ for x64 and ARM both use a coprocessor register to store the pointer to thread-local storage. On x64, you can offset that address and read it from memory as an atomic operation. On ARM, you need to first load it into a core register, then you can read the address with offset from memory. This makes accessing thread-local memory on ARM more complicated to do in a thread safe manner than on x64 because you need to be sure you don’t get pre-empted between those two instructions or one thread can end up with another’s thread-local memory pointer. Some details might be off, it’s been a while since I dealt with this issue. I think there was another thing that had to line up perfectly for the bug to happen (like have it happen during a user-mode context switch).
And that’s an example for two more similar uarchs. I’m not familiar with cell but I understand it to be a lot more different than x64 vs ARM. Sure, they’ve got all the documentation and probably still even have the collective expertise such that everything is known by at least someone without needing to look it up, but those individuals might not have that same understanding on the x64 side of things to see the pitfalls before running into them.
And even once they experience various bugs, they still need to be debugged to figure out what’s going on, and there’s potential that the solution isn’t even possible in the paradigm used to design whatever go-between system they were currently working on.
They are both Turing complete, so there is a 1:1 functional equivalence between them (ie, anything one can do, the other can). But it doesn’t mean both will be able to do it as fast as the other. An obvious example of this is desktops with 2024 hardware and desktops with 1990 hardware also have that 1:1 functional equivalence, but the more recent machines run circles around the older ones.
That’s all understandable for a startup or young company. But this is Sony, a multi-billion-dollar electronics company with many, MANY released devices and software projects under its belt.
If they had taken things seriously, invested the proper funding and pulled the appropriate personnel they would have no problems getting something out that can beat RPCS3 in a year tops.
They tried to just slap something together as (what someone around here commented a while back) a minimum value-add product and shove it out the door. Any claims of “it’s just too hard” they try to make are nothing but cover, AFAIC, now that people are starting to call them out on it.
I don’t think it being hard is really the issue. Sony is a billion dollar multi-national corporation and they don’t get any benefit of the doubt whatsoever. Is it hard? Maybe it is, but maybe they should have thought of what they were going to do in the future when they were designing this. As was pointed out elsewhere, volunteers making an open source emulator are managing it so Sony not wanting to, or being unable to, isn’t an excuse.
Am I the only one that appreciates those people? I actually might be one of those people myself, the jury’s still out.
I have a coworker who LOVES math, and he dumps topology and set theory on me all the time. I don’t know what half of it means, but I find the concepts very interesting.
When did we as a society decide that being passionate about something is a bad thing?
Sure, sometimes I’m not in the mood to listen to an infodump, but I appreciate that someone cares enough to want to share it with me.
As someone who is bad at coming up with things to talk about, I find those kinds of people so fun to talk with, since I don’t have to constantly worry about what to bring up next.
Just how much cheaper and longer-lasting keeping things like rice, dried beans, and flour can be. It’s amazing to me that no matter how empty my cupboards/fridge are, I can always make fresh tortillas, refried beans, and rice in like an hour.
My wife’s Italian. Replace your items with always having a bottle of sauce and a packet of pasta in the cupboard, and there’s always a meal to be had no matter how empty the fridge is.
My GF is Italian too. One of the most important things I learned from her is literally this. Also, as long as you have any kind of vegetables in your house, you are always one step away from a pasta sauce.
100% For us, a passata, an onion, and some garlic is the minimum needed.
Probably helps that the FIL delivers us boxes of homemade passata all the time - we never have less than a dozen bottles on our storage shelves in the garage. But even if we were to ever run out, a couple of store-bought bottles in the pantry is our fallback option.
Amen to that. But I can’t do jar/bottled sauce, so if I want easy noodles, it’s cook noodles, leave some pasta water after draining, throw in some butter at the end to make it thicc, then serve topped with olive oil/red pepper flakes/salt/pepper/Parmigiano Reggiano (all things I make sure I always have in stock).
I also keep a stack of cans of San Marzano tomatoes to make a red sauce any time I want, but that takes a couple hours instead of 20 minutes.
There are good sauces you can make from canned tomatoes in 20 minutes (depending on your prep speed).
My go-tos are Puttanesca and vodka sauce, but there’s a lot more you can do. Mark Bittman’s How to Cook Everything has a simple recipe and then a big list of variants, most of which can be done in 20 minutes.