Docker has an install script on their https://github.com/docker/docker-install page that takes a lot of the headache out. Also, `sudo usermod -aG docker $USER` will let you run docker without sudo (log out and back in for the group change to take effect).
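A minimal sketch of that flow (assumes `curl` and `sudo` are available; get.docker.com hosts the same convenience script):

```shell
# Download and run Docker's official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Let your user run docker without sudo
# (log out and back in for the group change to apply)
sudo usermod -aG docker "$USER"
```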
I would suggest having a look at Podman. It's a drop-in replacement for Docker, except it doesn't require a constantly running daemon; it's in the main package repositories, so you can skip the key-and-repository setup; and Cockpit has a plugin to help manage Podman containers.
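As a sketch of how drop-in it is (package manager commands assumed for Debian/Fedora-family distros):

```shell
# Install from the distro's main repos -- no third-party key/repo setup
sudo apt install podman        # Debian/Ubuntu
# sudo dnf install podman      # Fedora

# Same CLI surface as docker, but daemonless (and rootless by default)
podman run --rm docker.io/library/alpine:latest echo "hello from podman"

# Many setups just alias the old name
alias docker=podman
```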
Yes, as stated in the title and post. It is stable and easy to use and update. It is also available on a wide variety of architectures and devices. So far it has never failed me.
I believe my post could apply to other systems.
If that was a rhetorical question, I believe you're not in favor of it. I think it would be more constructive to elaborate and participate in the conversation. :)
http://google.com/ works fine for me, tested in Firefox and with `curl -6`. So the breakage could actually be on your side, though most likely at your ISP.
My side works fine; Google just doesn't like the address. It's a tunnelbroker address; maybe they consider those bots… but only on some of their servers? It's weird.
Oh okay. IMO IPv6 tunnels are worse than just disabling IPv6: a tunnel is basically a proxy, and since there's no encryption at this layer, both your ISP and now the tunnel provider could collect your data. It also adds latency.
But I guess it’s okay for experimentation or if you actually require IPv6 for something.
Hard disagree there. It is a tunnel, it is plenty fast if the intermediate node is close enough, and why would you want encryption at the IP layer?
It works great and gives me IPv6 that I otherwise wouldn't have with my ISP (Optimum), allowing me to connect to native IPv6 sites and use all the IPv6 functionality I want (dedicated IPs for containers/VMs, etc.).
Easy. Kbin/Lemmy admins actually listen to their users, unlike spez and his cronies. Assholes can move to their own instance and continue there, but that might get them defederated.
Reddit wasn't always like that. I remember a time when Reddit was, or at least felt, a bit like Lemmy feels right now: lots of nerdy outsider communities and admins who seemed to care. Not trying to be a negative nancy here, just a reality check or something like that.
Federation is over-hyped when joining or promoting. It’s only really important when there’s a problem.
It’s like trying to sell a new car by featuring the warranty, or trying to sell a computer by talking about backups. Both important, but not a shiny selling point for new users.
I get why the devs want to feature federation. If you build a car, you’re probably going to want to talk about the engine and its design. But the users just want “car go fast”.
An easier way to subscribe to communities from other instances. For example, a small or initially hidden subscribe button on the main feed next to any post.
Also greater visibility for communities on other instances, since they aren't discoverable until someone manually searches for them by address. Right now you have to rely on the Lemmy Community browser or !newcommunities@lemmy.world.
Maybe, but I keep seeing the same complaint over and over from people. I don't really care about it for myself; I have a separate account for the naughty stuff, and everything else is blocked on my main account. But sometimes NSFW doesn't mean porn: maybe it's explicitly described violence, or something else. Maybe I would like the option to have NSFW content presented to me that isn't people getting fucked. More nuance in the content delivery would be appreciated.
I studied bot patterns on Reddit for a few years while using the site and was active in their takedown. My username is the same on there if you want to see the history of my involvement. What drove me to stop being so involved in bot takedowns is the extent to which Reddit as a site was continually redesigned to favor bots. In fact, I woke up today to a 3-day suspension for reporting a spam bot as a spam bot.

I think what we need to examine in these cases, if possible, is whether the bots were made strictly for the purpose of contesting blackouts (i.e. by Reddit themselves) or whether they were made by a hobbyist or spammer. Given that these are on r/programming, it seems more likely that a hobbyist programmer made these bots for a laugh or something, rather than it being an inside job. If the usual resources of Reddit's API were accessible enough to provide a total history of these bot accounts' posts and comments, that would help to clarify (this is what I mean about Reddit redesigns favoring bots).

On that subject, I think Lemmy needs to start implementing preemptive anti-bot features while it is still at an embryonic stage of becoming a large social media site (or a pseudo-social-media site like Reddit) to future-proof its development.
I’m very new to this site so I’m not sure what all already exists. Some features that come to mind based on my experience on Reddit and other sites:
Ability to search the entire site to see if a string of text (or multiple select strings of text) has already appeared there, including removed content. On Reddit, this was useful for seeing if an account has copied the comment, the text within a post, or the post title from elsewhere on the site. SocialGrep, Reveddit, and Unddit were my preferred sources of this info for Reddit. Text may also have been copied by a bot from other sites, but the original tends to be more accessible in those cases.
Ability to search the entire site to see if an image has already appeared there. This was essentially only relevant for repost bots and for bots that recognize an image from another post and re-comment from that other post. I do have concerns about this becoming relevant in the future for comments that contain images. TinEye and reverse image search on Google were my preferred sources of this info, but I don’t know if Lemmy posts will show up on those sites. u/RepostSleuthBot and the like were also helpful, especially if summonable in the comments.
Blocking users should only filter them from the blocker's feed, rather than making the blocked user unable to comment on the blocker's posts and comments. Spammers and scammers would abuse that system to prevent human users from calling them out on being spammers and scammers. While this design makes sense for sites based on personal profiles, such as Facebook or Twitter, it does not work for sites organized by subject matter with impersonal user profiles.
Say what you will about the bad aspects of 4chan (and you should!), but requiring a CAPTCHA before publishing a post or comment seems to significantly mitigate bot activity.
This doesn't seem to be a problem on Lemmy, but on Reddit, not all of the information in a spam report was sent to the subreddit mods. A report for Spam -> Harmful Bots would tell the admins it was a Harmful Bots report, but the mods would only see a generic spam report and not be fully informed of the issue. Also, unbeknownst to mods, admins could link a subreddit rule report to a sitewide rule report. What I think Lemmy could improve on in this regard is to keep the open-ended custom report option, but also include pre-written report options for community rules, instance rules, and sitewide rules.
Some sort of indicator for groups of accounts that comment on exactly the same posts as each other, which are commonly bots.
Entirely dependent on the community: requiring some sort of verification post, or other verification with a photograph of a paper showing the username, the community name, and the current date, before permitting a user to post or comment may be beneficial.
A sitewide blacklist structured like r/BotDefense, wherein suspect accounts can be submitted and, if determined to be bots, automatically blacklisted from participating communities. Blacklist appeals will also be essential, if only due to human error.
As someone whose 16+-year-old Reddit account was permabanned for writing anti-bot scripts to keep the community I modded free of scammers and spammers, this is spot on.
I know it’s a joke, but it’s an old one and it doesn’t make a lot of sense in this day and age.
Why are you comparing null to numbers? Shouldn't you ensure your values are valid first? Why are you using the “cast everything to whatever type you see fit and compare” operator?
Other languages would simply fail. Once more, JavaScript's greatest sin is not throwing an exception when you ask it to do things that don't make sense.
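For anyone who hasn't seen the joke: the weirdness comes from `==` and the relational operators coercing `null` differently. A minimal illustration, runnable in Node:

```javascript
// `==` has a special case: null only loosely equals undefined, never 0.
console.log(null == 0);         // false
console.log(null == undefined); // true

// But relational operators coerce null to a number (0) instead.
console.log(null >= 0); // true
console.log(null > 0);  // false

// `===` never coerces, so it behaves predictably.
console.log(null === 0); // false

// The usual fix: validate before comparing.
const x = null;
if (x != null) { // filters out both null and undefined
  console.log(x > 0);
}
```

So `null >= 0` is true while `null > 0` and `null == 0` are both false, which is exactly the kind of inconsistency the joke leans on.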
i've read marx. at precisely no point does he say anything justifying the various atrocities state-capitalist countries have committed. i think he's wrong about some stuff, but even if you accept his word as gospel, tankies are still just people who took leftist principles as an excuse for imposing the kind of brutal authoritarianism that leftists are supposed to be against.