There have been multiple accounts created with the sole purpose of posting advertisement posts or replies containing unsolicited advertising.

Accounts which solely post advertisements, or persistently post them, may be terminated.

Bots are running rampant. How do we stop them from ruining Lemmy?

Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter making API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

dailydot.com/…/chatgpt-bot-x-russian-campaign-mem…

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit did a huge ban wave of bots once, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

Metz ,

Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device used for it. Imperceptibly fast for the normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.

See e.g. wikipedia.org/wiki/Hashcash

This, combined with more classic blockades such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI), should at least present a first major obstacle.
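
A minimal hashcash-style sketch in Python, assuming a simple leading-zero-bits difficulty scheme (the payload format and difficulty value are illustrative, not anything Lemmy actually implements):

```python
import hashlib

def solve_pow(payload: str, difficulty_bits: int = 20) -> int:
    """Find a nonce so that sha256(payload:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(payload: str, nonce: int, difficulty_bits: int = 20) -> bool:
    """Verification costs a single hash -- cheap for the server."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: the server verifies with one hash, while each new post costs the client on average 2^difficulty hashes, so bulk posting gets expensive fast.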

tatterdemalion ,
@tatterdemalion@programming.dev avatar

Why resort to an expensive decentralized mechanism when we already have a client-server model? We can just implement rate-limiting on the server.
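
Server-side rate limiting can be as simple as a token bucket per account or IP. A sketch in Python (the rate and capacity numbers are made up for illustration):

```python
import time

class TokenBucket:
    """Allow `rate` actions per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill tokens for the time elapsed since the last check
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket with capacity 2 lets a user burst two actions, then refuses further ones until tokens refill at the configured rate.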

Metz ,

Can’t this simply be circumvented by the attackers operating several Lemmy servers of their own? That way they can pump as many messages into the network as they want. But with PoW, the network would only accept messages that work was done for.

UndercoverUlrikHD ,

A chain/tree of trust. If a particular parent node has trusted a lot of users who prove to be malicious bots, you break the chain of trust by removing the parent node. Orphaned real users would then need to find a new account willing to trust them, while the bots are left out hanging.

Not sure how well it would work on federated platforms though.
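
The idea can be sketched as a simple vouching tree (all names here are hypothetical): revoking one bad voucher cuts off everyone whose trust chain runs through them.

```python
# Hypothetical vouching tree: voucher -> accounts they vouched for.
trust_edges = {
    "admin": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["spambot1", "spambot2"],
}

def subtree(node: str, edges: dict) -> list:
    """All accounts whose trust chain passes through `node`."""
    out = [node]
    for child in edges.get(node, []):
        out.extend(subtree(child, edges))
    return out

# If "bob" vouched mostly for bots, break the chain at bob:
# everyone under him loses trust and must be re-vouched.
revoked = subtree("bob", trust_edges)
```

Any orphaned legitimate users under "bob" would then have to get re-vouched elsewhere, while the bots stay locked out.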

Fedizen ,

Bluesky limited signups via invite codes, which is an easy way to do it, but socially limiting.

I would say crowdsource the process of logins using a two-step vouching process:

  1. When a user makes a new login, have them request authorization to post from any other user on the server that is eligible to authorize users. When a user authorizes another user, they get an authorization timeout period that grows exponentially longer for each user authorized (with an overall reset period after about a week).
  2. When a bot/spammer is found and banned, any account that authorized them to join is flagged as unable to authorize new users until an admin clears them.

Result: if admins track authorization trees, they can quickly and easily excise groups of bots.
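
A rough sketch of that two-step scheme in Python (the class name, one-hour base timeout, and weekly reset are all invented for illustration):

```python
class VouchRegistry:
    """Hypothetical sketch of the two-step vouching scheme described above."""
    BASE_TIMEOUT = 3600          # first vouch locks you out for an hour
    RESET_AFTER = 7 * 24 * 3600  # backoff counter resets after a week

    def __init__(self):
        self.vouches = {}       # voucher -> set of vouched usernames
        self.flagged = set()    # vouchers barred until an admin clears them
        self.next_allowed = {}  # voucher -> earliest time they may vouch again
        self.count = {}         # voucher -> (vouches since reset, reset timestamp)

    def authorize(self, voucher: str, newcomer: str, now: float) -> bool:
        if voucher in self.flagged or now < self.next_allowed.get(voucher, 0.0):
            return False
        n, since = self.count.get(voucher, (0, now))
        if now - since > self.RESET_AFTER:  # weekly reset of the backoff
            n, since = 0, now
        # timeout doubles with every authorization: 1h, 2h, 4h, ...
        self.next_allowed[voucher] = now + self.BASE_TIMEOUT * (2 ** n)
        self.count[voucher] = (n + 1, since)
        self.vouches.setdefault(voucher, set()).add(newcomer)
        return True

    def ban(self, bot: str) -> None:
        """Step 2: flag every account that vouched for the banned bot."""
        for voucher, vouched in self.vouches.items():
            if bot in vouched:
                self.flagged.add(voucher)
```

The exponential timeout caps how fast any one account can grow the tree, and `ban` gives admins the authorization-tree handle described above.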

DandomRude ,
@DandomRude@lemmy.world avatar

I think the only way to solve this problem for good would be to tie social media accounts to proof of identity. However, apart from what would certainly be a difficult technical implementation, this would create a whole bunch of different problems. The benefits would probably not outweigh the costs.

wewbull ,

  1. Make bot accounts a separate type of account so legitimate bots don’t appear as users. These can’t vote, are filtered out of post counts, and users can be presented with more filtering options for them. Bot accounts are clearly marked.
  2. Heavily rate-limit any API that enables posting from a normal user account.
  3. Make running a bot on a human user account a bannable offence and enforce it strongly.

lvxferre ,
@lvxferre@mander.xyz avatar

As others said, you can’t prevent them completely, only partially. You do it in four steps:

  1. Make it unattractive for bots.
  2. Prevent them from joining.
  3. Prevent them from posting/commenting.
  4. Detect them and kick them out.

The sad part is that, if you go too hard on bot eradication, it’ll eventually inconvenience real people too. (Cue CAPTCHAs. That shit is great against bots, but it’s cancer if you’re a human.) Or it’ll be laborious/expensive and not scale well. (Cue “why do you want to join our instance?”.)

Ensign_Crab ,

How do we even fix this issue or prevent it from affecting Lemmy??

Simple. Just scream that everyone whose opinion you dislike is a bot.

P1nkman ,

I disagree with this statement, so Ensign_Crab must be a bot. Reported.

pop ,

The internet is not a place for public discourse; it never was. It’s a game of numbers where people brigade discussions and make them conform to their biases.

Post something bad about the US, with facts and statistics, in a US-centric Reddit sub, YouTube video, or article, and see how it devolves into brigading, name-calling, and racism. Do that on lemmy.ml to call out China/Russia. Go to YouTube videos with anything critical of India.

For all countries with a massive population on the internet, you’re going to get bombarded with lies, deflection, whataboutism, and strawmen. Add in a few bots and you shape the narrative.

There’s also burying bad press by simply downvoting and never interacting.

Both are easy on the internet when you’ve got a brainwashed, gullible mass to steer the narrative.

DandomRude ,
@DandomRude@lemmy.world avatar

Well, unfortunately, the internet and especially social media is still the main source of information for more and more people, if not the only one. For many, it is also the only place where public discourse takes place, even if you can hardly call it that. I guess we are probably screwed.

MentalEdge , (edited )
@MentalEdge@sopuli.xyz avatar

Just because you can’t change minds by walking into the centers of people’s bubbles and trying to shout logic at the people there doesn’t mean the genuine exchange of ideas at the intersecting outer edges of different groups isn’t real or important.

Entrenched opinions are nearly impossible to alter in discussion; you can’t force people to change their minds, to see reality for what it is, if they refuse. They have to be willing to actually listen first.

And people can and do grow disillusioned, at which point they will move away from their bubbles of their own accord, and go looking for real discourse.

At that point it’s important for reasonable discussion that stands up to scrutiny to exist for them to find.

And it does.

AnarchistArtificer ,

I agree. Whenever I get into an argument online, it’s usually with the understanding that it exists for the benefit of the people who may spectate the argument — I’m rarely aiming to change the mind of the person I’m conversing with. Especially when it’s not even a discussion but a more straightforward calling someone out for something: that’s for the benefit of other people in the comments, because some sentiments cannot go unchallenged.

AlexWIWA ,

By being small and unimportant

Absolute_Axoltl ,

Excellent. That’s basically my super power.

AmidFuror ,

One argument in favor of bots on social media is their ability to automate routine tasks and provide instant responses. For example, bots can handle customer service inquiries, offer real-time updates, and manage repetitive interactions, which can enhance user experience and free up human moderators for more complex tasks. Additionally, they can help in disseminating important information quickly and efficiently, especially in emergency situations or for public awareness campaigns.

greengear5 ,

This reads like a chatgpt reply 😅

AlexanderESmith ,

Maybe stop letting any random person create an account with no verification whatsoever

Cadeillac ,
@Cadeillac@lemmy.world avatar

Are you THE AlexanderESmith of social.alexanderesmith.com fame??

AlexanderESmith ,

Indeed I am! But I don't let all that fame go to my head (I have a special deal for autographs right now, just $20!)

But seriously, while I consider lackluster (or completely missing) new-account verification to be the much larger issue, federation is one to watch as well. My instance is so-named because I'm the only one who uses it.

At least it's a fairly significant effort to set up an entire instance for a single user. That should keep spam from single-user instances reasonably low. And if someone sets up a vaguely legitimate-looking instance, but enough users are muted/blocked/moderated/etc, you can just block the entire instance. Changing instance names is more of a hassle than nuking it entirely and starting over (new domain, new database, new IPs if the admins are paying attention, etc).

Cadeillac ,
@Cadeillac@lemmy.world avatar

Sounds reasonable I suppose. I don’t know a whole lot of the under the hood workings of Lemmy and I’m not going to pretend I do. I was mostly poking fun in the spirit of that one guy that kept getting asked if he was from some forum

Edit: The Reference

AlexanderESmith ,

heh, indeed.

Yeah, technically I run mbin (a fork of the now-defunct kbin) which has both threaded (reddit/lemmy/etc) and microblog (deadbird/mastodon/etc) features. I originally set myself up on kbin.social , but after it died I decided to not let my account (history/rep/preferences/subscriptions/etc) continue to be subject to the whim of random admins that might run out of funding, see something shiny, do something stupid and get defederated, etc. I thought "Wait, I'm a random admin, I'll just make my own instance, with blackjack, and hookers..."

Cadeillac ,
@Cadeillac@lemmy.world avatar

Hell yeah! I dig it. Thanks for the explanation. Why did they skip over lbin?

AsudoxDev ,
@AsudoxDev@programming.dev avatar

You can’t get rid of bots, nor of spammers. The only thing you can do is have a more aggressive automated punishment system, which will inevitably also punish good users along with the bad ones.

Feathercrown ,

Some sort of “report as bot” --> required captcha pipeline would be useful
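
A sketch of what that pipeline could look like (the threshold and names are invented; a real version would also need to guard against report abuse, which counting only distinct reporters partially addresses):

```python
# Hypothetical "report as bot" -> "required captcha" pipeline.
REPORT_THRESHOLD = 3

bot_reports = {}       # suspect username -> set of distinct reporters
needs_captcha = set()  # accounts that must pass a captcha before posting again

def report_as_bot(reporter: str, suspect: str) -> None:
    """Record a report; enough distinct reporters triggers a captcha gate."""
    bot_reports.setdefault(suspect, set()).add(reporter)
    if len(bot_reports[suspect]) >= REPORT_THRESHOLD:
        needs_captcha.add(suspect)

def captcha_passed(user: str) -> None:
    """Clear the gate and reset the report counter once the captcha is solved."""
    needs_captcha.discard(user)
    bot_reports.pop(user, None)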

linearchaos ,
@linearchaos@lemmy.world avatar

Captchas are already mostly machine-breakable. I’ve seen some interesting new pattern-based stuff, but nothing you couldn’t do image training against.

At some point not too far in the future, you won’t be able to use captchas to stop bots from posting at all. It simply won’t even be a hurdle, just a couple of extra pennies of computational power.

There’s probably some power in detecting accounts that are blocked by many people. The problem is no matter what we do we’re heading towards blocking them with an algorithm or AI. And I’d hate to see that for Lemmy.

This place is just the stuff you follow, with the raw up and down votes. We don’t hide unpopular posts, which makes brigading less useful.

PenisDuckCuck9001 ,

deleted_by_author

catloaf ,

I have never seen this happen. Have you? Can you share a link?

Jimmycakes ,

You don’t.

You employ critical thinking skills in all interactions on the web.
