
How do we get "normies" to adopt the Fediverse?

This is a follow-up to my previous thread.

The thread discussed the question of why people tend to choose proprietary microblogging platforms (e.g. Bluesky or Threads) over the free and open source microblogging platform, Mastodon.

The reasons, summarised by @noodlejetski are:

  1. marketing
  2. not having to pick the instance when registering
  3. people who have experienced Mastodon’s hermetic culture discouraging others from joining
  4. algorithms helping discover people and content to follow
  5. marketing

and I’m saying that as a firm Mastodon user and believer.

Now that we know why people move to proprietary microblogging platforms, we can devise methods to counter this.

How do we get “normies” to adopt the Fediverse?

BeAware (@BeAware@social.beaware.live)

@dch82 first, "normies" have to not get harassed when they come here.

Unfortunately, the biggest Fedi software refuses to add automated reporting of offensive posts, so if a post isn't reported, the admins won't even see it.

People coming from corporate social media are used to ignoring the report button because in their experience, it either doesn't work, or gets ignored by admins anyway.

We need automated reporting.

@fediverse

Blaze (@Blaze@feddit.org)

Federated reporting would help too

AterNox (@AterNox@atergens.com)

@BeAware @dch82 Maybe I'm a little lost. Isn't there a block and report button on Mastodon? I'm using Misskey and both buttons seem to work. I mean, I'm reporting to myself, but the button seems to work. What kind of automated blocking are you trying to do here?

BeAware (@BeAware@social.beaware.live)

@AterNox @dch82 blocking and reporting work fine.

However, people from corporate social media won't report posts because in their experience, it either doesn't get taken seriously or the admins ignore it. Corporate social media sites don't exactly act on reports in a timely manner.

I'm on my own instance, I moderate for myself. I don't want slurs to exist on my instance at all. However, if I don't see them with my own eyes, I cannot ban the user.

PS. I'm talking about banning users that are harassing others on the instance level. These are user actions. I am an admin. I run my own instance.

@fediverse

AterNox (@AterNox@atergens.com)

@BeAware @dch82 So Mastodon doesn't have a wordlist you can populate that "removes" posts containing the keywords you provide? It took me a while to find it in Misskey; works like a charm.

BeAware (@BeAware@social.beaware.live)

@AterNox @dch82 doesn't exist for admins. It works on a "user" level, but that won't remove the post or data from the instance; it just "hides" it so that single user can't see it.

@fediverse
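The distinction BeAware draws here can be sketched in a few lines. The models below are hypothetical, not Mastodon's or Misskey's actual code: a user-level word filter only changes what one person's timeline shows, while the post itself stays stored on the instance.

```python
# Hypothetical models, for illustration only; not Mastodon/Misskey internals.
class Post:
    def __init__(self, post_id, text):
        self.id = post_id
        self.text = text

def user_filter(timeline, muted_words):
    """Mastodon-style *user* filter: hides matching posts from one
    person's view. The posts themselves remain in the instance's storage."""
    return [p for p in timeline
            if not any(w in p.text.lower() for w in muted_words)]

instance_db = [Post(1, "hello world"), Post(2, "post with badword")]
my_view = user_filter(instance_db, {"badword"})

len(my_view)      # 1 — the post is hidden from my timeline...
len(instance_db)  # 2 — ...but still exists on the instance
```

This is why a user-level filter can't substitute for admin tooling: the filtered post is still there for everyone else, and the admin never hears about it.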

cm0002

I’m confused, do you mean like automated enforcement rules/algorithms like big SM has? I.e. if user gets reported for breaking Y rule X amount of times ban user for Z amount of time and forward to admin for further action?

BeAware (@BeAware@social.beaware.live)

@cm0002 no, I want automated reports.

A user using the n word, full on with the hard R, isn't gonna be a good post. It should be automatically reported to me so that I can judge context and take action.

If a user doesn't report it, I won't see it.

I'm on my own instance, I am the user.

If I don't report it, nobody sees it.

That's dumb.

@fediverse
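What BeAware is asking for — automated *reporting*, not automated *enforcement* — could be sketched roughly as follows. This is a minimal, hypothetical example (the wordlist terms and report structure are placeholders, not any real Fediverse software's API): a matching post is only queued for a human admin to judge in context, never removed automatically.

```python
import re

# Placeholder wordlist; a real admin would maintain their own.
FLAGGED_TERMS = ["slur1", "slur2"]

# Word-boundary match so substrings inside harmless words don't trip it.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, FLAGGED_TERMS)) + r")\b",
    re.IGNORECASE,
)

def auto_report(post_id, text, queue):
    """If a post matches the wordlist, file a report for *human* review.

    Nothing is removed automatically: the admin still judges context,
    which is the line BeAware draws between automated reporting and
    automated enforcement.
    """
    match = PATTERN.search(text)
    if match:
        queue.append({
            "post_id": post_id,
            "matched_term": match.group(0),
            "reason": "automated wordlist match",
        })
        return True
    return False

# Usage:
mod_queue = []
auto_report("post-1", "contains slur1 here", mod_queue)
auto_report("post-2", "perfectly fine post", mod_queue)
# mod_queue now holds one report, for post-1, awaiting admin judgement
```

The design choice is the whole point of the thread: the automation only surfaces the post; a human decides what, if anything, to do about it.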

cm0002

Ah, makes sense now, that is dumb. I can totally see why they'd have issues with automated enforcement, but I don't see why anyone would be against what you described lol

ALostInquirer

By automated reporting do you mean something like filters on the backend to flag offensive posts per some custom settings?

osaerisxero

I unironically think it would be easier to train users that the report button works now than it would to get automated reporting that was worth a damn implemented.

Lost_My_Mind

> We need automated reporting.

I’m fine with auto REPORTING, but the actual moderation needs to be a human. Auto moderation is bad. It gets things wrong. It’s how I got banned from both twitter (calm down, this was back in 2018 before it was an elon owned nazi cesspool), and reddit.

On twitter I saw a funny video that was posted, and I replied “Aw man, that killed me”.

I was banned for “inciting death threats”

BeAware (@BeAware@social.beaware.live)

@Lost_My_Mind yeah, just reporting.

I want to make the actual judgement, but if I don't know the post exists, I can't judge anything, and it makes me so mad that possibly racist stuff can exist on my instance without my knowledge because I haven't "seen" it.

@fediverse
