There have been multiple accounts created for the sole purpose of posting advertisements or replies containing unsolicited advertising.

Accounts which solely post advertisements, or which persistently post them, may be terminated.

Stovetop,

I don’t know exactly what angle you’re looking to clarify in that regard, but to ELI5 it:

There are two factors: targeted ads and algorithm manipulation.

Mainstream social media sites earn money from the ads they deliver. The longer people stay on the site and view posts, the more ads they see. The algorithm is therefore designed to promote content that users are likelier to keep viewing, not necessarily content that they would like more. In practice, this tends to be content with some sort of shock value. That combination of targeted ads and clickbait is what creates “doomscrolling”.
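To make that concrete, here is a minimal sketch of an engagement-optimizing ranker. The field names and weights are invented for illustration and are not any real platform’s formula:

```python
# Hypothetical sketch of an engagement-optimizing feed ranker.
# Field names and weights are made up; no real platform's formula is implied.

def engagement_score(post):
    # Predicted dwell time and reaction probabilities drive the score;
    # whether the user would actually *enjoy* the post is never asked.
    return (
        2.0 * post["predicted_dwell_seconds"]
        + 1.5 * post["predicted_comment_probability"]
        + 1.0 * post["predicted_share_probability"]
    )

def rank_feed(candidate_posts):
    # Surface whatever is most likely to keep the user scrolling.
    return sorted(candidate_posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"title": "Calm, useful article", "predicted_dwell_seconds": 20,
     "predicted_comment_probability": 0.05, "predicted_share_probability": 0.02},
    {"title": "Outrage-bait headline", "predicted_dwell_seconds": 45,
     "predicted_comment_probability": 0.30, "predicted_share_probability": 0.10},
])
print([p["title"] for p in feed])  # the outrage-bait ranks first
```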

Longer explanation below:

The value that social media sites offer advertisers is that they know everything about their users. They collect data from posts and viewing habits to infer things like income, hobbies, location, sexual orientation, political affiliation, etc. When advertisers buy ads on social media sites, they get to target those ads at the specific people on whom they are likely to have the biggest impact.
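As a rough sketch of what that targeting looks like, the toy example below matches an advertiser’s criteria against attributes a platform might have inferred about a user. The attribute names are made up:

```python
# Hypothetical sketch of ad targeting: an advertiser specifies criteria,
# and the platform matches them against attributes it has inferred about
# each user. All attribute names here are invented for illustration.

user_profile = {
    "location": "UK",
    "age_range": "25-34",
    "inferred_interests": {"cars", "politics"},
    "inferred_income_band": "medium",
}

ad_campaign = {
    "target_location": "UK",
    "target_interests": {"politics"},
}

def matches(profile, campaign):
    return (
        profile["location"] == campaign["target_location"]
        and bool(profile["inferred_interests"] & campaign["target_interests"])
    )

print(matches(user_profile, ad_campaign))  # True: this user would be shown the ad
```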

But what happens if you want to increase the visibility of your (non-ad) content on social media? A lot of companies use social media to funnel people to their own sites/channels, where they make money. In some cases, they can pay to be promoted, giving them an advantage in the algorithm. In other cases, they manipulate the algorithm with clickbait (exploiting the doomscrolling tendency) or even with bots that create a false sense of engagement.
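The bot trick works because an engagement-based ranker only sees aggregate counts, so fake interactions look identical to real ones. A toy illustration, with an invented scoring formula:

```python
# Hypothetical sketch of why bot engagement works: the ranker only sees
# totals, so purchased or bot-generated interactions are indistinguishable
# from genuine ones. The formula is invented for illustration.

def visibility_score(upvotes, comments, shares):
    return upvotes + 3 * comments + 5 * shares

organic = visibility_score(upvotes=40, comments=5, shares=2)
botted = visibility_score(upvotes=40 + 500, comments=5 + 50, shares=2 + 20)

print(organic, botted)  # 65 vs 815: the botted post now dominates the feed
```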

In recent major elections/referendums, there were a lot of ads and promoted content intended to sway opinions. People would be intentionally shown content designed to upset them, which increased doomscrolling and increased their chances of turning out to vote against whatever upset them. However, in many cases, the content people saw consisted of half-truths or outright lies. Because they were earning money either way, social media sites did not care to verify the content of the ads they were showing.

Brexit, for example, is widely considered to have been influenced by voters who were manipulated into believing falsehoods via targeted ads and clickbait delivered by social media. And in many cases, these lies weren’t spread only by specific political campaigns, but also by foreign state actors with a vested interest in the outcome, namely Russia, which had a lot to gain from a weaker EU.

Lemmy is not immune to doomscrolling and bot manipulation, but it doesn’t have ads and, as far as we know, does not sell user data. It’s harder to be targeted here because the only thing manipulators can do is try to game the vote system to make their content more visible (which is sadly easier than it should be). But even then, all they can reach are people subscribed to specific communities or registered on specific instances. It’s harder to target people en masse, and there’s only a single data point to work with: people who like [community topic].
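A toy model of that last point, just to illustrate the argument (this is not Lemmy’s actual ranking code, and the posts and scores are made up): on a vote-ranked feed the only lever a manipulator has is votes, and the only “targeting” signal is which communities a user subscribes to.

```python
# Toy model: on a vote-ranked feed, the manipulator's only lever is votes,
# and the only "targeting" signal is community subscription.
# Not Lemmy's real ranking code; data is invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    community: str
    score: int  # upvotes minus downvotes

def front_page(posts, subscribed_communities):
    visible = [p for p in posts if p.community in subscribed_communities]
    return sorted(visible, key=lambda p: p.score, reverse=True)

posts = [
    Post("Genuine discussion", "technology", score=120),
    Post("Astroturfed product plug", "technology", score=300),  # inflated by sockpuppet votes
]

for p in front_page(posts, {"technology"}):
    print(p.score, p.title)  # the vote-inflated post lands on top
```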
