There have been multiple accounts created with the sole purpose of posting advertisement posts or replies containing unsolicited advertising.

Accounts which solely or persistently post advertisements may be terminated.

ssm ,
@ssm@lemmy.sdf.org avatar

I hope all big corporate SEO trash follows suit; once they’ve all filtered themselves out for profit, we can hopefully get some semblance of an unshittified search experience.

CanadaPlus ,

Man, wouldn’t that be nice. There’s too much money in appearing on searches for me to ever expect that to happen, though.

Moonrise2473 , (edited )

A search engine shouldn’t have to pay a website for the honor of bringing it visits and ad views.

Fuck reddit, get delisted, no problem.

Weird that google is ignoring their robots.txt though.

Even if they’re paying Reddit to be able to say that glue is perfect on pizza, having

```
User-agent: *
Disallow: /
```

should block Googlebot too. That would mean Google programmed an exception into Googlebot to ignore robots.txt on that domain, and that shouldn’t be done. What’s the purpose of that file then?

Because robots.txt is completely honor-based (there’s no need to pretend to be another bot; a crawler could just ignore it), it should be:

```
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
```
MrSoup ,

I doubt Google respects any robots.txt

Moonrise2473 ,

For ordinary sites they do respect it, and they even warn a webmaster who submits a sitemap containing paths that robots.txt disallows.

DaGeek247 ,
@DaGeek247@fedia.io avatar

My robots.txt has been respected by every bot that visited it in the past three months. I know this because I wrote a page that IP-bans anything that visits it, and I also listed it as a disallowed path in the robots.txt file.

I've only gotten like 20 visits in the past three months though, so, very small sample size.
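The trap described above can be sketched in a few lines. This is a minimal, framework-free model of the logic (the `/honeypot` path and the request handler are hypothetical names, not the commenter's actual setup): robots.txt forbids one URL, and any client that fetches it anyway gets its IP added to a ban list that is checked on every subsequent request.

```python
# Sketch of a robots.txt honeypot (hypothetical paths, no web framework):
# the only way to reach /honeypot is to ignore robots.txt.

ROBOTS_TXT = "User-agent: *\nDisallow: /honeypot\n"

banned_ips: set[str] = set()

def handle_request(ip: str, path: str) -> int:
    """Return an HTTP status code for a request from `ip` to `path`."""
    if ip in banned_ips:
        return 403                 # previously trapped client stays banned
    if path == "/robots.txt":
        return 200                 # well-behaved bots read this first
    if path == "/honeypot":
        banned_ips.add(ip)         # only robots.txt violators get here
        return 403
    return 200

# A polite crawler reads robots.txt and skips /honeypot:
assert handle_request("1.2.3.4", "/robots.txt") == 200
assert handle_request("1.2.3.4", "/page") == 200

# A scraper that ignores robots.txt trips the trap and is banned site-wide:
assert handle_request("5.6.7.8", "/honeypot") == 403
assert handle_request("5.6.7.8", "/page") == 403
```

In a real deployment the ban list would live in a firewall or fail2ban rather than in process memory, but the detection logic is the same.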

thingsiplay , (edited )

Interesting way of testing this. Another would be to query the search engines with site:your.domain added, to show results from your site only. (Edit: typo corrected. Of course without the - in -site:, otherwise you exclude the site instead of limiting to it.) Not an exhaustive check, but another tool to test this behavior.

mozz ,
@mozz@mbin.grits.dev avatar

> I know this because I wrote a page that IP-bans anything that visits it, and I also listed it as a disallowed path in the robots.txt file.

This is fuckin GENIUS

Moonrise2473 ,

Only if you don’t want any visits except from yourself, because that removes your site from every search engine.

You should instead write “Disallow: /juicy-content” and then ban anything that tries to access that page; only bad bots would follow that path.

Miaou ,

That’s exactly what was described…?

mozz ,
@mozz@mbin.grits.dev avatar

You need to read the description again, more carefully. Imagine, for example, that by “a page” the person means a page called /juicy-content or something.

MrSoup ,

Thank you for sharing

skullgiver ,
@skullgiver@popplesburger.hilciferous.nl avatar

I think Reddit serves Googlebot a different robots.txt to prevent issues. For instance, check Google’s cached version of robots.txt: it only blocks stuff that you’d expect to be blocked.
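If Reddit really does this, the mechanism would just be user-agent sniffing on the robots.txt route. A minimal sketch of the idea (assumed behavior, not confirmed to be what Reddit actually runs; file contents are illustrative):

```python
# Sketch of user-agent-dependent robots.txt serving: Googlebot gets a
# permissive file, every other crawler gets blocked from the whole site.

PERMISSIVE = "User-agent: *\nDisallow: /search\n"   # illustrative contents
BLOCK_ALL = "User-agent: *\nDisallow: /\n"

def robots_for(user_agent: str) -> str:
    """Pick which robots.txt body to serve based on the client's UA string."""
    if "Googlebot" in user_agent:
        return PERMISSIVE
    return BLOCK_ALL

assert robots_for("Mozilla/5.0 (compatible; Googlebot/2.1)") == PERMISSIVE
assert robots_for("Mozilla/5.0 (compatible; bingbot/2.0)") == BLOCK_ALL
```

UA strings are trivially spoofable, which is why sites doing this for real usually verify Googlebot by reverse-DNS of the requesting IP rather than trusting the header.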

tal , (edited )
@tal@lemmy.today avatar

I guessed in a previous comment that, given their new partnership, Reddit is probably feeding its comment database to Google directly, which reduces load for both of them and lets Google get real-time updates of the whole kit and caboodle rather than polling individual pages. Both Google and Reddit are better off doing that, and for Google it’d make sense to special-case any site that’s large and valuable enough to warrant the effort.

I know that Reddit built functionality for that before, used it for pushshift.io and I believe bots.

I doubt that Google is actually using Googlebot on Reddit at all today.

I would bet against either Google violating robots.txt or Reddit serving different robots.txt files to different clients (why? It’s just unnecessary complication).

jarfil ,

Google is paying for the use of Reddit’s API, not for scraping the site.

That’s new Reddit’s business model: want “their” (users’) content? Then pay for API access.

theangriestbird OP ,

The beef between Microsoft and Reddit came to light after I published a story revealing that Reddit is currently blocking every crawler from every search engine except Google, which earlier this year agreed to pay Reddit $60 million a year to scrap the site for its generative AI products.

I know the author meant “scrape”, but sometimes it really does feel like AI is just scrapping the old internet for parts.

cybermass ,

Yeah, aren’t like over half of reddit comments/posts by bots these days?

originalucifer ,
@originalucifer@moist.catsweat.com avatar

yep, and the longer that happens, the less value the dataset has. it’s becoming aged.

RiikkaTheIcePrincess ,
@RiikkaTheIcePrincess@pawb.social avatar

[Joke] See, Reddit’s doing a nice thing here! They’re making sure nobody ends up toxifying their own dataset by using Reddit’s garbage heap of bot posts!

originalucifer ,
@originalucifer@moist.catsweat.com avatar

google needs a checkbox of 'ignore reddit' im sick of having to manually add -reddit
