I feel like AI companies have been scraping Reddit for their datasets since the beginning, without permission. In fact, unless there's been a regulatory change I'm not aware of, I'm not sure why they would have Reddit "sign away" the data when they could just scrape it.
I'm also dubious that the current form of AI has a future. Judging by their capabilities, these models seem like they should revolutionize every sector, but in practice their applications might be more limited than we thought?
Anyway, if Reddit does go public I will be deleting my account within the hour. The only reason I haven't yet is that I've been a moderator of the same subreddit for eight years, and it's the only thing that's been consistent in my life in that time; I'm kind of attached. The reason I will is that I didn't sign up to create value for shareholders, I signed up to create value for a community.
You need to go ahead and delete your account and give up the ghost on modding whatever sub you are referring to. I’m tired of these types of posts where you are both beholden to Reddit and also not. Pick a dang side.
Well no, because the old sub will continue to exist and will therefore always be where everyone goes until Reddit itself dies. I really doubt admins would let me delete the sub.
They say it’s $60 million on an annualized basis. I wonder who’d pay that, given that you can probably scrape it for free.
Maybe it's the AI Act in the EU; that might cause trouble in that regard. The US is seeing a lot of rent-seeking PR too, of course, which might cause some companies to hedge their bets.
Maybe some people haven't realized it yet, but limiting fair use doesn't just benefit the traditional media corporations; it also benefits the likes of Reddit, Facebook, Apple, etc. Making robots.txt legally binding would only benefit the tech companies.
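For anyone unfamiliar: robots.txt is just a plain-text convention, and compliance is entirely voluntary today, which is the whole point of the "legally binding" debate. A sketch of what opting out of AI crawlers looks like (GPTBot and CCBot are the user-agent tokens OpenAI and Common Crawl publicly document for their crawlers; the paths here are illustrative):

```text
# Voluntary convention only -- a crawler can simply ignore this file.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else (search engines, etc.) may crawl.
User-agent: *
Allow: /
```

Note that this can only ever keep out actors who choose to honor it, which is why making it legally enforceable would mostly shape who gets sued, not what's technically possible.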
This is the most frustrating thing, so many people are arguing against their own interests with their efforts to "lock down" their content to prevent AIs from training on it. In this very thread I've been accused of being pro-giant-company when I'm quite the opposite. The harder we make it to train AI, the stronger the advantage that the existing giant companies have in this field.
Just like that? No thought or anything put into what makes good vs bad training data?
Good luck lmfao.
Makes you wonder how hard it would be to clog up the training data with outputs from other AI models to really bake in that echo defect that they all seem to have to some extent as fast as possible. Wouldn’t that suck!
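You can see the mechanism in a toy simulation. This is just an illustration I'm making up, not a claim about real training pipelines: the "model" is a 1-D Gaussian that gets refit, each generation, on samples drawn from the previous generation's fit. The distribution steadily narrows toward a degenerate point, which is the same basic feedback loop people mean by model collapse.

```python
# Toy "echo" / model-collapse sketch: fit a model to data, sample from
# the fit, refit on the samples, repeat. Each generation trains only on
# the previous generation's output, and diversity collapses.
import numpy as np

def collapse_demo(n_samples=50, generations=1000, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, n_samples)  # generation 0: "real" data
    variances = [data.var()]
    for _ in range(generations):
        # "Train" (estimate mean and std), then "generate" the next dataset.
        mu, sigma = data.mean(), data.std()
        data = rng.normal(mu, sigma, n_samples)
        variances.append(data.var())
    return variances

vars_ = collapse_demo()
print(f"variance: gen 0 = {vars_[0]:.3f}, gen {len(vars_) - 1} = {vars_[-1]:.3g}")
```

After a thousand generations the variance has collapsed by orders of magnitude even though every single refit was "faithful" to its inputs. Deliberately feeding AI outputs back in just accelerates a loop that already exists.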