A social app for creatives, Cara grew from 40k to 650k users in a week because artists are fed up with Meta’s AI policies | TechCrunch

Artists have finally had enough with Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.

Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR laws. Generative AI has become so front-and-center on Meta’s apps that artists reached their breaking point.

IHeartBadCode ,

From Cara:

We do not agree with generative AI tools in their current unethical form, and we won’t host AI-generated portfolios unless the rampant ethical and data privacy issues around datasets are resolved via regulation

Okay, I wanted to talk real quick about this aspect. Lots of folks want AI training to require only material the trainer holds the copyright to. And fine, let's just run with that for the sake of brevity. Disney owns everything. If you restrict AI to models built only from material the model-maker holds the copyright to, only Disney will be generating AI for the near future.

I'm just going to tell you: the biggest players out there are the ones who stand to profit the most from regulation of AI. And likely, they'll be the ones tasked by Congress to write drafts of the regulation.

In the event that legislation is passed to clearly protect artists, we believe that AI-generated content should always be clearly labeled, because the public should always be able to search for human-made art and media easily

And the thing is, is Photoshop even "human-made art"? I mean, that was the debate back in the 90s, when a ton of airbrush artists lost their jobs. And a lot of the Photoshop work done back then was so bad that we had the whole Ralph Lauren / Filippa Hamilton thing go down.

So I don't disagree with safe-from-AI places. But the justification for Cara's existence is literally every argument that was leveled at Photoshop back in the 90s by airbrush artists who were looking to protect their jobs and failed, because they focused so heavily on being anti-Photoshop that the times changed without them, when they could have started learning Photoshop and kept having a job.

I think AI presents a unique tool for artists to use to become more creative than they have ever been. But I think that some of them are too caught up in how CEOs will eventually use that tool as justification to fire them. And there's a lot of propensity to blame AI when it's the CEOs writing the pink slips, just like the airbrush artists blamed Photoshop when it was the newspapers, the magazines, and so on that were writing the pink slips.

I just feel like a lot of people are about to yet again get caught with their pants down on this. And it's easy to diss on AI right now, because it's so early. Just like bad Photoshop back in the 90s led to the funny Snickers ad.

Like, I get that people building models from other people's stuff is bad. No argument there. But open models, built from a community's own images, are a thing too, and that's all based on the community and the people who decide to collaborate on a community model. And I think folks are getting so hung up on being anti-AI that it's going to hurt their long-term prospects, just like the airbrush folks who started picking up Photoshop way too late.

There's no stopping Disney and the media companies from using AI; they're going to, and if you enjoy getting a paycheck, having some skill in the thing they use is going to be required. But for regular people to provide a competitor, to fight on equal footing, the everyday person needs access to free tools. Imagine if we had no GIMP, no Krita, no Inkscape. Imagine if it was just Adobe and nothing else, and that was enforced by regulation because only Adobe could be "trusted".

PlexSheep ,

Good comment. Thanks.

tyler ,

I’ve heard the “big guys are the only ones that will profit from AI regulation” claim before, and I haven’t ever heard an actual argument as to why.

And in my mind the biggest issues with AI image generation have nothing to do with using it as a tool for artists. That’s perfectly fine. But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, especially when combined with indistinguishable text-to-speech and video generation.

The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”.

IHeartBadCode ,

Ah. No problem. So the notion behind the "big guys are the ones that stand to profit from AI regulation" argument is that regulation curtails activity in a general sense. However, many of the offices that create regulation defer to industry experts for guidance on regulatory processes, or have former industry experts appointed onto regulatory committees. (A good example of the latter is Ajit Pai and his removal of net neutrality.)

AI regulation at the Federal level has mostly circled "trusted" AI generation, as you mentioned:

But what it is doing is making it infinitely easier to spread enormous amounts of completely unidentifiable misinformation, especially when combined with indistinguishable text-to-speech and video generation

And the talk has been to add checks along the way by the industry itself (much like how the music industry polices itself, or how the airline industry has mostly policed itself). So this would leave players like Adobe and Disney to largely dictate what the "trusted" platforms for AI generation are: platforms where they will ensure, via content moderation and software control, that only "trusted" AI makes it out into the wild.

Regulation can then take the shape of social media being required to enforce rules on AI posts, source distributors like GitHub being required to enforce distribution prohibitions, and so on.

This removes the tools for any AI from the hands of the public and places them all in the hands of Adobe, Disney, Universal, and so on. And thus, if you want to use AI you must use one of their tools, which may in turn have terms in the TOS saying you cannot use their product to compete with their products. Basically establishing a monopoly.

This happens a lot in regulatory processes, which is why things like the RIAA, the MPAA, Boeing, and so on are so massive and seemingly unbreakable. They aren't enshrined in law, but regulatory processes create a de facto monopoly in a market that becomes difficult for newcomers to enter.

The big guys, being the industry leaders, would be the first in a regulatory hearing to get a crack at writing the rules that the regulatory body would debate. In addition to the expert phase, the regulatory process also includes a public comment period, which allows the public to raise concerns about the expert-submitted recommendations. But as demonstrated during the public comment period on removing the rules regulating ISPs for net neutrality, the FCC decided that the comments were "fake" and only heard a small "selected" percentage of them.

Side note: in a regulatory hearing, every public comment that is accepted must be debated, and the rationale for the conclusion of that debate must be entered into the record. This is why Ajit Pai suspended comments on net neutrality: they didn't want to enter justifications into the record that could later be brought up in a court case.

The barrier is no longer “you need to be an artist”. It’s “you need to have an internet connection”

And yeah, that might be worth locking AI out of the hands of the public forever. But it doesn't stop the argument of "AI taking jobs". It just means that small startups will never be able to create jobs with AI. So if the debate is "AI shouldn't take our jobs, let's regulate it", that will only make things worse in the end (sort of like how AWS has mostly dominated Internet services, and how everyone started noticing that wasn't incredibly ideal around 2019-2021, when Twitter started kicking people off their service and people wanting to build the next Twitter were limited to what Amazon would and would not accept).

So that's the argument. And there are pros and cons to each side. But we have to be pretty careful about which way to go, because once we go in a direction it's pretty difficult to change course, because corporations are incredibly good at adapting. I distinctly remember streaming services being the "breath of fresh air from cable" right up until they weren't. And now, with hard media becoming harder to purchase (it's not impossible, mind you), we've sort of entrenched streaming. Case in point: I love Pokémon Concierge, and it is not available for purchase on DVD or whatever (at least not a non-bootleg version), so if I ever want to watch it again I need Netflix.

And do note, I'm not saying we shouldn't have regulation on AI. What I am saying is that there's a lot to consider with AI regulation, and the public needs some unified ideas to bring to the regulatory body's public comment period to ensure small businesses that want to use AI are still allowed to. Otherwise the expert phase will dominate and AI will be gone from the public's hands for quite some time. We're just now getting around to reversing the removal of net neutrality that started back in 2017, but companies have used the time from 2017 to today to form business alliances (the Disney + Hulu Verizon deal as an example) that'll be hard to compete with for some time.

mo_lave ,

I’m very wary of the measures that could potentially pass if some of the anti-AI-art people get their way. I know how messy and difficult dealing with fair-use material on YouTube can be. There would be more of that on more platforms.

I agree unregulated AI is problematic. At the same time, I’m cynical on what the actual measures would look like.

IHeartBadCode ,

I agree unregulated AI is problematic. At the same time, I'm cynical on what the actual measures would look like.

OMG, Thank you, this is the correct take.

FaceDeer ,
@FaceDeer@fedia.io avatar

And then that growth promptly blew its budget, because it's running on expensive cloud services from Vercel and it has no means of monetization whatsoever to bring money in.

People can do whatever they want, of course. But they have to pay for the resources they consume while doing that, and it seems Cara didn't really consider that aspect of this.

simple ,

Oh no… Are they running it entirely on serverless functions? What a disaster. I’m surprised the website is still up; is the owner not worried about going bankrupt?

QuadratureSurfer ,
@QuadratureSurfer@lemmy.world avatar

Well, now’s a great time to let them know about Pixelfed, although explosive growth like this will be a strain on any website.

FaceDeer ,
@FaceDeer@fedia.io avatar

I get the sense that a federated image hosting/sharing system would be counter to their goals, that being to lock away their art from AI trainers. An AI trainer could just federate with them and they'd be sending their images over on a silver platter.

Of course, any site that's visible to humans is also visible to AIs in training, so it's not really any worse than their current arrangement. But I don't think they want to hear that either.

brbposting ,

Hmm, their About page is all about not hosting AI images until the ethical issues are resolved.

Ah! Gotta hit FAQ: “Cara Glaze”, then the linked University of Chicago Glaze FAQ:

https://sh.itjust.works/pictrs/image/3cb2b7ed-e954-4353-b4c9-f70e882483e1.jpeg

https://sh.itjust.works/pictrs/image/f5a72730-759f-496c-9588-1e4a32accac8.jpeg

Anti-AI cloaking. Neat!

FaceDeer ,
@FaceDeer@fedia.io avatar

Aside from it not really working, though.

Glaze attempts to "poison" AI training by using adversarial noise to trick AIs into perceiving it as something that it's not, so that when a description is generated for the image it'll be incorrect and the AI will be trained wrong. There are a couple of problems with this, though. The adversarial noise is tailored to specific image recognition AIs, so it's not future-proof. It also isn't going to have an impact on the AI unless a large portion of the training images are "poisoned", which isn't the case for typical training runs with billions of images. And it's relatively fragile against post-processing, such as rescaling the image, which is commonly done as an automatic part of preparing data for training. It also adds noticeable artefacts to the image, making it look a bit worse to the human eye as well.
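
To make the fragility point concrete, here's a rough sketch (assuming Python with Pillow, and a hypothetical glazed.png input) of the kind of preprocessing a training pipeline routinely applies; pixel-level perturbations like Glaze's often don't survive it:

```python
from PIL import Image  # pip install Pillow

# Hypothetical glazed input; the file name is just a placeholder.
img = Image.open("glazed.png").convert("RGB")

# Typical dataset-preparation steps: downscale to the training resolution and
# re-encode lossily. Both tend to smooth away the high-frequency adversarial
# noise that cloaking tools rely on.
img = img.resize((512, 512), Image.LANCZOS)
img.save("prepared.jpg", quality=85)
```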

There's a more recent algorithm called Nightshade, but I'm less familiar with its details since it got a lot less attention than Glaze, and IIRC the authors tried keeping some of its details secret so that AI trainers couldn't develop countermeasures. There was a lot of debate over whether it even worked in the first place, since it's not easy to test something like this when there's little information about how it functions and training a model just to see if it breaks is expensive. Given that these algorithms have been available for a while now but image AIs keep getting better, I think that shows that, whatever the details, it's not having the desired effect.

Part of the reason why Cara's probably facing such financial hurdles is that it's computationally expensive to apply these things. They were also automatically running "AI detectors" on images, which are expensive and unreliable. It's an inherently expensive site to run even if they were doing it efficiently.

IMO they would have been much better served just adding "No AI-generated images allowed" to their ToS and relying on their users to police themselves and each other. Though given the witch-hunts I've seen and the increasing quality of AI art itself I don't think that would really work for very long either.

brbposting ,

That’s really interesting. Will have to watch how this turns out, see if any 2025 image models can imitate Cara artists.

pavnilschanda ,
@pavnilschanda@lemmy.world avatar

And now they’re asking for donations. I don’t know how that’ll work out, though

corus_kt ,

I was hyped for a non-shitty ArtStation for all of 15 minutes, damn. This is some ridiculously shitty planning.

thefrankring ,
@thefrankring@lemmy.world avatar

These people should create an instance of Pixelfed, a libre alternative to Instagram.

autonomoususer ,

Don’t focus on specific apps, or you will start all over again from the beginning every time a new piece of anti-libre software, malware, appears.

thefrankring ,
@thefrankring@lemmy.world avatar

People chose Cara because they identify with the art aspect of this social network. They don’t care if it’s anti-libre. They probably don’t even know what that means.

The purpose of a federated platform like Pixelfed is to be a blank slate. You can do anything with it. Any niche. Art, in this case.

The issue here is to bring these people to Pixelfed and make them feel at home within their niche.

autonomoususer , (edited )

They don’t care if it’s anti-libre.

And that’s why they keep getting abused again and again. So this is what we must target. Unless we like wasting all of our time just to start over when the next malware arrives, because they don’t see the difference, don’t see that it’s anti-libre.

thefrankring ,
@thefrankring@lemmy.world avatar

Yes and no; you and I both value software freedom, so we both understand that.

Education is obviously part of the process.

But I think most people don’t really care whether software is libre or not. Libre vs. anti-libre is mostly tech jargon to non-tech people.

They just want to be part of their own communities and be where the party is.

Kolanaki ,
@Kolanaki@yiffit.net avatar

Does libre just mean “free”? The way I have been seeing it used in context, I assumed it was a platform of some kind. This thread has made me not so sure of that.

aniki ,

Libre is free and open

Knock_Knock_Lemmy_In ,

Libre means open, but not necessarily free of charge.

It’s not illegal to charge for a derived product.

Zak ,
@Zak@lemmy.world avatar

Libre means free as in freedom rather than free as in cost. A service that costs money to use, but communicates using open protocols, gives you full control over your data, and allows you to easily migrate to competitors and self-hosted solutions might be described as “libre”.

autonomoususer , (edited )

most people don’t really care whether software is libre or not. Libre vs. anti-libre is mostly tech jargon to non-tech people.

Yes, that’s the problem to solve.

They just want to be part of their own communities and be where the party is.

Which they can’t do when their software keeps abusing them, anti-libre software. So we connect the effect to this root cause.

thefrankring ,
@thefrankring@lemmy.world avatar

Libre vs. anti-libre is one of the problems to solve.

Cara seems to be working for them, for now.

For how long? I don’t know.

Another problem is related to instance creation, management, and promotion.

From my understanding, only tech people can do that; there aren’t many companies providing those services, and it’s not something average users are interested in.

Zak ,
@Zak@lemmy.world avatar

I think it would be great for new social things like this to just speak ActivityPub. They can build up their own user experience and culture while joining a larger network. I don’t have a problem with the software itself being non-free if the protocols are and they commit to supporting account migration.

thefrankring ,
@thefrankring@lemmy.world avatar

Pixelfed already supports image import from Instagram.

Mastodon doesn’t seem to support any import from Twitter/X.

I’m assuming account migration from the main social media platforms is an important feature.

But I don’t think supporting ALL social media is realistic unless they all follow the same norms. Which I really doubt.

Zak ,
@Zak@lemmy.world avatar

ActivityPub supports alsoKnownAs and movedTo so that users can migrate their social graphs to a different server or software. Of course that doesn’t work for migrating from networks that don’t support ActivityPub.
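
For illustration, here's roughly what those two properties look like on the actor documents involved; this is just a sketch with made-up domains and IDs, not copied from any real server:

```python
import json

# The old account announces where it moved; the new account lists the old
# one as an alias. All IDs here are made up for illustration.
old_actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://old.example/users/alice",
    "type": "Person",
    "movedTo": "https://new.example/users/alice",
}

new_actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://new.example/users/alice",
    "type": "Person",
    "alsoKnownAs": ["https://old.example/users/alice"],
}

print(json.dumps(old_actor, indent=2))
print(json.dumps(new_actor, indent=2))
```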

Content import is a separate issue, but I can imagine it being helpful as well.

dan ,
@dan@upvote.au avatar

ActivityPub supports alsoKnownAs and movedTo so that users can migrate their social graphs to a different server or software.

The annoying thing with ActivityPub is that your username/handle is tightly coupled to a particular server, and moving server requires you to change your handle. Everywhere you’ve mentioned/documented your old handle is now out of date.

Bluesky handles this a lot better. If you own a domain, you can use it with any Bluesky server by creating a TXT record for validation. Your username is the domain name - if you own example.com, you can be @example.com on Bluesky, without having to self-host it. If you move server, you don’t have to change your username. Currently there’s just one main Bluesky server but they plan to introduce federation at some point, and their protocol is already mostly designed for it.
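
As I understand it, the check is just a DNS lookup: the domain publishes a TXT record at _atproto.<domain> containing the account's DID. A rough sketch (assuming the dnspython package; example.com is a placeholder):

```python
import dns.exception
import dns.resolver  # pip install dnspython


def resolve_handle(domain):
    """Return the DID advertised for a domain handle, or None if not found."""
    try:
        answers = dns.resolver.resolve(f"_atproto.{domain}", "TXT")
    except dns.exception.DNSException:
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("did="):
            return text[len("did="):]
    return None


# A real handle would return something like "did:plc:...".
print(resolve_handle("example.com"))
```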

Zak ,
@Zak@lemmy.world avatar

And the losing server has to cooperate, which is why I mentioned the commitment to support migrating away.

ATProto/Bluesky has some interesting ideas, and I’m interested to see how that develops as third parties start supporting the protocol. For a new service launching now, I think ActivityPub is the more important protocol to support, but it’s presumably possible to support both.

mryessir ,

This would be a good approach to improve growth of the community.

Does the ActivityPub protocol support copyright for user content? E.g. an artist releases some picture and explicitly attaches a license. Each client should accept that it is obligated to display this license when using the content… Something like this.

far_university1990 ,

I haven't found anything in the protocol about sending a license or requiring agreement before receiving content.

But you could maybe do it with federation: federate only with instances that agree on a license for all user content, make all users read and sign the license, and turn off access without an account.

People can always break a license, so it can never be perfect.

mryessir ,

Would this again segregate the users? Some attribute on a submission which refers to a license would be nice, though.
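
Something like that could look roughly like this: a sketch of a license attribute carried as an extension property on an ActivityStreams Image object. The license field and its context mapping are hypothetical, not something the core spec defines; clients would have to agree to recognise and display it:

```python
import json

# Sketch of a per-submission license attribute on an ActivityStreams Image.
# The "license" term is mapped to schema.org as an extension; all IDs and
# URLs below are made up for illustration.
image_post = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"license": "https://schema.org/license"},
    ],
    "type": "Image",
    "id": "https://art.example/posts/123",
    "attributedTo": "https://art.example/users/alice",
    "name": "Untitled sketch",
    "url": "https://art.example/media/123.png",
    "license": "https://creativecommons.org/licenses/by-nc/4.0/",
}

print(json.dumps(image_post, indent=2))
```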

Imgonnatrythis ,

Neat. I like the concept. From a viewing perspective I do wish it had some filters and better browsing for finding art, but it’s definitely bookmarkable - glad it’s growing.

TimeNaan ,

What’s keeping this from repeating the same scenario?

morgunkorn ,
@morgunkorn@discuss.tchncs.de avatar

Nothing, but by now we’ve gotten used to switching services whenever one gets bad.

TimeNaan , (edited )

Then maybe it’s time to switch to a FLOSS federated alternative, like Pixelfed? That way nobody can implement bad changes like this without the community fixing or forking the code.

morgunkorn ,
@morgunkorn@discuss.tchncs.de avatar

Yup, I’m already there, but it’s hard to get any traction. I’m posting stuff into the void; it’s gonna take a while to get the typical Meta users over there :/

TimeNaan ,

I understand, but then again it goes in a circle - more content ➡️ more users ➡️ more content.

BURN ,

Unfortunately that isn’t really the reality. Apps like Vero have plenty of creators, but no regular users. And since there are no regular users, it never grows beyond a network of creators trying to make it big.

The critical-mass problem is almost impossible to overcome for a new platform. Reddit, Facebook, Instagram, and Twitter all still have exponentially more users than any of their supposed alternatives, and no matter how they treat their users, the vast majority of them have no problem staying.

Chadus_Maximus ,

Doesn’t the data disappear once the host decides to stop providing the service? From this perspective I don’t see how a small team or an individual could keep the data around longer than a large firm could.

Imgonnatrythis ,

I mean, avoiding AI images is baked into their mission statement. I guess they could go full asshole and renege on this, but unlike Meta, which can piss off a lot of people without affecting its bottom line, if Cara reneges on its whole reason for being, a huge chunk of its user base is going to run off. It would likely be suicidal, and only good as a quick cash-grab exit strategy. I mean, I fully believe almost anything tech should sadly be expected to crumble into enshittification on increasingly shorter arcs. If you are looking for long-term quality online services that don’t decay, you are in for lots of disappointment.

autonomoususer ,

To get abused again by yet more anti-libre software, malware. Some people never learn.

autonomoususer , (edited )

Tell them this:

🚩 Anti-libre software, Cara, bans us from removing malicious source code. We don’t have time to waste our lives repeating the same failure.

They might ask:

What is anti-libre? Software we don’t control; it controls us.

And:

How do we know? It fails to include a libre software license file, like the AGPL.

Say this instead:

Instead of ‘open source’, say libre software (‘open source’ was created to subvert libre software).

Instead of ‘closed’, say anti-libre (‘closed’ implies open, see above).

Instead of ‘we are the product’ (paid stuff abuses us too): with anti-libre software, we are not the user, we are used.

More in video here or text here.

QuadratureSurfer ,
@QuadratureSurfer@lemmy.world avatar

What do you mean by this?:

Cara, bans us from removing malicious source code

Is there obviously malicious source code? Is there a policy that specifically says we can’t remove any source code? Is this even open source?

autonomoususer , (edited )

‘Open source’ was created to subvert libre software. The ban alone is a 🚩 red flag.

Warl0k3 ,

What is “libre software”? This is a totally new term to me, and searching for it has turned up nothing.

autonomoususer , (edited )

Literally the first search result is here but even better is this video here.

Warl0k3 ,

You understand that search results are different for different people, right? I’ve been a dev for… an embarrassingly long time, and I’ve never heard “libreware” outside of specifically the LibreOffice suite. Sorry I’m not as in tune with the slang as you are or whatever.

autonomoususer ,

Maybe yours does.

Warl0k3 ,

YES, IT DOES. THAT’S MY ENTIRE POINT.

autonomoususer , (edited )

deleted_by_moderator

    Warl0k3 ,

    Lmao, okay, that’s pathetically bad baiting. Come on.

    demonsword ,
    @demonsword@lemmy.world avatar

    This essay may help clear things up.

    QuadratureSurfer ,
    @QuadratureSurfer@lemmy.world avatar

    What ban?

    autonomoususer ,

    Source? Or did they include an AGPL or other libre software license?

    QuadratureSurfer ,
    @QuadratureSurfer@lemmy.world avatar

    What does copyright law have to do with a ban on removing malicious code?

    autonomoususer ,

    What do you think bans it? Copyright law, unless they include, for example, a libre software license.

    QuadratureSurfer ,
    @QuadratureSurfer@lemmy.world avatar

    You realize that copyright law still applies… whether you add some additional license to your software or not… right?

    autonomoususer ,

    You know its license changes what we are allowed to do with it?

    Zak ,
    @Zak@lemmy.world avatar

    They’re using loaded language to say that without access to the source code and the ability to modify it, Cara could start behaving in a way you don’t like and you wouldn’t be able to do anything about it.

    autonomoususer , (edited )

    Meta uses loaded language to take over our computing.

    mholiv ,

    I really appreciate your super stark pro libre software attitude. I want to support you here. You should know that the approach you are taking is ultra abrasive and would probably cause more harm than help.

    People would just associate libre software with militant weirdos, if all they saw were your posts.

    If you want to make meaningful change, I strongly recommend taking a softer, less abrasive approach.

    We want libre software to be connected with safety, friendliness and personal autonomy, not militarism, chanted phrases, and dogma.

    Even on Lemmy, the ultra-pro-libre-software social network (relative to non-federated networks), your current approach is off-putting. I want you to succeed, and I think a different approach may be better.

    Just my two cents.

    autonomoususer , (edited )

    People try that and we still get news like this, but your feedback is welcome.

    tyler ,

    Every time you call a product “malware” with absolutely no facts to back it up, you make yourself (and the movement) look idiotic. Please just stop.

    autonomoususer , (edited )

    Please, stop making yourself look gullible. You have absolutely no proof it’s safe, but we know this anti-libre software bans us from removing malicious source code.

    tyler ,

    Dude, you are the one making yourself look dumb. And you still make absolutely no sense. “Removing malicious source code”? Removing it from what? Your comments make no sense.

    brbposting ,

    Love your ethos.

    You familiar with the Curse of Knowledge?:

    Using the two words “source code” with a developer is expected.

    With a random artist? Or like 20 or 40 or 75% of artists? Potential dead end.

    Keep up the core mindset for sure buddy. Approaches can always be refined and I see you gave it a shot in your edit!

    autonomoususer , (edited )

    Thanks, they can web search it. Not saying ‘source code’ gives attackers too much space. Feedback is welcome.

    brbposting ,

    You may be interested in running a little experiment. The next few times you see a Lemmy post that is best understood with additional context, you can try posting a relevant Wikipedia link.

    The next few times after that, you can try posting not only a link but also your own summary, a quoted paragraph, and/or a screenshot.

    I would be shocked if you do not have significantly more engagement from simply taking an extra 10 to 15 seconds to screenshot, crop, and embed.

    Now, remember, your point of comparison is against where you were already providing a DIRECT LINK to information. It’s a simple fact (in my eyes) that fewer people click than scroll. Translate this to IRL: you want to preach the good word, right? How high do you want the barrier to be: hoping someone will DuckDuckGo (naw, Google obviously) that term they didn’t understand, or knowing that there’s barely a barrier, thanks to meeting people where they are by pre-translating to normie?

    We can always let the perfect be the enemy of the good, if we care more about minority perfection than real widespread results.

    I should help work on this pitch with you later, will leave a final thought for now:

    https://sh.itjust.works/pictrs/image/3d6331ff-fe35-4a67-bba4-b1e84786cb01.png

    autonomoususer ,

    Excellent comment, bookmarked, thank you!
