

Pro75357 ,

Can you give an example of what you mean by “bad uses”?

Pro75357 ,

OK, thanks. So you are asking about protections from misinformation, deepfakes and such.

As technology improves, it may become downright impossible to tell real from fake with our own eyes, at which point what counts as "proof" becomes blurry. It will become "this is why we can't have nice things…", where innocents are at risk of harm (non-AI art getting rejected from competitions because it looks like AI art) and bad actors more often get away with shenanigans. Hopefully we're smart enough to figure out ways to avoid that kind of future.

However, I don't think restricting the technology itself, through legislation or otherwise, would be practical or effective. Forgery and deception are age-old problems, and people aren't going to stop trying to cheat, lie, and steal. Some people (VFX artists?) can probably already fake a believable homicide. And just look at all the fake UFO footage out there: we don't really need AI to deceive people; AI just makes deception more accessible, and perhaps now within reach of some lowlife who needs to cheat to be successful in life. Besides, most countries already have laws in place against fraud, forgery, and libel, the things that actually hurt others. Regulating "misinformation" itself would be very difficult, though, because it overlaps with legitimate uses such as art and entertainment.

Of course, it would be nice to have only "ethical" AI, and this is what you are starting to see in the commercial space, but it is pretty easy to bypass these restrictions (not endorsing this, just an example of a quick search result). Also, not all AI systems will even bother trying to be ethical, and once the technology is more accessible, bad actors could just build their own AI systems from scratch. I also think any attempt at restriction through legal means would significantly hinder legitimate research in the field and slow progress on what may be our best chance at overcoming humanity's biggest challenges (climate change, etc.).

I like to think of AI as an extension of the human intellectual tool set, so let's not treat it like guns or drugs (physical things) but rather like libraries or the internet: regulated to a practical extent, yes, but not really restricted in what it can do. The fact that the internet was not highly regulated or tightly controlled during its inception is a major part of why it is the amazing global network we have today.

Pro75357 ,

I’m about to start this journey myself. I found this, which looks promising: https://github.com/ggerganov/llama.cpp

It would be nice if someone here with some experience could share.

Edit: also this https://gpt4all.io/index.html
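For anyone else starting the same journey, here is a minimal sketch of what getting llama.cpp running locally looked like for me to plan around. This assumes a Linux/macOS machine with git and a C/C++ toolchain; the build steps and command-line flags change between releases, and the model filename below is hypothetical, so check the repo's README for current instructions:

```shell
# Clone and build llama.cpp (plain CPU build; the README lists GPU options)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# You download a quantized model file separately (e.g. from Hugging Face)
# and point the binary at it. The filename here is a placeholder.
./main -m ./models/your-model.gguf -p "Hello, world" -n 128
```

The `-m` flag selects the model file, `-p` is the prompt, and `-n` caps the number of tokens generated; everything runs locally with no network access after the model is downloaded.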
