
amio ,

It's not even a good idea to let quite a lot of adults use ChatGPT. People don't know how it works, don't treat the answers with anything close to appropriate skepticism, and often ask about things they don't have the knowledge/skills to verify. And anything it tells you, you likely will need to verify.

It's quite unlikely to affect their personality, but it might make them believe a bunch of weird shit that some unknowable, undebuggable computer program hallucinated up. If you've done an uncommonly great job with their critical thinking skills, great. If not, better get started. That is not specific to "AI" though.

NoiseColor ,

People don’t know how TV works and we are hardly gonna tell people not to use it.

As long as people are aware that some responses might be made up it should be fine for anyone to use it.

AlwaysNowNeverNotMe ,

The context of the word "let" is interesting here.

I would recommend a collaborative approach; it's not as if they can't use it just because you tell them no. They don't need a credit card or a driver's license or even a computer.

Dirk ,

To use as a tool? Yes.
To use as a friend? No.

A person who uses a tool for a long time will become better at using said tool.

PeepinGoodArgs ,

Have you asked ChatGPT? Jk lol

Honestly, whatever they use ChatGPT for is probably fine. If you feel like they’re going to cheat on their homework or something, you can just ask them to do a small sample in front of you. Plus, it’s not like ChatGPT is going away, no matter how much the NYT and Disney complain. Best bet is for them to get familiar with the technology now.

Also, there’s literally no way to know the long-term effects of AI. I strongly suspect that if people use it as a crutch, it will create intellectually and creatively stunted people. But it’s not like we don’t have that now…

LWD , (edited )

Disney’s fighting AI by using the 4D chess method of figuring out how to use it in their movies

arstechnica.com/…/disney-ai-task-force-aims-to-cu…

user224 ,

It’s just an AI chatbot; I don’t see how it would be dangerous.

And I am also pretty sure a 16 year old knows to expect inaccurate results from it, unless they’ve been living restricted from the outside world until now.

The only negative thing I see from it so far is kids using it to create essays, but it’s not like there wasn’t a countless number of them available on the internet before. It was just easier to detect as you could search up the text and see if you can find it online.

Anyway, for just playing around it gets boring after 15 minutes.
Why don’t you try?

LWD ,

Something that appears more human is more likely to get people to share their private data. And that data is then sold, obviously without consent, and used however the buyers see fit.

Instead of being scared to share information with it, you will volunteer your data…

– Vladimir Prelovac, CEO of Kagi AI and Search

Remember Replika, the AI chatbot that sexually harassed minors and SA victims, and (allegedly) repeated the contents of other people’s messages verbatim?

It might not be as mind-rotting as TikTok but it’s not good.

bilboswaggings ,

Why would you want that?

AI does not know things; its answers depend on the wording of the question. I guess it could be used if limited (teaching how to use it responsibly and showing how it makes mistakes even in very simple situations).

Much like a calculator, both are more effective if you know what is happening, so you can catch the mistakes and fix them.

NoiseColor ,

AIs know things. They are a collection of knowledge. Not everything they respond with is made up.

bilboswaggings ,

If it doesn’t understand what it’s saying can you really say it knows it? It has access to a lot of training data so it can get many things correct, but it’s effectively just generating the most likely answer from the training data
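The "most likely answer from the training data" idea can be illustrated with a toy sketch (my own illustrative example, not how any real model is implemented): a bigram model that, for each word, always emits the word that most often followed it in its training text. Real LLMs use neural networks over subword tokens, but the core move of predicting the likeliest continuation is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus, split into words.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))   # "cat" (it follows "the" twice, others once)
print(most_likely_next("dog"))   # None (never seen in training data)
```

Note that the model produces a fluent-looking continuation without "understanding" anything, and it has no answer at all for inputs outside its training data, which is the point being made above.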

NoiseColor ,

Well obviously it doesn’t “know” know, it’s not alive.

We are all generating the most likely answer from the training data. But going back to the original question : what do you fear chatgpt would say that would be detrimental to a 16 year old?
