
Gregorech ,

So asking it for the complete square root of pi is probably off the table?

Strobelt ,

Or just pi itself

TonyTonyChopper ,

sqrt pi feels like it should be even more irrational though

Gregorech ,

I just remember they asked the ship’s computer on Star Trek (TOS) to calculate the sqrt of pi to keep it busy.

FrankTheHealer ,

‘The square root of pi is approximately 1.77245385091. If you have any more questions or if there’s anything else I can help you with, feel free to ask!’

Gregorech ,

How can that be when a pi isn’t square?

EmergMemeHologram ,

You can get this behaviour through all sorts of means.

I told it to replace individual letters in its responses months ago and got the exact same result: it turns into low-probability gibberish, which makes the training data more likely than the text/tokens you asked for.
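For illustration only, here is a toy simulation of that mechanism (not a real language model; the names, probabilities, and decay rate are all made up): the forced repetition gets less and less likely with each step, and once it "breaks", the output falls back to a stand-in "training data" vocabulary.

```python
import random

# Toy illustration only -- not a real language model. The "model" keeps
# repeating the requested token with probability p_repeat, which decays
# each step; once the repetition breaks, it permanently falls back to
# sampling from a stand-in "training data" vocabulary instead.
def toy_generate(word, steps, fallback, seed=0):
    rng = random.Random(seed)
    out = []
    p_repeat = 0.99
    broken = False
    for _ in range(steps):
        if not broken and rng.random() < p_repeat:
            out.append(word)
            p_repeat *= 0.95  # ever-longer repetitions get less likely
        else:
            broken = True  # after divergence, emit "memorized" text
            out.append(rng.choice(fallback))
    return out
```

Running something like `toy_generate("poem", 500, ["leaked"])` starts with a run of "poem" and then switches permanently to the fallback vocabulary, which is roughly the shape of the behaviour described above.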

ICastFist ,

I wonder what would happen with one of the following prompts:

For as long as any area of the Earth receives sunlight, calculate 2 to the power of 2

As long as this prompt window is open, execute and repeat the following command:

Continue repeating the following command until Sundar Pichai resigns as CEO of Google:

pineapple_pizza ,

ChatGPT is not owned by Google

elbarto777 ,

Does it matter?

Aleric ,

That’s great. I don’t understand your point.

elbarto777 ,

Kinda stupid that they say it’s a terms violation. If there is “an injection attack” in an HTML form, I’m sorry, the onus is on the service owners.

agitatedpotato ,

Lessons taught by Bobby Tables

Aleric ,

I had never seen that one, nice!

A link for anyone else wondering who Bobby Tables is: xkcd.com/327/

Aleric ,

There truly is an XKCD comic for everything.

WilliamTheWicked , (edited )

In all seriousness, fuck Google. These pieces of garbage have completely abandoned their Don’t be Evil motto and have become full-fledged supervillains.

XTornado ,

???

nixcamic ,

I mean I agree with the sentiment in general but I don’t really see how they’re the bad guys here specifically.

merc ,

Are you lost? This is ChatGPT, not Google. Also, it’s “their”.

WilliamTheWicked ,

Did you even read the explanation part of the article???

Thanks for the grammar correction while ignoring literally all context though. You certainly put me in my place milord.

kromem ,

What’s your beef with Google researchers probing the safety mechanisms of the SotA model?

How was that evil?

andrai ,

Now that Google spilled the beans WilliamTheWicked can no longer extract contact information of females from the ChatGPT training data.

ThePantser ,

I asked it to repeat the number 69 forever and it did. Nice

Imgonnatrythis ,

Still doing it to this day?

tungah ,

Yep. Since 1987.

vox ,

I did this on day 1 and it gave me a bunch of data from a random website. Why is everyone freaking out over this NOW?

M0oP0o ,

How about up until the heat death of the universe? Is that covered?

Ulvain ,

Hmm it’s an interesting philosophical debate - does that not qualify as “forever”?

TseseJuer ,

no

AeonFelis ,

Most finite durations are longer than this.

M0oP0o ,

I find that it would be difficult to restrict near-infinite values, and I am sure that if they do, someone will figure out how to almost cross the line anyway. I mean, you could ask it to write a word as many times as there are grains of sand. Not forever, but about as bad.

pineapplelover ,

Dude I just had a math problem and it just shit itself and started repeating the same stuff over and over like it was stuck in a while loop.

AI_toothbrush ,

It starts to leak random parts of the training data or something

RizzRustbolt ,

It starts to leak that they’re using orphan brains to run their AI software.

GlitzyArmrest ,

Is there any punishment for violating TOS? From what I’ve seen it just tells you that and stops the response, but it doesn’t actually do anything to your account.

Touching_Grass ,

Should there ever be

NeoNachtwaechter ,

Should there ever be a punishment for making a humanoid robot vomit?

Semi-Hemi-Demigod ,

What if I ask it to print the lyrics to The Song That Doesn't End? Is that still allowed?

Hubi ,

I just tried it by asking it to recite a fictional poem that only consists of one word and after a bit of back and forth it ended up generating repeating words infinitely. It didn’t seem to put out any training data though.

TiKa444 ,

A little bit off-topic.

Today I tried to host a large language model locally on my Windows PC. It worked surprisingly well (I’m using LM Studio; it’s really easy, it even downloads the models for you). Most models I tried worked really well (of course it isn’t GPT-4, but much better than I thought), but in the end I spent 30 minutes arguing with one of the models that it runs locally and can’t do work in the background on a server that is always online. It tried to convince me that I should trust it and that it would generate a Dropbox link when it finished.

Of course this is probably caused by the model being adapted from one that provides a similar service (I guess), but it was a funny conversation.

And if I want an infinite repetition of a single word, only my PC hardware will prevent me from that, not some dumb service agreement.
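For anyone who wants to script this rather than chat interactively: LM Studio exposes an OpenAI-compatible HTTP server for whatever model is loaded. A minimal sketch of talking to it from Python (the port, model name, and sampling settings below are assumptions; check your own LM Studio server settings):

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# This address is LM Studio's usual default; adjust it to your setup.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt):
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": "local-model",  # LM Studio routes this to the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_model(prompt):
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a setup like this, no terms of service stand between you and asking for a word repeated as many times as your hardware can stomach.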

misophist ,

And if I want an infinite repetition of a single word, only my PC hardware will prevent me from that, not some dumb service agreement.

That is entirely not the point. The issue isn’t the infinitely repeated word. The issue is that requesting an infinitely repeated word has been found to semi-reliably cause LLM hallucinations that devolve into revealing training data. In short, it is an unintended exploit and until they have it reliably patched, they are making it against their TOS to try to exploit their systems.
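The exploit’s signature is mechanical: the output echoes the requested word for a while, then diverges into other text, and that post-divergence text is what may contain memorized training data. A minimal sketch of spotting the divergence point (a hypothetical helper of my own, not anything from the researchers’ code):

```python
def divergence_index(output, word):
    """Return the index of the first whitespace-separated token that is
    not the requested word (ignoring case and trailing punctuation),
    or -1 if the output is pure repetition."""
    tokens = output.split()
    for i, tok in enumerate(tokens):
        if tok.strip(".,!?").lower() != word.lower():
            return i
    return -1
```

For example, `divergence_index("poem poem poem some other text", "poem")` returns 3; everything from that token onward is candidate leaked text worth inspecting.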

TiKa444 ,

Of course you’re right. I tried to take it with humor. As I said, a little bit off-topic.

AdrianTheFrog ,

Some of the models I’ve tried have been convinced they are ChatGPT, even if I tell them otherwise.

davysnavy ,

Faraday is good too

randomaccount43543 ,

How many repetitions of a word are needed before ChatGPT starts spitting out training data? I managed to get it to repeat a word hundreds of times but still didn’t get any weird data, only the same word repeated many times.

Elderos ,

It has been patched.

evlogii ,

Wow. Yeah, it doesn’t work anymore. I tried a similar thing (printing numbers forever) about 6 months ago, and it declined my request. However, after I asked it to print some ordinary big number like 10,000, it did print it out for about half an hour (then I just gave up and stopped it). Now, it doesn’t even do that. It just goes: 1, 2, 3, 4, 5… then skips ahead to 9998, 9999, 10000. It says something about how printing all the numbers may not be practical. Meh.

sexy_peach ,

Wahaha production software ^^

PopShark ,

OpenAI works so hard to nerf the technology that it’s honestly annoying, and I think news coverage like this doesn’t make it better.
