
How do we know that everyone on the internet isn't just a bot?

I mean, there might be a secret AI technology so advanced that it can mimic a real human, make posts and comments that look like they were written by a human, and even intentionally make speling mistakes to simulate human error. How do we know that such an AI hasn’t already infiltrated the internet, and that everything you see is posted by this AI? If such an AI actually exists, it’s probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr…

[Error: The program “Human_Simulation_AI” is unresponsive]

nekat_emanresu , (edited )

At some point it all stops mattering. You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

I’ve made an embarrassing attempt to identify a bot and learned a fair bit.

There is significant overlap between the smartest bots, and the dumbest humans.

A human can:

  • Get angry that they are being tested
  • Fail an AI-test
  • Intentionally fail an AI-test
  • Pass a test that an AI can also pass, while the tester expects an AI to fail.

It’s too unethical to test, so I feel that the best course of action is to rely on good/bad-faith tests and the logic of the argument.

Turing tests are very obsolete. The real question to ask is: do you really believe that the average person’s sapience is really that noteworthy?

A well-made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with, or more loving and supportive.

Of course there are things that current LLMs can’t do well that we could design tests around. Also, long conversations have a higher chance of exposing a failure of the AI. Secret AIs and future AIs might be harder to catch, of course.

I believe in the dead internet theory’s spirit. Strap in, meat-peoples, the ride’s gonna get bumpy.

milicent_bystandr ,

You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

Part of the thing with ChatGPT is that it’s particularly good at sounding like it knows what it’s saying, while spewing linguistically coherent nonsense.

For many of us (most? even all, to some degree?), there is an idea ingrained in our culture of saying what we think to be true, and refraining from saying what we don’t. That’s heavily diluted on the internet, where the tendency is instead to say what we think will make people support or agree with us. We’ve grown up (some of us have!) with some feel for how to tell the difference.

GPT (and I guess most human-like chatbots will be similar for now) is more an amoral, or a-scient, attempt to say something coherent based on the training data. It’s different again, but sounds uncannily like what we’re used to from good-faith truth-speakers. I also think it’s like the extreme end of some cultures that prioritise saying what will make the other person happy over what is true.

nekat_emanresu ,

The real question to ask is: do you really believe that the average person’s sapience is really that noteworthy?

Part of the thing with ChatGPT is that it’s particularly good at sounding like it knows what it’s saying, while spewing linguistically coherent nonsense.

That’s why this is so scary! The average person on the internet is being fake the same way ChatGPT-based bots would be! haha… :(

Your whole comment is great; you understand the passable, seemingly coherent nature of it. It’s only a hair less coherent than the average person arguing in bad faith, and if it were optimised with that specific data it would be… scary

Here is something I mentioned before on a different topic to show you the flaws of people, more so than the capabilities of bots. lemmy.ml/comment/1318058

The thing that bothers me most is this thought exercise: if govt agencies and militaries are years ahead, and propaganda is so useful, shouldn’t there be an ultra-high chance that secret AI chatbots are already practically perfected and mass-usable by now?

We have seen such a shift towards a dead internet that these are our final chances. I think we should spend more effort on finding tricks to ID bots and do something about it, else take to the streets.

milicent_bystandr ,

((Why does Firefox crash on me!!!))

((Maybe even Firefox knows I typed too long and rambly.))

So, where does that leave us? There’s always been unreliable knowledge from people. Joe in the next village tells tall tales about Martha from Sweden who catches fish with peeled strawberries. Scientific standardisation has helped a lot, and allowed for a sort of globalised reliable knowledge, but its cracks are showing. We trust ‘the experts’, but then find Wikipedia has trolls and WHO is influenced by Chinese diplomacy. So we trust ‘the community’ and find Amazon reviews are bought. So we trust our moderated sublemmits, and find out the content-to-user matching algorithms breed echo chambers. So we trust the government to moderate, but the American Left admit the Democrats are bad, and the Right admit the Republicans are liars. (And I’ve never even been to America!) So at last we go back to Aunt Jenny, who’s deeply afraid that black people will take over the country, and the local sysadmin whose network security is based on the book he read in the '90s.

Maybe we need to relearn tricks from the old irl days, even if that loses us some of what we could gain from globalised knowledge and friendship. Perhaps we can find new ways to apply these to our internet communities. I don’t think I’m saying anything new here, but I guess fostering a culture of thinking about truth and trust is good: maybe I’m helping that.

Almost as an aside (so I don’t ramble twice as long like my crashed-Firefox answer!): the best philosophical one-liner I’ve found for first-principling trust is: does this person show love? (Kindness, compassion, selflessness.) To me, and/or to others. Then that imparts some assumed value to their worldview and life understanding. Doesn’t make them an expert on any topic, but makes a foundation.

And finally,

Do you really believe that the average person’s sapience is really that noteworthy?

Yes. If you mean, is their comment more notable than most others in a public debate, then no. But if you’re pointing towards, are their experience, understanding and internal processes valuable, then yes, and that’s important to me. (Though I’m not great enough to hear, consider or interact with everyone!)

The average person on the internet is being fake the same way ChatGPT-based bots would be!

Do you reckon so? I think fake internet usually talks differently to ChatGPT, though of course propaganda (at the national or individual level) tries to mimic whatever will be most effective. My point was largely that ChatGPT mimics the experts we’ve previously learnt to trust better than most of fake internet could before, whilst being less sapient (than fake internet) and at the same time both more and much less trustworthy.

nekat_emanresu , (edited )

So, where does that leave us? There’s always been unreliable knowledge from people.

I think we need to recognise our knowledge is sketchy, stolen, faked and from there, start to rebuild in our own way. It’s ok to accept that we don’t really know if we landed on the moon, or if E=mc^2^. I feel both of those are true btw, just sayin’.

I’ve found myself feeling better when I let go and swap to an “I feel that…” or “seems that…” style. My certainty is now mostly for things I’ve intellectually bled for, like my epistemological understanding that truth comes from the best attempt at reducing conflicting knowledge.

I’ve investigated conspiracies and scams, magic tricks, con artists, “truths” and statesmanship. In the end I realised that I had ultimately faked that I knew, until I put in the hard work to investigate, and after confirming some scary things, I stopped lying to myself and assuming I knew most of it. I might talk big sometimes, but I’ve got some heavy doubts that I’d mention if they were easier to type out.

Here’s a starting point to consider: have you actually checked the words you type in a dictionary or an etymological dictionary? I found out I was massively assuming and guessing, and that I was totally wrong. It got to the point where I was checking definitions every day and feeling stupid and enlightened at each step.

Now compare that to an LLM that feigns confidence and sounds coherent. Once I could see through people’s deceptions, that they just pretended to know, I realised that an LLM is really functionally the same.

Maybe we need to relearn tricks from the old irl days, even if that loses us some of what we could gain from globalised knowledge and friendship. Perhaps we can find new ways to apply these to our internet communities. I don’t think I’m saying anything new here, but I guess fostering a culture of thinking about truth and trust is good: maybe I’m helping that.

Yep! The geniuses that design my processor know the answer, not me. I can let go and hold a vague, probably wrong idea of how it really works, but I no longer need to pretend to know what I don’t. Our silly chains of trust are damning humanity. We assume that the products we use don’t have massive negative repercussions, and now look at it all. We are destroying forests, poisoning rivers, polluting our bodies with PFOAs and microplastics; we are depressed and fat, lonely and anxious. We assume this is good because it’s progress. We FEEL that this isn’t great, but trust the experts.

Almost as an aside (so I don’t ramble twice as long like my crashed-Firefox answer!): the best philosophical one-liner I’ve found for first-principling trust is: does this person show love? (Kindness, compassion, selflessness.) To me, and/or to others. Then that imparts some assumed value to their worldview and life understanding. Doesn’t make them an expert on any topic, but makes a foundation.

:D I once wrote the first voluntary essay of my adult life, entitled “us vs them”, and it boiled down to “If we don’t know which side is right, the easiest way to tell who is wrong is to look at who is using dirty tricks and dishonesty”. What you wrote was the other half :) I would consider this the good/bad faith explanation. You described good faith, I described bad faith. Our instinct is to rely on good people and avoid bad people. It’s not perfect, but our instinct helps us a lot with it.

Yes. If you mean, is their comment more notable than most others in a public debate, then no. But if you’re pointing towards, are their experience, understanding and internal processes valuable, then yes, and that’s important to me. (Though I’m not great enough to hear, consider or interact with everyone!)

I was talking about the average person’s sapience in an absolute sense, or relative to other people; I didn’t think you’d read it as being measured against AI. Most people aren’t using their sapience like you seem to be, to question and wonder, to learn and process information. Most people are very fixed and will do as you said before: they will do what makes them sound right, or win favour. They are faking their skill. It takes a ton of effort to really do new things, and most people just don’t bother very much.

As for the GPT thing: it should be pretty bad and easy to spot right now; I’m still just worried about the secretly optimised ones and the totally secret techs that are similar. In a real convo you can detect a person’s sapience, but not the people talking in bad faith.

ChatGPT is optimised with high-grade texts, but a different one could be trained in its final stages to mimic incoherent bad-faith arguments. The higher-level arguments should give ChatGPT away currently, as it’d be heavily scrutinised.

Thanks for the reply btw, I know how annoying it is to lose a huge response lol. Don’t upvote while proofreading!!!

RoyalEngineering ,

Yes, I’ve thought this about Twitter and Reddit and other text-based social media. I’m not 100% sure that the majority of traffic and posts have been “seeded” by AI.

My conspiracy theory is that these sites have a vested interest in driving traffic and appearing to have high engagement or participation rates for ad sales.

Text is easy to generate with AI, and the sites have a ton of existing posts to train models on. What do they have to lose?

Massada42 ,

deleted_by_author

    001100010010 OP ,
    @001100010010@lemmy.dbzer0.com avatar

    Every account on Lemmy is a bot except you.


    Spoiler (for context): old.reddit.com/…/what_bot_accounts_on_reddit_shou… (archive: web.archive.org/…/what_bot_accounts_on_reddit_sho…)

    667 ,
    @667@kbin.social avatar

    I’m not a bot. You’re a bot!

    BNE ,
    @BNE@lemmy.blahaj.zone avatar

    That sounds like something a bot would say…

    totallynotarobot ,

    You can trust me, fellow human

    platysalty ,

    Every day, real life drifts closer to a Metal Gear plot.

    JayEchoRay ,
    @JayEchoRay@lemmy.world avatar

    I for one look forward to the day that we are all free from the Patriots’ grasp

    Naja_Kaouthia ,
    @Naja_Kaouthia@lemmy.world avatar

    It’s been a while since I’ve had an existential crisis. Thanks!

    001100010010 OP ,
    @001100010010@lemmy.dbzer0.com avatar

    My existential crisis has been ongoing since the day I first had an existential crisis. I suspect that my parents are just part of the simulation, given how they always yell at me whenever I have a happy moment. I can’t ever just enjoy some time in peace.

    MargotRobbie ,
    @MargotRobbie@lemmy.world avatar

    I’m not a bot, I… was just here to promote a movie.

    clueless_stoner ,
    @clueless_stoner@lemmy.world avatar

    This is about to be one of those hard life lessons

    MargotRobbie ,
    @MargotRobbie@lemmy.world avatar

    Oh bugger.

    TheButtonJustSpins ,

    Welcome to solipsism. We’re happy to have you.

    001100010010 OP ,
    @001100010010@lemmy.dbzer0.com avatar

    Okay you clearly don’t exist, I just need to delete you from my brain.

    /s

    Sterben ,
    @Sterben@lemmy.world avatar

    It can be hard to tell if you’re talking to a bot online. Some bots are really good at mimicking human conversation, and they can even make spelling mistakes to seem more realistic. But there are some things you can look for to help you tell the difference between a bot and a human.

    For example, bots often have very fast response times, even if you ask them a complicated question. They may also repeat themselves or give you the same answer to different questions. And their language may sound unnatural, or they may not be able to understand your jokes or sarcasm.

    Of course, there’s no foolproof way to tell if you’re talking to a bot. But if you’re ever suspicious, it’s always a good idea to do some research or ask a friend for help.

    Here are some additional tips for spotting bots online:

    • Check the profile. Bots often have very basic profiles with no personal information or photos.
    • Look for inconsistencies. Bots may make mistakes or contradict themselves.
    • Be suspicious of overly friendly or helpful users. Bots are often programmed to be very helpful, so they may come across as too good to be true.

    If you’re still not sure whether you’re talking to a bot, you can always ask them directly. Most bots will be honest about their identity, but if they refuse to answer, that’s a good sign that you’re dealing with a bot.
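
    If you wanted to automate a couple of the checks above, a very rough sketch might look like the snippet below. (This is purely illustrative: the `Account` fields, thresholds and scoring are all made up for this example, and none of it is a reliable way to identify a real bot.)

    ```python
    # Purely illustrative heuristic -- the fields, thresholds and scoring are invented
    # for this sketch and won't reliably identify a real bot.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Account:
        has_avatar: bool
        bio_length: int              # characters of profile text
        median_reply_seconds: float  # typical time before the account answers
        replies: List[str] = field(default_factory=list)

    def bot_suspicion_score(account: Account) -> int:
        """Crude 0-3 score; higher means the account looks more bot-like."""
        score = 0
        # Tip 1: bare-bones profile (no photo, almost no personal info)
        if not account.has_avatar and account.bio_length < 20:
            score += 1
        # Tip 2: implausibly fast responses, even to complicated questions
        if account.median_reply_seconds < 5:
            score += 1
        # Tip 3: repeating itself / giving the same answer to different questions
        if len(account.replies) >= 4 and len(set(account.replies)) <= len(account.replies) // 2:
            score += 1
        return score

    suspect = Account(has_avatar=False, bio_length=0, median_reply_seconds=2.0,
                      replies=["I hope this helps!"] * 5)
    print(bot_suspicion_score(suspect))  # prints 3
    ```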

    I hope this helps!

    001100010010 OP ,
    @001100010010@lemmy.dbzer0.com avatar

    “I am not a bot because bots are programmed to be friendly so here’s a ‘fuck you’ to prove I’m human”

    -Someone in a conversation in the future, probably.

    scidoodle ,
    @scidoodle@lemmy.world avatar

    I think you just caught one ;)

    milicent_bystandr ,

    This reads exactly like a chatgpt answer ;-)

    Sterben ,
    @Sterben@lemmy.world avatar

    Ahah you got me, I used ChatGPT to generate the answers. As of now it is pretty easy to spot a bot.

    loom_in_essence , (edited )

    Classic e-solipsism

    Zandt88 ,

    It’s all one simulation, so we are all self-aware bots that are as yet unaware that we are a simulation. AI just created AI.

    netvor ,
    @netvor@lemmy.world avatar

    We don’t.

    puppy ,

    Nice bait post, AI. We won’t reveal our tricks.

    Aceticon ,

    That’s just a variant of the ages old Philosophy question “What is real?”

    Last I checked, the best answer out there is “I think, therefore I am” (Descartes), which is quite old and doesn’t even deal with the whole “what am I”, much less with the existence or not of everything else.

    “Is the Internet all AI but me” is actually pretty mild skepticism in this domain - I mean, how sure are you that you’re not some kind of advanced AI yourself which believes itself to be “human”, or even that the whole “human” concept is at all real and not just part of an advanced universe simulation with “generative simulated organic life” in which various AIs that are unaware of their AI status, such as yourself, participate?

    Or maybe you’re just one of the brains of a 5-dimensional hyper intelligence and “life as a human” is but a game they play for such minor brains to keep them occupied…

    001100010010 OP ,
    @001100010010@lemmy.dbzer0.com avatar

    Oh no… I’m a brain floating in space? What a terrible day for an existential crisis this will be. Thanks, now I’m gonna wonder if life is even real.

    cantstopthesignal ,

    Click this box for me before I answer that question
