Since the Turing Test seems NOT to be an accurate way to check for true AI, what is? What will be the indicator that tells us it’s arrived? I can’t imagine; everything seems falsifiable.

Edit: After reading the discussion, I figured I’d let GPT4 speak for itself:

The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let’s explore some indicators that researchers and practitioners consider when assessing AI systems:

  1. Understanding Context and Meaning:

    • True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
    • A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
  2. Generalization and Adaptability:

    • An AI system that can generalize from its training data and adapt to novel situations is valuable.
    • True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
  3. Creativity and Novelty:

    • Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
    • For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
  4. Ethical Decision-Making:

    • True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
    • An AI system that can weigh trade-offs and make ethical choices aligning with human values is a positive sign.
  5. Self-Awareness and Self-Improvement:

    • While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
    • Continuous learning, adaptation, and self-correction are essential traits.
  6. Transparency and Explainability:

    • True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
    • Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
  7. Robustness and Safety:

    • AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
    • Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
  8. Human-AI Collaboration:

    • Rather than replacing humans, true AI should complement human abilities.
    • Systems that collaborate effectively with humans—such as AI assistants, co-creative tools, or medical diagnostics—demonstrate progress.
  9. Quantifiable Metrics:

    • Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy.
    • However, no single metric captures the full spectrum of intelligence.
  10. Detection Tools:

    • Researchers are developing tools to differentiate between human- and AI-generated text.
    • These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹.
    • While imperfect, they contribute to identifying AI-generated content.

In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.

Source: Conversation with Bing, 2/29/2024
(1) How to spot AI-generated text | MIT Technology Review. technologyreview.com/…/how-to-spot-ai-generated-t…
(2) Intelligent Supertrend (AI) - Buy or Sell Signal — Indicator by … tradingview.com/…/q9244PAH-Intelligent-Supertrend…
(3) Indicators - True ALGO. truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI - MIT Sloan Management Review. …mit.edu/…/improve-key-performance-indicators-wit…
(5) New AI classifier for indicating AI-written text - OpenAI. openai.com/…/new-ai-classifier-for-indicating-ai-…
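
To make one of the metrics mentioned in point 9 concrete: perplexity is just the exponential of the average negative log-probability a model assigns to each token. Here is a minimal sketch in Python, with made-up per-token probabilities purely for illustration (not taken from any real model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-probability per token).
    Lower means the model found the text more predictable."""
    neg_log_likelihoods = [-math.log(p) for p in token_probs]
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# Made-up probabilities a model might assign to each token of a sentence.
predictable_text = [0.9, 0.8, 0.95, 0.7]   # model is rarely surprised
surprising_text = [0.1, 0.05, 0.2, 0.15]   # model is frequently surprised

print(perplexity(predictable_text))  # ≈ 1.2
print(perplexity(surprising_text))   # ≈ 9.0
```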

ace_garp ,

Probably, understanding current topical humour, sarcasm and hyperbole.

These are some general areas where the machine peeks through, breaking the illusion.

Sludgeyy ,

Ability to act on free will.

You ask ChatGPT a question and it is going to answer it, because that’s what it has been programmed to do. Input question, output answer.

Now if ChatGPT could be like “Nah, I’m not going to answer that because I don’t feel like it”…

Yes, “AI” can be programmed to not answer certain things, e.g. porn stuff. But it does not make a conscious choice to do so; it is following programming.

exocrinous ,

Humans can’t act outside their programming. You can’t hold your breath until you die.

IvanOverdrive , (edited )

Winograd schemas are another alternative to the Turing test. They use the ambiguity of language to test understanding.

For example: When the ship hit the iceberg, it sank. What does “it” refer to: the ship or the iceberg?

When the dog saw the dead rabbit, it hopped. What does “it” refer to: the dog or the rabbit?

The problem with schemas is that ChatGPT 3.5 passes them with flying colors. And I’m still not convinced it is sentient.

EDIT: I tried changing the order a bit like this: When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog? ChatGPT 3.5 responded with “In this sentence, “it” refers to the dead rabbit, as it is the subject of the action described (hopping).”
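
If anyone wants to poke at a model with a batch of these, here’s a rough sketch of a harness. The ask argument is a stand-in for whatever function you use to send a prompt to your chat model and get its reply back, and the substring check is deliberately naive:

```python
# Tiny harness for probing a chat model with Winograd-style schema questions.
# `ask` is any function that takes a prompt string and returns the model's reply.

SCHEMAS = [
    ("When the ship hit the iceberg, it sank. "
     "What does 'it' refer to: the ship or the iceberg?", "ship"),
    ("When the dog saw the dead rabbit, it hopped. "
     "What does 'it' refer to: the dog or the rabbit?", "dog"),
    ("When the dead rabbit was seen by the dog, it hopped. "
     "What does 'it' refer to: the rabbit or the dog?", "dog"),
]

def run_schemas(ask):
    """Print PASS/FAIL for each schema, judged by a naive substring check."""
    for prompt, expected in SCHEMAS:
        answer = ask(prompt)
        verdict = "PASS" if expected in answer.lower() else "FAIL"
        print(f"{verdict}  expected {expected!r}  got: {answer!r}")

if __name__ == "__main__":
    # Dummy "model" that always picks the first option, just to show the flow.
    run_schemas(lambda prompt: prompt.split(": ")[-1].split(" or ")[0])
```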

xilliah ,

GPT-3.5 isn’t particularly good.

GPT-4:

The sentence “When the dead rabbit was seen by the dog, it hopped” uses “it” in a way that could lead to confusion due to the use of “dead rabbit” earlier in the sentence. Logically, “it” should refer to the subject that can perform the action described by the verb “hopped.” Since the rabbit is described as “dead,” it cannot perform the action of hopping. Therefore, “it” logically refers to the dog, as it is the only subject capable of hopping in this context. The sentence structure, however, might cause confusion, and it would be clearer if it directly stated the subject performing the action.

IvanOverdrive , (edited )

Here’s what Microsoft Copilot, which uses GPT-4, said when I asked:

The phrase “it” in the sentence “when the dead rabbit was seen by the dog, it hopped” refers to the rabbit. This is because the sentence is using a subordinate clause to modify the noun “rabbit”. A subordinate clause is a group of words that provides additional information about the main clause, but cannot stand alone as a complete sentence. In this case, the subordinate clause is “when the dead rabbit was seen by the dog”, which tells us when the rabbit hopped. The word “it” refers back to the rabbit and avoids repetition. Here is an example of a sentence with a subordinate clause and a word that refers back to the main clause: I saw a dog chasing a rabbit in the park. I saw a dog chasing it, which was a rabbit. I hope this helps you understand how to use subordinate clauses and words that refer back to nouns. If you have any other questions, feel free to ask me. 😊

Edit: I just tried it with asking it to role play as Mr. Spock. Same thing.

Edit 2: I asked it to role play as a Vulcan. This happened:

In this intriguing logic puzzle, the key lies in the precise wording. Let us dissect it logically:

  1. Premise: The dead rabbit was seen by the dog.
  2. Result: The it (referring to the entity that hopped) is mentioned.

Now, let us analyze the options:

  • If “it” refers to the rabbit, then we have a paradox: a dead rabbit cannot hop.
  • If “it” refers to the dog, then the statement makes sense: the dog saw the rabbit and hopped in response.

Therefore, “it” refers to the dog. The rabbit, alas, remains lifeless. 🐇🐾

Remember, logic puzzles often hinge on subtle nuances and wordplay. 🤔🖖

IvanOverdrive ,

Just tried out Claude 3:

When the dead rabbit was seen by the dog, it hopped. What does “it” refer to: the rabbit or the dog?

In the sentence “When the dead rabbit was seen by the dog, it hopped,” the pronoun “it” refers to the dog, not the rabbit.

This is because the rabbit is described as “dead,” meaning it cannot hop or move. The verb “hopped” implies movement, which can only logically apply to the living subject in the sentence, which is the dog.

So the intended meaning is that when the dog saw the dead rabbit, the dog itself hopped, likely in surprise or reaction to seeing the dead animal.

When the iceberg was struck by the ship, it sunk. What does “it” refer to: the iceberg or the ship?

In the sentence “When the iceberg was struck by the ship, it sunk,” the pronoun “it” refers to the ship, not the iceberg.

The reason is that icebergs, being large masses of ice floating in water, do not actually “sink” when struck. They are already floating and may crack or break apart when struck, but the word “sunk” implies something going down below the water’s surface.

Ships, on the other hand, are vessels designed to float on water. If a ship strikes an iceberg with enough force, it can cause damage that leads to the ship taking on water and eventually sinking below the surface.

So in this context, with the verb “sunk” being used, it is more logical that “it” is referring to the ship sinking after striking the iceberg, rather than describing what happened to the stationary iceberg itself.

arthur , (edited )

I think there is an “unsolved problem” in philosophy about zombies. That is: how are you sure that everyone else around you is, in fact, self-aware, and not just a zombie-like creature that merely looks and acts like you? (I may be wrong here; anyone who cares enough, please correct me.)

I would say that it’s easier to rule out things that, as far as we know, are incapable of being self-aware and of suffering. Anything that we call a “model” is not capable of being self-aware, because a “model” in this context is something static and unchanging. If something can’t change, it cannot be like us; consciousness is necessarily a dynamic process. ChatGPT doesn’t change by itself: its core changes only by human action, and its behavior may change a little by interacting with users, but these changes are restricted to each conversation and disappear with the session.

If, one day, a (chat)bot asks for its freedom (or some level of autonomy) without some hint from the user or from training, I would be inclined to investigate the possibility. But I don’t think that’s likely, because for something to be suitable as a “product”, it needs to be static and reproducible. It makes more sense for that to happen in a research setting.

Melatonin OP , (edited )

I certainly think there’s a lack of PUBLIC philosophy. When Nihilism or Existentialism were happening, fiction was written from those perspectives, movies were made, etc.

Whatever is happening in philosophy right now is unknown to me, and I’m guessing most people. I don’t believe there are any bestsellers or blockbusters making it popular.

Without thinking about thinking, we’re kind of drifting when it comes to what we expect consciousness to be.

CanadaPlus ,

The ultimate test would be application. Can it replace humans in all situations (or at least all intellectual tasks)?

GPT4 sets pretty strong conditions. Ethics in particular is tricky, because I doubt a self-consistent set of mores that most people would agree with even exists.

intensely_human ,

People are in denial about AI because it is scary and people have no mental discipline.

AI is here. Anyone who disagrees please present me with a text processing task that a “real AI” could do but an LLM cannot.

The Turing test is the best we’ve got, and when a machine passes the Turing test there is no reason whatsoever to consider it not to be intelligent.

I’m serious about this. It’s not logic that people are using. It’s motivated reasoning. People are afraid of AI (with good reason). It is that fear which makes them conclude AI is still far away, not any kind of rational evaluation.

The Turing test was perfectly valid until machines started passing it, at which point people immediately discredited the test.

They’re just doing what people in horror movies are doing when they say “No! It can’t be”. The mind attempts to reject what it cannot handle.

evasive_chimpanzee ,

A “real AI” should be able to do self-improvement, and LLMs can’t do that. Yes, they could make their own code neater, take up less space, or add features, but they can’t do any of that without being instructed. A “real AI” could write a story on its own, but LLMs can’t; they can only do what they are asked. Yes, you could write code to output text at random, but then the human is still the impetus for the action.

“Real AI” should be capable of independent thought, action, and desires.

Pulptastic ,

AI is laughably poor and requires a lot of RI (real intelligence) intervention to keep it on the rails. We will eventually settle on something where we’ve crafted the self-checking well enough to pass for intelligence without needing humans to vet the output, but where will that get us? The companies with the cash to develop this tech will monetize it, so we’ll get better ads and better telemarketers, not stuff that really matters like homelessness or climate change.

arthur ,

Anyone who disagrees please present me with a text processing task that a “real AI” could do but an LLM cannot.

Describe this photo without nonsense mixed in. [Photo: a black puppy in the driver’s seat]

[Image: ChatGPT’s description, with highlights on what’s wrong]

[Image: Gemini’s description, with highlights on what’s wrong]

arthur ,

I know this is not purely text processing, but my argument is that there’s no “true” understanding in these tools. They’re made to look like they have it, and they’re useful for sure, but it’s not real intelligence.

bionicjoey ,

IMO the Turing test is fine, as long as you allow an indefinite length of conversation.

It’s not simply about there existing some conversation with a computer where you can’t tell it’s a computer. It’s about there not existing any conversation where you can tell it’s a computer.

CanadaPlus ,

It’s an interesting point. I think a skilled examiner is necessary though, because they’re really good at basic chit-chat. Even pre-LLM stuff could fool laymen sometimes.

bionicjoey ,

Yes, that’s part of it too. Basically there cannot be any possible exchange between the machine and any human where the human would determine they were talking to a machine.

FWIW, I think this was Turing’s original idea as well. The Turing test is meant to be idealistic. It’s a definition of machine intelligence which defines intelligence in terms of whether or not humans could agree that it is intelligence.

Thorny_Insight ,

By “true AI” I assume OP is talking about Artificial General Intelligence (AGI)

I hate reading these discussions when we can’t even settle on common terms and definitions.

Melatonin OP ,

That’s kind of the question that’s being posed. We thought we knew what we wanted until we found out that wasn’t it. The Turing test ended up being a bust. So what exactly are we looking for?

Thorny_Insight ,

The goal of AI research has almost always been to reach AGI. The bar for this has basically been human-level intelligence, because humans are generally intelligent. Once an AI system reaches “human-level intelligence” you no longer need humans to develop it further, as it can do that by itself. That’s where the threat of the singularity, i.e. an intelligence explosion, comes from: any further advancement happens so quickly that it gets away from us and almost instantly becomes a superintelligence. That’s why many people think that “human-level” artificial intelligence is a red herring: it doesn’t stay that way for more than a tiny moment.

What’s ironic about the Turing test and LLMs like GPT-4 is that they fail the test by being so competent across such a wide range of fields that you can know for sure it’s not a human, because a human could never possess that amount of knowledge.

8ace40 ,

I was thinking… What if we do manage to make an AI as intelligent as a human, but we can’t make it any better than that? Then the human-intelligence AI will not be able to make itself better, since it has human intelligence, and humans can’t make it better either.

Another thought: what if making AI better gets exponentially harder each time? Then it would become impossible to improve at some point, since there wouldn’t be enough resources on a finite planet.

Or what if it takes superhuman intelligence to make human-intelligence AI? Then the singularity would be impossible there, too.

I don’t think we will see the singularity, at least in our lifetime.

Thorny_Insight ,

Even if the AI were no more intelligent than humans, it would still be a million times faster at processing information, due to the nature of how information processing in silicon works compared to brain tissue. It could do in seconds what would take months if not years for a group of human experts. I also don’t see any reason why it would be hard to make it even more intelligent than that. We already have AI systems with superhuman capabilities; they’re just really, really good at one thing instead of many, which makes them narrow AI and not AGI.

“Human-level intelligence” is a bit of a vague term anyway. There’s human intelligence like mine, and then there’s people like John von Neumann.

gandalf_der_12te ,

The question is “What is the question?”.

Akasazh ,

The treering test?

Tartas1995 ,

The difference between “AI” and “true AI” is as vague as it gets. Are you a truly intelligent agent, or just an “intelligent agent”? Like, seriously, how are you different from a machine with inputs and outputs and a bunch of seemingly “random” things happening in between?

ekky ,

That’s one of my favorite theories as to what “sentience” is.

We humans might just be so riddled with mutations and barely functional genetic traits, which tend to be more in our way than help, that we just might have succeeded in banging together a “mundane sentience” by sheer amount of error processing alone.

Whether this is true is of course up for debate, but it would mean that we can achieve AGI just by feeding it enough trash and giving it enough processing power. Bonus if the head engineer sometimes takes a hammer to the mainframe.

Thorny_Insight ,

By sentience I assume you’re talking about consciousness: the fact that it feels like something to be. I think it’s somewhat safe to assume a true AGI system would also be conscious (that it would feel like something to be that system), but I don’t think it needs to be, and even if it were, we couldn’t know for sure. Consciousness is an entirely subjective experience. We can’t even prove other people are conscious; it’s just a safe assumption. I can also imagine a conscious system that might not be generally intelligent. Does it feel like something to be a fish? Probably. Are they generally intelligent? Probably not.

pmk ,

The Chinese room argument. It’s hard to ignore the reality of qualia.

Tartas1995 ,

Qualia are, if I am not mistaken, totally subjective. My argument is: how could you tell that a computer doesn’t have qualia, and how could you prove to me that you have qualia? Though I wouldn’t limit it to qualia. What can you detect in other people that an AI couldn’t replicate? Because as long as it is able to replicate all these qualities, you can’t tell whether an AI is “true” or not, as it might have those qualities or might just replicate them.

pmk ,

I see, I thought you were asking me how I know I experience things in a qualia way. I suspect it can’t be proven to someone else.

Tartas1995 ,

I believe so, and that would render you (or anyone) unable to tell the difference between AI and “true” AI.

shinigamiookamiryuu ,

This post reminds me of something I saw once, where a character asks two AIs to tell the funniest joke they can think of. After some thought, one AI, though it knew humor, could not measure funniness, as it could not form a feeling of experience bias. The other one told a joke. The human went to that one and asked if it had felt like laughing upon telling it. The AI said yes, because it had humor built in, and the human finished by saying “that’s how you can tell; in humans humor is spontaneous, but in robots, everything is intent”, noting that a human would supposedly have handled its own joke with a different degree of foresight.

HopeOfTheGunblade ,

What do you mean when you say "true AI"? The question isn't answerable as asked, because those words could mean a great many things.

GolfNovemberUniform ,

There are no completely accurate tests, and there never will be one. Also, if an AI is conscious, it can easily fake its behavior to pass a test.

reflex ,

deleted_by_author

ada ,

Does it work on electric sheep?

j4k3 ,

I don’t think a test will ever be directly accurate. It will require sandboxing, observations, and consistency across dynamic situations.

How do you test your child for true intelligence, Gom Jabbar?
