
Redacted ,

I fully back your sentiment, OP; you understand as much about the world as any LLM out there, and don’t let anyone suggest otherwise.

Signed, a “contrarian”.

Redacted ,

I believe OP is attempting to take on an army of straw men in the form of a poorly chosen meme template.

Redacted , (edited )

Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.

This argument always boils down to one’s definition of the word “understanding”. For me that word implies a degree of consciousness; for others, apparently not.

To quote GPT-4:

LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.

Redacted ,

Understanding is a human concept so attributing it to an algorithm is strange.

It can be done by taking a very shallow definition of the word, but then we’re just entering a debate about semantics.

Redacted , (edited )

Yes, sorry, I probably shouldn’t have used the word “human”. It’s a concept that we apply to living things that experience the world.

Animals certainly understand things, but it’s a sliding scale where we use human understanding as the benchmark.

My point stands though: to attribute it to an algorithm is strange.

Redacted ,

Well it was a fun ruse while it lasted.

Redacted ,

Yes you do, unless you have a really reductionist view of the word “experience”.

Besides, that article doesn’t really support your statement; it just shows that a neural network can link words to pictures, which we already know.

Redacted ,

That last sentence you wrote exemplifies the reductionism I mentioned:

It does, by showing it can learn associations with just limited time from a human’s perspective, it clearly experienced the world.

Nope, that does not mean it experienced the world; that’s the reductionist view. It’s reductionist because you said it learnt from a human perspective, which it didn’t. A human’s perspective is much more than a camera and a microphone in a cot. And experience is much more than being able to link words to pictures.

In general, you (and others with a similar view) reduce the complexity of the words used to describe consciousness, like “understanding”, “experience” and “perspective”, so that they no longer carry the weight they were intended to have. At that point you attribute them to neural networks, which are just categorisation algorithms.

I don’t think being alive is necessarily essential for understanding, I just can’t think of any examples of non-living things that understand at present. I’d posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot be fully captured in the mathematics of neural networks as yet. Otherwise we’d have already solved the hard problem of consciousness.

I’m not trying to shift the goalposts; it’s just difficult to convey this concisely without writing a wall of text. Neither of the links you provided is actual evidence for your view, because this isn’t really a discussion that evidence can be provided for. It’s really a philosophical one about the nature of understanding.

Redacted ,

No one is moving goalposts, there is just a deeper meaning behind the word “understanding” than perhaps you recognise.

The concept of understanding is poorly defined, which is where the confusion arises, but it is definitely not a direct synonym for pattern matching.

Redacted ,

I agree; there is no formal definition of AGI, so it’s a bit silly to discuss that really. Funnily enough, I inadvertently wrote a nearest-neighbour algorithm to model swarming behaviour back when I was an undergrad and didn’t even consider it rudimentary AI.
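
For illustration, a rough sketch of that kind of nearest-neighbour swarming update (not the original undergrad code, which isn’t shown here; agent counts and step sizes are arbitrary): each agent simply steers towards the average position of its k nearest neighbours, which is enough to produce flocking-like clustering without any explicit notion of intelligence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, k, steps, step_size = 50, 5, 100, 0.05

# Random starting positions in the unit square.
positions = rng.uniform(0.0, 1.0, size=(n_agents, 2))

for _ in range(steps):
    # Pairwise distances between all agents.
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)               # ignore self-distance

    # Indices of each agent's k nearest neighbours.
    nearest = np.argsort(dists, axis=1)[:, :k]

    # Step each agent towards the centroid of its neighbours.
    centroids = positions[nearest].mean(axis=1)
    positions += step_size * (centroids - positions)

# The agents end up clustered: the positional spread shrinks over the run.
print("positional spread after simulation:", positions.std(axis=0))
```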

Can I ask what your take on the possibility of neural networks understanding what they are doing is?

Redacted ,

Bringing physically or mentally disabled people into the discussion does not add or prove anything; I think we both agree they understand and experience the world, as they are conscious beings.

This has, as usual, descended into a discussion about the word “understanding”. We differ in that I actually do consider it mystical to some degree, as it is poorly defined and implies some aspect of consciousness to me and to others.

Your definitions are remarkably vague and lack clear boundaries.

That’s language for you, I’m afraid; it’s a tool to convey concepts that can easily be misinterpreted. As I’ve previously alluded to, this comes down to definitions, and you can’t really argue your point without reducing the complexity of how living things experience the world.

I’m not overstating anything (it’s difficult to overstate the complexities of the mind), but I can see how it could be interpreted that way given your propensity to oversimplify all aspects of a conscious being.

This is an argument from incredulity, repeatedly asserting that neural networks lack “true” understanding without any explanation or evidence. This is a personal belief disguised as a logical or philosophical claim. If a neural network can reliably connect images with their meanings, even for unseen examples, it demonstrates a level of understanding on its own terms.

The burden of proof here rests on your shoulders, and my view is certainly not just a personal belief; it’s the default scientific position. Repeating my point about the definition of “understanding”, which you failed to counter, does not make it an argument from incredulity.

If you offer your definition of the word “understanding” I might be able to agree, as long as it does not evoke human or even animal conscious experience. There’s literally no evidence for that, and, as we know, extraordinary claims require extraordinary evidence.

Redacted ,

I have a theory… They are sophisticated auto-complete.
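
The “auto-complete” framing can be made concrete with a toy sketch (illustrative only; real LLMs are enormously larger and learn statistical patterns rather than raw counts, but they are trained on the same next-token objective): the most basic next-word predictor just counts which word follows which in its training text and emits the most likely continuation.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str, length: int = 5) -> list[str]:
    """Greedily continue from `word` by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return out

print(autocomplete("the"))  # -> ['the', 'cat', 'sat', 'on', 'the', 'cat'] for this toy corpus
```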

Redacted ,

Orders of magnitude of difference between the most complex known object in the universe and some clever statistical analysis.

We understand very little about the human brain. For example, we don’t know if it leverages quantum interactions or whether it can be decoupled from its substrate.

LLMs are pattern-matching models, loosely based on the structure of neurons, that work well for deriving predictions from a vast body of data, but they are nowhere near the human brain’s level of understanding. I personally don’t think they ever will be until we have solved the hard problem of consciousness.

Redacted ,

The key word here is “seems”.

Redacted ,

This whole argument hinges on it being easier to produce consciousness than to fake intelligence convincingly to humans.

Humans already anthropomorphise everything, so I’m leaning towards the latter being easier.

Redacted ,

Thank you, much more succinctly put than my attempt.

Redacted ,

Welp looks like we both know the arguments and fall on different sides of the debate then.

Much better than being confidently wrong like most LLMs…

Redacted ,

…or even if consciousness is an emergent property of interactions between certain arrangements of matter.

It’s still a mystery which I don’t think can be reduced to weighted values of a network.

Redacted ,

Bold of you to assume any philosophical debate doesn’t boil down to just that.

Redacted , (edited )

Standard descent into semantics incoming…

We define concepts like consciousness and intelligence. They may or may not be related, depending on your definitions, but the whole premise here is about experience, regardless of the terms we use.

I wouldn’t say Fibonacci numbers being found everywhere is in any way related to either, and it is certainly not an expression of logic.

I suspect it’s something like the simplest method nature has of controlling growth, much like how hexagons are the sturdiest shape and so appear in nature a lot.

Grass or rocks being conscious is really out there! If that hypothesis were remotely feasible, we couldn’t talk about things being either conscious or not; it would be a sliding scale with rocks way below grass. And it would be really stretching most people’s definition of consciousness.

Redacted ,

Absolutely no way the training set could have included knowyourmeme.com.

Redacted ,

Your meme probably wasn’t dank enough then.

Redacted ,

Think you’re slightly missing the point. I agree that LLMs will get better and better to a point where interacting with one will be indistinguishable from interacting with a human. That does not make them sentient.

The debate is really whether all of our understanding and human experience of the world comes down to weighted values on a graph, or whether the human brain is hiding more complex, as-yet-undiscovered phenomena than that.

Redacted ,

I feel like an AI right now having predicted the descent into semantics.

Redacted ,

Brah, if an AI was conscious, how would it know we are sentient?! Checkmate LLMs.

Redacted ,

They operate by weighting connections between patterns they identify in their training data. They then use statistics to predict outcomes.

I am not particularly surprised that the Othello models built up an internal model of the game, as their training data were grid moves. Without looking into it, I’d assume the most efficient way of storing that information was in a grid format, with specific nodes weighted towards the successful moves. To me that’s less impressive than the LLMs.
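
For context, the usual way researchers test for that kind of internal board representation is a linear probe: if the model’s hidden activations encode the board, a simple linear classifier trained on those activations should be able to read each square’s state back out. A minimal sketch of the idea (not the actual Othello-GPT code; the activations below are random placeholders standing in for a real model’s states, and all sizes are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_positions, hidden_dim, n_squares = 1000, 128, 64   # hypothetical sizes

# Stand-ins for a real model's hidden activations and the true board state
# at each position (0 = empty, 1 = black, 2 = white).
hidden_states = rng.normal(size=(n_positions, hidden_dim))
board_labels = rng.integers(0, 3, size=(n_positions, n_squares))

train, test = slice(0, 800), slice(800, None)
accuracies = []
for square in range(n_squares):
    probe = LogisticRegression(max_iter=1000)
    probe.fit(hidden_states[train], board_labels[train, square])
    accuracies.append(probe.score(hidden_states[test], board_labels[test, square]))

# With real activations, accuracy well above chance would suggest the board
# state is linearly decodable from the network's internal representation.
print(f"mean probe accuracy: {np.mean(accuracies):.2f}")   # ~chance here, since the data is random
```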

Redacted ,

So somewhere in there I’d expect connected nodes that represent the Othello grid. They wouldn’t necessarily be arranged in a grid, just topologically the same graph.

Then I’d expect millions of other weighted connections to represent the moves within the grid, including some weightings to prevent illegal moves, all based on mathematics and clever statistical analysis of the training data. If you want to refer to things as tokens then be my guest, but it’s all graphs.

If you think I’m getting closer to your point, can you just explain it properly? I don’t understand what you think a neural network model is, or what you are trying to teach me with Pythag.

Redacted ,

It wouldn’t reverse-engineer anything. It would start by weighting neurons based on its training set of Pythagorean triples. Over time this would get tuned to represent Pythag in the form of mathematical graphs.

This is not “understanding” as most people would know it; it’s more like a set of encoded rules.
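
A toy sketch of what “weighting neurons based on a training set of Pythagorean triples” could look like in practice (my own illustrative example; the architecture, data sizes and scaling are arbitrary): a small network learns to classify whether a² + b² = c² purely from labelled examples, and whatever rule it ends up with lives entirely in its weights.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Mostly-negative examples: random (a, b, c) triples labelled by whether a^2 + b^2 == c^2.
triples = rng.integers(1, 60, size=(5000, 3))
labels = (triples[:, 0]**2 + triples[:, 1]**2 == triples[:, 2]**2).astype(int)

# Add some genuine Pythagorean triples (m^2 - n^2, 2mn, m^2 + n^2) with m > n
# so the positive class is actually represented.
m = rng.integers(2, 10, size=2000)
n = rng.integers(1, 10, size=2000)
keep = m > n
true_triples = np.stack([m**2 - n**2, 2 * m * n, m**2 + n**2], axis=1)[keep]

X = np.vstack([triples, true_triples]).astype(float)
y = np.concatenate([labels, np.ones(len(true_triples), dtype=int)])
X /= X.max()   # crude scaling to help the optimiser

# The "rule" the network ends up with is nothing but tuned weights.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```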

Redacted , (edited )

Seems to me you are attempting to understand machine learning mathematics through articles.

That quote is not a retort to anything I said.

Look up Category Theory. It demonstrates how the laws of mathematics can be derived by forming logical categories. From that you should be able to imagine how a neural network could perform a similar task within its structure.

It is not understanding, just encoding to arrive at correct results.
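
For readers unfamiliar with the reference, a very informal sketch of the basic category-theoretic ingredients being invoked (objects, morphisms, composition, identity, and the laws they satisfy), written as plain function plumbing; the example morphisms are arbitrary and this is nowhere near a rigorous formalisation.

```python
from typing import Callable

def compose(g: Callable, f: Callable) -> Callable:
    """Morphism composition: (g ∘ f)(x) = g(f(x))."""
    return lambda x: g(f(x))

identity = lambda x: x

# Two concrete morphisms between "objects" (here just Python types): int -> int and int -> str.
double = lambda n: n * 2
describe = lambda n: f"value is {n}"

h = compose(describe, double)

# The category laws, checked on a sample value.
x = 21
assert h(x) == describe(double(x))                        # composition behaves as defined
assert compose(double, identity)(x) == double(x)          # right identity
assert compose(identity, double)(x) == double(x)          # left identity
assert compose(compose(describe, double), identity)(x) == \
       compose(describe, compose(double, identity))(x)    # associativity (on this sample)
print(h(x))   # "value is 42"
```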

Redacted ,

You’re being downvoted because you provide no tangible evidence for your opinion that human consciousness can be reduced to a graph that can be modelled by a neural network.

Additionally, you don’t seem to respond to any of the replies you receive in good faith, and you reach for anecdotal evidence wherever possible.

I also personally don’t like the appeal to authority permeating your posts. Just because someone who wants to secure more funding for their research has put out a blog post doesn’t make it true in any scientific sense.

Redacted , (edited )

There you go arguing in bad faith again by putting words in my mouth and reducing the nuance of what was said.

You do know dissertations are articles and don’t constitute any form of rigorous proof in and of themselves? It seems like you have a very rudimentary understanding of English, which might explain why you keep struggling with semantics. If that is so, I apologise, because definitions are difficult when it comes to language, let alone in ESL.

I didn’t dispute that NNs can arrive at a theorem. I dispute whether they truly understand the theorem they have encoded in their graphs, as you claim.

This is a philosophical/semantic debate as to what “understanding” actually is, because there’s not really any evidence that they are any more than clever pattern-recognition algorithms driven by mathematics.

Redacted ,

Understanding as most people know it implies some kind of consciousness or sentience as others have alluded to here.

It’s the whole point of your post.

Redacted , (edited )

No I’m not.

You’re nearly there… The word “understanding” is the core premise of what the article claims to have found. If not for that, then the “research” doesn’t really amount to much.

As has been mentioned, this then becomes a semantic/philosophical debate about what “understanding” actually means, and a short Wikipedia or dictionary definition does not capture that discussion.

Redacted , (edited )

I’ve read the article, and it’s just clickbait that offers no new insights.

What was of interest in it to you specifically?

Redacted , (edited )

I question the value of this type of research altogether, which is why I stopped following it as closely as you do. I generally see it as an exercise in assigning labels to subsets of a complex system. However, I do see how the COT paper adds some value in designing more advanced LLMs.

You keep quoting research verbatim as if it’s gospel and so miss my point (this also forms part of the appeal to authority I mentioned previously). It is entirely expected that neural networks would form connections outside of the training data (emergent capabilities); how else would they be of use? This article dresses up the research as some kind of groundbreaking discovery, which is what people take issue with.

If this article had been entitled “Researchers find patterns in neural networks that might help make more effective ones”, no one would have a problem with it, but it also would not be newsworthy.

I posit that Category Theory offers an explanation for these phenomena without having to delve into poorly defined terms like “understanding”, “skills”, “emergence” or Monty Python’s Dead Parrot. I do so with no hot research topics or papers to hide behind, just decades-old mathematics. Do you have an opinion on that?

Redacted , (edited )

Title of your post is literally “New Theory Suggests Chatbots Can Understand Text”.

You also hinted at it with your Pythag analogy.

Redacted , (edited )

You posted the article rather than the research paper, and you had every chance to alter the headline before you posted it, but didn’t.

You questioned why you were downvoted so I offered an explanation.

Your attempts to form your own arguments often boil down to “no you”.

So, as I’ve said all along, we just differ on our definitions of the term “understanding” and have devolved into a semantic exchange. You are now using a bee analogy, but for a start a bee is a living thing, not a mathematical model, which is another indication that you don’t understand nuance. Secondly, again, it’s about definitions. Bees don’t understand the number zero in the middle of the number line, but I’d agree they understand the concept of nothing, as in “There is no food.”

As you can clearly see from the other comments, most people interpret the word “understanding” differently from you and from AI proponents. So I infer that you are either not a native English speaker or are trying very hard to shoehorn your oversimplified definition in to support your worldview. I’m not sure which, but your reductionist way of arguing is ridiculous, as others have pointed out, and full of logical fallacies which you don’t seem to comprehend either.

Regarding what you said about Pythag, I agree and would expect it to outperform statistical analysis. That is because it has arrived at and encoded the theorem within its graphs, but I and many others do not define this as knowledge or understanding, because those words have other connotations to the majority of humans. It wouldn’t, for instance, be able to tell you what a triangle is using that model alone.

I spot another appeal to authority… “Hinton said so and so…” It matters not. If Hinton said the sky was green you’d believe it, as you barely think for yourself when others you consider more knowledgeable have stated something that may or may not be true. That might explain why you have such an affinity for AI…

Redacted ,

Spot on.

Redacted ,

To hijack your analogy, it’s more akin to me stating a tree is a plant and you saying “So are these” while pointing at a forest of plastic Christmas trees.

I’m pretty curious why you imagine you have so many downvotes.

Redacted ,

Have you ever considered you might be the laypeople?

Equating a debate about the origin of understanding to antivaxxers…

You argue like a Trump supporter.

Redacted ,

Lol indeed, just seen you moderate a Simulation Theory sub.

Congratulations, you have completed the tech evangelist starter pack.

Next thing you’ll be telling me we don’t have to worry about climate change because we’ll just use carbon-capture tech and, failing that, all board Daddy Elon’s spaceship to terraform Mars.

Redacted ,

*7 years earlier than the recently revised predictions YAY.

Redacted ,

+1 for the S23.

I’ve been a Nexus/Pixel fan since the beginning, but performance and battery life are much better now that the Samsungs have the Snapdragon 8 Gen 2 in them.

The camera is decent (marginally worse than the Pixel’s for stills but better for video), and OneUI is far more customisable and less obtuse than it was in the past.

Redacted ,

Just flog it to the UK for a few million. Rishi’s probably already looking into the possibility of housing migrants on it.

Redacted ,

Apologies, busy day, saw headline, made joke about our awful PM.

Pretty insensitive now that I’ve seen someone died in the incident.

Redacted ,

It’s a made-up definition which varies depending on whose fairytales you believe.

Redacted ,

This is a blatantly pointless reply.
