
Phanatik

@[email protected]


Phanatik ,

Even the Wayback Machine has limits to what is available.

Phanatik ,

I mainly disagree with the final statement, on the basis that LLMs are more advanced predictive text algorithms. The way they've been set up, with a chatbox where you're interacting directly with something that attempts human-like responses, gives the misconception that the thing you're talking to is more intelligent than it actually is. It gives off a strong appearance of intelligence, but at the end of the day it predicts the next word in a sentence based on what was said previously, and it doesn't do that good a job of comprehending what exactly it's telling you. It's very confident when it gives responses, which also means that when it's wrong, it delivers the incorrect response very confidently.

Phanatik ,

What you're alluding to is the Turing test, and it hasn't been proven that any LLM would pass it. At this moment, there are people who have failed the inverse Turing test: they were unable to ascertain whether what they were speaking to was a machine or a human. The latter can be done, and has been done, by things less complex than LLMs, so it isn't proof of an LLM's capabilities over more rudimentary chatbots.

You're also suggesting that it minimises the complexity of its outputs. My determination is that what we're getting is the limit of what it can achieve. You'd have to prove that any allusion to higher intelligence can't be attributed to coercion by the user, or to the model simply hallucinating based on imitating artificial intelligence from media.

There are elements of the model that are very fascinating like how it organises language into these contextual buckets but this is still a predictive model. Understanding that certain words appear near each other in certain contexts is hardly intelligence, it's a sophisticated machine learning algorithm.

Phanatik ,

So... who is going to be marching into Israel to make the arrest?

Phanatik ,

Ah gotcha so he just never leaves Israel or no one ever acts on the arrest warrant.

Phanatik ,

The age rating indicates who can use the app, not how long it's been up.

Nearly 75% of journalists killed in 2023 died in Israel’s war on Gaza: CPJ (www.aljazeera.com)

Seventy-two of the 99 journalists killed worldwide in 2023 were Palestinians reporting on Israel’s war on Gaza, making those 12 months the deadliest for the media in almost a decade, according to the Committee to Protect Journalists (CPJ)....

Phanatik ,

The only ones bringing Jews into the conversation have been Israel.

Phanatik ,

Israel are the ones who made being Jewish synonymous with being Israeli. So now, if you talk against Israel, you're being antisemitic. If you disagree with their conduct, you're being antisemitic. All they've done is muddy the waters so that criticism of their vile actions somehow means you're denying the Holocaust. How many times has Netanyahu brought up 7th Oct as justification for the actions they've committed against Palestinians? "Why should we stop bombing Gaza? Do you not remember 7th Oct?"

Phanatik ,

You can see my post history if you want to content yourself that I don't just copy-paste responses. I like to tailor my answer depending on how much of an asshole the person I'm replying to is.

Someone else commented what my point was, but I'll make it clear myself. While Netanyahu believes it's favourable to classify anything Jewish as being related to Israel, the inverse is what you're seeing play out.

Attacking Israel means you're attacking the Jewish faith; therefore, attacking members of the Jewish faith means you're attacking Israel. This isn't a position I hold, this is the situation Israel has placed Jews around the world in as a result of muddying the waters. Israel is perfectly willing to manipulate the horror of the Holocaust to get allies to support their violence against Palestinians.

Phanatik ,

Well, it's because he's an old fuck already so his heinous crimes result in him spending the rest of his worthless life in prison. If he's lucky, he'll die before he reaches 100.

Phanatik ,

Shit will get real intense when Biden video calls him on Skype and starts wagging his finger!

Phanatik ,

My mischievous little tuxedo, back when she was really small. She's still small and mischievous at 2 years old.

Phanatik ,

My friend lives in Iran. She often shows pictures of Iran's beauty such as dried-up lakes and rivers because of dams.

Tried Arch for the first time | My experience and impressions (lemmy.ml)

I used linux intermittently in the last 15 or so years, migrating from early Ubuntu versions, to Manjaro, Pop!_OS, Debian, etc. And decided to give Arch a try just recently; with all the memes around its high entry point, I was really expecting to struggle for a long time to set it up just as I want....

Phanatik ,

One of the simplest ways to safeguard against breakage is to have your /home on a separate partition. I realised I wouldn't need to back it up and reformat from scratch; I just need to wipe the root drive and reinstall.

It's made even easier by writing an installation script. Simply put, you can pipe a list of packages into pacstrap and use a little convenience tool for pulling a partition scheme out of a file.

I like to tinker and I'm aware that things will break so I have these tools that let me rebuild the system again in as short a time as possible.
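A minimal sketch of that kind of script, shown here in dry-run form so it only prints what it would do; the disk names, file names, and partition layout are placeholders for illustration, not an actual setup:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a reinstall script: restore a saved partition layout,
# reformat root only (leaving the separate /home intact), then pacstrap
# from a saved package list. Replace `run` with direct execution on a
# real system -- and only after checking every device name.

run() { echo "+ $*"; }          # print commands instead of executing them

DISK=/dev/nvme0n1               # root disk to wipe and reinstall (placeholder)

# Restore a layout previously saved with: sfdisk -d "$DISK" > partitions.sfdisk
run sfdisk "$DISK" '< partitions.sfdisk'

run mkfs.ext4 "${DISK}p2"       # reformat root only
run mount "${DISK}p2" /mnt
run mount /dev/sda1 /mnt/home   # pre-existing /home partition, left untouched

# Pipe a saved package list into pacstrap
run pacstrap -K /mnt '$(< pkglist.txt)'
run genfstab -U /mnt '>> /mnt/etc/fstab'
```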

Phanatik ,

He died in 1982 but his works are hugely influential:
Philip K Dick.

Phanatik ,

Dude, Pakistan had politicians openly saying they'd ethnically cleanse the country of Pathans and Pashto-speaking people.

Phanatik ,

I don't understand the comments suggesting this is "guilty by proxy". These platforms have algorithms designed to keep you engaged and through their callousness, have allowed extremist content to remain visible.

Are we going to ignore all the anti-vaxxer groups who fueled vaccine hesitancy which resulted in long dead diseases making a resurgence?

To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs, is extremely short-sighted.

"But Freedom of Speech!"

If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don't deserve to have that speech. Sorry, you've violated the social contract, and those people's blood is on your hands.

Phanatik ,

YouTube will actually take action and has done so in most instances. I won't say they're the fastest, but they do kick people off the platform if they deem them high risk.

Phanatik ,

I am sure the Muslims will remain very calm and let the government know that they disapprove of this action.

/s

Phanatik ,

They'll send a strongly worded letter

Phanatik ,

Oh that's easy, just move the red line back and say "if you cross this line, then we'll consider not supporting you".

Phanatik ,

The aspect that's getting lost in all this is that the curator has basically put up a hit list of games for people to review-bomb, just for associating with a company. The curator has no evidence of the level of involvement SBI had with a game, yet refuses to recommend it based on their being involved at all.

They have taken something small and weaponised it, so now it's harming game devs. No one has any evidence of how SBI was involved with any of the games listed on the website beyond vague mentions of "narrative" or "character development".

The worst part is, I'm not even surprised by this.

Phanatik ,

It'll come trickling down, any day now

Phanatik ,

Not even that. It's that his review isn't an objective assessment of the product because he stands to financially benefit from Framework doing well. He's worse than a hypocrite, he's a shill.

Phanatik ,

I'm talking less about the products and more about Linus's reviewing practices. We saw this in the watercooler debacle. He half-asses reviews and blames the product when he's the one messing up.

I dislike wayland

Quite the unpopular opinion, but I just wanted to post this to show the silent majority that we still exist. We have reached a point where voicing criticism against wayland is treated like the worst thing ever and leads you to being censored and what not. The red hat funded multi year long shill campaign has proven to be quite...

Phanatik ,

Wayland isn't trying to be X12, and in all the time X11 has been around there have been no plans for an X12 either. You want to discourage people from using Wayland, but you don't encourage people to contribute to X11. You're so hellbent on taking Wayland down, rather than convincing people that X11 is superior and easier to improve.

Phanatik ,

Netflix is full of reptiles who don't care to offer a better service. All they want is enough market share to strongarm consumers into giving them more money.

Phanatik ,

I wonder if that will outlive the publicity of that situation.

Phanatik ,

Whenever some dipshit responds to me with "you're talking about AGI, this is AI", my only reply is fuck right off.

Phanatik ,

I don't need a theory for this, you're being highly reductive by focusing on a few features of human communication.

Phanatik ,

I've just done the dance already and I'm tired of their watered-down attempts at bringing human complexity down to a level that makes their chat bots seem smart.

Phanatik ,

I felt this with one of the laptops I put KDE Neon on. It had all manner of issues that never got a resolution.

Phanatik ,

I've heard what SovCits do being referred to as paper terrorism.

Phanatik ,

There's no way these chatbots are capable of evolving into Ultron. That's like saying a toaster is capable of nuclear fusion.

Phanatik ,

What research? These bots aren't that complicated beyond an optimisation algorithm. Regardless of the tasks you give it, it can't evolve beyond what it is.

Phanatik ,

My mum's 2019 Toyota Yaris has to have its engine run every few days or the battery dies from just sitting on the driveway. It could be a faulty car battery but considering this car isn't even that old and has barely driven 30k miles, it's not doing so great. I discovered yesterday that my EV charges better after I've driven it around and the battery's warmed up a bit. The car goes a bit haywire when you cold start so it seems like it needs some prep time before a drive.

Phanatik ,

From the replies I've been getting, I think so.

Phanatik ,

Sounds like a great car! It does seem like something's wrong with the battery so a replacement is in order.

Phanatik ,

Yeah, except a machine is owned by a company and doesn't consume the same way. It breaks down copyrighted works into data points so it can find the best way of putting those data points together again. If you understand anything at all about how these models work, you know they do not consume media the same way we do. It is not an entity with a thought process or consciousness (despite what the misleading marketing of "AI" would have you believe); it's an optimisation algorithm.

Phanatik ,

It's so funny that this is being treated as something new. This was Grammarly's whole schtick since before ChatGPT, so how different is Grammarly AI?

Phanatik ,

You are spitting out basic points and attempting to draw similarities because our brains are capable of something similar. The difference between what you've said and what LLMs do is that we have experiences from which we are able to glean a variety of information. An LLM sees text, and all it's designed to do is say "x is more likely to appear before y than z". If you fed it nonsense, it would regurgitate nonsense. If you feed it text from racist sites, it will regurgitate that same language, because that's all it has seen.
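The "x is more likely to appear before y" idea can be sketched as a toy next-word predictor built from bigram counts; this is deliberately crude (real LLMs use learned weights over long contexts), and the training sentence is made up:

```python
# Toy bigram "predictive text": count which word most often follows each
# word, then predict by picking the highest count. Feed it nonsense and it
# regurgitates nonsense -- it only reflects its training text.
from collections import Counter, defaultdict

def train(text):
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def predict(model, word):
    followers = model[word]
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # 'cat' -- it followed 'the' twice, 'mat' once
```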

You'll read this and think "that's what humans do too, right?" Wrong. A human can be fed these things and still reject them. Someone else in this thread has made some good points regarding this, but I'll state them here as well. An LLM will tell you information, but it has no cognition of what it's telling you. It has no idea whether it's right or wrong; its job is to convince you that it's right, because that's the success state. If you tell it it's wrong, that's a failure state. The more you speak with it, the more fail states it accumulates, and the more likely it is to cut off communication because it's not reaching a success; it's not giving you what you want. The longer the conversation goes on, the crazier LLMs get as well, because it's too much to process at once, holding those contexts in memory while trying to predict the next word. Our brains do this easily, and so much more. To claim an LLM is intelligent is incredibly misguided; it is merely the imitation of intelligence.

Phanatik ,

Neither is an LLM. What you’re describing is a primitive Markov chain.

My description might've been indicative of a Markov chain, but the actual framework uses matrices, because you need to be able to store and compute a huge amount of information at once, which is what matrices are good for. They're used in animation, if you didn't know.

What it actually uses is irrelevant; how it uses those things is the same as a regression model, the difference being scale. A regression model looks at how related variables are in producing an outcome, computing weights to give you the best prediction. This was the machine learning boom a couple of years ago, when TensorFlow became really popular.
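The "computing weights" idea can be illustrated with a tiny least-squares regression fit by gradient descent; the synthetic data and learning rate here are invented for the example, and this is orders of magnitude simpler than any LLM:

```python
# Toy linear regression: learn weights relating two input variables to an
# outcome by repeatedly nudging them against the prediction error.
# True relationship (by construction): y = 2*x1 + 3*x2, noise-free.
data = [((x1, x2), 2 * x1 + 3 * x2) for x1 in range(5) for x2 in range(5)]

w = [0.0, 0.0]   # weights to be learned
lr = 0.01        # learning rate (chosen for the example)

for _ in range(2000):                     # epochs over the dataset
    for (x1, x2), y in data:
        pred = w[0] * x1 + w[1] * x2      # current prediction
        err = pred - y                    # prediction error
        w[0] -= lr * err * x1             # gradient step per weight
        w[1] -= lr * err * x2

print([round(v, 2) for v in w])  # converges toward [2.0, 3.0]
```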

LLMs are an evolution of the same idea. I'm not saying it's not impressive because it's very cool what they were able to do. What I take issue with is the branding, the marketing and the plagiarism. I happen to be in the intersection of working in the same field, an avid fan of classic Sci-Fi and a writer.

It's easy to look at what people have created throughout history and think "this looks like that", and on a point-by-point basis you'd be correct, but the creation of that thing is shaped by the lens of the person creating it. George Carlin might make a joke we've heard recently, and then we find the same idea in a newspaper from 200 years ago. Did Carlin steal it? No. Was he aware of that information? I don't know. But Carlin regularly calls upon his own experiences, so it's likely he's referencing an event from his past that is similar to the one from 200 years ago. He might've subconsciously absorbed the information.

The point is that the way these models have been trained is unethical. They used material they had no license to use and they've admitted that it couldn't work as well as it does without stealing other people's work. I don't think they're taking the position that it's intelligent because from the beginning that was a marketing ploy. They're taking the position that they should be allowed to use the data they stole because there was no other way.

Phanatik , (edited )

I don't control the upvotes so I don't know why that's directed at me.

The refutation was based on a misunderstanding of how LLMs generate their outputs and how the training data assists the LLM in doing what it does. The article itself tells you ChatGPT was trained on copyrighted material it was not licensed for. The person I responded to suggested that comedians do this with their work, but that's equating the process an LLM uses when producing an output to a comedian writing jokes.

Edit: Apologies if I come across as aggressive. Since the plagiarism machine has been in full swing, the whole discourse around it has gotten on my nerves. I'm a creative person: I've written poems and short stories, I'm writing a novel, and I also do programming and a whole host of hobbies. So when LLMs are used to put people like me out of a job using my own work, why wouldn't that make me angry? What makes it worse is that I'm having to explain concepts regarding LLMs to people who continue to defend them. I can't stand it, so yes, I will come off as aggressive.

Phanatik ,

First of all, we're not having a debate and this isn't a courtroom so avoid the patronising language.

Second of all, my "belief" on the models' plagiarism is based on technical knowledge of how the models work and not how I think they work.

a machine is now able to do a similar job to a human

This would be impressive if it was true. An LLM is not intelligent simply through its appearance of intelligence.

It's enabling humans

It's a chat bot that's automated Google searches, let's be clear about what this can do. It's taken natural language processing and applied it through an optimisation algorithm to produce human-like responses.

No, I disagree at a fundamental level. Humans need to compete against each other and themselves to improve. Just because an LLM can write a book for you doesn't mean you've written a book. You're just lazy. You don't want to put in the work every other writer in existence has done, to mull over their work and consider the emotions and effect they want to have on the reader. To what extent can an LLM replicate the way George RR Martin describes his world without entirely ripping off his work?

i’d question why it’s unethical, and also suggest that “stolen” is another emotive term here not meant to further the discussion by rational argument

If I take a book you wrote from you without buying it or paying you for it, what would you call that?
