
RSS and OPML (libranet.de)

Can somebody explain to me how OPML works for RSS? Are these files usually imported into the RSS reader apps, or are they used where they are? If I import multiple OPML files with multiple feeds, will the feeds from the first OPML be overwritten by those in the second one, or will they add up? Will article read/unread status be...

chaos ,

OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
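For a concrete picture, an OPML file is just XML with one `<outline>` element per feed. Here’s a minimal sketch in Python (the file contents, feed names, and URLs below are made up for illustration) showing how a reader could pull the feed URLs out of an import and merge them into its existing subscriptions:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up OPML file: one <outline> per subscribed feed.
OPML = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>My subscriptions</title></head>
  <body>
    <outline text="Example Blog" type="rss" xmlUrl="https://example.com/feed.xml"/>
    <outline text="Another Feed" type="rss" xmlUrl="https://example.org/rss"/>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return the set of feed URLs listed in an OPML document."""
    root = ET.fromstring(opml_text)
    return {o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")}

# If a reader merges imports as a set union, importing a second file is
# additive: duplicate feeds collapse and nothing gets overwritten.
subscriptions = set()
subscriptions |= feed_urls(OPML)
print(subscriptions)
```

Note that read/unread state doesn’t appear in the file at all, so that part is entirely up to the app.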

chaos ,

In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

“We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

www.cnn.com/2024/01/08/tech/…/index.html

The thing is, it doesn’t really matter if you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a lot weaker than the original argument, which was that nothing of the original material really remains after training: it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.

chaos ,

If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words, but humans get that and so much more. Babies aren’t just associating the sounds they hear, they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.
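To make “predicting what words follow what other words” concrete, here’s a toy sketch in Python (the corpus is made up) of that objective boiled down to bigram counting; real LLMs pursue the same next-token goal with enormous neural networks instead of raw counts:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "all the text the model saw".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow which: the crudest possible language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Guess the most likely next word purely from observed counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # -> "cat", the most frequent follower above
```

Nothing in that loop knows what a cat is; it only knows what tends to come after “the”.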

LLMs aren’t nearly at that level. That’s not to say what they do isn’t impressive, because it really is. They can also synthesize unrelated concepts together in a stunningly human way, even things that they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to think that something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at, and although they’ve clearly developed some impressive skills to accomplish that task, it’s not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that”, but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions and a desire to avoid causing harm, and they don’t have that. And how could they? Their training didn’t include any of that, it was mostly about words.

One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they are able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t just doing statistics, but you don’t have to go too far down that spectrum before the answers start seeming thoughtful.

chaos ,

I think that joke’s been around for a while, but there is the Terry Pratchett line about how if you had a button with a sign next to it saying “pressing this button will end the world, do not touch,” the ink wouldn’t even have time to dry.

chaos ,

How many do you think it is, and how much more acceptable is that number than this one?

US asks Israel for ‘explanation’ of strike on Gaza refugee camp (news.yahoo.com)

The Biden administration requested Israel detail the thinking and process behind the recent strike on the Jabalia Refugee Camp in Northern Gaza, according to a U.S. official, who like others was granted anonymity to discuss sensitive conversations....

chaos ,

“There was a particular bad guy near them” and “they all probably have bad opinions about Jews” are not sufficient justifications for indiscriminately bombing innocent people. What if there had been an Israeli leader at that rave? People in both refugee camps and at a music event should be able to exist without fear that they’ll die because they were near the wrong person. One seems to provoke a different reaction than the other for some reason though, and that might be worth thinking about.

Does Bing Chat give reliable answers to math and physics questions? If not is it possible to make it more reliable?

I realize and understand the criticisms of ChatGPT, and I have personally seen how bad it can be. Once I asked it to count the number of days until a random date, given the present date, and it failed miserably, again and again. Trust me! I get the criticism. But what about Bing Chat Bot?...

chaos ,

These models aren’t great at tasks that require precision and analytical thinking. They’re trained on a fairly simple task, “if I give you some text, guess what the next bit of text is.” Sounds simple, but it’s incredibly powerful. Imagine if you could correctly guess the next bit of text for the sentence “The answer to the ultimate question of life, the universe, and everything is” or “The solution to the problems in the Middle East is”.

Recently, we’ve been seeing shockingly good results from models that do this task. They can synthesize unrelated subjects, and hold coherent conversations that sound very human. However, despite doing some things that up until recently only humans could do, they still aren’t at human-level intelligence. Humans read and write by taking in words, converting them into rich mental concepts, applying thoughts, feelings, and reasoning to them, and then converting the resulting concepts back into words to communicate with others. LLMs arguably might be doing some of this too, but they’re evaluated solely on words and therefore much more of their “thought process” is based on “what words are likely to come next” and not “is this concept being applied correctly” or “is this factual information”. Humans have much, much greater capacity than these models, and we live complex lives that act as an incredibly comprehensive training process. These models are small and trained very narrowly in comparison. Their excellent mimicry gives the illusion of a similarly rich inner life, but it’s mostly imitation.

All that comes down to the fact that these models aren’t great at complex reasoning and precise details. They’re just not trained for it. They got through “life” by picking plausible words, and that’s mostly what they’ll continue to do. For writing a novel or poem, that’s good enough, but math and physics are more rigorous than that. They do seem to be able to handle code snippets now, mostly, which is progress, but in general this isn’t something you can be completely confident in them doing correctly. They make silly mistakes because they aren’t really thinking things through. To them, there isn’t much difference between answers like “that date is 7 days after Christmas” and “that date is 12 days after Christmas.” Which one it thinks is more correct is based on things it has seen, not necessarily an explicit counting process. You can also see this in that case where someone tried to use it to write a legal brief and it came up with citations that seemed plausible but were in fact completely made up. It wasn’t trained on accurate citations, it was trained on words.
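For contrast, the explicit counting process the model lacks is one line in ordinary code. A quick sketch using Python’s standard datetime module (the two dates are arbitrary stand-ins):

```python
from datetime import date

# Subtracting two dates counts the days exactly; no guessing from
# statistically similar text, just arithmetic.
today = date(2024, 1, 8)      # arbitrary "present date"
target = date(2024, 3, 15)    # arbitrary "random date"
print((target - today).days)  # -> 67, exact every time
```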

They also have a bad habit of sounding confident no matter what they’re saying, which makes it hard to use them for things you can’t check yourself. Anything they say might or might not be right/accurate/good/not plagiarized, but the model won’t have a good sense of that, and if you don’t know either, you’re opening yourself up to the risk of being misled.

chaos ,

Protect me from knowing what I don’t need to know. Protect me from even knowing that there are things to know that I don’t know. Protect me from knowing that I decided not to know about the things that I decided not to know about. Amen.

Lord, lord, lord. Protect me from the consequences of the above prayer. Amen.

How the fuck can I kill 20 hours?

My 13-hour flight just got delayed 7 hours, I’m stuck at my second airport, and I don’t think I’m gonna make it. I have some movies and audio books on my phone, but really only anticipated having to burn the flight time via napping and some media, not 7 hours leading up to it, and I’m pretty sure I’m gonna mentally burn...

chaos ,

That’s part of the point, you aren’t necessarily supposed to have an empty mind the whole time. I mean, if you can do that, great, but you aren’t failing if that’s not the case.

Imagine that your thoughts are buses, and your job is to sit at the bus stop and not get on any of them. Just notice them and let them go by. Like a bus stop, you don’t really control what comes by, but you do control which ones you get on board and follow. If you notice that you’ve gotten on a bus, that’s fine, just get off of it and go back to watching. Interesting things can happen if you just watch and notice which thoughts go by, and it’s good practice for noticing what you’re thinking and where you’re going and taking control of it yourself when it’s somewhere you don’t want to go.

chaos ,

There is never going to be a case where the world misses the answer to the ultimate question of life, the universe, and everything because it was said by a Nazi and everyone refused to listen. When it’s clearly straight-up propaganda, it’s perfectly rational to dismiss it due to the source and not investigate further. If there’s a valid and useful point to be made, it’ll get made in more respectable sources too, and then it might be time to pay attention. Plus, even if they do cite sources, it’s hard to spot where they’ve twisted or lied about those sources, but it’s really easy for the propagandist to spout whatever nonsense they believe, because they don’t care about the truth. That asymmetry is good for the Nazi and bad for decent people, and the way to fix it is to not waste your time carefully investigating and critiquing Nazi bullshit.

chaos ,

This is the key with all the machine learning stuff going on right now. The robot will create something, but none of them have a firm understanding of right, wrong, truth, lies, reality, or fiction. You have to be able to evaluate its output because you have no idea if the robot’s telling the truth or not at that moment. Images are pretty resistant to this because everyone can evaluate a picture for correctness or realism, and even if it’s a misleading photorealistic image, well, we’ve had Photoshop for a long time. With text, you always have to keep in mind that the robot’s output might be low quality or outright wrong, and if you aren’t equipped to evaluate its answers, you shouldn’t be using it.

chaos ,

The phone slowdowns were intended to prolong the lives of phones, not shorten them. The underclocking only kicked in after your phone had been forced to shut down because the battery wasn’t delivering sufficient power. I had a phone with this problem, and opening the camera would sometimes just immediately shut down the phone instead. I got a free new battery for it, but the general fix was slowdowns instead. They should’ve disclosed it, and they also should’ve given users control, but if the goal had been to get people buying new phones, leaving the random shutdowns in place would have done that better; I know from experience that they were worse than a slower phone.
