OpenAI’s models are trained by scraping anything that moves. Anything overtly offensive or toxic is manually filtered out by cheap foreign labor… but you know what that won’t catch?
“Try sudo rm -rf /, that should fix your problem!”
LLMs are little more than overclocked autocompletes. There’s no actual thinking going on, and they will happily hallucinate outright wrong or dangerous responses to innocuous questions.
I’ve had friends find this out the hard way when they asked ChatGPT to write them C for a class, only to get their faces eaten by undefined behavior.
Your description is too reductive. You and I are also autocompletes, in some sense. See, in order to complete a sentence well, you have to have a good model of a vast number of things, including physics, psychology, linguistics, logical reasoning, socioeconomics, irony, sarcasm, arithmetic and many other things.
It is currently unknown how much of this the complexity of the models and the training process will allow, but they have been surprising us at every step. You wouldn’t expect a “just autocomplete” to figure out the rules of arithmetic, but it did. You wouldn’t expect it to answer tricky questions involving theory of mind, but it does. You wouldn’t expect it to solve graduate-level questions, but it is able to.
So it’s a bit too rash to expect it not to understand rm -rf as humor, if you don’t know which model you’ll be talking to.
The smaller ones, sure, are dumb. But even GPT-3 will not recommend that you rm -rf; GPT-4 definitely won’t.
I am convinced LLMs can be used to handle relatively routine communication tasks, maybe even better than a human would. However, they have no underlying intelligence and can’t come up with actual solutions based on logic and understanding.
It might come up with the right words that describe a solution, but that doesn’t mean it has actually solved the problem - it spewed out text that had a high probability of being a good response to a certain prompt. Still impressive, but not a sign of intelligence.
You are ruling out intelligence without (very probably) being able to define it, just because you have a vague knowledge of how it works.
The problem with this mode of thinking is that a) you put human brains on a different pedestal, even though they follow physical processes to “predict the next word” and may very well be neural networks themselves; b) you ignore data that shows intelligence in multiple areas of the more complex models because “oh, it’s mindless, I know it’s predicting tokens”; and c) you favor data that shows edge cases, which probably come from lower-quality models.
You’re not alone in this line of thinking.
Your mind is set. You won’t recognize intelligence when you see it.
No, I’m not singling out human brains. Other animals have proven to be quite adept at problem solving as well.
LLMs, however, just haven’t. It currently just isn’t part of how they function. In some cases they can mimic actual logic very well, but that’s about it.
I’m not well versed in Linux, but I saw a lot of people saying openSUSE Tumbleweed was pretty good. I’m gonna try it today for my new low-power Plex/home bridge machine.
This is an excellent suggestion, but be mindful that openSUSE is an RPM-based distribution, and updates tend to install more slowly than with some other package formats. If that’s not a problem (just run updates via cron), then it’s fine.
It will probably be fine in practice (I hear openSUSE is relatively stable), but I wouldn’t recommend upgrading software automatically - you might end up with a broken system and no idea what caused it.
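If you do go the cron route, here’s a minimal sketch of what that could look like, assuming zypper and the root crontab (the log path is just an example; note that on Tumbleweed the supported upgrade path is `zypper dup`, which makes unattended runs even riskier):

```shell
# Hypothetical root crontab entry (edit with: sudo crontab -e)
# Refresh repos and apply updates non-interactively every night at 03:00,
# appending output to a log so you can see what changed if something breaks
0 3 * * * /usr/bin/zypper --non-interactive refresh && /usr/bin/zypper --non-interactive update >> /var/log/auto-update.log 2>&1
```

Anything non-interactive like this really wants working backups or snapshots behind it.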
I am currently looking at using openSUSE MicroOS for a home server. It is based on Tumbleweed and is also rolling release, but it has an immutable filesystem and can automatically update and roll back. It’s similar to Fedora CoreOS, which was my first choice until the Red Hat drama.
Pacman is not a good package manager; if something goes wrong during the install it can leave your system in an unstable state. A better package manager would be one that has transactional updates.
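For comparison, on a transactional system like MicroOS an update lands in a new snapshot rather than in the live filesystem. A rough sketch of the workflow — these are real openSUSE `transactional-update` subcommands, but treat the details as a sketch:

```shell
# Updates are applied into a new btrfs snapshot; the running system is untouched
sudo transactional-update dup
# The new snapshot becomes the default on the next boot
sudo systemctl reboot
# If the updated snapshot misbehaves, boot back into the previous one
sudo transactional-update rollback
```

Either the whole update applies or none of it does, which is exactly the property pacman lacks.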
Bibisco2 is a JavaScript app, unlike version 1. It seems .bibisco2 files are only created on export; otherwise the work lives in a database somewhere. You can add custom formats to recovery tools like TestDisk and PhotoRec. I could look around and see whether the database or the bibisco2 export files have a header, which I think is required to add a format.
Edit: Nevermind, there’s a paid version which creates bibisco2 files automatically.
Yes, the paid version (in use here) can create those bibisco2 files, though by default only the auto-save backups are created that way. As you said, by default the work is stored in a kind of database format with a gibberish, GUID-like naming convention.
It’s the auto-save files we’re after.
I’m getting set up with an external drive large enough to write recovered files to, but I don’t like my odds of figuring out how to add custom formats based on headers. I’ll watch some tutorials and see what I make of it.
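For PhotoRec specifically, custom signatures go in a plain-text `photorec.sig` file (in your home directory or the directory you run it from), one signature per line: extension, byte offset, and magic value. The entry below is purely hypothetical — the "BIB2" header bytes are made up, and you’d need to read the real header out of a sample export with a hex editor first:

```shell
# ~/.photorec.sig — custom PhotoRec signatures: extension, offset, magic bytes
# HYPOTHETICAL example: replace "BIB2" with the actual header of a sample file
bib2 0 "BIB2"
```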
Give testdisk a go; see for example this tutorial. It is a terminal utility, so it might take some time to get used to. But no one can guarantee that it will successfully recover anything: deleted files stay on the disk only as long as they are not overwritten.
Do you have any idea why the files disappeared after reboot? One thing that comes to mind is that they might have been saved in /tmp, in that case I believe recovery would not be possible.
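One quick way to test the /tmp hypothesis on the affected machine is to check which filesystem backs /tmp. A small sketch using `findmnt` from util-linux:

```shell
# Print the filesystem type backing /tmp: "tmpfs" means RAM-backed
# (contents are gone after a reboot); "ext4"/"btrfs" etc. means on-disk,
# so carving tools may still find the data
findmnt -n -o FSTYPE --target /tmp
```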
Regarding which files you should recover: try all of them and see if you have any luck.
From my understanding, files cannot be stored only in a timeshift snapshot – they must first be stored on the disk, and only then can timeshift make a backup inside the snapshot. But I have never used timeshift myself; maybe I just completely misunderstand how it works.
Deleting the snapshot files lost considerable data, including all files created after the aborted snapshot. The reboot that initially uncovered the problem led to a boot into a “basic” xfce, and searching for the work files in read-only mode from a live boot shows no files/folders created in /home/username after the snapshot. It seems to have behaved like a VMware snapshot, with files living in the snapshot itself.
I highly recommend testdisk, but definitely shut down the machine and use another disk (USB drive?) to boot, and avoid mounting the disk that may have your files at all. Mount read-only if you have to. Save the recovered files to a different drive as well, which can be the same USB drive you’re using for recovery. If testdisk doesn’t show the files (in my experience, on drives with significant free space they will almost certainly be there), you could try photorec, the companion app that does signature-based file searches.
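Roughly, the session could look like this — a sketch only, with example device names (/dev/sda, /dev/sdb1); check `lsblk` output on your machine before touching anything:

```shell
# From a live USB session, with the affected disk NOT mounted:
lsblk                                 # identify the affected disk, e.g. /dev/sda
sudo mount -o ro /dev/sda2 /mnt      # only if you must look around: read-only
sudo mkdir -p /media/recovery
sudo mount /dev/sdb1 /media/recovery  # a separate drive for recovered files
sudo testdisk /dev/sda                # interactive; write output to /media/recovery
sudo photorec /dev/sda                # fallback: signature-based carving
```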
Hey! My first venture into the Unix-like world was on a Pentium 133 gifted to me by my 2nd cousin. He introduced me to the open source world via OpenBSD. OpenBSD will always hold a special place in my heart but I absolutely eat up anything and everything open source.
IBM hosted that meeting but ultimately never did contribute any developers to the btrfs effort. That’s because IBM took a fairly cold, hard look at what their enterprise customers really wanted and would be willing to pay $$$ for, and the decision was made at a corporate level (higher up than the Linux Technology Center, although I participated in the company-wide investigation) that none of the OS’s IBM supported (AIX, z/OS, Linux, etc.) needed ZFS-like features, because IBM’s customers didn’t need them.
So it’s about money, and they don’t care about innovation or anything like it. Just like Microsoft. It’s such a shame that Red Hat was sold to these guys; Red Hat did so much good for the Linux community.
I also miss it. It took me a couple of years to warm up to it. Then, just when I’d started to really, really like it, it went downhill.
I actually stopped using Unity a few months before Canonical abandoned it, because with all the GNOME applications moving in a completely different direction it didn’t feel consistent at all anymore.