The axes aren’t labeled properly. That’s likely why we can’t make sense of the diagrams.
Everyone else’s life oscillates over time between positive and negative… OP’s life’s Y is X cubed. And it somehow contains big blue dots on the whole numbers… They consider that odd. And I’d agree.
I’ve seen like 5 posts about “AI BF/GF” today and it never ceases to surprise me how fucking easy it is to dupe people with these products, like holy shit humanity is fucked.
I’m always waiting for another ethical disaster trend to end but everybody is always in line for Mr Bonez Wild Ride.
If all you need is a one-sided conversation designed to make you feel better, LLMs are great at concocting such “pep talks”. For some, that just might be enough to make it believable. The Turing test was cracked years ago; only now do we have access to things that can do that for free*.
A pretty early chatbot called ELIZA simulated a non-directive psychotherapist. It kind of feels like they’ve improved hugely but not really changed much.
Nah, bullshit, so far these LLMs are as likely to insult or radicalize you as comfort you. That won’t ever be solved until AGI becomes commonplace, which won’t be for a long ass time. These products are failures at launch.
… Have you tried any of the recent ones? As it stands, ChatGPT and Gemini are both built with guardrails strong enough to require custom inputs to jailbreak, with techniques such as Reinforcement Learning from Human Feedback (RLHF) used to lobotomize misconduct out of the AIs.
It’s wild that people brag that it can do essentially the same thing as copying and pasting someone else’s basic code, just with a few extra imagined errors sprinkled in for fun. But I guess that just makes it more useful for pretending you aren’t, once again, literally copying someone else’s stuff.
It’s a search engine that makes up 1/8 of everything it says. But sure, it’s super useful.
Oh thanks, I really wanted to read another defence of an unethical product by some fanboy with no life. I’m so glad you managed to pick up on that based on my previous comments. I love it. You chose a great conversation to start here.
The tech is great at pretending to be human. It is simply a next “word” (or phrase) predictor. It is not good at answering obscure questions, writing code or making a logical argument. It is good at simulating someone.
It is my experience that it approximates a human well, but it doesn’t get the details right (like truthfulness or reflecting objective reality), making it useless for essay writing but great for stuff like character AI and other human simulations.
If you are right, give an actual logical response only a human is capable of, as opposed to a generic ad hominem. I repeat my question: have you actually used any of the GPT-3 era models?
It’s a very nuanced situation, but the people being sold these products and buying them are expecting a sentient robot lover. They’re getting another shitty chatbot that inevitably fails to meet bare minimum companionship standards, such as not berating you.
There currently exists no ethical use of LLM AI. Your comment can be construed as defence of malicious people and actions.
I’ve never met anyone who uses them, but I’m also not sure people actually think it’s sentient. I’m sure some do, but I’d assume the vast majority are just looking to have a conversation, and they don’t care if it’s with a person or a (pretty good) chat bot.
Also, there is a way to use it ethically. As the post mentions, run it locally and know what you’re doing with it. I don’t see any issues if you’re aware of what it is, just as I don’t see any issue using auto-correct or any other technology. We don’t need to go full Butlerian (yet).
You are coming at this from your perspective, which knows them not to be real. That’s not how the average moron thinks, and there are more of them than you think. And they absolutely believe there is a tiny sentient brain somewhere in there that is alive. I’m all for people doing what makes them happy, but this is also a loneliness-confirming hole to get trapped in, and it absolutely opens doors to influencing people through the imaginary friends they think they can trust.
I really recommend creating a compiler or an interpreter from scratch; don’t even use an IR like LLVM or MIR. Just hack and slash your way through a C compiler, it’s the simplest language most people know. Don’t care about ‘optimization’ at first, fuck optimization. Just grab Yacc/Lex or a PEG parser generator that works with the language you like most and have it generate assembly code for simple expressions. Like, make a bc(1) native compiler! Interpreters are fun too. You can use VMGEN to generate a superfast VM in C, and then bind it to another language using SWIG.
Also, never mind usability. Nobody is going to use your C compiler for any serious work; it’s just for education. Post it online to receive feedback.
You can start by writing DSLs. For example, I am implementing the ASDL language from this old paper.
Also, if you are going to do it as an example on your resume, just stop. Nobody cares, because it’s a trivial task to write a compiler, even if you write the backend, frontend, and the LP yourself! Do something money-loving people like, like making some bullshit mobile app that tracks your gym sweat volume.
The red squiggle instantly caught my eye, followed by the terrible indentation. Then I noticed the class is called Main without a public static void main method, which is odd. Wait, and Love is a string now?
Then I realized I too was saddened like Biden at this code snippet, so I guess in the end it redeemed itself
I could comment on the notion that one owns one’s girlfriend, but regardless: you should definitely self-host if you’re sharing deeply personal information with a program.
I’m thinking through it and I don’t think you should run a therapist off your phone either. Not even for privacy reasons, that just seems like a recipe for disaster.
EDIT: It seems the app sherpa has what you need. Just use one of the models found here, preferably 7b-chat Q4_K_S
Some people are energised by that kind of thing. However, if you don’t want to reveal things to humans, you can use (if you are more technical) llama.cpp
I don’t know how you’d see a memory bug in an out-of-order elevator, but I once saw and reported a wiring error in a working elevator. It made for an interesting talk at the reception desk, but since I could precisely describe what was wrong and the verifiable consequences, they took me seriously. And sent me a “Thank You” email later ;-)
You know how everything has LCD screens these days? And sometimes you’ll witness a good old crash? I know I’ve seen quite a few.
I’m guessing something like that.
programmer_humor