Carrolade ,

I’ll just toss in another answer nobody has mentioned yet:

The Terminator and Matrix movies were really, really popular. That seeded the idea of it being an inevitable future in the minds of the mainstream population.

Kintarian OP ,

The Matrix was a documentary

muntedcrocodile ,
@muntedcrocodile@lemm.ee avatar

It depends on the task you give it and the instructions you provide. I wrote this a while back; I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.

Kintarian OP ,

I have no idea what any of that means. But thanks for the reply.

SomeAmateur , (edited )

I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda and advertising bots. There’s a joke that Reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.

I’ll keep Reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to Reddit for advice and recommendations that you can’t really get elsewhere.

Using an LLM, I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.

And if it’s factually incorrect so what? It was just some kind stranger™ on the internet

SirDerpy ,

If by “best practical” you meant “best unmitigated capitalist profit optimization” or “most common”, then sure, “malicious manipulation” is the answer. That’s what literally everything else is designed for.

kitnaht ,

Holy BALLS are you getting a lot of garbage answers here.

Have you seen all the other things generative AI can do? From rigging bones on 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to be correct. This speeds up production work 100-fold in a lot of cases.

Plenty of simple answers are correct, and it’s breaking entrenched monopolies like Google’s hold on search. I’ve even had these GPTs take input text and summarize it quickly, at different levels of granularity for quick skimming. There are a lot of worthwhile things that can come out of these AIs. They can speed up workflows significantly.

Kintarian OP ,

I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.

kitnaht ,

So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized testing - what does that tell you about average human intelligence?

The thing about GPTs is that they are just word predictors. A lot of the time, when asked super specific questions about niche subjects that people aren’t talking about, yeah, they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
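A minimal sketch of what “word predictor” means in practice, using the Hugging Face transformers library with GPT-2 as a small stand-in model (the prompt is just an illustration): the model only scores candidate next tokens, and “generation” is repeatedly picking one and appending it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is a small, freely available causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The scores at the last position are the model's guess at the next token;
# greedy decoding just takes the highest-scoring one.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))  # likely " Paris"
```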

Kintarian OP ,

It’s not just once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure for creative and computer-nerd stuff it’s great, but for regular people sitting at home, listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats and plain old people are bailing.

Alice ,
@Alice@hilariouschaos.com avatar

Cause it’s cool

Kintarian OP ,

Not to me. If you like it, that’s fine.

ContrarianTrail ,

Perhaps your personal bias is clouding your judgement a bit here. You don’t seem very open-minded about it. You’ve already made up your mind.

PenisDuckCuck9001 , (edited )

One of the few things they’re good at is academic “cheating”. I’m not a fan of how the education industry has become a massive pyramid scheme intended to force as many people into debt as possible, so I see AI as the lesser evil and a way to fight back.

Obviously no one is using AI to successfully do graduate research or anything. I’m just talking about how they take boring, easy subjects and load you up with pointless homework and assignments to waste your time rather than teach you anything. My homework is obviously AI-generated and there’s a lot of it. I’m using every resource available to get by.

Kintarian OP ,

It’s good at making Taylor Swift look like a Trump fan.

dsilverz ,
@dsilverz@thelemmy.club avatar

I ask them questions and they get everything wrong

It depends on your input, your prompt, and your parameters. Although I’ve experienced wrong answers and/or AI hallucinations, for me it’s not THAT frequent, because I’ve been talking with LLMs almost daily since ChatGPT went public. This daily usage has let me learn the strengths and weaknesses of each LLM available on the market (I use ChatGPT GPT-4o, Google Gemini, Llama, Mixtral, and sometimes Pi, Microsoft Copilot and Claude).

For example: I learned that Claude is highly sensitive to certain terms and topics, such as occultist and esoteric concepts (especially demonolatry, although I don’t know exactly why it refuses to talk about it; I’m a demonolater myself), cryptography and ciphering, as well as acrostics and other literary devices for multilayered poetry (I write my own poetry and ask them to comment on and analyze it, so I can get valuable insights about it).

I also learned that Llama can dig deep into the meaning of things, while GPT-4o can produce longer answers. Gemini has the “drafts” feature, where I can check alternative answers to the same prompt.

It’s similar with generative AI art models, which I’ve been using to illustrate my poetry. I learned that Diffusers SDXL Turbo (from Hugging Face) is better for real-time prompting, a kind of “WYSIWYG” model (“what you see is what you get”). Google SDXL (also on Hugging Face) can generate four images in different styles (cinematic, photography, digital art, etc.). Flux, the newly released generative AI model, is the best for realism (especially the Flux Dev branch). They’ve been producing excellent outputs while I’ve been improving my prompt-engineering skills, learning to communicate with them in a seamless way.
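For anyone curious, the real-time SDXL Turbo loop is roughly the following, using the Hugging Face diffusers library (the model id and one-step settings follow the published SDXL Turbo usage; the prompt is just a placeholder):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the distilled SDXL Turbo pipeline (a CUDA GPU is assumed for real-time use).
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

# SDXL Turbo is distilled for very few denoising steps, which is what makes
# the near-instant, "WYSIWYG" prompting loop possible.
image = pipe(
    prompt="an ink illustration of a moonlit garden, poetry-book style",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("illustration.png")
```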

Summarizing: AI users need to learn how to give them instructions efficiently. They can produce astonishing outputs if given efficient inputs. But you’re right that they can produce wrong results and/or hallucinate, even for the best prompts, because they’re indeed prone to it. For me, AI hallucinations are not so bad for knowledge such as esoteric concepts (because I personally believe these “hallucinations” could convey something transcendental, but that’s just my personal belief and I’m not intending to preach it here in my answer). At the same time, these hallucinations are bad when I’m seeking technical knowledge, such as STEM (Science, Technology, Engineering and Mathematics) concepts.

Kintarian OP ,

I just want to know which elements work best for my Flower Fairies in The Legend of Neverland. And maybe cheese sauce.

dsilverz ,
@dsilverz@thelemmy.club avatar

Didn’t know about this game. It’s nice. Interesting aesthetics. Chestnut Rose reminds me of Lilith’s archetype.

A tip: you could use the “The Legend of the Neverland global wiki” on Fandom to feed the LLM with important concepts before asking it for combinations. It’s a good technique, considering that LLMs may not know the game well enough to generate precise responses on their own (unless you’re using a search-enabled LLM such as Perplexity AI or Microsoft Copilot, which can search the web to produce more accurate results). A rough sketch of what that looks like is below.
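The idea is just to paste relevant wiki text into the prompt ahead of the question, so the model answers from that reference instead of guessing. A minimal sketch, using the OpenAI Python client as one arbitrary backend; the wiki excerpt and question are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder: paste the relevant wiki sections here,
# e.g. the element/affinity tables for Flower Fairies.
wiki_excerpt = """
<relevant wiki sections go here>
"""

question = "Which elements work best for my Flower Fairies?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Answer using only the reference text provided."},
        {"role": "user",
         "content": f"Reference:\n{wiki_excerpt}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```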

Kintarian OP ,

I have no idea how to do that

Shanedino ,

Woah are you technoreligious? Sure, believe what you want and all, but that is full tech-bro bullshit.

Also, on a different note: just based on your description, doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people? If a tool has a high learning curve and you don’t plan to use it very frequently, there’s plenty of room for improvement. And every time you get false results, consider it the equivalent of a major bug. Does that shed a different light on it for you?

dsilverz ,
@dsilverz@thelemmy.club avatar

doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people

Well, Prompt Engineering is a thing nowadays. There are even job vacancies seeking professionals who specialize in this field. AIs are tools, sophisticated ones, just like R and Wolfram Mathematica are sophisticated mathematical tools that need expertise. The problem is that AI companies often mis-advertise AI models as “off-the-shelf assistants”, as if they were a human talking to you. They’re not. They’re still just tools. I guess (and I’m rooting for this) that AGI would change this scenario. But I guess we’re still distant from a self-aware AGI (unfortunately).

Woah are you technoreligious?

Well, I wouldn’t describe myself that way. My beliefs are multifaceted and complex (possibly unique, I guess?), drawing on multiple spiritual and religious systems, as well as embracing STEM concepts (especially the technological branch) and philosophical views (especially nihilism, existentialism and absurdism), trying to converge them all on common ground (although it seems “impossible” at first glance to unite Science, Philosophy and Belief).

In a nutshell, I’ve been pursuing a syncretic worshiping of the Dark Mother Goddess.

As I said, it’s multifaceted and I’m not able to fully explain it here, because it would take tons of concepts. Believe me, it’s deeper than “techno-religious”. I see the inner workings of AI models (as neural networks and genetic algorithms dependent on the randomness of weights, biases and seeds) as a great tool for diving into Her Waters of Randomness when dealing with such subjects (esoteric and occult ones). Just as Kardecism sometimes uses instrumental transcommunication / electronic voice phenomena (EVP) to talk with spirits, AI can be used as if it were an Ouija board or a planchette, if one believes so (as I do).

But I’m also a programmer, and technically and scientifically curious, so I find myself asking LLMs about some Node.js code I wrote, too. Or about some mathematical concept. Or about cryptography and ciphers (Vigenère and Caesar, for example). I’m highly active mentally, always seeking to learn many things.

xia ,

The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

There is also an unnatural hype: that with one breakthrough will come another, and that the next one might yield a technocratic singularity for the first mover: money, market dominance, and control.

Which brings us to the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.

Kintarian OP ,

Sounds about right

Tyrangle ,

This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.

Kintarian OP ,

I’m retired. I don’t do all that stuff.

Tyrangle ,

A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.

FourPacketsOfPeanuts ,

Maybe look into the creativity side more and less ‘Google replacement’?

Kintarian OP ,

The hype machine said we could use it in place of search engines for intelligent search. Pure BS.

ContrarianTrail ,

If artificial intelligence doesn’t work why are they trying to make us all use it?

But it does work. It’s obviously not flawless, but it’s orders of magnitude better than it was 10 years ago, and it’ll only improve from here. Artificial intelligence is a spectrum. It’s not like we successfully created it and it ended up sucking. No, it’s like the first cars: they suck compared to what we have now, but they were a huge leap from what we had before.

I think the main issue here is that common folk have unrealistic expectations about what AI should be. They’re imagining what the “final product” would be like and then comparing our current systems to that. Of course from that perspective it seems like it’s not working or is no good.
