
empireOfLove2 ,

They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and on anything requiring hard facts they would usually avoid answering or defer.

They’ve legitimately gotten worse over time. As user volume has gone up, necessitating faster, shallower model responses, and as further training on Internet content has meant models training on their own output, the models have gradually begun to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.

At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking they would be the golden goose that finally eliminates those stupid, whiny, expensive workers who always ask for annoying, unprofitable things like “paid time off” and “healthcare.” In reality, they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros currently raking in a few extra hundred billion dollars.

Kintarian OP ,

Now it’s degrading even faster as AI scrapes from AI in a technological circle jerk.

Carrolade ,

I’ll just toss in another answer nobody has mentioned yet:

Terminator and Matrix movies were really, really popular. This sort of seeded the idea of it being a sort of inevitable future into the brains of the mainstream population.

Kintarian OP ,

The Matrix was a documentary

muntedcrocodile ,

It depends on the task you give it and the instructions you provide. I wrote this a while back. I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.

Kintarian OP ,

I have no idea what any of that means. But thanks for the reply.

bionicjoey ,

A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit into anything of value for a company still requires a person, so the AI doesn’t add much; whatever it generates needs some kind of oversight.

HobbitFoot ,

The idea is that it can replace a lot of customer-facing positions that are manpower-intensive.

Beyond that, an AI can also act as an intern, assisting with low-complexity tasks the same way that Microsoft Office programs have replaced secretaries and junior human calculators.

Kintarian OP ,

I’ve always figured part of it is that businesses don’t like to pay for labor, and they’re hoping they can use artificial intelligence to get rid of the rest of us so they don’t have to pay us.

Blue_Morpho ,

Ignoring AI as hype is like ignoring spreadsheets as hype. “I can do everything with a pocket calculator! I don’t need stupid autofill!”

AI doesn’t replace people. It can automate tasks and reduce your workload, leaving you more time to solve problems.

I’ve used it for one-off scripts. I have friends who have done the same, and another friend who used it to create the boilerplate for a government contract bid that he won (millions in revenue for his company, of which he got tens of thousands in bonus as engineering sales support).

ProfessorScience ,

When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.

Now we’ve gotten a better sense of what the limitations of these tools actually are, and where the upper limits of these techniques lie. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute expectations stop matching reality. I mean, maybe some do. But most of the projects try to make the best of the tools as they are to keep the promises they made, for better or worse. And of course new ideas keep coming, and new entrepreneurs want a piece of the pie.

Lauchs ,

I think there’s a lot of armchair simplification going on here. It’s easy to call investors dumb, but it’s probably a bit more complex.

AI might not get better than it is now, but if it does, it has the potential to be a societally transformative technology, which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple, and even the much-derided Bitcoin.)

Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.

And in our limited time with AI, we’ve already seen scientific discoveries, terrifying advancements in warfare, and more.

Heck, even if AI only gets better at code (not unreasonable — sets of problems with defined goals/outputs, etc.), and even if it gets parts wrong, shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… Well, like Wu-Tang said, Cash Rules Everything Around Me.

Tl;dr: huge possibilities. Even if there’s only a small chance of an almost infinite payout, that’s a risk well worth taking.

SpaceNoodle ,

Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not to be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that might actually have some value.

Kintarian OP ,

I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

pimeys ,

And LLMs are mostly for investors, not for users. Investors see that you “do AI,” even if you just repackage GPT or Llama, and your Series A is 20% bigger.

some_guy ,

Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.

Kolanaki ,

The hype is also artificial, and usually created by the makers of the AI themselves. They want investors to give them boatloads of cash so they can cheaply grab a market they believe exists, before jacking up prices and making things worse once that investment money dries up. The problem is, nobody actually wants this AI garbage they’re pushing.
