
PeepinGoodArgs ,

I will be in a perfect position to snatch a discount H100 in 12 months

strawberry ,

womp womp

Aurenkin ,

Last time a batch of these popped up, they were saying OpenAI would be bankrupt in 2024, so I guess they’ve made it to 2025 now. I wonder if we’ll see similar articles again next year.

flambonkscious ,

The start(-up?)[sic] generates up to $2 billion annually from ChatGPT and an additional $ 1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.

I hope their reporting is better than their math…

Hector_McG ,

Probably used ChatGPT….

frauddogg ,
@frauddogg@lemmygrad.ml avatar

I see Scott Steiner has a hold of their calculator…

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

I hope not. I use it a lot for quick programming answers and prototypes, and for theory in my actuarial science MBA.

SoJB ,

Is 1) the fact that an LLM can be indistinguishable from your original thought and 2) an MBA (lmfao) supposed to be impressive?

chicken ,

I don’t think that person is bragging, just saying why it’s useful to them

yogthos OP ,
@yogthos@lemmy.ml avatar

I find you can just run local models for that. For example, I’ve been using GPT4All with the Phind model and it works reasonably well.
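
A minimal sketch of what that looks like with the GPT4All Python bindings, in case it’s useful. The Phind model filename below is a guess, so check the model list GPT4All shows you for the exact build it actually offers:

```python
# Minimal sketch: running a local model through the GPT4All Python bindings.
# The filename below is a guess -- check GPT4All's model list for the actual
# Phind (or other code-oriented) build available for download.
from gpt4all import GPT4All

# Downloads the model on first use (several GB); runs on CPU by default.
model = GPT4All("Phind-CodeLlama-34B-v2.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Write a Python function that returns the nth Fibonacci number.",
        max_tokens=256,
        temp=0.2,
    )
    print(reply)
```

As a rough rule of thumb, a 4-bit-quantized 7B model runs comfortably in about 8 GB of RAM, while a 34B build like Phind wants considerably more, which matters on older machines.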

driving_crooner ,
@driving_crooner@lemmy.eco.br avatar

How much computing power do they need? My PC is pretty old :/

Manmoth ,

I use it all the time for work, especially for long documents and formatting technical documentation. It’s all but eliminated my removed work. A lot of people are sour on AI because “it’s not going to deliver on generative AI”, etc., but it doesn’t matter. It’s super useful, and we’ve really only scratched the surface of what it can be used for.

queermunist ,
@queermunist@lemmy.ml avatar

Totally not a bubble though.

MajorHavoc ,

Yeah. It’s a legitimate business, where the funders at the top of the pyramid are paid by those that join at the bottom!

PanArab ,
@PanArab@lemmy.ml avatar

I hope so! I am so sick and tired of AI this and AI that at work.

chemicalwonka ,
@chemicalwonka@discuss.tchncs.de avatar

PLEASE!!!

arran4 ,

This sounds like FUD to me. If it were true, they’d be acquired pretty quickly.

jackyalcine ,

They’re wholly owned by Microsoft so it’d probably be mothballed at worst.

arran4 ,

For another conversation I need some evidence of that; where did you find it?

NigelFrobisher ,

Oh no!

ryan213 ,
@ryan213@lemmy.ca avatar

Anyway…

FaceDeer ,
@FaceDeer@fedia.io avatar

OpenAI is no longer the cutting edge of AI these days, IMO. It'll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.

mozz ,
@mozz@mbin.grits.dev avatar

If they closed down, and the people still aligned with safety had to take up the mantle, that would be fine.

If they got desperate for money and started looking for people they could sell their soul (more than they have already) to in exchange for keeping the doors open, that could potentially be pretty fuckin bad.

FaceDeer ,
@FaceDeer@fedia.io avatar

Well, my point is that it's already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.

I wouldn't be surprised if "safety alignment" is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that's likely to lock away the things they build if they turn out to be too neat.

pizza_the_hutt ,

There is no AI Revolution. There never was. Generative AI was sold as an automation solution to companies looking to decrease labor costs, but it’s not actually good at doing that. Moreover, there’s not enough good, accurate training material to make generative AI that much smarter or more useful than it already is.

Generative AI is a dead end, and big companies are just now starting to realize that, especially after the Goldman Sachs report on AI. Sam Altman is just a snake oil salesman, another failing-upwards executive who told a bunch of other executives what they wanted to hear. It’s just now becoming clear that the emperor has no clothes.

SkyNTP ,

Generative AI is not smart to begin with. LLMs are basically just compressed versions of the internet that predict statistically what a sentence needs to be to look “right”. There’s a big difference between appearing right and being right. Without a critical approach to information, independent reasoning, or individual sensing, these AIs are incapable of any meaningful intelligence.

In my experience, the emperor and most people around them still haven’t figured this out yet.
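
A deliberately toy illustration of that “statistical next-word prediction” point, with made-up data rather than anything a real LLM does (real models are transformers over tokens, not bigram counts), but the gap between looking right and being right is the same:

```python
# Toy next-word predictor: it only knows which words followed which in its
# "training" text, and has no notion of whether a continuation is true.
from collections import Counter, defaultdict
import random

corpus = "the emperor has no clothes . the emperor has new clothes .".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`.
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

# Generate a plausible-looking sentence one word at a time.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "the emperor has new clothes ." -- fluent, but it knows nothing
```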

yogthos OP ,
@yogthos@lemmy.ml avatar

This was my favorite take on it: archive.ph/lkpuA

anachronist ,

Generative AI is just classification engines run in reverse. Classification engines are useful, but they’ve been around and making incremental improvements for at least a decade. Also, just like self-driving cars, they’ve been writing checks they can’t honor. For instance, legal coding and radiology were supposed to be automated by classification engines a long time ago.

bizarroland ,

It's sort of like how you can create a pretty good text message on your phone using voice-to-text, but no courtroom is allowing AI transcription.

There's still too much risk that it will capitalize the wrong word, replace a word with one close to what was said, or do something else entirely unforeseen for us to trust it with our legal process.

If they could guarantee 100% accurate transcription of the spoken word to text, it would put the entire field of court stenographers out of business and generate tens of millions of dollars' worth of digital contracts for the company that figures it out.

It's not going to happen, because even today a phone can't tell the difference between the word holy and the word holy. (Wholly)
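
A toy sketch of that homophone problem, with made-up counts rather than any real speech recognizer: since “holy” and “wholly” sound identical, the transcription engine can only pick between them by guessing from the surrounding words, and that kind of statistical guess is exactly what you can’t stake a court record on:

```python
# Made-up bigram counts standing in for a language model; not real ASR.
bigram_counts = {
    ("holy", "water"): 120, ("holy", "owned"): 1,
    ("wholly", "owned"): 95, ("wholly", "water"): 1,
}

def pick_homophone(candidates, next_word):
    # The audio is identical, so choose whichever candidate more often
    # precedes the following word in the (made-up) counts.
    return max(candidates, key=lambda w: bigram_counts.get((w, next_word), 0))

print(pick_homophone(["holy", "wholly"], "owned"))  # wholly
print(pick_homophone(["holy", "wholly"], "water"))  # holy
```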

riskable ,
@riskable@programming.dev avatar

Now’s the time to start saving for a discount GPU in approximately 12 months.

FaceDeer ,
@FaceDeer@fedia.io avatar

They don't use GPUs; they use more specialized devices like the H100.

tyler ,

Everyone that doesn’t have access to those is using GPUs, though.

FaceDeer ,
@FaceDeer@fedia.io avatar

We are talking specifically about OpenAI, though.

YurkshireLad ,

350,000 servers? Jesus, what a waste of resources.

yogthos OP ,
@yogthos@lemmy.ml avatar

just capitalist markets allocating resources efficiently where they’re needed

Manmoth ,

It’s a brand new, highly competitive technology, and ChatGPT has first-mover status with a trailer load of capital behind it. They are going to burn a lot of resources right now to innovate quickly and reduce latency, etc. If they reach a successful product-market fit, getting costs down will eventually be critical to it actually being a viable product. I imagine they will pipe this back into ChatGPT for some sort of AI-driven scaling solution for their infrastructure.

TL;DR - It’s kind of like how a car uses most of its resources going from 0-60 and then efficiencies kick in at highway speeds.

Regardless, I don’t think they will have to worry about being profitable for a while. With the competition heating up, I don’t think there is any way they don’t secure another round of funding.
