That’s what happens when you defederate from pro-Marxist instances. Zionists and anti-Communists start joining this anti-Marxist space and you have a whole new Red Scare on your hands.
I mean, if LLM/diffusion-type AI is a dead end and the extra investment happening now doesn't lead anywhere beyond that, then yes, the bubble will likely burst.
But this kind of investment could create something else. We'll see. I'm 50/50 on its potential myself. I think it's more likely that a lot of loud-talking con artists will soak up all the investment and deliver nothing.
It's looking like a dead end. The content that can be fed into the big LLMs has already been fed in. New material is a combination of actual humans and output generated by LLMs. That runs into an ouroboros problem where the model just eats its own output.
Yeah, I was thinking more about whether there's either an evolutionary or a revolutionary improvement (or some movement toward AGI). For me it's better if not, so I get to keep my job for a few more years. But my general feeling is that with the cash injection, there's some chance of a breakthrough.
I mostly agree, with the caveat that 99% of AI usage today is just stupid gimmicks, and very few people or companies are actually using what LLMs offer effectively.
It kind of feels like when schools got sold those Smart Whiteboards that were supposed to revolutionize teaching in the classroom, only to realize the issue wasn’t the tech, but the fact that the teachers all refused to learn and adapt and let the things gather dust.
I think modern LLMs should be used almost exclusively as an assistive tool to help empower a human worker further, but everyone seems to want an AI that you can just tell ‘do the thing’ and have it spit out a finalized output. We are very far from that stage in my opinion, and as you stated LLM tech is unlikely to get us there without some sort of major paradigm shift.
To be fair, electronic whiteboards are some of the jankiest piles of trash I’ve ever had to use. I swear to God you need to re-calibrate them every 5 minutes.
Bubbles have nothing to do with the technology; the tech is just a tool to build the hype. The bubble will burst regardless of the success of the tech. At most, success will slightly delay the burst, because what is bursting isn't the tech, it's the financial structures around it.
See Sun Microsystems after the .com bubble burst. They produced a lot of the servers that .com companies were using at the time, shriveled up afterward, and were eventually absorbed by Oracle.
Why did Oracle survive the same period? Because they latched onto a traditional Fortune 500 market and have never let go, down to this day.
I doubt it. Regardless of the current stage of machine learning, everyone is now tuned in and pushing the tech. Even if LLMs turn out to be mostly a dead end, everyone investing in ML means that the ability to do LOTS of floating point math very quickly without the heaviness of CPU operations isn’t going away any time soon. Which means nVidia is sitting pretty.
As far as I understand, the GPUs that LLMs use aren’t exactly interchangeable with your regular GPU. Also, no one needs that many GPUs for any traditional use cases.
If you pay close attention to the "power" of your toilet's flush… you'll notice when it's getting close to a clog. That flush will make you suspect something isn't right. And if you neglect it, you will sooner or later realize it was in fact on its way to clogging.
The life of a homeowner. Many of you have NO idea the amount of chit you need to learn and pay attention to on a daily basis to make sure your home is well maintained. Adulting fkin sucks.
The idea that wolf packs are led by a merciless dictator, or alpha wolf, comes from old studies of captive wolves. In the wild, wolf packs are simply families.
I mean. They are a matriarchy. So one could say the oldest is the alpha female. Who only accepts a new female in the group if she’s able to sexually satisfy the matriarch. And in case you want to see what bonobo gay sex looks like… don’t. It looks like two sopping tumors rubbing against each other.
What, are we 80 years old here? Posting Facebook memes, mad that the damn kids are always on their damn phones?
I am a travel and outdoors enthusiast; I love being outside and experiencing a new city or the wilderness. But phones are also powerful technology that lets us find directions or look up where to go.
The real issue isn't that we are using them; it's that they are being used to collect and sell our data for advertising, and that algorithms are designed to keep us on our phones instead of experiencing the world.
It’s boomer shit to post this meme and be like, society bad, people on phones.
The classic meme goes “not a phone in sight. Just people enjoying the moment”. This is a play on that by swapping the words and romanticizing phones instead.
Worst one is probably Apple. They just announced "Apple Intelligence," which is just ChatGPT, whose maker OpenAI counts Microsoft as its largest shareholder. Figure that one out.
Well, most of the requests are handled on device with their own models. If it’s going to ChatGPT for something it will ask for permission and then use ChatGPT.
So Apple Intelligence isn't all ChatGPT. I think this deserves to be mentioned, since a lot of the processing will be on device.
Also, I believe part of the deal is that ChatGPT can't save anything, and Apple is anonymising the requests too.
Is this conjecture, or can you provide some further reading, in the interest of not spreading misinformation?
Edit: I decided to read the info from Apple.
With Private Cloud Compute, Apple sets a new standard for privacy in AI, with the ability to flex and scale computational capacity between on-device processing, and larger, server-based models that run on dedicated Apple silicon servers. When requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple and is only used to fulfill the user’s requests, and independent experts can verify this privacy.
Additionally, access to ChatGPT is integrated into Siri and systemwide Writing Tools across Apple’s platforms, allowing users to access its expertise — as well as its image- and document-understanding capabilities — without needing to jump between tools.
Say what you will about Apple, but privacy isn’t a concern for me. Perhaps, some independent experts will verify this in time.
Well, most of the requests are handled on device with their own models. If it’s going to ChatGPT for something it will ask for permission and then use ChatGPT.
I feel I was pretty explicit in explaining how some requests will go to ChatGPT.
Do you think that if you enter into a contract with a company like Apple, they'll just be like, "aww shit, they weren't supposed to do that, anyway let's carry on"?
No. This would open OpenAI up to potential lawsuits.
Even if they did save stuff, it gets anonymised by Apple before even being sent to ChatGPT's servers.
I don't want my comments here to be received as shilling Apple; it's more that I want them to be based on the actual information that has been provided, not on opinion pieces.
The fact is, if they were caught saving data, Apple would just end the contract. Is it worth it for them to lose out on that cash for the sake of using the data, when they can just use all the other sources where they are allowed to do that?
Anyway, I don’t care what anonymised data they may or may not save. It won’t be tied to me.
Edit: Do you have some information on any existing lawsuits and the contracts they broke?
There’s kind of a difference between “we scraped the internet and decided to use copyrighted content anyways because we decided to interpret copyright law as not being applicable to the content we generate using copyrighted content” (omegalul) and “we explicitly agreed to a legally-binding contract with Apple stating we won’t do that”.
That's just not true. Most requests are handled on-device. If the system decides a request should go to ChatGPT, the user is prompted to agree, and no data is stored on OpenAI's servers. Plus, all of this is opt-in.
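To make that flow concrete, here's a minimal sketch of the routing logic in Swift. None of these types or names are Apple's actual API; it just illustrates "on-device first, ask before anything goes to ChatGPT":

```swift
import Foundation

// Hypothetical router: handle requests on-device when possible,
// and ask the user before anything is sent to ChatGPT.
enum RouteDecision {
    case onDevice
    case chatGPT
    case declined
}

struct AssistantRouter {
    // Stand-in for the on-device model's capability check (an assumption,
    // not how Apple actually decides).
    func canHandleLocally(_ request: String) -> Bool {
        request.count < 200
    }

    func route(_ request: String, userApprovesChatGPT: () -> Bool) -> RouteDecision {
        if canHandleLocally(request) {
            return .onDevice          // never leaves the phone
        }
        // Nothing is sent to OpenAI unless the user explicitly agrees.
        return userApprovesChatGPT() ? .chatGPT : .declined
    }
}

let router = AssistantRouter()
let longRequest = String(repeating: "Summarize this long article about the AI bubble. ", count: 10)
let decision = router.route(longRequest) {
    // In the real system this would be a permission dialog.
    print("Send this request to ChatGPT? (simulated: yes)")
    return true
}
print(decision)
```

The point is simply that the permission prompt sits between the local model and any third-party service, and declining keeps the request from leaving the device.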
I think there’s a larger picture at play here that is being missed.
Getting the weather has been a standard feature for years now. Nothing AI about it.
What is "AI" is: "Hey Siri, what is the weather at my daughter's recital coming up?"
The AI processing, calculated on-device if what they claim is true, is:
the determination of who your daughter is
What is a recital? An event? Are there any upcoming calendar events that match this concept?
Is the “daughter” associated with this event by description or invitation? Yes? OK, what’s the address?
Submit zip code of recital calendar event involving the kid to the weather API, and churn out a reply that includes all this information…
Well {Your phone contact name}, it looks like it will {remote weather response} during your {calendar event from phone} with {daughter from contacts} on {event date}.
That is the idea behind splitting on-device and cloud processing. The phone already has your contacts and calendar and does that work offline, rather than educating an online server about your family, events, and location, and it requests the bare minimum from the internet; in this case, nothing more than if you opened the weather app yourself and put in a zip code.
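A rough sketch of that split, with entirely made-up types (this is not Apple's real API, just an illustration of what can stay on-device and how little has to go out):

```swift
import Foundation

// Hypothetical on-device data the phone already has.
struct Contact { let name: String; let relationship: String }
struct CalendarEvent { let title: String; let date: Date; let zipCode: String; let attendee: String }

// Runs entirely on-device: resolve "my daughter" and "recital"
// against local contacts and the calendar.
func resolveRecitalQuery(contacts: [Contact], events: [CalendarEvent]) -> (Contact, CalendarEvent)? {
    guard let daughter = contacts.first(where: { $0.relationship == "daughter" }),
          let recital = events.first(where: {
              $0.title.localizedCaseInsensitiveContains("recital") && $0.attendee == daughter.name
          })
    else { return nil }
    return (daughter, recital)
}

// The only thing that leaves the device: a zip code and a date,
// no more than if you opened the weather app yourself.
func weatherRequest(zipCode: String, date: Date) -> String {
    "GET /forecast?zip=\(zipCode)&date=\(Int(date.timeIntervalSince1970))"
}

let contacts = [Contact(name: "Maya", relationship: "daughter")]
let events = [CalendarEvent(title: "Spring recital",
                            date: Date().addingTimeInterval(3 * 86_400),
                            zipCode: "90210",
                            attendee: "Maya")]

if let (daughter, recital) = resolveRecitalQuery(contacts: contacts, events: events) {
    print(weatherRequest(zipCode: recital.zipCode, date: recital.date))
    print("Well, it looks like it will rain during your \(recital.title) with \(daughter.name).")
}
```

All the personal-context reasoning happens before anything touches the network; the weather API only ever sees a zip code and a date.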
Voice processing is AI and was done by Apple servers. Previously, only the keyword "Hey Siri" was local. Onboard AI chips will allow this to be local. The actual queries will go to the servers. Phones do not have the power to run a useful LLM locally, at least not with the near-instantaneous response times phone users expect. A 56-watt, 128 GB RAM M3 Max does around 8.5 tokens/second.
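For a rough sense of what that throughput means, a back-of-the-envelope check (the ~60-token reply length is my assumption; the 8.5 tokens/second figure is the one cited above):

```swift
// Back-of-the-envelope latency at the cited generation speed.
// 8.5 tokens/second is the M3 Max figure quoted above; a reply of
// roughly 60 tokens (a couple of spoken sentences) is an assumption.
let tokensPerSecond = 8.5
let typicalReplyTokens = 60.0
let secondsToGenerate = typicalReplyTokens / tokensPerSecond
print(String(format: "≈ %.1f s to generate the full reply", secondsToGenerate))
// Roughly 7 seconds, well short of the near-instant responses phone users expect.
```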
Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.
It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.
EDIT: Finished looking for what I thought I remembered…
Additionally, Siri has been locally processed since iOS 15.
Forgive me, I'm no AI expert and can't fully relate the tokens-per-second measurement to the average query Siri might handle, but I will say this:
Even in your article, only the largest model ran at around 8 tokens/second; the others ran much faster, and none of them were optimized for a specific task, they were just being benchmarked.
Would it be impossible for Apple to be running an optimized model specific to expected mobile tasks, and leverage their own hardware more efficiently than we can, to meet their needs?
I imagine they cut out most worldly knowledge and use a lightweight model, which is why there is still a need to hand off to ChatGPT or Apple's servers for some requests. Would this let them trim Siri down to perform well enough on phones for most requests? They also advertised launching AI on M1 and M2 chip devices, which are not M3 Max either…
Literally not what people are talking about. It's the "AI" part of the task that doesn't leave the device (unless it prompts to ask ChatGPT). Not that it can magically glean live info without making any request to the web…
Jeeze, fucking… get your shit straight, making me defend Apple… Fucking do better.
Not true. Most if not all requests are handled by Apple's own models, on device or on their own servers. When it does use OpenAI, you need to give it permission each time.
Tankies keep on posting walls of text in !memes about communism, or was it the liberals, anyway, people I don't like are posting walls of text about communism in !memes… I love this post; for next time, can you draw some conclusions from the survey so I don't have to think too hard about these numbers? I can't count past 10.