

@[email protected]


QuadratureSurfer

Technically, generative AI will always give the same answer when given the same input. But in practice a random "seed" is mixed in to vary the output, so it can give different answers even when you ask it the same question.
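A minimal sketch of that seed idea, using Python's stdlib RNG as a stand-in for a real model's sampler (the candidate replies are obviously fabricated):

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    # Toy stand-in for an LLM's sampling step: a seeded RNG picks among
    # candidate completions, the way a sampling seed varies real output.
    rng = random.Random(f"{seed}:{prompt}")  # string seeding is reproducible
    candidates = ["sure", "maybe", "definitely", "unlikely"]
    return rng.choice(candidates)

# Same question + same seed -> the same answer, every run.
assert sample_reply("Is it raining?", seed=1) == sample_reply("Is it raining?", seed=1)
```

With a different seed mixed in, the same prompt can land on a different candidate, which is the randomness users actually see.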

QuadratureSurfer

If you think that “pretty much everything AI is a scam”, then you’re either setting your expectations way too high, or you’re only looking at startups trying to get the attention of investors.

There are plenty of AI models out there today that are open source and can be used for a number of purposes: Generating images (stable diffusion), transcribing audio (whisper), audio generation, object detection, upscaling, downscaling, etc.

Part of the problem might be how you define AI… it's a much broader term than what I think you're trying to convey.

QuadratureSurfer

Sure, but don't let that feed into the sentiment that AI = scams. It's far too broad a term, covering a ton of different applications that already work, to be used in that way.

And there are plenty of popular commercial AI products out there that work as well, so trying to say that “pretty much everything that’s commercial AI is a scam” is also inaccurate.

We have:
  • Suno's music generation
  • NVIDIA's upscaling (DLSS)
  • Midjourney's image generation
  • OpenAI's ChatGPT

So instead of trying to tear down everything and anything “AI”, we should probably just point out that startups using a lot of buzzwords (like “AI”) should be treated with a healthy dose of skepticism, until they can prove their product in a live environment.

QuadratureSurfer

I wish more companies would do something like this, rather than just shutting everything down and leaving everyone with nothing.

QuadratureSurfer (edited)

Well… good thing I’ve been buying what I can through GOG… but this is terrible news, especially with the way Microsoft has been shutting down gaming studios recently.

Edit: meh, this just sounds like clickbait:

  • The leak comes from an unknown and unreliable source in the gaming industry.
  • Microsoft’s acquisition of Activision Blizzard faced regulatory challenges, making the merger with Valve unlikely.
QuadratureSurfer

Looks like a separate element that comes after the LLM summary which can be removed by ad blockers. That is, if you’re still using Google search…
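For instance, a uBlock Origin cosmetic filter along these lines could hide that element. The selector below is purely a guess; the actual class/attribute Google uses for the AI overview container is not documented and changes over time:

```
! Hypothetical uBlock Origin rule; the real container selector may differ:
google.com##div[data-attrid="ai-overview"]
```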

QuadratureSurfer

Actually, if this is the requirement, then this means our data isn’t leaving the device at all (for this purpose) since everything is being run locally.

QuadratureSurfer

Since everything is being run through a local LLM, this will most likely show up as extra RAM usage rather than SSD usage, assuming that they aren't saving these images to disk anywhere.
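Back-of-the-envelope math for what a local model costs in RAM just to hold its weights (the 3B parameter count and the precisions below are illustrative assumptions, not Microsoft's published figures):

```python
def model_ram_gb(n_params: float, bytes_per_weight: float) -> float:
    """Rough lower bound on RAM needed to keep the weights resident."""
    return n_params * bytes_per_weight / 1024**3

# Hypothetical 3B-parameter on-device model:
fp16 = model_ram_gb(3e9, 2.0)   # ~5.6 GB at 16-bit precision
int4 = model_ram_gb(3e9, 0.5)   # ~1.4 GB if 4-bit quantized
```

Activations, the OS, and any image buffers come on top of that, but none of it implies writing those captured images to the SSD.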

QuadratureSurfer

The whole thing is going to be run on a local LLM. They don't have to upload that data anywhere for this to work (it will work offline). But considering what they already collect, Microsoft is going to have to do a lot to prove that they aren't uploading it anyway.

QuadratureSurfer

Very true… what I meant to say was:
[…] then this means our data shouldn’t need to leave the device at all […]

QuadratureSurfer

Can you provide some context for this? Which petition is this about?

QuadratureSurfer

  • Downloading machine learning models
  • Data for training ML models
  • Training ML models
  • Gaming (the games themselves or saving replays)
  • Backing up movies/videos/images, etc.
  • Backing up music

Take your pick, feel free to mix and match or add on to the list.

QuadratureSurfer

I agree, but it’s one thing if I post to public places like Lemmy or Reddit and it gets scraped.

It’s another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

Not only that, but the trained model will have internal knowledge of things that are sure to give anxiety to any cyber security experts. If users know how to manipulate the AI model, they could cause the model to divulge some of that information.

QuadratureSurfer

Feel free to educate us instead of just saying the equivalent of “you’re wrong and I hate reading comments like yours”.

But I think, in general, the alteration to Section 230 that they are proposing makes sense as a way to keep these companies in check for practices like shadowbanning, especially if those tools are abused for political purposes.

QuadratureSurfer

There’s a place for AI in NPCs but developers will have to know how to implement it correctly or it will be a disaster.

LLMs can be trained on specific characters and backstories, or even “types” of characters. If they are trained correctly they will stay in character as well as be reactive in more ways than any scripted character could ever do. But if the Devs are lazy and just hook it up to ChatGPT with a simple prompt telling it to “pretend” to be some character, then it’s going to be terrible like you say.

Now, this won't work very well for games where you're trying to tell a story like Baldur's Gate. Instead, this is better suited to open-world games where the player interacts with random characters that don't need to follow specific scripts.

Even then it won't be everything. Just because an LLM can say something "in-character" doesn't mean it will line up with its in-game actions, so additional work will be needed to tie actions to the proper kind of responses.
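One hedged sketch of what "tying actions to responses" could look like: keep the character sheet in the system prompt and only let the model choose from actions the engine can actually execute. Every name here is hypothetical, not any real game's implementation:

```python
def build_npc_prompt(name, backstory, world_state, allowed_actions):
    # The engine owns the action list; the model only picks from it, so
    # dialogue can't promise behavior the game can't actually perform.
    return (
        f"You are {name}. Backstory: {backstory}\n"
        f"Current world state: {world_state}\n"
        "Stay in character at all times. End every reply with exactly one "
        f"action from: {', '.join(allowed_actions)}."
    )

prompt = build_npc_prompt(
    "Mira the blacksmith",
    "a war veteran who distrusts the local guard",
    "the player just returned her stolen hammer",
    ["THANK_PLAYER", "OFFER_DISCOUNT", "IGNORE"],
)
```

The game code would then parse the trailing action token out of each reply and feed it back into the engine, rejecting anything outside the allowed list.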

If a studio is able to do it right, this has game changing potential… but I’m sure we’ll see a lot of rushed work done before anyone pulls it off well.

QuadratureSurfer

@sugar_in_your_tea proposed this theory the other day, and I think it makes a lot of sense. A lot of journalists are feeling threatened by the onslaught of LLMs so I would expect to see a lot more news attempting to shine a negative light on LLMs in any way possible.

QuadratureSurfer

A very useful video that explains what Quantum Internet is… and what it isn’t:

TL/DW: A big misconception here has to do with Quantum entanglement. Quantum Entanglement in Quantum Internet doesn’t mean that you can transfer data at speeds faster than light.

It’s true that this connection would be “ultra secure” but this would be very inefficient (slow) and it wouldn’t be reliable in a noisy environment. It would probably be most useful for some sort of authentication protocol/key sharing.
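For that key-sharing use case, BB84-style quantum key distribution is the textbook example. A purely classical simulation of its basis-sifting step (no real qubits, and no eavesdropper or channel noise modeled) looks roughly like:

```python
import random

def bb84_sift(n_bits: int, rng: random.Random):
    """Simulate BB84 sifting: keep only the positions where Alice's and
    Bob's randomly chosen measurement bases happen to match (~half)."""
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    key_alice = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
    key_bob = list(key_alice)  # identical when nobody eavesdropped
    return key_alice, key_bob

key_a, key_b = bb84_sift(64, random.Random(0))
assert key_a == key_b  # both ends now share a secret key
```

Roughly half the transmitted bits get thrown away in sifting, which is part of why this is slow and better suited to sharing keys than to moving bulk data.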

QuadratureSurfer

Relevant video for explaining quantum internet as well as clearing up some misconceptions about what quantum internet can and can’t do:

QuadratureSurfer

Are you saying “No… let’s not advance mathematics”? Or… “No, let’s not advance mathematics using AI”?

QuadratureSurfer

Just wait till someone creates a manically depressed chatbot and names it Marvin.

QuadratureSurfer

This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

QuadratureSurfer

I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.

Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn't mean that the AI is "fake". Getting an AI model to work well takes a lot of man-hours to continually train and improve it, as well as to make sure that it is performing well.

Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn’t get you the kind of data that you get when you actually put it into a real world environment.

In the end it looks like there are additional advancements needed before a model like this can be reliable, but even then someone should be asking if AI is really necessary for something like this when there are more reliable methods available.

QuadratureSurfer

After reading through that wiki, that doesn’t sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.

That’s not to say that this couldn’t be used to fake an AI, it’s just not sold this way, and in many applications it wouldn’t be able to compete with the already existing ML models.

Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).

I’m wondering if the misunderstanding here is based on the sections here related to AI work? The kind of AI work that you would do with Turkers is the kind of work that’s necessary to prepare the data for it to be used on training a machine learning model. Things like labelling images, transcribing words from images, or (to put it in a way that most of us have already experienced) solving captchas asking you to find the traffic lights (so that you can help train their self-driving car AI model).
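The usual pattern with that kind of crowdsourced labeling is redundancy plus majority vote: several workers label the same item and their answers are aggregated. A sketch (the labels are made up):

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """Majority vote over redundant crowd labels for a single item."""
    label, votes = Counter(worker_labels).most_common(1)[0]
    return label, votes / len(worker_labels)

# Five workers labeled the same image; the agreed label goes into the
# training set, and low-agreement items get re-queued for more review.
label, agreement = aggregate_labels(["cat", "cat", "dog", "cat", "cat"])
assert label == "cat" and agreement == 0.8
```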

QuadratureSurfer

It becomes easy to do something like this once we start vilifying others and thinking that they “deserve it”.

In this case according to the man that threw water, the homeless person had a history of sexual harassment and being violent towards the attendees.

We see this all the time in politics. We’re so used to attacking the other side verbally that when one side says something offensive to the other side, physical fights can break out.

Image of apology here:

QuadratureSurfer

But what app did you use to access OSM and download the maps for offline use… was it a web browser? OsmAnd? Vespucci?

QuadratureSurfer

Do you have a source for those scientists you’re referring to?

I know that LLMs can be trained on data output by other LLMs, but you’re basically diluting your results unless you do a lot of work to clean up the data.

I wouldn’t say it’s “impossible” to determine if content was generated by an LLM, but I agree that it will not be reliable.
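On the cleanup point, even crude heuristics catch a lot of the duplicated and degenerate text that model-generated corpora accumulate. A sketch with arbitrary illustrative thresholds:

```python
def clean_corpus(docs, min_words=20, max_dup_ratio=0.3):
    """Drop near-empty docs, exact duplicates, and docs dominated by one
    repeated line (a common artifact of model-generated text)."""
    seen, kept = set(), []
    for doc in docs:
        if len(doc.split()) < min_words:
            continue  # too short to be useful training text
        if doc in seen:
            continue  # exact duplicate
        lines = [line for line in doc.splitlines() if line.strip()]
        if len(lines) > 3:
            top = max(lines.count(line) for line in set(lines))
            if top / len(lines) > max_dup_ratio:
                continue  # degenerate repetition
        seen.add(doc)
        kept.append(doc)
    return kept
```

Real pipelines add near-duplicate hashing, perplexity filters, and the like, but the principle is the same: dilute the junk before it reaches training.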

QuadratureSurfer

Looks like he instantly got VAC banned with that triple headshot?

Hello GPT-4o

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds,...

QuadratureSurfer

The demo showcasing integration with BeMyEyes looks like an interesting way to help those who are blind.

QuadratureSurfer

I would be careful trusting everything said in this video and taking it at face value.

He touches on a broad range of different AI related news, but doesn’t seem to fully grasp the technology himself (I’m basing this statement on his “evidence” from the 8 min mark).

He seems to be running a channel that’s heavily centered on stock market related content. And it feels like he’s putting his own spin on every topic he touches in this video.

Overall, it's not the worst video, but I would rather base my information on better-informed sources.

What he should have done was to set the baseline by defining what AI actually is and then proceed to compare what these companies are doing with that definition. Instead we have a list of AI news stories covering Amazon Fresh Stores, Gemini, ChatGPT, and Copilot (powered by ChatGPT) and his own take on how those stories mean that everything is faked.

QuadratureSurfer

This video should have more accurately been labelled, “Things that make AI Look Bad” rather than attempting to prove that AI was faked.

QuadratureSurfer

Steam doesn’t control the region locks.

The publisher (Sony) is the one that makes changes to their store page which affects where it can be sold.

QuadratureSurfer

That makes sense, but I haven’t seen any official announcement from Steam saying that they did this. Only speculation from random people. Any documentation I can find just seems to point to this being a decision that’s made by the company releasing the game (or in this case Sony as the publisher).

Besides, only a few hours ago 3 new countries were added to the restricted list:…

I doubt that Steam is still trying to block additional countries given that Sony has already announced that the PSN account requirement is being withdrawn.

QuadratureSurfer (edited)

Better/additional info here:…/helldivers-2-community-manager-s…


“Generally it’s not a good idea to tell people to refund and leave negative reviews when you’re a community manager. TIL,” Spitz said. “I appreciate all the support and I appreciate even more that everyone can play the game again without restrictions. I knew I was taking a risk with what I said about refunding and changing reviews. I stand by it. It was my job to represent the community, that’s what I did.”

They added: “I wanted to work for Arrowhead because they’re my all-time favorite studio. I got that chance. I’m thankful for that opportunity. I’d happily continue working for them if I had the choice, but that isn’t up to me or anyone else in here. I can walk away happy and I don’t want anyone causing trouble on my behalf, especially not to people I still have a lot of care and respect for.”

This definitely sounds like Sony wanted them out and Arrowhead wanted them to stay.

QuadratureSurfer

Looks like someone setup a petition for Spitz to get rehired.…/re-hire-the-legendary-community-mana…

QuadratureSurfer

So raytracing will be supported in iPad apps now…

So far the M4 seems to only be announced for the iPad.

QuadratureSurfer (edited)

Games made by the studios being closed:

Arkane Austin:
  • Blade (Marvel game (not?) in development)
  • Prey Digital Deluxe
  • Prey Mooncrash
  • Dishonored 2
  • Dishonored: Death of the Outsider
  • Dishonored: Dunwall City Trials
  • Dishonored: The Knife of Dunwall
  • Dishonored: The Brigmore Witches
  • Dishonored: Void Walker's Arsenal
  • Arx Fatalis

Tango Gameworks:
  • Hi-Fi Rush
  • Ghostwire: Tokyo
  • The Evil Within
  • The Evil Within 2

Alpha Dog Games:
  • Wraithborne (iOS, Android)
  • MonstroCity: Rampage (iOS, Android)
  • Ninja Golf (iOS, Android)
  • Mighty DOOM (iOS, Android)

QuadratureSurfer

It’s worth pointing out that once Pokémon Go players found out about OSM, we saw a massive increase in new users as well as those contributing to OSM so that the maps would better reflect the areas they played in.

Unfortunately there are always a few that will try to game any system. In this case they’re essentially vandalizing OSM for their own selfish reasons.

QuadratureSurfer

Always support your public libraries, and watch out for companies that want to take this over:…/a-for-profit-company-is-trying-to-priv…

QuadratureSurfer

I get that Louis is against Sponsorblock and his personal feelings and morals influence the direction of the software too.

Louis may be against SponsorBlock, but SponsorBlock is supported in Grayjay, so at least he's not letting his personal feelings get too much in the way of what his userbase wants.

I hope Louis does well in case they go up against Google. I just hope they get a good judge that has a decent understanding of how the tech works and how a decision one way or another will really affect everything.

QuadratureSurfer

Right? I still have my brick and it still works fine… even after seeing how high I could throw it.

QuadratureSurfer

Tried to RMA a motherboard with Gigabyte and they will find any excuse to void the warranty.

QuadratureSurfer

Fun video. Awesome that he provided his workflow as well as the code on GitHub.

For anyone wanting to know what a big one sounds like for real:

  • First you’ll notice everything gets quiet (bugs, animals, etc).
  • Second, dogs/coyotes/wolves all around will start barking simultaneously.
  • Third, a low but strong and deep rumbling can be heard in the distance.
  • Fourth, the rumbling increases until it hits you, and then you hear everything falling: dishes flying out of their cabinets, books falling from shelves, decorations tumbling to the ground, the house creaking and groaning. Earthquake-resistant devices at the base of large structures may snap/bang if they haven't been stressed to this degree in a long time and have grown rusty.
  • Fifth, the rumbling slowly subsides.

Nurses Protest 'Deeply Troubling' Use of AI in Hospitals

“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their...

QuadratureSurfer

I mostly agree with what you’ve said except for this:

but what we’re calling “AI” today is basically just a spell-checker on steroids,

That’s only somewhat true if you’re talking about LLMs like ChatGPT.

AI itself has become a much broader term than it used to be. There are a lot of different kinds of AI out there. Generative AI like text generation (LLMs), image generation (upscaling, or creating images from scratch), or music generation (Suno). Computer Vision is another kind which can include image recognition, object detection, facial recognition, etc. And there are others beyond this.

The AI we're talking about here falls under computer vision, which includes image recognition. In this case the machine learning model has been trained on massive numbers of images, such as MRIs or CT scans.
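As a toy illustration of that kind of image recognition, here is a nearest-centroid classifier over fabricated 2-D "features". Real medical-imaging models are deep networks operating on actual pixels; this only shows the train-on-labeled-examples, classify-by-similarity idea:

```python
def centroid(vectors):
    # Average feature vector of a labeled class ("training").
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def classify(x, centroids):
    # Assign x to the class whose learned average is closest.
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sqdist(x, centroids[label]))

# Fabricated 2-D features standing in for what a model extracts from scans:
centroids = {
    "normal": centroid([[0.1, 0.2], [0.2, 0.1]]),
    "anomaly": centroid([[0.9, 0.8], [0.8, 0.9]]),
}
assert classify([0.85, 0.9], centroids) == "anomaly"
```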
